I saw a thread this week about how there was not enough talk about @canva + @MelanieCanva... let's change that! Canva just announced they raised $200M (valued at $40B), a long way from starting w/no business experience + getting rejected by 100+ investors 🧵
2017: The company announced a net profit of $1.86M, only five years after launching.
How can we use language supervision to learn better visual representations for robotics?
Introducing Voltron: Language-Driven Representation Learning for Robotics!
Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z
🧵👇(1 / 12)
Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.
Yet, an underused part of these datasets is the rich, natural language annotation accompanying each video. (2/12)
The Voltron framework offers a simple way to use language supervision to shape representation learning, building off of prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).
The secret is *balance* (3/12)
Starting with a masked autoencoder over frames from these video clips, make a choice:
1) Condition on language and improve our ability to reconstruct the scene.
2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)
By trading off *conditioning* and *generation*, we show that we can learn 1) better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.
Why is the ability to shape this balance important? (5/12)
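The per-step choice described above can be sketched as a toy training loop: with some probability, optimize language generation from the visual representation; otherwise, optimize language-conditioned reconstruction. This is a minimal illustration of the trade-off only; the loss functions, their names, and the mixing scheme are hypothetical stand-ins, not the paper's actual API or objective.

```python
import random

# Toy stand-ins for the two objectives. Both functions are hypothetical
# placeholders: real losses would come from a masked autoencoder and a
# language decoder over video frames and annotations.
def reconstruction_loss(batch):
    # Language-conditioned masked reconstruction of image patches.
    return sum(batch) * 0.1

def generation_loss(batch):
    # Generating the language annotation from the visual representation.
    return sum(batch) * 0.2

def voltron_style_step(batch, alpha, rng):
    """Pick one objective per step: with probability `alpha`, optimize
    language generation (pushes toward high-level, semantic features);
    otherwise, optimize language-conditioned reconstruction (pushes
    toward low-level, visual features)."""
    if rng.random() < alpha:
        return generation_loss(batch)
    return reconstruction_loss(batch)

rng = random.Random(0)  # fixed seed for reproducibility
losses = [voltron_style_step([1.0, 2.0], alpha=0.5, rng=rng) for _ in range(4)]
```

Sweeping `alpha` between 0 and 1 is one simple way to realize the "balance" the thread describes: low `alpha` favors reconstruction-shaped features, high `alpha` favors language-shaped ones.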
#24hrstartup recap and analysis
What a weekend celebrating makers looks like.
A thread
👇Read on
Let's start with a crazy view of what @ProductHunt looked like on Sunday
A top 7 with:
https://t.co/6gBjO6jXtB @Booligoosh
https://t.co/fwfKbQha57 @stephsmithio
https://t.co/LsSRNV9Jrf @anthilemoon
https://t.co/Fts7T8Un5M @J_Tabansi
Spotify Ctrl @shahroozme
https://t.co/37EoJAXEeG @kossnocorp
https://t.co/fMawYGlnro
If you want some top picks, see @deadcoder0904's thread,
We were going to have a go at doing this, but he nailed it.
It also comes with voting links 🖐so go do your
#24hrstartup was an amazing event
— Akshay Kadam (A2K) 👻 (@deadcoder0904) November 19, 2018
I never went to a hackathon but this just felt like one even though I was just watching 👀
Everyone did great but there were a few startups that I personally loved 💖
Some of my favorites are in the thread below 👇
Over the following days the 24hr startup crew had more than their fair share of launches
Lots of variety: web, bots, extensions and even native apps
eg. @jordibruin with
🎨🏃‍♀️ DrawRun just launched on Product Hunt! Idea to App Store to Product Hunt in 68 hours! https://t.co/mxnLZ8FRSu
— Jordi Bruin (@jordibruin) November 20, 2018
Thanks for the motivation @thepatwalls @arminulrich @_feloidea