https://t.co/k3sGW1m1d6
A list of cool websites you might not know about
A thread 🧵
https://t.co/k3sGW1m1d6
This website is made by me
https://t.co/inO0Uig6NQ
https://t.co/Bf1Ph0xxYv
Made by me and @PinglrHQ
https://t.co/kEP8EnvKqO
Made by @Prajwxl and @krishnalohiaaa
Timeless articles from the belly of the internet. Served 5 at a time
Creator: @louispereira
One of my favorite websites
https://t.co/5ZR0tvr26b
Thanks @RK_382922 for suggesting!
https://t.co/Pi9Dpjo8A9
One of my favorite websites
https://t.co/gM20ZVT4lS
A list of cool websites you might not know about. A thread 🧵
— Sahil (@sahilypatel) August 14, 2021
This is cool, let me add some more. A continuation thread 🧵... https://t.co/vwmNZmslaY
— Aditya Bansal (@itsadityabansal) August 14, 2021
How can we use language supervision to learn better visual representations for robotics?
Introducing Voltron: Language-Driven Representation Learning for Robotics!
Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z
🧵👇 (1/12)
Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.
Yet an underused part of these datasets is the rich natural language annotations accompanying each video. (2/12)
The Voltron framework offers a simple way to use language supervision to shape representation learning, building off of prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).
The secret is *balance* (3/12)
Starting with a masked autoencoder over frames from these video clips, make a choice:
1) Condition on language and improve our ability to reconstruct the scene.
2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)
By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.
Why is the ability to shape this balance important? (5/12)
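The conditioning/generation trade-off described above can be sketched as a simple two-branch objective. This is a toy NumPy illustration under stated assumptions, not the paper's actual implementation: the function names, the stand-in "decoder," and the balance parameter `alpha` are all hypothetical, chosen only to show the structure of choosing between a language-conditioned reconstruction loss and a caption-generation loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_loss(visual_repr, language_emb, target_patches):
    """Toy L2 loss for language-conditioned patch reconstruction.
    The sum `visual_repr + 0.1 * language_emb` is a stand-in for a
    real decoder that conditions on the language embedding."""
    pred = visual_repr + 0.1 * language_emb
    return float(np.mean((pred - target_patches) ** 2))

def generation_loss(visual_repr, caption_tokens):
    """Toy surrogate for a caption-likelihood loss: score how well the
    visual representation predicts (scaled) token ids."""
    logits = visual_repr[: len(caption_tokens)]
    targets = np.asarray(caption_tokens) / 100.0
    return float(np.mean((logits - targets) ** 2))

def voltron_step(alpha, visual_repr, language_emb, patches, caption):
    """Per-step choice: with probability `alpha`, condition on language
    and reconstruct; otherwise, generate language from the visual
    representation. `alpha` sets the *balance* the thread refers to."""
    if rng.random() < alpha:
        return reconstruction_loss(visual_repr, language_emb, patches)
    return generation_loss(visual_repr, caption)

# Random toy inputs standing in for encoder outputs and targets.
v = rng.standard_normal(16)       # visual representation
l = rng.standard_normal(16)       # language embedding
p = rng.standard_normal(16)       # masked-patch targets
caption = [12, 40, 7]             # fake token ids

losses = [voltron_step(0.5, v, l, p, caption) for _ in range(100)]
print(len(losses))
```

Sweeping `alpha` toward 1 emphasizes low-level reconstruction; toward 0 it emphasizes high-level semantic description — the knob the thread says shapes which features the representation captures.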
1/ I wanted to show you a sneak peek this week, but instead we DEPLOYED TO PRODUCTION 🚀
If you're a creator, get an invite here 👉 https://t.co/D8H6g8TL9o
Week 2 highlights: our first-ever podcast, meeting @Jason, shipping @BREWdotcom alpha & laptop stickers!
2/ First off, thanks for the mind-blowing response last week (120k+ views, omgwtfasdasd!)… it absolutely pushed us to get the product out there.
Also, there's something magical about watching people try a buggy product and fixing it on the go
3/ Thanks @JasonDemant for inviting us to grab some behind the scenes at @LAUNCH.
As a huge fan and avid listener of the @TWistartups show, it was great watching @Jason do his thing live!
4/ @domainnamewire invited us to chat about acquiring the https://t.co/GOQJ7L2faV domain, and that was officially our first podcast ever. Check it out here: https://t.co/eusVCOlUSb.
You nailed it your first time, Maddy! Thanks for having us on the show, Andrew.
5/ Great news: Brew partnered with @Tipalti to enable payouts for creators everywhere (unlike @kickstarter, which only supports 26 countries).
Platforms like Twitch use Tipalti to pay out instantly and via multiple methods like check, PayPal, local bank transfer, etc.
1/ 👋 Excited to share what we've been building at https://t.co/GOQJ7LjQ2t + we are going to tweetstorm our progress every week!
Week 1 highlights: getting shortlisted for YC W2019 🤞, acquiring a premium domain 💰, meeting Substack's @hamishmckenzie and Stripe CEO @patrickc 🤩
— Jijo Sunny (@JijoSunny) November 6, 2018
