10 Google courses with FREE Certifications

📌 Programming
📌 Data Structures and Algorithms
📌 Web Development and Android Development
📌 Digital Marketing
📌 Data Science, Artificial Intelligence, and many more

A Thread🧵👇

⚡ Google Web Developers Training
https://t.co/ZRWYnnu5Ss
⚡ Fundamentals of Digital Marketing
https://t.co/VZLXsUPUYF
⚡ Google Analytics Academy Courses
https://t.co/WufkVqAWIB
⚡ Google Ads Certification
https://t.co/lON8oCJ6F7
⚡ Google Android Development Training
https://t.co/EF2i44SoWa
⚡ Google Digital Garage
https://t.co/tLclokjmq7
⚡ Udacity-Google Partnership Courses
https://t.co/KAYgYQUl0d
⚡ YouTube Management & Growth
https://t.co/WEZFEfmiZW
⚡ Google AI
https://t.co/DvHSJ29kfJ
⚡ Data Structures and Algorithms
https://t.co/7iQbchfjU6
I hope you find this thread helpful. If you liked it, make sure you:

➡ Follow me
@adiig7

➡ Retweet the first tweet

Happy Learning!🚀

How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇 (1/12)

Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet the rich, natural language annotations accompanying each video remain an underused part of these datasets. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building on prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, we make a choice:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)
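
To make those two choices concrete, here is a minimal, unofficial sketch in PyTorch. Every name below (the linear stand-ins for the ViT encoder and decoder, the dimensions, the mean-pooled caption embedding) is a hypothetical simplification of the masked-autoencoder setup the tweet describes, not Voltron's actual architecture; see the linked paper and models for the real thing.

```python
import torch
import torch.nn as nn

class TwoObjectiveModel(nn.Module):
    """Toy stand-in for a masked autoencoder with two language objectives."""

    def __init__(self, patch_dim=768, d_model=256, vocab_size=1000):
        super().__init__()
        # Linear layers stand in for the real transformer encoder/decoder.
        self.visual_encoder = nn.Linear(patch_dim, d_model)
        self.lang_embed = nn.Embedding(vocab_size, d_model)
        self.pixel_decoder = nn.Linear(d_model, patch_dim)  # reconstructs masked patches
        self.lm_head = nn.Linear(d_model, vocab_size)       # scores caption tokens

    def conditioned_reconstruction(self, patches, caption_ids):
        """Choice 1: condition on language, then reconstruct the masked scene."""
        lang = self.lang_embed(caption_ids).mean(dim=1, keepdim=True)  # (B, 1, D)
        z = self.visual_encoder(patches) + lang  # fuse the caption into visual features
        return self.pixel_decoder(z)             # (B, N, patch_dim) patch predictions

    def language_generation(self, patches):
        """Choice 2: generate language from the visual representation alone."""
        z = self.visual_encoder(patches).mean(dim=1)  # (B, D) pooled visual features
        return self.lm_head(z)                        # (B, vocab) token logits
```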

By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.

Why is the ability to shape this balance important? (5/12)
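
Continuing the toy model above, here is one way that *balance* could be exposed as a single knob. Whether Voltron actually mixes the objectives via a weighted sum, as sketched here, or by sampling one objective per batch is an assumption; `alpha` and the single-token language target are hypothetical simplifications.

```python
import torch
import torch.nn as nn

recon_loss_fn = nn.MSELoss()
gen_loss_fn = nn.CrossEntropyLoss()

def training_loss(model, patches, caption_ids, alpha=0.5):
    # alpha near 1: emphasize language-conditioned reconstruction
    # (pushes the representation toward low-level, spatial features).
    # alpha near 0: emphasize language generation
    # (pushes it toward high-level, semantic features).
    recon = recon_loss_fn(
        model.conditioned_reconstruction(patches, caption_ids), patches)
    logits = model.language_generation(patches)
    # Predicting only the first caption token stands in for full decoding.
    gen = gen_loss_fn(logits, caption_ids[:, 0])
    return alpha * recon + (1 - alpha) * gen

# Toy usage with the TwoObjectiveModel sketched earlier.
model = TwoObjectiveModel()
patches = torch.randn(4, 16, 768)             # 4 clips x 16 patches each
caption_ids = torch.randint(0, 1000, (4, 8))  # 4 captions x 8 tokens each
loss = training_loss(model, patches, caption_ids, alpha=0.5)
```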
