
How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇(1 / 12)


Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet, an underused part of these datasets is the rich, natural-language annotation accompanying each video. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building on prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, we make a choice at each step:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)

By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.
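The conditioning/generation trade-off above can be sketched as a weighted mix of two losses. This is a toy numpy sketch under loud assumptions — the linear "encoders", the MSE stand-in for a language-modeling loss, and the mixing weight `alpha` are all illustrative, not Voltron's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not Voltron's code):
patches = rng.normal(size=(4, 16))   # 4 visible image patches, 16-dim each
lang = rng.normal(size=(16,))        # embedding of the language annotation
target = rng.normal(size=(4, 16))    # contents of the masked patches (toy)

W_enc = rng.normal(size=(16, 16)) * 0.1  # toy "encoder"
W_dec = rng.normal(size=(16, 16)) * 0.1  # toy "decoder"
W_lm = rng.normal(size=(16, 16)) * 0.1   # toy "language head"

def conditioning_loss(patches, lang):
    # 1) Condition on language: inject the language embedding into each
    #    patch token before decoding, score the reconstruction (MSE).
    z = (patches + lang) @ W_enc
    recon = z @ W_dec
    return np.mean((recon - target) ** 2)

def generation_loss(patches, lang):
    # 2) Generate language: predict the language embedding from the pooled
    #    visual representation (MSE as a stand-in for an LM loss).
    z = patches @ W_enc
    pred_lang = z.mean(axis=0) @ W_lm
    return np.mean((pred_lang - lang) ** 2)

# The "balance": alpha near 1 favors low-level reconstruction,
# alpha near 0 favors high-level language features.
alpha = 0.5
loss = alpha * conditioning_loss(patches, lang) \
     + (1 - alpha) * generation_loss(patches, lang)
```

The point of the sketch is only the structure: one shared visual encoder, two objectives, one knob controlling which kind of feature the representation emphasizes.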

Why is the ability to shape this balance important? (5/12)
Understanding NeRF or Neural Radiance Fields 🧐

It is a method that synthesizes novel views of a 3D scene from a small number of input views.
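The core of NeRF's rendering is easy to sketch: sample points along each camera ray, query a field for density and color, and alpha-composite with the quadrature rule from the NeRF paper. Here is a minimal numpy sketch — the `toy_field` (a red sphere) is a hypothetical stand-in for NeRF's learned MLP:

```python
import numpy as np

def toy_field(points):
    # Hypothetical stand-in for the NeRF MLP: a solid sphere of
    # radius 0.5 at the origin, colored red.
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 0.5, 10.0, 0.0)
    rgb = np.tile(np.array([1.0, 0.0, 0.0]), (len(points), 1))
    return density, rgb

def render_ray(origin, direction, near=0.0, far=2.0, n_samples=64):
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, rgb = toy_field(points)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))  # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)            # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)       # composited ray color

# A ray looking straight at the sphere should come back (nearly) pure red.
color = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
```

In a real NeRF the field is an MLP trained so that rendered rays match the input photos; everything else about the rendering loop is essentially as above.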

As part of the @weights_biases blogathon (https://t.co/tRddw6jXeA), here are some articles to understand them
1/


Want to dive headfirst into some code? 🤿
Here is an implementation of NeRF using JAX & Flax:
https://t.co/pKO5NDSDqv

The report uses W&B to track experiments, compare results, ensure reproducibility, and monitor TPU utilization during training.
2/

Mip-NeRF 360 is a follow-up work that looks at whether it's possible to effectively represent an unbounded scene, where the camera may point in any direction and content may exist at any distance.
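Mip-NeRF 360's trick for unbounded scenes is a scene contraction: points inside the unit ball are left alone, and the rest of space is squashed into a ball of radius 2, so far-away content still lands at finite coordinates. A small sketch of that contraction function:

```python
import numpy as np

def contract(x):
    # Mip-NeRF 360 scene contraction: identity inside the unit ball,
    # otherwise squash so all of space maps into a ball of radius 2.
    norm = np.linalg.norm(x)
    if norm <= 1.0:
        return x
    return (2.0 - 1.0 / norm) * (x / norm)

far_point = np.array([1000.0, 0.0, 0.0])
contracted = contract(far_point)  # lands just inside radius 2, at [1.999, 0, 0]
```

The model then samples and encodes points in this contracted space, which is how a single bounded representation can cover a camera pointing in any direction at content at any distance.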

https://t.co/QNY6VuN8zd
3/


Training a single NeRF does not scale when trying to represent scenes as large as cities.

To overcome this challenge, Block-NeRF was introduced, which yields some amazing reconstructions of San Francisco. Here's one of Lombard Street.
4/


The authors build their implementation on top of Mip-NeRF and combine many NeRFs to reconstruct a coherent, large-scale environment from millions of images.

🐝Read more here: