In 2 minutes, I'll give you 13 tips for beating procrastination...

1. Forgive yourself for procrastinating in the past.
2. Minimize distractions in your environment. Put down the phone and get to work.
3. Keep a to-do list and put it in order of priority.
4. Create a timeline with specific deadlines to accomplish tasks.
5. Enjoy the small wins; they compound toward the bigger goal.
6. Don't focus on perfection, just get the work done.
7. Break down your goals into smaller chunks to avoid overwhelming yourself.
8. Stop making excuses for yourself.
9. Surround yourself with like-minded individuals.
10. Take control of your self-talk and avoid telling yourself negative things.
11. Manage your energy, not your time.
12. When you hit a block, do other, easier tasks until your energy recovers.
13. Remember that you are capable.
If you enjoyed this thread, please retweet the first tweet and follow me:

@Ant_Philosophy

I created this account to help:

• You become the best version of yourself.
• You find inspiration and motivation.
• You learn alongside me on my journey.

Have an amazing day :)


How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇(1 / 12)


Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet an underused part of these datasets is the rich natural-language annotations accompanying each video. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building on prior work in representations for robotics like MVP (
https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, we make a choice:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)

By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.

Why is the ability to shape this balance important? (5/12)
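The trade-off described above can be sketched as a weighted mix of the two objectives. This is a minimal illustration only: the mixing weight `alpha` and the two loss functions here are assumptions for the sake of the example, not the paper's exact formulation.

```python
import numpy as np

def masked_reconstruction_loss(pred, target, mask):
    """MSE over the masked patches only (stand-in for the MAE objective)."""
    err = (pred - target) ** 2
    return float((err * mask).sum() / mask.sum())

def language_generation_loss(token_logprobs):
    """Negative log-likelihood of the caption tokens (stand-in objective)."""
    return float(-np.mean(token_logprobs))

def balanced_objective(recon_loss, lang_loss, alpha=0.5):
    # alpha -> 1: emphasize language-conditioned reconstruction
    #             (favors low-level, spatial features)
    # alpha -> 0: emphasize language generation
    #             (favors high-level, semantic features)
    # `alpha` is a hypothetical hyperparameter for this sketch.
    return alpha * recon_loss + (1.0 - alpha) * lang_loss

# Toy example with synthetic values
rng = np.random.default_rng(0)
pred = rng.normal(size=(4, 16))
target = rng.normal(size=(4, 16))
mask = (rng.random((4, 16)) < 0.75).astype(float)  # ~75% of patches masked
recon = masked_reconstruction_loss(pred, target, mask)
lang = language_generation_loss(rng.normal(loc=-2.0, size=10))
loss = balanced_objective(recon, lang, alpha=0.5)
```

Sliding `alpha` between the two extremes is one way to read the thread's claim that the balance of low- and high-level features can be shaped explicitly.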
