There are more than 3,000 TED Talks.

Here are the top 10 TED Talks that will change your life:

1. How to make stress your friend

https://t.co/6R1gbsZT3S
2. Inside the mind of a master procrastinator

https://t.co/AopkkLPxCg
3. Your body language may shape who you are

https://t.co/zRpJ6JoRrg
4. The puzzle of motivation

https://t.co/8InIj6e09s
5. How to speak so that people want to listen

https://t.co/tXmmLWl4yZ
6. 10 ways to have a better conversation

https://t.co/UKh0hYURjU
7. What makes a good life? Lessons from the longest study on happiness

https://t.co/NIIYhf2j9N
8. The power of introverts

https://t.co/RkOXqTqBB9
9. Grit: The power of passion and perseverance

https://t.co/UuwIImvBRz
10. Do schools kill creativity?

https://t.co/7YJWWJC96S
Thanks for taking the time to check out this thread.

If you enjoyed it, RT the first tweet to spread the knowledge, and follow me @FredosRippleEF for more.

More from All

How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇 (1/12)


Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet the rich natural language annotations accompanying each video remain an underused part of these datasets. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building on prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, we make a choice:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)
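
To make that choice concrete, here's a minimal PyTorch sketch of the two objectives. Everything below (module names, shapes, the single-token caption) is an illustrative assumption for exposition, not Voltron's actual architecture or API:

```python
import torch
import torch.nn as nn

# Toy sketch of the two objectives above. Every name, every shape, and the
# single-token "caption" are simplifying assumptions, not Voltron's real code.
class TinyDualMAE(nn.Module):
    def __init__(self, patch_dim=48, lang_dim=32, hid=64, vocab=1000):
        super().__init__()
        self.enc = nn.Linear(patch_dim, hid)       # stand-in visual encoder
        self.lang_proj = nn.Linear(lang_dim, hid)  # injects language context
        self.dec = nn.Linear(hid, patch_dim)       # reconstructs patches
        self.lm_head = nn.Linear(hid, vocab)       # describes the scene

    def conditioned_reconstruction(self, patches, visible, lang_emb):
        # Objective 1: language-conditioned reconstruction of masked patches.
        h = self.enc(patches * visible.unsqueeze(-1))  # zero out masked patches
        h = h + self.lang_proj(lang_emb).unsqueeze(1)  # condition on language
        recon = self.dec(h)
        masked = 1.0 - visible
        return ((recon - patches) ** 2 * masked.unsqueeze(-1)).mean()

    def language_generation(self, patches, token):
        # Objective 2: predict language from the visual representation alone
        # (collapsed to a single caption token here for brevity).
        h = self.enc(patches).mean(dim=1)              # pooled visual feature
        return nn.functional.cross_entropy(self.lm_head(h), token)
```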

By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.
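
One plausible way to realize this trade-off, offered purely as an assumption rather than the paper's documented recipe, is to flip a weighted coin per batch between the two losses, with a knob alpha setting the balance. Reusing the toy module from above:

```python
# Hypothetical training step; `alpha` is an assumed mixing knob, not a
# documented Voltron hyperparameter.
model = TinyDualMAE()
patches = torch.randn(8, 16, 48)              # (batch, patches, patch_dim)
visible = (torch.rand(8, 16) > 0.75).float()  # 1 = visible, 0 = masked
lang_emb = torch.randn(8, 32)                 # precomputed language embedding
token = torch.randint(0, 1000, (8,))          # toy caption targets

alpha = 0.5  # larger alpha favors conditioning; smaller favors generation
if torch.rand(()) < alpha:
    loss = model.conditioned_reconstruction(patches, visible, lang_emb)
else:
    loss = model.language_generation(patches, token)
loss.backward()
```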

Why is the ability to shape this balance important? (5/12)

You May Also Like

And here they are...

THE WINNERS OF THE 24 HOUR STARTUP CHALLENGE

Remember, this money is just for fun. If you launched a product (or even attempted a launch), you did something worth MUCH more than $1,000.

#24hrstartup

The winners 👇

#10

Lattes For Change - Skip a latte and save a life.

https://t.co/M75RAirZzs

@frantzfries built a platform where you can see what skipping your morning latte could do for the world.

A great product for a great cause.

Congrats Chris on winning $250!


#9

Instaland - Create amazing landing pages for your followers.

https://t.co/5KkveJTAsy

A team project! @bpmct and @BaileyPumfleet built a tool for social media influencers to create simple "swipe up" landing pages for followers.

Really impressive for 24 hours. Congrats!


#8

SayHenlo - Chat without distractions

https://t.co/og0B7gmkW6

Built by @DaltonEdwards, it's a platform for combating conversation overload. This product was also coded entirely on an iPad 😲

Dalton is a beast. I'm so excited he placed in the top 10.


#7

CoderStory - Learn to code from developers across the globe!

https://t.co/86Ay6nF4AY

Built by @jesswallaceuk, the project is focused on highlighting the experience of developers and people learning to code.

I wish this had existed when I learned to code! Congrats on $250!!