Startups are all about learning fast. Founders are busy.

My weekly roundup of the top 5 impactful threads to help accelerate personal and business growth ...

Here we go ...

1/ 7 no-BS tactics to become a world-class negotiator by @barrettjoneill

Why: Better negotiation is the key to getting what you want.

https://t.co/fQf9wrFmdF
2/ 9 ways to be an authentic leader by @IAmCoachClint

Why: Authentic leaders stand out in a world where authenticity is rare.

https://t.co/UooZRHltwK
3/ 9 ways to increase your cash flow by @KurtisHanni

Why: Cash flow management is key to startup success.

https://t.co/1o4gEn0Oa8
4/ 15 marketing tactics every startup should know by @bbourque

Why: Every business needs a marketing edge.

https://t.co/p0M1i1qfLB
5/ Using Stay Interviews to retain top talent by @EvergreenMEP

Why: The Great Resignation is causing startups to lose great talent. Go on the offensive to retain them with this simple approach.

https://t.co/SzrY9UljDD
That's a wrap!

If you enjoyed this thread:

1. Follow me @EvergreenMEP for more of these
2. RT the tweet below to share this thread with your audience https://t.co/iaweVhNCKJ


How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇(1 / 12)


Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet these datasets have an underused component: the rich, natural language annotations accompanying each video. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building on prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, make a choice:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)
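The trade-off between the two objectives above can be pictured as a per-step coin flip between a language-conditioned reconstruction loss and a language-generation loss. The sketch below is only a toy illustration: the function names, the stub loss values, and the `alpha` mixing parameter are assumptions for exposition, not the paper's actual implementation.

```python
import random

def reconstruction_loss(frames, language):
    """Branch 1 (stub): condition on language, reconstruct masked frames.

    A real model would run a language-conditioned masked-autoencoder
    decoder here; this stand-in just returns a toy number.
    """
    return 1.0 / (1 + len(language))

def generation_loss(frames, language):
    """Branch 2 (stub): generate language from the visual representation."""
    return 0.1 * len(language)

def voltron_step(frames, language, alpha=0.5, rng=random):
    """Pick one objective per training step.

    `alpha` (an assumed knob) controls the conditioning/generation mix:
    higher alpha means more generation steps, pushing the representation
    toward high-level semantics; lower alpha favors reconstruction and
    low-level visual detail.
    """
    if rng.random() < alpha:
        return ("generate", generation_loss(frames, language))
    return ("condition", reconstruction_loss(frames, language))
```

Sampling one objective per step (rather than summing both losses) is one simple way such a balance could be realized; varying `alpha` is what would let you shape the mix of low- and high-level features.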

By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.

Why is the ability to shape this balance important? (5/12)
