Bad writers sell products.

Good writers sell stories.

Here's how to tell one in 5 simple steps 🧵⬇️

Step #1: Introduce The Protagonist

Every story has a hero.

That might be…

- Your reader
- A past customer
- A past client
- An organization
- Or you

Choose one based on your goal.

Step #2: Introduce The Conflict

Every great hero needs something to overcome.

That might be…

- A problem
- A person
- An organization

Again, choose based on your goal.

Step #3: Describe The Battle

It’s time for your hero to fight the enemy.

Basic examples:

- A customer facing his health problems
- A client facing her profitability problems

Whatever it is, share details about the struggle.

That’s how you inspire emotion in your reader.

Step #4: Describe The Victory

The next beat in every great story is the protagonist's victory.

In marketing, that’s usually with the help of whatever you’re selling.

Your health product led to better health, your service led to higher profits, and so on.

Step #5: Describe The Transformation

You want your reader to *feel* the victory.

So, go into detail about how much the protagonist transforms and how amazing their life becomes because of it.

Sell the transformation.

Conclusion

Storytelling is an essential piece of writing and marketing.

Learn the basics, and you’ll be able to grab attention, cultivate it, and turn it into $$.

Hit the link below for more long-form education like this:

https://t.co/HKiiTn3j5Y


How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇 (1/12)


Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet the rich natural language annotations accompanying each video are an underused part of these datasets. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building off of prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, we make a choice:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)
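
To make that choice concrete, here is a minimal PyTorch-style sketch of the two objectives. Everything below (module names, dimensions, the toy caption head) is invented for illustration and is not the released Voltron code; random patch masking is also omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualObjectiveModel(nn.Module):
    """Toy masked autoencoder that either conditions on language or generates it."""

    def __init__(self, patch_dim=768, vocab=1000, dim=256):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, dim)     # embed image patch tokens
        self.lang_embed = nn.Embedding(vocab, dim)       # embed caption tokens
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.pixel_head = nn.Linear(dim, patch_dim)      # reconstruct patch tokens
        self.lang_head = nn.Linear(dim, vocab)           # predict caption tokens

    def conditioning_loss(self, patches, caption):
        # Choice 1: condition on language, then reconstruct the scene.
        ctx = torch.cat([self.lang_embed(caption), self.patch_embed(patches)], dim=1)
        visual = self.encoder(ctx)[:, caption.shape[1]:]  # keep visual positions only
        return F.mse_loss(self.pixel_head(visual), patches)

    def generation_loss(self, patches, caption):
        # Choice 2: describe what's happening from the visual representation alone.
        pooled = self.encoder(self.patch_embed(patches)).mean(dim=1)
        logits = self.lang_head(pooled)                   # toy head: predict one token
        return F.cross_entropy(logits, caption[:, 0])

def training_step(model, patches, caption, alpha=0.5):
    # alpha balances the objectives: high alpha favors conditioning
    # (low-level reconstruction), low alpha favors generation (high-level semantics).
    if torch.rand(()) < alpha:
        return model.conditioning_loss(patches, caption)
    return model.generation_loss(patches, caption)
```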

By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.

Why is the ability to shape this balance important? (5/12)
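
For intuition, you could sweep the (hypothetical) alpha knob from the sketch above and watch the training signal shift between the two objectives:

```python
model = DualObjectiveModel()
patches = torch.randn(8, 16, 768)            # batch of 8 clips, 16 patch tokens each
caption = torch.randint(0, 1000, (8, 12))    # batch of 12-token captions
for alpha in (0.0, 0.5, 1.0):                # pure generation -> pure conditioning
    print(alpha, training_step(model, patches, caption, alpha=alpha).item())
```

Here alpha=1.0 trains a pure language-conditioned reconstructor and alpha=0.0 a pure captioner; the intermediate values are where the mix of low- and high-level features gets shaped.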
