Considering video audits with your cold emails?

>> Works really well
>> Takes 10x longer to create than cold emails
>> Doing it at scale will take up your whole day

Here's a quick hack to get around that.

// THREAD //

1. Let's say you're pitching FB ads to businesses

You see something on their website that catches your attention.

It gives you an idea for an ad you could create for them.

You decide that a 3-4 minute video audit would be perfect to explain your idea.

But execution time is ~10 minutes per audit.

2. Get their attention first

What if we could grab their attention first and then have them request the video audit?

Wouldn't that make it 100% worth it then?

So let's do that.

Here's the trick to grab their attention first and save you LOTS of time.

3. Write the video audit email

- Personalized first line
- Randomness + point of the email
- Explain video audit + purpose
- Case study + positioning
- Ask for feedback
- Insert IMAGE (not video)

What image?

Keep reading!

4. Image vs. Video

Instead of spending time creating the video audit now, save that work for later, when people actually request it.

The little trick: create the illusion that you've sent a video, when it's really just a blurred image of their site with a play button on top!

Doing this will have interested prospects (hot leads) respond asking you to resend, since the "video" wouldn't load.

That's when you can take the time to actually create the video audit, knowing that prospect is a hot lead.

So let's create this photo.

5. Creating the photo

>> Screenshot their website
>> Go to befunky.com
>> Import the image
>> Go to the edit tab and click blur
>> Set to 30-40%
>> Google search for 'transparent play button'
>> Import it to befunky
>> Add it as a layer
>> Export

DONE in 2 min.
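
If you'd rather script this step than click through befunky, here's a minimal Python sketch using Pillow. The file names (screenshot.png, play_button.png) and the blur radius are assumptions for illustration, not part of the original recipe:

# Minimal sketch: blur a site screenshot and overlay a play button.
# Assumes screenshot.png and a transparent play_button.png exist locally.
from PIL import Image, ImageFilter

screenshot = Image.open("screenshot.png").convert("RGBA")

# Blur the screenshot (tweak the radius; ~10px roughly matches a 30-40% blur)
blurred = screenshot.filter(ImageFilter.GaussianBlur(radius=10))

# Center the transparent play button on top as a layer
button = Image.open("play_button.png").convert("RGBA")
x = (blurred.width - button.width) // 2
y = (blurred.height - button.height) // 2
blurred.paste(button, (x, y), mask=button)  # mask preserves transparency

blurred.save("fake_video.png")
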
6. Send the email and wait

Add the final image to your email and send it off!
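
If you send cold email from a script rather than a tool, here's a minimal sketch of embedding that image inline with Python's standard library. The SMTP host, addresses, credentials, and copy are all placeholders:

# Minimal sketch: send the email with the blurred "video" image inline.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

msg = MIMEMultipart("related")
msg["Subject"] = "Quick idea for your FB ads"
msg["From"] = "you@example.com"
msg["To"] = "prospect@example.com"

# HTML body that references the inline image by Content-ID
html = """
<p>Hi -- saw something on your site that sparked an ad idea.</p>
<p>I recorded a quick video audit walking through it:</p>
<img src="cid:audit" alt="video audit">
<p>Would love your feedback.</p>
"""
msg.attach(MIMEText(html, "html"))

with open("fake_video.png", "rb") as f:
    img = MIMEImage(f.read())
img.add_header("Content-ID", "<audit>")
msg.attach(img)

with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
    server.login("you@example.com", "app-password")
    server.send_message(msg)
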

Now you'll get people responding, asking you to resend since the "video" wouldn't load.

PERFECT.

Now you do the real audit.

But this time you know they'll view it.
