A child who wasn’t able to emotionally develop becomes the adult who: takes everything personally, is highly defensive, & struggles to voice what they actually feel.

HERE’S WHY (🧵):

Our emotional development begins at birth & continues through childhood, when we learn how to identify and regulate our emotions.

Emotional maturity comes from this process.

In order to learn this, we need it modeled for us by a parent figure.
If we’re raised in a home where we are parentified (made to be the emotional caretaker for a parent), where a parent is too busy or overworked, or where a parent’s rage or emotional instability sets the climate of the home, we don’t get to emotionally develop.
The sole focus becomes staying safe in the environment.

So, we cope with hypervigilance.

Hypervigilance is constant attunement to the environment: we sense everyone else’s emotions and every shift in facial expression or behavior.
We know when a parent’s mood is going to shift & how that will impact us, when we might be blamed or shamed, or when a parent might withdraw from us completely (i.e., the silent treatment).
We learn & adapt quickly, caretaking the emotions of those around us, or managing those emotions as best we can as children.

Sometimes this is mistaken for empathy. It’s not.
It’s a survival mechanism.

Long-term hypervigilance creates nervous system dysregulation.

We become highly reactive to those around us because we’ve learned that people are not safe & we must defend ourselves.
Everything feels personal, because at one time in our lives: it was.

With our awareness fixed on the external, little room is left for self-awareness, self-reflection, or emotional regulation.

The result: we are emotionally immature.
Unable to know what we feel, how to express it, or whether it’s even ok to feel what we feel (many of us were shamed for our emotions: “stop being dramatic,” “don’t be so sensitive,” “man up”).

In our earliest years we were made responsible for adult emotions.
This is never the role of a child.


How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇(1 / 12)


Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet an underused part of these datasets is the rich, natural language annotation accompanying each video. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building off of prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, we make a choice at each step:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)

By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.
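The conditioning/generation trade-off can be pictured as a per-batch objective switch. A minimal sketch, assuming a mixing probability `alpha` and illustrative function/objective names that are NOT from the paper or thread:

```python
import random


def voltron_step(batch, alpha=0.5, rng=random):
    """Pick which of the two objectives to optimize for this batch.

    `alpha` is a hypothetical mixing probability (an assumption, not a
    value from the thread): with probability alpha we condition the masked
    autoencoder on language and reconstruct the scene; otherwise we
    generate language from the visual representation.
    """
    if rng.random() < alpha:
        # Objective 1: language-conditioned masked frame reconstruction.
        return "condition"
    # Objective 2: language generation from the visual representation.
    return "generate"
```

Skewing `alpha` toward conditioning emphasizes low-level visual reconstruction, while skewing it toward generation emphasizes higher-level semantic features, which is the "balance" the thread refers to.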

Why is the ability to shape this balance important? (5/12)
