
1/ Questions raised by the linked thread: What are some other significant traits of liberal thinking? One is the tendency to think WEIRDly. Another is the tendency to employ only half of the evolved psychological mechanisms of social cognition (and, of those, mostly just care/harm).
1/n Two interesting findings thus far from my analysis of Pew's March 2020 COVID-19 survey. First, white (and especially 'very') liberals are far more likely than all other ideological-racial subgroups to report being diagnosed with a mental health condition. pic.twitter.com/RynS9lk0jR
— Zach Goldberg (@ZachG932) April 11, 2020


Have we been looking at the partisan divide all wrong all along?
Aren’t all of these merely symptoms of deeper causes? Shouldn’t we be looking for THEM?
What we know now is that if the first principle of social psychology is “Intuitions come first, strategic reasoning follows,” then the deeper principle is
“Psychology (or psychological profile) comes first, intuitions follow.”
We ARE wrong to keep thinking, categorizing, analyzing, and concluding on the basis of an anachronism: which side of the room proponents happened to sit on (i.e., left or right).
What if, they asked, reason DIDN’T evolve to help us find truth?
What if, instead, it DID evolve to help us WIN ARGUMENTS, to persuade others that OUR intuitions are the RIGHT ones?
How can we use language supervision to learn better visual representations for robotics?
Introducing Voltron: Language-Driven Representation Learning for Robotics!
Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z
🧵👇(1 / 12)
Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.
Yet the rich, natural language annotations that accompany each video are an underused part of these datasets. (2/12)
The Voltron framework offers a simple way to use language supervision to shape representation learning, building on prior work in representations for robotics like MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).
The secret is *balance* (3/12)
Starting with a masked autoencoder over frames from these video clips, make a choice (sketched after the list):
1) Condition on language and improve our ability to reconstruct the scene.
2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)
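To make the choice concrete, here is a minimal PyTorch-style sketch of the two objectives as I read the thread. The module names, shapes, mean-pooled fusion, and bag-of-words language head are illustrative assumptions of mine, not the official Voltron architecture.

```python
# Minimal sketch of the two per-example objectives. Module names, shapes, and the
# simple mean-pooled fusion are illustrative assumptions, NOT the official Voltron code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoObjectiveSketch(nn.Module):
    def __init__(self, d=256, patch_dim=768, vocab=1000):
        super().__init__()
        self.visual_encoder = nn.Linear(patch_dim, d)  # stand-in for a ViT encoder
        self.lang_embed = nn.Embedding(vocab, d)       # stand-in for a (frozen) language model
        self.decoder = nn.Linear(d, patch_dim)         # reconstructs masked patches
        self.lang_head = nn.Linear(d, vocab)           # predicts caption tokens

    def conditioned_reconstruction(self, patches, mask, tokens):
        """Objective 1: condition on language, reconstruct masked patches (MAE-style).

        patches: (B, N, patch_dim) frame patches; mask: (B, N) float, 1.0 = masked;
        tokens: (B, T) caption token ids.
        """
        vis = self.visual_encoder(patches * (1.0 - mask).unsqueeze(-1))  # encode visible patches
        lang = self.lang_embed(tokens).mean(dim=1, keepdim=True)         # pooled caption feature
        recon = self.decoder(vis + lang)                                 # language-conditioned decoding
        return ((recon - patches) ** 2 * mask.unsqueeze(-1)).mean()     # MSE on masked patches only

    def language_generation(self, patches, tokens):
        """Objective 2: generate language from the visual representation.

        A real decoder would be autoregressive; a pooled bag-of-words prediction keeps
        the sketch short while still pushing high-level semantics into the encoder.
        """
        vis = self.visual_encoder(patches).mean(dim=1)                   # pooled visual feature
        logits = self.lang_head(vis).unsqueeze(1).expand(-1, tokens.shape[1], -1)
        return F.cross_entropy(logits.reshape(-1, logits.shape[-1]), tokens.reshape(-1))
```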
By trading off *conditioning* and *generation*, we show that we can learn 1) better representations than prior methods, and 2) explicitly shape the balance of low- and high-level features captured.
Why is the ability to shape this balance important? (5/12)
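Assuming the two objectives from the sketch above, one simple reading of the trade-off is a per-batch choice governed by a weight `alpha`; both the per-batch sampling and the `alpha` knob are my own simplification for illustration, not necessarily the paper's exact recipe.

```python
# One simple way to realize the conditioning/generation trade-off: sample which
# objective each batch optimizes with probability `alpha` (an illustrative knob).
import torch

def training_step(model, batch, alpha=0.5):
    patches, mask, tokens = batch  # frame patches, MAE mask, caption token ids
    if torch.rand(()) < alpha:
        # Language-conditioned reconstruction: pulls toward low-level spatial detail.
        return model.conditioned_reconstruction(patches, mask, tokens)
    # Language generation: pulls toward high-level, semantic features.
    return model.language_generation(patches, tokens)
```

Read this way, `alpha` is the dial the thread alludes to: weighting reconstruction more heavily plausibly biases the representation toward fine-grained, low-level features, while weighting generation more heavily biases it toward high-level semantics.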