I was reading something that suggested that trauma "tries" to spread itself, i.e., that the reason intergenerational trauma is a thing is that the traumatized part in a parent will take action to recreate that trauma in the child.

This model puts the emphasis on the parent's side: the trauma is actively "trying" to spread.
This is in contrast to my previous (hypothetical) model for IGT, which puts the emphasis on the child's side: kids are sponges that are absorbing huge amounts of info, including via very subtle channels. So they learn the unconscious reactions of the people around them.
(I say "hypothetical" because, while this sort of thing is in my hypothesis space, I haven't seen clear enough evidence that intergenerational trauma is a meaningful category to solidly believe it is real.

More like, "here's a story for how this could work.")
At first glance, I was skeptical of this "active trauma" story.

Why on earth would trauma be agenty in that way? It sounds like too much to swallow.
It seems like you'll only end up with machinery for replication like that if there is selection pressure of some sort acting on the entities in question.
But on second thought, it's pretty obvious that there would be some selection pressure like that.

If some traumas try to replicate themselves in other minds, but most don't, pretty soon the world will be awash in the replicator type.
And it isn't that crazy that one coping mechanism for dealing with some critically bad thing is to cause others around you to also deem that thing critically bad.
So, if the fidelity of transmission is high enough, you SHOULD end up with psychological damage that is basically a living, reproducing entity.

It's unclear how high the fidelity of transmission is.
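A toy back-of-the-envelope version of that selection argument (a branching-process sketch, not a model of real psychology; the population sizes, the `simulate` helper, and the transmission probabilities are all made up for illustration):

```python
import random

def simulate(generations=20, children_per_parent=2, fidelity=0.7,
             baseline=0.2, cap=100_000, seed=0):
    """Count carriers of a 'replicator' trauma (one that actively tries to
    reproduce itself, succeeding with probability `fidelity` per child)
    vs. an 'inert' trauma that is only passed on at a low baseline rate."""
    rng = random.Random(seed)
    replicator, inert = 100, 900  # start with mostly non-replicating traumas
    for _ in range(generations):
        # Each carrier has `children_per_parent` chances to pass the trauma on.
        replicator = min(cap, sum(
            rng.random() < fidelity
            for _ in range(replicator * children_per_parent)))
        inert = min(cap, sum(
            rng.random() < baseline
            for _ in range(inert * children_per_parent)))
    return replicator, inert

for f in (0.4, 0.6, 0.8):
    rep, inert = simulate(fidelity=f)
    print(f"fidelity={f:.1f}: replicator carriers={rep}, inert carriers={inert}")
```

With two children per carrier, the replicator type grows whenever fidelity exceeds 0.5 while the low-baseline type dies out, so above that threshold nearly every surviving trauma is a replicator: the "awash in the replicator type" point.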
I've been engaging with Critical Rationalists lately.

Thinking through this has given me a new appreciation of what @DavidDeutschOxf calls "anti-rational memes". I think he might be on to something that they are more-or-less at the core of all our problems on earth.
Except that, to me at least, "anti-rational memes" suggests "beliefs" that are mostly communicated verbally.

Whereas I think most of the action might be in something like implicit aliefs and mental-action patterns that are only visible through things like "vibe."
(To be clear, I think this is just a problem of my reading comprehension. David makes a point of talking about implicit ideas all over the place. I think(?) he knows that anti-rational memes don't need to be explicit, and indeed might mostly be inexplicit.)

More from Eli Tyre

My catch-all thread for this discussion of AI risk in relation to Critical Rationalism, summarizing what's happened so far and how to go forward from here.

I started by simply stating that I thought the arguments I had heard so far don't hold up, and seeing if anyone was interested in going into it in depth with me.


So far, a few people have engaged pretty extensively with me, for instance, scheduling video calls to talk about some of the stuff, or long private chats.

(Links to some of those that are public at the bottom of the thread.)

But in addition to that, there has been a much more sprawling conversation happening on Twitter, involving a much larger number of people.

Having talked to a number of people, I then offered a paraphrase of the basic counterargument I was hearing from people of the Crit Rat persuasion.
CritRats!

I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.

Anyone want to hash this out with me?


For instance, while I heartily agree with lots of what is said in this video, I don't think that the conclusion about how to prevent (the bad kind of) human extinction, with regard to AGI, follows.

There are a number of reasons to think that AGI will be more dangerous than most people are, despite both people and AGIs being qualitatively the same sort of thing (explanatory knowledge-creating entities).

And I maintain that, because of practical/quantitative (not fundamental/qualitative) differences, the development of AGI / TAI is very likely to destroy the world by default.

(I'm not clear on exactly how much disagreement there is. In the video above, Deutsch says "Building an AGI with perverse emotions that lead it to immoral actions would be a crime.")

More from Culture

I just finished Eric Adler's The Battle of the Classics, and wanted to say something about Joel Christiansen's review linked below. I am not sure what motivates the review (I speculate a bit below), but it gives a very misleading impression of the book. 1/x


The meat of the criticism is that the history Adler gives is insufficiently critical. Adler describes a few figures who had a great influence on how the modern US university was formed. His account is certainly critical: it focuses on the social Darwinism of these figures. 2/x

Other insinuations and suggestions in the review seem wildly off the mark, distorted, or inappropriate-- for example, that the book is clickbaity (it is scholarly) or conservative (hardly) or connected to the events at the Capitol (give me a break). 3/x

The core question: in what sense is classics inherently racist? Classics is old. On Adler's account, it begins in ancient Rome and is revived in the Renaissance. Slavery (Christiansen's primary concern) is also very old. Let's say classics is an education for slaveowners. 4/x

It's worth remembering that literacy itself is elite throughout most of this history. Literacy is, then, also the education of slaveowners. We can honor oral and musical traditions without denying that literacy is, generally, good. 5/x

You May Also Like

The YouTube algorithm that I helped build in 2011 still recommends the flat earth theory by the *hundreds of millions*. This investigation by @RawStory shows some of the real-life consequences of this badly designed AI.


This spring at SxSW, @SusanWojcicki promised "Wikipedia snippets" on debated videos. But they didn't put them on flat earth videos, and instead @YouTube is promoting merchandising such as "NASA lies - Never Trust a Snake". 2/


A few examples of flat earth videos that were promoted by YouTube #today:
https://t.co/TumQiX2tlj 3/

https://t.co/uAORIJ5BYX 4/

https://t.co/yOGZ0pLfHG 5/