There's a particular kind of romantic partnership, with a certain sort of person, that I've wanted since I became a self-aware, directed agent at around age 15.
And this sometimes leaves me wondering if my standards are unreasonable.
These days, I'm rarely directly or viscerally in contact with it.
[I might elaborate on the wrongness sometime.]
(All of this is still reasonably fluid.)
All the other "options" pale markedly in comparison. It becomes obvious that they're not what I want.
It seems so...obvious, or in some sense, ordinary. It feels like it should be...well, not easy, but not crazily hard.
And then I feel confused that I've been going around in the world for more than a decade and I've had so few hits. It's not clear if I've ever gotten close.
No, it's the world that's crazy; the thing that I want is obviously Good and obviously worth guarding.
The thing that I'm longing for is real. I'm not just hallucinating.
There's something that my love is FOR.
I SHOULDN'T follow false gods because maybe they're the only ones there are. There's something that's not false.
And I'll stubbornly persist in half-heartedly seeking the thing, without really feeling why, mostly on the basis of a trust that the other time-slices of me are on to something real.
What should I do? What should I try?
Maybe I should maintain resolve and never give up until I find a way.
Maybe I should be "optimistic." Trust that it will work out, and doors will open.
Maybe I need to be coming from a place of Surrender. (That IS what the trope book says.)
Which can be brutal, and is off the table if the person can't own their end of the connection.
More from Eli Tyre
I started by simply stating that I thought that the arguments that I had heard so far don't hold up, and seeing if anyone was interested in going into it in depth with me.

CritRats!

I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.

Anyone want to hash this out with me? https://t.co/Sdm4SSfQZv

— Eli Tyre (@EpistemicHope) December 26, 2020
So far, a few people have engaged pretty extensively with me, for instance by scheduling video calls to talk about some of the stuff, or having long private chats.
(Links to some of those that are public at the bottom of the thread.)
But in addition to that, there has been a much more sprawling conversation happening on Twitter, involving a much larger number of people.

Having talked to a number of people, I then offered a paraphrase of the basic counterargument that I was hearing from people of the CritRat persuasion.
ELI'S PARAPHRASE OF THE CRIT RAT STORY ABOUT AGI AND AI RISK

There are two things that you might call "AI".

The first is non-general AI, which is a program that follows some pre-set algorithm to solve a pre-set problem. This includes modern ML.

— Eli Tyre (@EpistemicHope) January 5, 2021
In general, I am super up for short (1 to 10 hour) adversarial collaborations.

If you think I'm wrong about something, and want to dig into the topic with me to find out what's up / prove me wrong, DM me.

— Eli Tyre (@EpistemicHope) December 23, 2020
For instance, while I heartily agree with lots of what is said in this video, I don't think that the conclusion about how to prevent (the bad kind of) human extinction from AGI follows.
There are a number of reasons to think that AGI will be more dangerous than most people are, despite both people and AGIs being qualitatively the same sort of thing (explanatory knowledge-creating entities).
And I maintain that, because of practical/quantitative (not fundamental/qualitative) differences, the development of AGI/TAI is very likely to destroy the world by default.
(I'm not clear on exactly how much disagreement there is. In the video above, Deutsch says, "Building an AGI with perverse emotions that lead it to immoral actions would be a crime.")
More from For later read
@Daoyu15 @lab_leak @walkaboutrick @ydeigin @Ayjchan @franciscodeasis @TheSeeker268 @angie_rasmussen
28. Before moving on to DARPA, let's look at DTRA:

A must read!

It is astonishing the number of pies they had their dirty little fingers poking into:

Note John Epstein and Kevin Olival from EcoHealth Alliance are key figures in DTRA: https://t.co/O4QwVWrm7m pic.twitter.com/cnNGZ7AApj

— Billy Bostickson 🏴👁&👁 🆓 (@BillyBostickson) July 31, 2020
24. DTRA Network for Collection of Viruses

7. DTRA - Metabiota - One Health - Ecohealth

Bat Research Networks and Viral Surveillance: Gaps and Opportunities in Western Asia pic.twitter.com/SOqSSXF3pa

— Billy Bostickson 🏴👁&👁 🆓 (@BillyBostickson) January 9, 2021
That is the key question

1. DARPA/DTRA use NGOs like Ecohealth or Metabiota to collect new pathogens

2. They are sent to US labs (Mailman, Rocky Mountain, Atlanta CDC, UNC, USAMRIID) for GOF work by Lipkin, Nichols, Rasmussen, Baric, Dension, Munster, etc https://t.co/wqhHK7uZO6

— Billy Bostickson 🏴👁&👁 🆓 (@BillyBostickson) January 5, 2021
1. I wonder why Dr. Angela Rasmussen is so so upset & full of almost palpable venom about a Hypothesis and a "What if" question by @nicholsonbaker8 in the @NYMag https://t.co/a6lxtJLpKR

Did I hear someone say "DARPA"?

Did I hear someone say "DTRA"? https://t.co/i27mpxJDw2 pic.twitter.com/x4X3QPnTMS

— Billy Bostickson 🏴👁&👁 🆓 (@BillyBostickson) January 5, 2021
You May Also Like
For three years I have wanted to write an article on moral panics. I have collected anecdotes and similarities between today's moral panic and those of the past - particularly the Satanic Panic of the 80s.

This is my finished product: https://t.co/otcM1uuUDk

— Ashe Schow (@AsheSchow) September 29, 2018
The 3 big things that made the 1980s/early 1990s surreal for me.
1) Satanic Panic - satanism in the day cares ahhhh!
2) "Repressed memory" syndrome
3) Facilitated Communication [FC]
All 3 led to massive abuse.
"Therapists" -and I use the term to describe these quacks loosely - would hypnotize people & convince they they were 'reliving' past memories of Mom & Dad killing babies in Satanic rituals in the basement while they were growing up.
Other 'therapists' would badger kids until they invented stories about watching alligators eat babies dropped into a lake from a hot air balloon. Kids would deny anything happened for hours until the therapist 'broke through' and 'found' the 'truth'.
FC was a movement that started with the claim that severely handicapped individuals were able to 'type' legible sentences & communicate if a 'helper' guided their hands over a keyboard.