There's a particular kind of romantic partnership, with a certain sort of person, that I've wanted since I became a self-aware, directed agent at around age 15.
And this sometimes leaves me wondering if my standards are unreasonable.
These days, I'm rarely directly or viscerally in contact with it.
[I might elaborate on the wrongness sometime.]
All of this is still reasonably fluid.
All the other "options" pale markedly in comparison. It becomes obvious that they're not what I want.
It seems so...obvious, or in some sense, ordinary. It feels like it should be...well, not easy, but not crazily hard.
And then I feel confused that I've been going around in the world for more than a decade and I've had so few hits. It's not clear if I've ever gotten close.
No, it's the world that's crazy; the thing that I want is obviously Good and obviously worth guarding.
The thing that I'm longing for is real. I'm not just hallucinating.
There's something that my love is FOR.
I SHOULDN'T follow false gods just because maybe they're the only ones there are. There's something that's not false.
And I'll stubbornly persist in half-heartedly seeking the thing, without really feeling why, mostly on the basis of a trust that the other time-slices of me are on to something real.
What should I do? What should I try?
Maybe I should maintain resolve and never give up until I find a way.
Maybe I should be "optimistic." Trust that it will work out, and doors will open.
Maybe I need to be coming from a place of Surrender. (That IS what the trope book says.)
Which can be brutal, and is off the table if the person can't own their end of the connection.
More from Eli Tyre
My catch-all thread for this discussion of AI risk in relation to Critical Rationalism, to summarize what's happened so far and how to go forward from here.
I started by simply stating that I thought that the arguments that I had heard so far don't hold up, and seeing if anyone was interested in going into it in depth with me.
CritRats!

I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.

Anyone want to hash this out with me? https://t.co/Sdm4SSfQZv

— Eli Tyre (@EpistemicHope) December 26, 2020
So far, a few people have engaged pretty extensively with me, for instance, scheduling video calls to talk about some of the stuff, or having long private chats.
(Links to some of those that are public at the bottom of the thread.)
But in addition to that, there has been a much more sprawling conversation happening on Twitter, involving a much larger number of people.
Having talked to a number of people, I then offered a paraphrase of the basic counter that I was hearing from people of the Crit Rat persuasion.
ELI'S PARAPHRASE OF THE CRIT RAT STORY ABOUT AGI AND AI RISK

There are two things that you might call "AI".

The first is non-general AI, which is a program that follows some pre-set algorithm to solve a pre-set problem. This includes modern ML.

— Eli Tyre (@EpistemicHope) January 5, 2021
For instance, while I heartily agree with lots of what is said in this video, I don't think that its conclusion about how to prevent (the bad kind of) human extinction from AGI follows.
There are a number of reasons to think that AGI will be more dangerous than most people are, despite both people and AGIs being qualitatively the same sort of thing (explanatory knowledge-creating entities).
And I maintain that, because of practical/quantitative (not fundamental/qualitative) differences, the development of AGI / TAI is very likely to destroy the world by default.
(I'm not clear on exactly how much disagreement there is. In the video above, Deutsch says, "Building an AGI with perverse emotions that lead it to immoral actions would be a crime.")
In general, I am super up for short (1 to 10 hour) adversarial collaborations.

If you think I'm wrong about something, and want to dig into the topic with me to find out what's up / prove me wrong, DM me.

— Eli Tyre (@EpistemicHope) December 23, 2020