I think it's because the reward structure of being a bio-ethicist rewards saying level-headed-sounding, cautious-sounding, conventional wisdom?
Though I'm not sure why that is.
But this is starting from a place of "The world could be vastly better. How do we get there?"
And there are not enough people who are up for that to reach a consensus?
To do it right, bioethics would have to, at least in many areas, _assume_ utilitarianism.
And this works. It isn't necessarily philosophically grounded, but it works.
But having correct beliefs about what beliefs are turns out not to be the same thing as having solid, irrefutable arguments for your belief about beliefs.
(To be clear, I DON'T think that I definitely understood the points they made, and I may be responding to a straw-man.)
https://t.co/TfCQgRqUtv
Culturally, I am clearly part of the "Yudkowsky cluster". And as near as I can tell, Bayes is actually the true foundation of epistemology.
— Eli Tyre (@EpistemicHope) December 13, 2020
But my personal PRACTICE is much closer to the sort of thing the critical rationalists talk about (assuming I'm understanding them).
What crit rats can't get past re bayesianism is what then justifies the specific probabilities? Isn't it an infinite regress?
— Cam Peters (@campeters4) December 14, 2020
I have an 80% confidence in X. Does one then have a 100% confidence in the 80% estimate?
For many things, we can't prove that they work, but they do work, and it is more interesting to move on to more advanced problems by assuming some things we can't prove.
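One way to make the "it works" point concrete (my own illustration, not something from the thread): in ordinary Bayesian modeling, uncertainty about the 80% figure is itself represented as a distribution over the underlying probability, and for any actual bet the levels collapse into a single number by taking an expectation, so no further level is required. A minimal sketch, assuming purely for illustration a Beta distribution over the chance p:

p \sim \mathrm{Beta}(\alpha, \beta), \qquad \Pr(X) = \mathbb{E}[p] = \frac{\alpha}{\alpha + \beta} = 0.8 \quad \text{for } (\alpha, \beta) = (4, 1)

This doesn't dissolve the philosophical regress the question points at; it's just the sense in which the machinery runs without needing 100% confidence in the 80%.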
Getting back to the original question, I think my answer was incomplete, and part of what is happening here is some self-selection regarding who becomes a bio-ethicist that I don't understand in detail.
Basically, I imagine that they tend towards conventional-mindedness.
More from Eli Tyre
I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.
Anyone want to hash this out with me?
In general, I am super up for short (1 to 10 hour) adversarial collaborations.
— Eli Tyre (@EpistemicHope) December 23, 2020
If you think I'm wrong about something, and want to dig into the topic with me to find out what's up / prove me wrong, DM me.
For instance, while I heartily agree with lots of what is said in this video, I don't think that the conclusion about how to prevent (the bad kind of) human extinction, with regard to AGI, follows.
There are a number of reasons to think that AGI will be more dangerous than most people are, despite both people and AGIs being qualitatively the same sort of thing (explanatory knowledge-creating entities).
And I maintain that, because of practical/quantitative (not fundamental/qualitative) differences, the development of AGI / TAI is very likely to destroy the world, by default.
(I'm not clear on exactly how much disagreement there is. In the video above, Deutsch says, "Building an AGI with perverse emotions that lead it to immoral actions would be a crime.")
I started by simply stating that I thought the arguments I had heard so far don't hold up, and seeing if anyone was interested in going into it in depth with me.
CritRats!
— Eli Tyre (@EpistemicHope) December 26, 2020
I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.
Anyone want to hash this out with me? https://t.co/Sdm4SSfQZv
So far, a few people have engaged pretty extensively with me, for instance scheduling video calls to talk about some of the stuff, or having long private chats.
(Links to some of those that are public at the bottom of the thread.)
But in addition to that, there has been a much more sprawling conversation happening on Twitter, involving a much larger number of people.
Having talked to a number of people, I then offered a paraphrase of the basic counter that I was hearing from people of the Crit Rat persuasion.
ELI'S PARAPHRASE OF THE CRIT RAT STORY ABOUT AGI AND AI RISK
— Eli Tyre (@EpistemicHope) January 5, 2021
There are two things that you might call "AI".
The first is non-general AI, which is a program that follows some pre-set algorithm to solve a pre-set problem. This includes modern ML.