I think it's because the reward structure of being a bioethicist rewards saying level-headed-sounding, cautious-sounding conventional wisdom?
Though I'm not sure why that is.
But this is starting from a place of "The world could be vastly better. How do we get there?"
And there are not enough people who are up for that to reach a consensus?
To do it right, bioethics would have to, at least in many areas, _assume_ utilitarianism.
And this works. It isn't necessarily philosophically grounded, but it works.
But having correct beliefs about what beliefs are turns out not to be the same thing as having solid, irrefutable arguments for your beliefs about beliefs.
(To be clear, I DON'T think that I definitely understood the points they made, and I may be responding to a straw man.)
https://t.co/TfCQgRqUtv
Culturally, I am clearly part of the "Yudkowsky cluster". And as near as I can tell, Bayes is actually the true foundation of epistemology.
— Eli Tyre (@EpistemicHope) December 13, 2020
But my personal PRACTICE is much closer to the sort of thing the critical rationalists talk about (assuming I'm understanding them).
What crit rats can't get past re Bayesianism is what then justifies the specific probabilities? Isn't it an infinite regress?
— Cam Peters (@campeters4) December 14, 2020
If I have 80% confidence in X, do I then have 100% confidence in the 80% estimate?
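The standard Bayesian reply here, as I understand it, is that you don't need 100% confidence in the 80% figure: you can model your uncertainty about the estimate itself as a distribution over probabilities, and the regress bottoms out when you marginalize. A minimal sketch (my own illustration, not from the thread; the Beta(8, 2) prior is an arbitrary choice with mean 0.8):

```python
# A sketch of higher-order credence collapsing into a single number.
# Assumption: uncertainty about "my credence in X is 80%" is modeled as a
# Beta(8, 2) distribution over p. Nothing here is from the original thread;
# it just illustrates marginalizing out p.
import numpy as np

rng = np.random.default_rng(0)

# Sample many plausible values of the underlying probability p.
p_samples = rng.beta(8, 2, size=100_000)

# The marginal credence in X is the expectation of p under that uncertainty:
# P(X) = E[p]. No further level of "confidence in the confidence" is needed.
print(p_samples.mean())  # ~0.8
```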
For many things, we can't prove that they work, but they do work, and it is more interesting to move on to more advanced problems by assuming some things we can't prove.
Getting back to the original question, I think my answer was incomplete, and part of what is happening here is some self-selection regarding who becomes a bioethicist that I don't understand in detail.
Basically, I imagine that they tend towards conventional-mindedness.
More from Eli Tyre
I started by simply stating that I thought that the arguments that I had heard so far don't hold up, and seeing if anyone was interested in going into it in depth with me.
CritRats!
— Eli Tyre (@EpistemicHope) December 26, 2020
I think AI risk is a real existential concern, and I claim that the CritRat counterarguments that I've heard so far (keywords: universality, person, moral knowledge, education, etc.) don't hold up.
Anyone want to hash this out with me? https://t.co/Sdm4SSfQZv
So far, a few people have engaged pretty extensively with me, for instance, scheduling video calls to talk about some of the stuff, or long private chats.
(Links to some of those that are public at the bottom of the thread.)
But in addition to that, there has been a much more sprawling conversation happening on Twitter, involving a much larger number of people.
Having talked to a number of people, I then offered a paraphrase of the basic counter that I was hearing from people of the Crit Rat persuasion.
ELI'S PARAPHRASE OF THE CRIT RAT STORY ABOUT AGI AND AI RISK
— Eli Tyre (@EpistemicHope) January 5, 2021
There are two things that you might call "AI".
The first is non-general AI, which is a program that follows some pre-set algorithm to solve a pre-set problem. This includes modern ML. The second is AGI proper: an explanatory knowledge-creating entity, qualitatively the same sort of thing as a human person.
In general, I am super up for short (1 to 10 hour) adversarial collaborations.
— Eli Tyre (@EpistemicHope) December 23, 2020
If you think I'm wrong about something, and want to dig into the topic with me to find out what's up / prove me wrong, DM me.
For instance, while I heartily agree with much of what is said in this video, I don't think that its conclusion about how to prevent (the bad kind of) human extinction from AGI follows.
There are a number of reasons to think that AGI will be more dangerous than most people are, despite both people and AGIs being qualitatively the same sort of thing (explanatory knowledge-creating entities).
And I maintain that, because of practical/quantitative (not fundamental/qualitative) differences, the development of AGI / TAI is very likely to destroy the world, by default.
(I'm not clear on exactly how much disagreement there is. In the video above, Deutsch says, "Building an AGI with perverse emotions that lead it to immoral actions would be a crime.")
More from Society
(A thread for whoever feels like reading)
Neighborhood gents, what's something you've learned about feminism (or gained a better understanding of) that you think other men should know?
— feminist next door (@emrazz) February 19, 2021
Note - the quoted account is a friendly/good-faith replier. https://t.co/048kuxxX6q
I have observed feminists on Twitter advocating for rape victims to be heard, rapists to be held accountable, for people to address the misogyny that is deeply rooted in our culture, and for women to be treated with respect.
To me, very easy things to get behind.
And the amount of pushback they receive for those very basic requests is appalling. I see men trip over themselves to defend rape and rapists and misogyny every chance they get. Some accounts are completely dedicated to harassing women on this site. It’s unhealthy.
Furthermore, I have observed how dedicated these misogynists are by how they treat other men that do not immediately side with them. There is an entire lexicon they have created for men who do not openly treat women with disrespect.
Ex: simp, cuck, white knight, beta
All examples of terms they use to demean a man who respects women.
To paraphrase what a wise man on this app said:
Some men hate women so much, they hate men who don’t hate women
https://t.co/eXLNam2gv4

Good. Fuck Rush Limbaugh, and let the celebration about his death be a reminder to the rest of the racists and bigots that we'll happily dance on your graves too.
— Chris Kluwe, Irredeemable Pudgy Nobody (@ChrisWarcraft) February 17, 2021
You May Also Like
As a dean of a major academic institution, I could not have said this. But I will now. Requiring such statements in applications for appointments and promotions is an affront to academic freedom, and diminishes the true value of diversity, equity, and inclusion by trivializing it. https://t.co/NfcI5VLODi
— Jeffrey Flier (@jflier) November 10, 2018
We know that elite institutions like the one Flier was in (partial) charge of rely on irrelevant status markers like private school education, whiteness, legacy, and ability to charm an old white guy at an interview.
Harvard's discriminatory policies are becoming increasingly well known across the political spectrum (see, e.g., the recent lawsuit alleging discrimination against East Asian applicants).
It's refreshing to hear a senior administrator admit to personally opposing policies that attempt to remedy these basic flaws. These are flaws that harm his institution's ability to do cutting-edge research and to serve the public.
Harvard is being eclipsed by institutions that have different ideas about how to run a 21st-century university: Stanford, for one; the UC system; the "public Ivies".