I think it's because the reward structure of being a bioethicist rewards saying level-headed-sounding, cautious-sounding, conventional wisdom?

Though I'm not sure why that is.

@JeffLadish I guess if you want to radically improve the world, you mostly don't go into a field that is about opining on other people's work, you go into something like Engineering and do the work?
@JeffLadish I note that Nick Bostrom is what a bioethicist should be: he thinks hard about tradeoffs and risks, and crystallizes concepts like the Unilateralist's Curse and black ball technologies.

But this is starting from a place of "The world could be vastly better. How do we get there?"
@JeffLadish I don't know why bioethics doesn't look more like a field of (less smart) Bostroms.
@JeffLadish I guess because everyone's disgust reactions are triggered by the actually good proposals, and in order to make those proposals the consensus of the field a lot of people have to bite the bullet and stick their neck out saying "I know this sounds crazy / absurd / vaguely evil...
@JeffLadish ...to our intuitions / etc, but it is actually the right thing to do."

And there are not enough people who are up for that, to reach a consensus?
@JeffLadish But economics, as a field, _does_ have this property. Economists are famously fine with policies that are abhorrent to the untrained intuition, but that they are confident are actually better on net. That's a good chunk of what economics is.
@JeffLadish But maybe economics has a solid enough theoretical underpinning that it can manage to be that kind of field, while the state of consensus in philosophy, and in ethics in particular, is confused?
@JeffLadish Like, if you have people still fighting over deontology vs. utilitarianism, it is pretty hard to build a field-wide consensus that we should obviously do covid variolation, because it will save lives on net.
@JeffLadish In which case the problem traces back to "Philosophy has bad feedback loops: you don't get to clearly know that you got the right answer. Which means philosophers are incentivized to make interesting and counterintuitive arguments, instead of to steer towards the actual truth."
@JeffLadish Ok. I think this is the answer to the original question: by the standards of science and engineering, you can settle a question to a sufficient degree of precision, and then move on to higher level questions using your answer to the first question as a foundation.
@JeffLadish But by the standards of philosophy, you can practically never settle a question. There is always space for more counterargument. If you settled the question, you wouldn't be able to argue about it any more!
@JeffLadish Which means that you don't build a foundation of answers to basic questions that you can assume, in trying to answer more complicated questions.

To do it right, bioethics would have to, at least in many areas, _assume_ utilitarianism.
@JeffLadish But if you try to assume utilitarianism, a bunch of philosophers will jump on you with many counterarguments and paradoxes and bullets to bite, all of which erode the ability of a bioethics field, as a whole, to stick to its guns about ideas that are counter-intuitive.
@JeffLadish One thing to note here is that the philosophical standards also challenge the foundations of science and engineering. Philosophy as a whole holds that we don't have rock solid reasons to trust our "obvious" answers to questions of epistemology and metaphysics.
@JeffLadish (Like whether there is an external world or whether knowledge is possible.)
@JeffLadish But the scientists and engineers just ignore the philosophers and assume those things anyway, at least for the purposes of doing their science.

And this works. It isn't necessarily philosophically grounded, but it works.
@JeffLadish And the difference is, I think, that the goal of science is to actually land on our all-things-considered most-correct answer (in part, so that engineering can build cool things like spaceships), while the goal of philosophy is to have air-tight ARGUMENTATION for our answer.
@JeffLadish If you're mostly trying to build spaceships, you don't really care about Gettier problems, unless they're fucking up your ability to build working spaceships.
@JeffLadish And if you just want to know how the sun shines, you don't really care that epistemology isn't grounded, because while you might be confused about the finer points, the finer points of epistemology are not going to get in the way of your getting correct beliefs about the sun.
@JeffLadish It turns out that you can generally build spaceships, and (more controversially) end up with correct beliefs about the sun, without having a solid irrefutable proof about the nature of "beliefs."
@JeffLadish Now it does turn out that having correct beliefs about what beliefs are is pretty useful for getting more correct beliefs.

But having correct beliefs about what beliefs are turns out to not be the same thing as having solid irrefutable arguments for your belief about beliefs.
@JeffLadish This, by the way, is my main answer to the objection that I am projecting onto some folks that I talked to about critical rationality, recently.

(To be clear, I DON'T think that I definitely understood the points they made, and I may be responding to a straw-man.)
@JeffLadish But, I said recently that I thought that Bayes is the foundation of epistemology.

https://t.co/TfCQgRqUtv
@JeffLadish Different people gave different objections to that, but one objection was (if I understand it correctly) "But, where do your probabilities come from? Either you're infinitely certain of them, or you have an infinite stack of probabilities about the probability below them."
@JeffLadish https://t.co/8joOxuxaPp
@JeffLadish I think I could give some more detailed arguments about how this works, but I also want to dispute the frame of the point being argued (if I understand it) a bit.
@JeffLadish It is totally possible to have a functioning brain / epistemology that actually works for producing knowledge and steering through the world, but which does not justify itself on its own terms.
@JeffLadish Like, the question of "Does a Bayesian learning mechanism work, in practice?" is a separate question from "How do we justify that it works?"
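(A minimal sketch of my own, not from the thread, of a Bayesian learner working in practice: it estimates a coin's bias starting from a uniform prior that is itself "unjustified," yet the mechanism still converges on the truth. No infinite tower of meta-probabilities is required for it to function.)

```python
import random

def bayes_update(prior, heads):
    """One Bayes-rule update of a discrete posterior over coin-bias values."""
    posterior = {}
    for bias, p in prior.items():
        # Likelihood of this flip outcome under each hypothesized bias.
        likelihood = bias if heads else (1 - bias)
        posterior[bias] = p * likelihood
    total = sum(posterior.values())
    return {b: p / total for b, p in posterior.items()}

random.seed(0)
true_bias = 0.7
grid = [i / 100 for i in range(1, 100)]          # hypotheses: 0.01 .. 0.99
belief = {b: 1 / len(grid) for b in grid}        # uniform, "unjustified" prior

for _ in range(1000):
    belief = bayes_update(belief, random.random() < true_bias)

# MAP estimate after 1000 flips lands close to the true bias.
estimate = max(belief, key=belief.get)
print(estimate)
```

The prior assignments aren't grounded in anything deeper, and the mechanism never justifies itself on its own terms, but it still steers toward correct beliefs.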
@JeffLadish In general, I think many questions of the "but how do we justify it?" stripe, just don't matter very much.

For many things, we can't prove that it works, but it does work, and it is more interesting to move on to more advanced problems by assuming some things we can't prove.
@JeffLadish That is not to say that ALL questions of justification are irrelevant. Most of the time, even, it is very practically important to ask "how do we / can we know this is true?"
@JeffLadish But my goal in asking that is to figure out what's true, to the best of my ability, and to my own satisfaction, not to have an airtight argument that something is true.
@JeffLadish ...
Getting back to the original question, I think my answer was incomplete, and part of what is happening here is some self-selection regarding who becomes a bio-ethicist that I don't understand in detail.

Basically, I imagine that they tend towards conventional-mindedness.