Does artificial anthropocentric intelligence lead to superintelligence?

When I use the term Artificial General Intelligence, my meaning of 'General' comes from the psychometric notion of the g factor, the construct measured by general intelligence tests.
It is an anthropocentric measure. The question that hasn't been explored in depth is whether a "human-complete" synthetic intelligence leads to a superintelligence. The prevailing assumption is that it does.
I am going to argue that this assumption may not be true.
The assumption that an AGI automatically explodes into a superintelligence is driven by a bias: the perception that human intelligence sits at the pinnacle of all intelligence.
Humans, like all other living things with brains, have their cognition forged by their umwelt and their environment. Our cognition is the way it is because it is what is suited for the niche we evolved into.
We have not evolved into high-speed symbolic and mathematical processors because, for most of the 200,000 years of human existence, that skill wasn't very important. Computers are certainly better than us at many tasks of a cognitive nature.
Instead of evolving a capability, humans create tools to compensate for its absence. We invented computers because we ourselves are error-prone and slow at computation.
A synthetic general intelligence is a kind of automation that is able to understand our intentions. It is also autonomous and understands the social context it operates in. It is not something that computes fast; we already have that in computers.
These are cognitive skills that are useful for humans, but they aren't necessarily the same skills that might be needed for solving all kinds of complex problems.
As an illustration, AlphaZero is stronger than its predecessor AlphaGo because it trained from scratch, without human gameplay in its training set. It plays in a way that is not encumbered by the biases of human play.
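To make the self-play point concrete, here is a minimal, self-contained Python toy (my own sketch of the principle, not AlphaZero's system, which adds a deep network and Monte Carlo tree search): a tabular value function for tic-tac-toe is trained purely on games the learner plays against itself, with no human data anywhere in the loop.

```python
import random

# Sketch of the tabula-rasa idea on tic-tac-toe: every training
# example is generated by the current policy playing itself.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def choose(board, player, values, epsilon=0.1):
    """Epsilon-greedy move selection using the learned state values."""
    moves = legal_moves(board)
    if random.random() < epsilon:
        return random.choice(moves)
    def value_after(m):
        nxt = board[:m] + player + board[m + 1:]
        return values.get((nxt, player), 0.0)
    return max(moves, key=value_after)

def self_play_train(num_games=5000, lr=0.1):
    values = {}  # (state, player-who-just-moved) -> estimated value
    for _ in range(num_games):
        board, player, visited = ' ' * 9, 'X', []
        while winner(board) is None and legal_moves(board):
            m = choose(board, player, values)
            board = board[:m] + player + board[m + 1:]
            visited.append((board, player))
            player = 'O' if player == 'X' else 'X'
        w = winner(board)
        for state, mover in visited:  # label every visited state with the outcome
            target = 0.0 if w is None else (1.0 if mover == w else -1.0)
            old = values.get((state, mover), 0.0)
            values[(state, mover)] = old + lr * (target - old)
    return values

values = self_play_train()
print(f"learned values for {len(values)} positions from self-play alone")
```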
Human cognition is loaded with excess baggage that evolved over eons. A human-complete synthetic intelligence would share these biases: after all, to understand a human, one has to operate within the framework of being a human, biases and all.
Like AlphaGo, such a system may find that these human-derived biases hinder performance. In Star Wars, the droid C-3PO is billed as a protocol droid for 'human cyborg relations'. That is what a synthetic AGI will likely be:
the bridge between humans and yet other kinds of specialized intelligence.
Therefore, for a superintelligence, an AGI is more like a module that's required for relating to humans. It's an appendage and not core functionality.
But... this can't be true! Humans are the pinnacle of intelligent life; our cognition cannot be something meant only for the periphery. I'd say this objection is itself a consequence of our all-too-human anthropocentric bias.

More from Carlos E. Perez

It's a very different perspective when we realize that our bodies host an entire ecology of bacteria and viruses that was also passed down by our ancestors. Mammals rear their young and, as a consequence, transfer their microbiome and virome to their offspring.


What does it mean to treat an individual as an ecology? We are all ecologies existing within other ecologies. Nature is constantly performing a balancing act across multiple scales of existence.

There are bacteria and viruses that are as unique to your ancestry as your own DNA. They have lived in symbiosis with your ancestors and will do so with your descendants.

It is an empirical fact that the microbiome in our gut can influence not only our moods but also our metabolism, and thus our weight and health.

It is also intriguing that brains are thought to have evolved out of guts, and that our gut contains hundreds of millions of neurons. Humans can literally think with their gut.
It's nice to discover Judea Pearl asking a fundamental question: what is an 'inductive bias'?


A crucial step on the road towards AGI is a richer vocabulary for reasoning about inductive biases.

The discussion explores the apparent impedance mismatch between inductive biases and causal reasoning. But isn't the logical thinking required for good causal reasoning itself an inductive bias?

An inductive bias is what C.S. Peirce would call a habit: a habit of reasoning. Logical thinking is like a Platonic solid among the many kinds of heuristics that are discovered.
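To make "habit of reasoning" concrete in machine-learning terms, here is a small Python toy (my own example, not Peirce's or Pearl's formulation): two learners fit the same five points but carry different inductive biases, and the habit, not the data, decides how each extrapolates.

```python
import numpy as np

# Both models fit the same five noisy samples of y = 2x + 1;
# they differ only in their hypothesis class, i.e. their bias.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 5)
y = 2.0 * x + 1.0 + rng.normal(scale=0.05, size=x.shape)

line  = np.polyfit(x, y, deg=1)  # strong bias: assume linearity
curve = np.polyfit(x, y, deg=4)  # weak bias: interpolate every point, noise included

x_new = 2.0  # well outside the training range
print("linear habit predicts:  ", np.polyval(line, x_new))
print("flexible habit predicts:", np.polyval(curve, x_new))
# The linear habit extrapolates close to the true value 2*2 + 1 = 5;
# the flexible model's extrapolation is at the mercy of the noise it
# memorized. The data is identical -- the habit makes the difference.
```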

The black-and-white logic found in digital computers is critical to the emergence of today's information economy. It is, of course, not the same logic that drives the general intelligences living in that same economy.

More from Tech

(1) Some haters of #Cardano are not only bag holders but also imperative developers.

If you are an imperative programmer, you know that Plutus is not the most intuitive -> (https://t.co/m3fzq7rJYb)

It is, however, intuitive for people with a background in financial IT, e.g. at banks.

(2) IELE + the K framework will be a real game changer, because there will be DSLs (Domain-Specific Languages) in any programming language supported by the K framework. The only issue is that we need to wait for all of this.
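For intuition, the K framework defines a language's semantics as rewrite rules over terms, and an interpreter falls out of repeatedly applying those rules. Below is my own loose Python analogy of that rewriting idea for a toy expression language; it is not K's actual notation or toolchain.

```python
# Toy rewriting-based evaluator: terms look like ("add", t1, t2),
# ("mul", t1, t2), or plain integers (values).

def step(term):
    """Apply one rewrite rule, innermost-first; return None when no rule applies."""
    if isinstance(term, int):            # integers are values: nothing to rewrite
        return None
    op, a, b = term
    for i, sub in ((1, a), (2, b)):      # congruence: rewrite inside subterms first
        reduced = step(sub)
        if reduced is not None:
            parts = list(term)
            parts[i] = reduced
            return tuple(parts)
    # The semantic rules proper: add/mul on fully evaluated operands.
    if isinstance(a, int) and isinstance(b, int):
        return a + b if op == "add" else a * b
    return None

def evaluate(term):
    while (nxt := step(term)) is not None:
        term = nxt
    return term

print(evaluate(("add", ("mul", 2, 3), 4)))  # -> 10
```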

(3) The good news is that the moment we get IELE integrated into Cardano, we get some popular languages. To my knowledge we should have, from day one, Solidity and Rust, and maybe others as well.

List of langs:
https://t.co/0uj1eBfrYj - some commits are from many years ago.

@rv_inc ?

#Cardano

(a) Last but not least, marketing to people with Haskell and functional programming experience, and to decision makers in banks, is tricky: how do you market to them without telling them you want to replace them? In the end, one strategy is to pitch new markets, e.g. the developing world.

(b) As banks realize what is happening, they may be more inclined to join - not because they would like to, but because they will have to - and in such cases some development talent may be re-routed to Plutus / Cardano / Algorand / Tezos.
