Does artificial anthropocentric intelligence lead to superintelligence?

When I use the term Artificial General Intelligence, my meaning of 'General' comes from the psychology definition of the g factor, the quantity that general intelligence tests are designed to measure.
It is an anthropocentric measure. The question that hasn't been explored in depth is whether a "human-complete" synthetic intelligence leads to a superintelligence. The prevailing assumption is that it does.
I am going to argue that this assumption may not be true.
The assumption that an AGI will automatically explode into a superintelligence is driven by the bias that human intelligence is the pinnacle of all intelligence.
Humans, like all other living things with brains, have their cognition forged by their umwelt and their environment. Our cognition is the way it is because it is what is suited for the niche we evolved into.
We have not evolved into high-speed symbolic and mathematical processors because, for most of the 200,000 years of human existence, this skill wasn't that important. Computers are already better than us at many tasks of a cognitive nature.
Instead of evolving to achieve a capability, humans have the ability to create tools that compensate for a lack of ability. We invented computers because we ourselves are error-prone and slow at computation.
A synthetic general intelligence is a kind of automation that is able to understand our intentions. It is also something that is autonomous and understands the social context it is in. It is not something that computes fast; we already have that in computers.
These are cognitive skills that are useful for humans, but they aren't necessarily the same skills that might be needed for solving all kinds of complex problems.
As an illustration, AlphaZero is better than its predecessor AlphaGo because it trained from scratch without human gameplay as its training set. It plays in a way that is not encumbered with the bias of human play.
Human cognition is loaded with a lot of excess baggage that evolved over eons. A human-complete synthetic intelligence would also share this baggage. After all, to understand a human, one has to operate within the framework of being human, biases and all.
As with AlphaGo, these biases may hinder performance. In Star Wars, the droid C-3PO is billed as a protocol droid for 'human-cyborg relations'. That is what a synthetic AGI will likely be: a bridge between humans and yet another kind of specialized intelligence.
Therefore, for a superintelligence, an AGI is more like a module that's required for relating to humans. It's an appendage and not core functionality.
But... this can't be true! Humans are the pinnacle of intelligent life; our cognition cannot be something meant only for the periphery. I would argue that this objection is itself a consequence of our all-too-human anthropocentric bias.