Thursday, January 22, 2026

Natural and artificial intelligence

As we saw in the previous post, the book Free Agents by Kevin Mitchell deals with the origins of human consciousness and free will. In a brief epilogue, the book addresses the topic of strong artificial intelligence (the real kind, which doesn't yet exist) and offers some hypotheses about whether it could ever become feasible.

It emphasizes that one of the most active branches of AI research, especially in recent years, is the field of artificial neural networks, which has led to advances such as Large Language Models (LLMs). It compares the neural networks in our programs with those in our brains and in the brains of many animals more or less similar to us. It notes that we are witnessing impressive advances in fields such as image recognition, text prediction, speech recognition, and language translation, based on deep learning, an approach remotely inspired by the architecture of the cerebral cortex.

These systems respond spectacularly to certain types of requests, but fail (also spectacularly) with other types of questions. Since ChatGPT appeared a little over three years ago, many of us have witnessed some of these failures, several of which I've mentioned in this blog. The questions that make these systems break down are those that present novel scenarios, not represented in the training data. In contrast, humans find these kinds of questions easy to answer.

Why is this? According to Mitchell, it is because our programs have been designed in a completely different way from living beings, and are subject to limitations inherent in that design. To approach artificial general intelligence, we shouldn't focus on current applications, but rather on the intelligence of animals, which are capable of facing novel and uncertain environments and of applying their past experience to predict the future, a future that includes the outcomes of their own possible actions. This is very different from what LLMs do, which is simply to predict the next word.
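To make that concrete, here is a toy sketch of my own (not from the book, and nothing like a real LLM inside): a bigram counter that always guesses the word most often seen next during training. The objective is essentially the one LLMs are trained on, and the failure on input absent from the training data mirrors the failures described above.

from collections import Counter, defaultdict

# A tiny "training corpus" and a table of which word follows which
corpus = "the cat sat on the mat and the dog sat on the rug".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    if word not in followers:
        return None  # a novel scenario: nothing in the training data to go on
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))    # seen in training: a plausible guess
print(predict_next("robot"))  # never seen in training: None

The toy predictor answers fluently about words it has seen and has literally nothing to say about one it hasn't.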

Another characteristic of natural intelligence is that it requires few resources. A living being is small; even a blue whale is small compared to our gigantic data centers. A living being uses very little energy, far less than the megawatts consumed by our data centers. An animal cannot train itself with millions of data points, nor does it have the time to compute its behavior exhaustively. If it did, it would risk being caught by a predator or losing its prey.

One of the most useful characteristics of natural intelligence is the ability to relate cause and effect. What does it take to do this? Could machines do it?

To understand causality, a living being notices that an event X is always followed by an event Y. But this can happen for two reasons: either X is the cause of Y, or the two are merely statistically correlated, for instance as effects of a common cause (see my blog post titled Correlation or causality). How can we distinguish between them? By acting on the world: modifying the conditions, blocking event X, and checking whether event Y disappears. If it does, causality is very likely; otherwise, what existed was mere statistical correlation. Mitchell expresses it this way: "The hypothesis has to be tested. Causal knowledge thus comes from causal intervention in the world."
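A minimal simulation of my own (the two models and their probabilities are invented for illustration) shows why passive observation cannot separate the two cases, while intervention can:

import random

def common_cause(n=10_000, do_x=None):
    """X and Y share a hidden cause Z; X does NOT cause Y.
    Passing do_x simulates an intervention that forces X's value."""
    samples = []
    for _ in range(n):
        z = random.random() < 0.3        # hidden common cause
        x = z if do_x is None else do_x  # intervention cuts the Z -> X link
        y = z                            # Y always follows Z, never X
        samples.append((x, y))
    return samples

def x_causes_y(n=10_000, do_x=None):
    """Here X really is the cause of Y."""
    samples = []
    for _ in range(n):
        x = (random.random() < 0.3) if do_x is None else do_x
        y = x                            # Y follows X directly
        samples.append((x, y))
    return samples

def p_y_when_x(samples):
    """Frequency of Y among the samples where X occurred."""
    with_x = [y for x, y in samples if x]
    return sum(with_x) / len(with_x)

for name, model in [("common cause", common_cause), ("X causes Y", x_causes_y)]:
    observed = p_y_when_x(model())             # passively watching the world
    intervened = p_y_when_x(model(do_x=True))  # acting: force X to happen
    print(f"{name}: P(Y | X seen) = {observed:.2f},"
          f" P(Y | X forced) = {intervened:.2f}")

When we merely watch, X and Y go together in both models; forcing X leaves Y unchanged in the common-cause case and drags it along in the truly causal one.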

It is clear that current AI programs are not capable of causally interacting with the world. Therefore, they are not prepared to behave even like intelligent animals, much less like human beings. If we want to move towards strong artificial intelligence, a radical paradigm shift will be necessary.

[A]rtificial general intelligence will not arise in systems that only passively receive data. They need to be able to act back on the world and see how those data change in response. Such systems may thus have to be embodied in some way: either in physical robotics or in software entities that can act in simulated environments.
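The second of these options, software entities acting in simulated environments, amounts to closing the loop between action and perception. As a sketch of that loop (the toy environment is my own invention, not Mitchell's), consider:

import random

class GridWorld:
    """A toy simulated environment: an agent on a line, with food at one cell."""
    def __init__(self, size=10):
        self.size = size
        self.agent = 0
        self.food = random.randrange(1, size)

    def step(self, action):
        """Apply an action (-1 or +1); return the new position and a reward."""
        self.agent = max(0, min(self.size - 1, self.agent + action))
        reward = 1 if self.agent == self.food else 0
        return self.agent, reward

env = GridWorld()
position = env.agent
for _ in range(20):
    action = random.choice([-1, 1])          # act on the world...
    new_position, reward = env.step(action)  # ...and see how the data change
    print(f"moved {action:+d}: {position} -> {new_position}, reward={reward}")
    position = new_position

Even this random agent gets something no passively received dataset provides: observations that change in response to its own actions, the raw material for causal learning.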

The two systems he proposes are precisely those I have used in my science fiction novels related to strong artificial intelligence. Intelligent robots appear in two of the novels in my Solar System series (Operation Quatuor and Operation Viginti), and in Jacob's Ladder there are intelligent software entities that operate in simulated environments. Not bad for an author who doubts that this goal can be achieved, at least in the short term.

This is the conclusion of the epilogue:

In summary, evolution has given us a roadmap of how to get to intelligence: by building systems that can learn but in a way that is grounded in real experience and causal autonomy from the get-go. You can’t build a bunch of algorithms and expect an entity to pop into existence. Instead, you may have to build something with the architecture of an entity and let the algorithms emerge. To get intelligence you may have to build an intelligence. The next question will be whether we should.

The epilogue of Mitchell’s book is only five pages long, but it is worth reading.


Manuel Alfonseca
