Isaac Asimov was a prolific science fiction and popular science writer who, in the 1940s, published a series of stories about robots, later compiled in the collection I, Robot. In these stories he coined a word that has become part of the technological vocabulary as the name of a discipline: Robotics. He also formulated the three famous laws of Robotics, which in his opinion should be built into every robot to make our interactions safe with these machines, which, when Asimov formulated the laws, were still mere forecasts of the future.
The three laws of Robotics are the following:
- First Law: A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey any order given by a human being, except those that conflict with the first law.
- Third Law: A robot must protect its own existence, as long as such protection does not conflict with the first two laws.
These laws seem quite reasonable, but things are not as simple as they appear. In fact, almost all of Asimov's robot stories present situations where the laws conflict with one another, or even where a single law conflicts with itself, giving rise to difficult situations that the characters must resolve. However, all of Asimov's stories assume from the start that the laws are in force in every robot that appears in them.
In stories by other authors, however, this is not always the case. For example, in the film 2001: A Space Odyssey, written by Arthur C. Clarke, the HAL9000 computer tries to exterminate the human crew of a spaceship travelling to Jupiter. When Asimov attended the premiere of the film, he was outraged at what was happening on screen, until the person next to him pointed out that other authors were under no obligation to obey his laws.
All this leads us to the following question: is it possible to implement these laws in practice?
I think we can all agree that the first law is the most important one; the other two refer back to it. We will therefore begin by asking whether Asimov's first law of Robotics can be implemented in a robot.
The problems that can be posed to a computer fall into several groups:
- Simple problems, which any computer can solve with ease.
- NP-hard and NP-complete problems, which an ordinary computer can solve when the input is small, but which become intractable as the input grows: the computation time of the known algorithms increases (almost) exponentially with the amount of data to be processed (a small brute-force example is sketched after this list). These are the kind of problems where quantum computing could possibly make a revolutionary breakthrough, by solving currently intractable problems in reasonable times.
- Intrinsically difficult problems, which in principle can be solved, but for which any computer, ordinary or quantum, would need a time greater than the age of the universe. One of these is the perfect chess player, able to find, in any position, the best possible sequence of moves.
- Non-computable problems, which cannot be solved by any computer, ordinary or quantum, and whose impossibility has been proved by mathematical reasoning.
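Here is that brute-force sketch in Python (my own illustration, not part of the original post): a naive solver for subset sum, a classic NP-complete problem. It examines every subset of the input, so adding one more number roughly doubles the work.

from itertools import combinations

def subset_sum_brute_force(numbers, target):
    # Return a subset of `numbers` adding up to `target`, or None.
    # The search tries all 2^n subsets, so the running time roughly
    # doubles with every extra number: the (almost) exponential
    # growth mentioned in the list above.
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

# Easy with a handful of numbers...
print(subset_sum_brute_force([3, 9, 8, 4, 5, 7], 15))   # -> (8, 7)
# ...but with 60 numbers there are 2^60 (about 10^18) subsets to check.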
The first non-computable problem was posed by Alan Turing, who also proved that it cannot be solved: the halting problem for Turing machines, which turned out to be equivalent to Gödel's first incompleteness theorem.
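The standard proof is short enough to sketch in code. The following Python fragment (my own illustration; the function halts is hypothetical and, as the argument shows, cannot actually be written) makes the contradiction explicit.

def halts(program, data):
    # Hypothetical oracle: True if program(data) eventually stops,
    # False if it runs forever. This stub only marks where such a
    # function would go; the reasoning below shows it cannot exist.
    raise NotImplementedError("no such oracle can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about `program`
    # when it is given its own code as input.
    if halts(program, program):
        while True:          # predicted to halt -> loop forever
            pass
    else:
        return               # predicted to loop -> halt at once

# Now consider paradox(paradox):
# - if halts(paradox, paradox) returned True, paradox(paradox) would loop forever;
# - if it returned False, paradox(paradox) would halt at once.
# Either answer is wrong, so no correct `halts` can be written.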
In an article published in arXiv, my five co-authors and I have shown that Asimov's first law is equivalent to the Turing machine halting problem, which means that it is not computable. Therefore, Asimov's first law cannot be implemented in robots, now or ever. It is not, therefore, one of those problems that are merely out of the reach of current technology and might become feasible in the future: it has been proved that implementing this law is totally impossible.
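The flavour of the argument can be hinted at with a toy reduction (my own sketch, not the exact construction used in the article; the names will_harm and harmful_action are hypothetical): if a robot had a perfect procedure deciding whether running a given program would ever harm a human being, that procedure could be used to decide whether an arbitrary program halts, which Turing proved impossible.

def will_harm(program):
    # Hypothetical perfect 'first law' checker: True if running
    # `program` would ever lead to harming a human being.
    # The reduction below shows it would be as powerful as a halting
    # oracle, so it cannot exist.
    raise NotImplementedError

def harmful_action():
    # Stands for any action the checker must classify as harmful.
    pass

def halts(program, data):
    # Decide the halting problem using the 'first law' checker:
    # wrap `program` so that it first runs program(data) to completion
    # and only then performs the harmful action. The wrapper harms
    # someone exactly when program(data) halts, so will_harm(wrapper)
    # answers the halting question, which no procedure can do.
    def wrapper():
        program(data)        # may or may not terminate
        harmful_action()     # reached only if program(data) halts
    return will_harm(wrapper)

In other words, a fully general harm checker would contain a halting-problem solver inside it, which is where the non-computability comes from.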
Manuel Alfonseca