First, we must distinguish three concepts:
a) Technological singularity: the apparently exponential growth of our technological advances will tend to infinity in a very short time. By then, anything we may want to do will be possible.
b) Transhumanism: the improvement of the human species by means of technology.
c) Posthumanism: the generation of a new species as a hybrid of human beings and technology.
In September 2015, a course was given at the Menéndez Pelayo International University (UIMP) under the title Technological singularity, human improvement and neuroeducation. It dealt with transhumanism and the ethical problems it would bring up. In 2016, several lectures from this course were published in book form under the title Humanity ∞. Ethical challenges of emerging technologies, coordinated by Albert Cortina and Miquel-Àngel Serra. The introduction to the book begins with these words:
For Google engineer Ray Kurzweil, the technological singularity... is about to happen. Our species is about to evolve artificially into something different from what it has always been. Are we ready to face this?
Note that the authors of the introduction implicitly believe that what Kurzweil predicts will happen. (He, by the way, has been saying the same thing for more than thirty years, and each time he pushes the predicted date a few years further into the future.) They do not doubt that the technological singularity is about to arrive; what worries them is whether we are ready to face it. But I think there is a prior question to be answered: is there any chance that the 21st century will witness the fulfillment of these predictions?
I think not. Gordon Moore, famous for his law on the evolution of computer hardware, put it this way in 2003: No exponential is forever. In other words, all apparently exponential growth ends sooner or later, through natural causes, the exhaustion of resources, or the reaching of practical limits. Every curve that at first seems exponential eventually turns out to be a logistic curve, whose growth reaches an inflection point and then slows until it approaches a maximum.
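As a minimal illustration of the difference (a sketch with generic symbols A, k, L, t_0, not taken from the book or the course):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Exponential growth diverges, while logistic growth saturates at a ceiling L.
\[
  f(t) = A\,e^{k t} \xrightarrow[t \to \infty]{} \infty ,
  \qquad
  g(t) = \frac{L}{1 + e^{-k (t - t_0)}} \xrightarrow[t \to \infty]{} L .
\]
% For t well below the inflection point t_0,
% g(t) \approx (L e^{-k t_0})\, e^{k t},
% i.e. the logistic curve looks exponential until saturation sets in.
\end{document}
```

For times well before the inflection point, the logistic curve is practically indistinguishable from an exponential, which is why the two are so easy to confuse while growth is still fast.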
Those who believe in the imminence of transhumanism tend to set milestones. If these are close to us, they quickly turn into failed predictions. For instance, the Avatar 2045 Project, proposed in 2011 in the hope that cybernetic immortality will be feasible by that date, expects to reach its goal in four stages. The second is this:
The second stage would consist of developing a system for the preservation and maintenance of the brain outside the human body, making it possible to transplant it into a robot and keep it working. The project schedules for the period 2020-2025 the creation of an autonomous life-support system linked to a robot, "Avatar", which will save people whose bodies are completely worn out or irreversibly damaged.
As can be seen, the forecasts for this project are very ambitious. In just two years we should be able to take the brain of a sick person and put it in a robot, where it would go on working as if nothing had happened. It does not look like this is going to happen, and the forecasters' problem will not be solved by a delay of a few years. By the way, the first stage (which was supposed to be completed by 2020) has not been achieved either.
Transhumanists foresee two types of improvements for the human species:
- Technological improvements related to strong artificial intelligence and its possible fusion with human beings. These proposals start from a monist materialist philosophy. If that philosophy is false, as I believe, all these predictions will automatically fail. But this is pointed out by just one of the participants in the book, the geneticist and bioethicist Nicolás Jouve.
- Improvements related to human biology, of which there are two types: genetic engineering applied to human beings (DNA reprogramming), and improvements in the working of the body (the limbs, the senses, and the brain). The second type is in turn divided into two groups: improvements applicable to people with difficulties or disabilities (artificial eyes, neurological aids for people with Parkinson's disease...), and those that would be applied to healthy people to increase their capacities. The improvements in the first group could be feasible in the not too distant future. The others are probably as far away (or as impossible) as strong artificial intelligence. In the words of Nicolás Jouve: many of the actions being proposed as an extension of transhumanist currents, under the lure of "human enhancement", [are] technically impossible to achieve. And Elena Postigo asserts that the ethical questions associated with human improvement should be studied on a case-by-case basis.
Ramón López de Mántaras
A few years ago, I heard a speaker say that by 2030 we would have robots smarter than humans. The speaker was not a technician. To support his statement, he said: young people have told me, and they know. But when you ask real experts, who have been working on these issues for years, we tend to be less optimistic (or less pessimistic, depending on the point of view) about artificial intelligence. The article by Ramón López de Mántaras in this book is an example. The same is true of genetic reprogramming, as Nicolás Jouve points out: …this approach can permeate thinkers, philosophers and opinion makers with little scientific training, who are close to a utilitarian perspective.
I think we ought to analyze the ethical problems associated with technological advances. But perhaps we should start at the beginning: are those technological advances, which are presented as imminent, actually feasible? If they are not, as seems very likely, the study of their consequences becomes secondary.
The same post in Spanish