Norbert Wiener
In 1948, Norbert Wiener coined the term Cybernetics for a new discipline, which he defined as follows:
The study of control and communication in the animal and the machine
Cybernetics has a lot to do with Robotics
and with the use of computers and microprocessors to control and communicate;
in other words, to do almost everything we use them for.
But what is being discussed right now is not so much Cybernetics as Cyberethics: ethical issues related to the use of computers, social networks, and most of the tools that modern technology puts within our reach.
As I have said in previous posts,
tools are neither good nor bad: what is good or bad is the use we make of
them. From this statement two immediate corollaries follow: the first is worrying; the second, if acted upon, could help alleviate that concern. They are these:
- Every tool will be misused.
- It is our duty to try to prevent the misuse of tools.
It is obvious that Artificial Intelligence, even in the weak form we have now, is a very powerful tool. Therefore, according to the first corollary, its misuse could cause great problems and misfortunes. Let us consider a few:
• Dissemination of incomplete, biased, or simply false information. We have seen that tools like ChatGPT and its successors and competitors do just this. Having been trained on large amounts of information taken from the Internet, and since this information is often incomplete, tendentious, or false, these characteristics are automatically transferred to the answers they give to the questions addressed to them. In previous posts I pointed out that these tools are useful mainly for those who already know the answer to the questions they pose, i.e., for those who don't need them. Those who do need them should not trust the answers they receive, for the tools are programmed to answer something, even when they can't find the answer.
• Substitution of human workers by machines. This has been going on since the start of the industrial revolution, more than two centuries ago, but some think it could now happen on a massive scale, putting millions of people out of work in a very short time. In an article published in 2013, two Oxford professors, Carl Benedikt Frey and Michael Osborne, estimated that 47% of jobs in the United States were at high risk of being automated within a decade or two. Although the authors later qualified their prediction, this fear has been growing lately as a result of the new AI tools, some of which are now playing the role of advisor or even member of the board of directors of a company.
• Some applications of artificial intelligence increase the risks that threaten us in ordinary life. Research on automated driving is advanced, but practical progress is slow, not because of the technology, but because of a lack of legislation on the problems caused when accidents occur, as they certainly will. Autonomous weapons, which are being used increasingly, raise the risks of those armed conflicts that, unfortunately, have not disappeared. And for decades now, portfolio management algorithms have been causing undesirable effects on stock prices.
On the other hand, fears that AI applications will become conscious and take over the world are unfounded. The first may not even be possible, and if it were, it would happen only in the very long term. The second could happen, but only if we are foolish enough to replace human beings in key positions with computer tools.
But, as the Future of Life Institute points out, AI doesn't need consciousness to pursue its goals, any more than heat-seeking missiles do. This concern has prompted the institute to sponsor an open letter requesting that research on applications such as ChatGPT, GPT-4, and their followers and competitors be suspended for six months, while the unfavorable ethical consequences that may be caused by the improper use of these tools are considered. This initiative has collected almost 30,000 signatures. But I doubt that a six-month pause will be enough.
In parallel with this initiative, the CAIDP (Center for AI and Digital Policy) has filed a complaint against OpenAI (the company that created ChatGPT and GPT-4) with the United States Federal Trade Commission (FTC), accusing GPT-4 of being a deceptive product and a risk to public safety, and requesting the suspension of future versions.
These steps may give us more time to
consider some of the problems. There is the precedent of research in genetic
manipulation, which can also cause considerable risks, some of which still
threaten us. Faced with such important issues, we shouldn’t be too daring. It’s
better to be cautious.
In his famous book Cybernetics (1948, 1961), which gave its name to the discipline, Norbert Wiener also recommends caution. He does so by mentioning a horror story written at the beginning of the 20th century by the Englishman W.W. Jacobs: The Monkey's Paw.
After summarizing the story, Wiener ends with these words:
In these stories the point is that the agencies
of magic are literal-minded... The new agencies of the learning machine are
also literal-minded. If we program a machine... and ask for victory and do not
know what we mean by it, we shall find the ghost knocking at our door.
In surveys carried out in 2016 and 2022 by the AI Impacts project among more than 700 AI experts, the results obtained are those shown in the figure, taken from economist.com. In six years, the rather optimistic forecasts of 2016 (45% good, 20% neutral, 15% bad) have become clearly worse: 30% good, 15% neutral, 15% bad. It is curious that the 20 points lost by the good and neutral forecasts have not been transferred to the bad forecasts, but (I assume) to those who don't know or don't want to answer.
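The bookkeeping behind this observation can be checked in a few lines of Python. The percentages are the ones quoted above; the "unaccounted" remainder is my label for respondents who fall outside the three categories (don't know, or didn't answer):

```python
# Survey percentages quoted in the text (AI Impacts project, via economist.com)
survey_2016 = {"good": 45, "neutral": 20, "bad": 15}
survey_2022 = {"good": 30, "neutral": 15, "bad": 15}

# Percentage points lost by each category between the two surveys
lost = {k: survey_2016[k] - survey_2022[k] for k in survey_2016}
print(lost)  # {'good': 15, 'neutral': 5, 'bad': 0}

# "Good" and "neutral" lost 20 points in total, yet "bad" gained none,
# so those 20 points must have moved outside these three categories.
unaccounted_2016 = 100 - sum(survey_2016.values())
unaccounted_2022 = 100 - sum(survey_2022.values())
print(unaccounted_2022 - unaccounted_2016)  # 20
```

The arithmetic confirms the oddity noted above: the pessimists' share stayed flat, while the share of non-answers grew by exactly the 20 points the optimists lost.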
But perhaps the question is deeper. Perhaps someone is trying to take advantage of advances in AI to better control human beings. Basically, this is the real problem: not whether we will be controlled by machines, but whether a few human beings will control the rest. That has always been (and continues to be) the goal of every dictator. We must not forget that in today's world there are many more potential or actual dictators than those everyone knows about.
Thematic Thread about Natural and Artificial Intelligence
Manuel Alfonseca