John McCarthy
It looks like the fate of the field of computer technology, wrongly called artificial intelligence, is to alternate between excessive optimism and unbridled pessimism. Here is a sketch of the history of this technology:
- At the Dartmouth College summer school in 1956, the name artificial intelligence was proposed for computer programs that could perform tasks traditionally considered exclusively human, such as playing chess and translating from one human language to another. The attendees, led by John McCarthy, predicted that these two problems would be solved within ten years: they hoped that by 1966 there would be programs capable of beating the world chess champion, and others capable of translating perfectly between any two human languages. When these objectives were not achieved within that time, artificial intelligence research came to a near halt. At universities, research topics in this field were frowned upon, because they were thought to have no future.
- The only field where research continued was artificial neural networks. In 1969, research in this field stagnated in turn when Marvin Minsky and Seymour Papert published a book (Perceptrons: An Introduction to Computational Geometry) in which they proved that a two-layer perceptron (input and output layers with no hidden layer, the artificial neural networks of the time) cannot compute the exclusive-or (XOR) function, one of the simplest in existence. Years later, when a third (hidden) layer was introduced into neural networks, and with the invention of the backpropagation algorithm, research in the field of neural networks advanced again; see the Python sketch after this list.
- We can also recall the rise of expert systems during the 1970s and 1980s. But expert systems never reasoned the way people do, which is why research in this field has almost stopped.
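For readers curious about the technical point behind the perceptron story, here is a minimal sketch in Python (the choice of language, the four hidden units, the learning rate and the number of iterations are all illustrative assumptions, not the historical setup). The XOR truth table is not linearly separable, so a perceptron with no hidden layer cannot represent it; adding a single hidden layer and training the weights by backpropagation solves the problem.

```python
import numpy as np

# XOR truth table. The four points are not linearly separable, so a
# perceptron with no hidden layer (a single layer of weights) cannot
# represent this function: this is Minsky and Papert's result.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# One hidden layer of 4 sigmoid units (an illustrative choice; two
# units suffice in principle, but four converge more reliably).
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 1.0  # learning rate (illustrative choice)
for _ in range(10000):
    # Forward pass through the network.
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # network output

    # Backpropagation: gradient of the squared error, layer by layer.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 3))  # converges towards [0, 1, 1, 0]
```

Without the hidden layer (training a single weight layer directly on X), the same procedure stalls at an error it can never reduce, which is exactly the limitation the 1969 book pointed out.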
Beginning in the
1990s, a series of advances in artificial intelligence research once again
provoked an explosion of optimism. Among these advances we can mention the
following:
- In 1997, 30 years late, the prediction that a
program would be able to beat the world chess champion was
finally fulfilled. See this
post.
- In the last two decades, some 60 years late, the prediction regarding machine translation has also been largely fulfilled. Current translations between the most widely used languages are almost perfect, although they still need to be carefully reviewed, because spectacular errors sometimes slip through.
- Automated driving has advanced quickly since the 1990s, and as a result it was predicted that by 2030 all cars would be self-driving. Today, just over five years from that date, this does not seem very likely. On the one hand, the problems that have arisen are legal rather than technological: who should bear the responsibility in the event of an accident? On the other, see this recent news item published in IEEE Spectrum with this headline: Partial automation doesn’t make vehicles safer. Self-driving tech is better treated as a convenience, not a safety feature.
Finally, let us recall the reaction in 2023 when the first large language models (LLMs) became widely available: ChatGPT and its competitors, such as Google's Gemini. It was said (and is still being said) that we are close to strong artificial intelligence, the real kind, with machines as intelligent as we are, or even more so. Well, there are some signs that this bubble is beginning to deflate, much sooner than expected:
- True experts in artificial intelligence have always said that language generators do not pave the way to strong AI (also called general AI), although they will undoubtedly find applications in many fields. It is becoming increasingly clear that we were right. A recent study published in Nature asserts that newer versions of LLMs are less reliable than earlier ones, and more prone to generating invalid answers.
- Language generators must be trained on a huge amount of data from the Internet, and they consume so much energy that climate objectives could be jeopardized. See this article in the New York Times. Some of these companies are considering setting up dedicated nuclear reactors to obtain this energy.
- Along the same lines, the water consumption
associated with the data centres where these programs are run is enormous.
According to an
estimate by the University of California, the total water demand
associated with AI by 2027 could exceed half of the UK's annual water
extraction.
- AI companies are seeing funding
for their projects begin to decline.
All of the above
articles, and a few more, such as the one below, have been published in the
last three months. This
article in Spanish summarizes several of these problems with this headline:
Has the end of the era of LLM AI arrived? Are
we witnessing the bubble bursting?
Manuel Alfonseca