In 1956, at a seminar held at Dartmouth College in Hanover, New Hampshire (USA), John McCarthy and his colleagues coined the term Artificial Intelligence, so abused nowadays. In the same year, Arthur Samuel, working at IBM, built the first computer program capable of playing checkers. This program kept information about the games it had played and used it to modify its future plays. In other words, it “learned.” After a certain number of games, the program was able to defeat its creator and to play reasonably well in official championships.
At first sight, this seemed to be going in the right direction. The creators of the term Artificial Intelligence had predicted that ten years later (that is, around 1966) we would have programs capable of performing perfect translations between any two human languages and of playing chess better than the world champion. And this would be only the beginning: we would soon be able to build machines whose behavior would be as intelligent as man’s, or more. The old dream of building artificial men would have come true.
Almost three quarters of a century later, scientific predictions in this field are still similar and equally exaggerated. Much progress has been made since then, although more slowly than McCarthy and his friends expected. Let's look at a few examples:
- Chess turned out to be a much harder nut to crack than checkers. In 1958, Alex Bernstein of IBM built a program capable of playing beginner-level chess, but we had to wait until 1997 for the original prediction to come true, when IBM's Deep Blue defeated the world champion (Garry Kasparov) in a six-game match. The latest advance in this field was made by DeepMind, a subsidiary of Google, which in 2017 announced that its AlphaZero program had reached a world-class level after training for nine hours against other copies of the same program, during which it played 44 million games, storing information about those games in a deep artificial neural network with representation learning. Another version of AlphaZero was adapted to the rules of shogi (Japanese chess) and appears to have been equally successful (at least that's what DeepMind claims).
- The first attempt to build a program for the Chinese game GO was made by Albert L. Zobrist for his doctoral thesis in 1968. But we had to wait until 2016 before DeepMind's program AlphaGo beat the world champion (Lee Sedol). A year later, another variant of the AlphaZero program, trained in the same way as the programs that play chess and shogi, managed to surpass AlphaGo.
- Backgammon was easier. One of the first attempts, programmed in 1980 by Hans Berliner (Carnegie Mellon University), managed to beat the world champion (Luigi Villa). During the following quarter of a century, other successor programs also reached the highest level, but since 2005, apparently, no further progress has been made.
- Jeopardy! is not a game but a TV contest, in which the contestants must correctly answer a certain number of questions, with speed of response rewarded. In 2011, IBM's Watson computer managed to beat the two most successful contestants in the history of Jeopardy!, by searching for the answers to the questions in a huge database and offering its solutions before its opponents could.
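Watson's real question-answering pipeline was enormously sophisticated, but the core idea of answering by lookup can be caricatured in a few lines of Python. Everything below (the three-entry database, the keyword-overlap scoring) is my own toy construction, purely for illustration:

```python
# Toy illustration of answering questions by searching a database:
# score each stored fact by keyword overlap with the question and
# return the best-matching entry. The principle is retrieval, not
# understanding; Watson combined many far more refined techniques.
database = {
    "Dartmouth College": "site of the 1956 seminar that named Artificial Intelligence",
    "Deep Blue": "IBM computer that defeated chess champion Garry Kasparov in 1997",
    "Watson": "IBM computer that won the TV contest Jeopardy! in 2011",
}

def answer(question):
    words = set(question.lower().replace("?", "").split())
    # pick the entry whose description shares the most words with the question
    return max(database, key=lambda k: len(words & set(database[k].lower().split())))

print(answer("Which IBM computer won Jeopardy in 2011?"))  # -> Watson
```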
These programs that beat the greatest human experts at intelligence games are not intelligent. They are the combination of two components: an algorithm that knows the rules of the game in question, and a huge amount of information, obtained from the games the algorithm has played, which nowadays is usually stored in an artificial neural network. This information is obtained from games played against human beings, against other programs, or against other copies of the program itself. Since today's computers are very fast, these programs can play millions of games in a very short time and accumulate the information their algorithm needs to reach a high level of play.
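To make this two-component structure concrete, here is a minimal sketch in Python, using tic-tac-toe as a stand-in for chess or GO. All the names are mine, and a plain table of position values stands in for the deep neural network; the real systems are incomparably more elaborate, but the loop is the same: play against a copy of yourself, store what happened, repeat millions of times.

```python
import random
from collections import defaultdict

# Component 1: an algorithm that knows the rules of the game
# (here tic-tac-toe, as a stand-in for chess or GO).
def legal_moves(board):
    return [i for i, c in enumerate(board) if c == " "]

def play(board, move, player):
    return board[:move] + player + board[move + 1:]

def winner(board):
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

# Component 2: the accumulated information. A simple table of
# position values plays the role of the deep neural network.
values = defaultdict(float)          # position -> estimated value for "X"

def choose_move(board, player, explore=0.1):
    moves = legal_moves(board)
    if random.random() < explore:    # occasionally try something new
        return random.choice(moves)
    sign = 1 if player == "X" else -1
    return max(moves, key=lambda m: sign * values[play(board, m, player)])

def self_play_game():
    board, player, history = " " * 9, "X", []
    while winner(board) is None:
        board = play(board, choose_move(board, player), player)
        history.append(board)
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": -1.0, "draw": 0.0}[winner(board)]
    for position in history:         # nudge stored values toward the outcome
        values[position] += 0.1 * (result - values[position])

for _ in range(50_000):              # the real systems play millions of games
    self_play_game()
```

After training, choose_move with explore set to zero plays a decent game; yet nothing in the loop involves understanding, only stored statistics about past games.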
However, precisely because they are not intelligent, these programs can be fooled, to the point that they can be beaten by a human beginner at the game, for they have weak points and blind spots. This may happen in complex games such as chess and GO, whose configuration spaces are so huge that the programs cannot be trained for every possible situation.
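Even the toy sketch above shows the problem. In tic-tac-toe the configuration space is so small that self-play eventually visits nearly every position, but in chess or GO the overwhelming majority of positions are never seen during training, so the stored information about them is empty or unreliable. Continuing the sketch (the position below is a hypothetical example of my own):

```python
# Any position the training games never reached has no stored value:
# the table silently answers 0.0, and choose_move decides blindly.
# In a game with a huge configuration space, almost every position is
# of this kind, and an adversary can steer the game toward them.
rare_position = "XOXOXO   "            # legal, but perhaps never visited
print(values.get(rare_position, 0.0))  # 0.0 if it was never visited
```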
In November 2022, a group of researchers published an article on arXiv describing a procedure that allowed them to defeat KataGo (a successor to AlphaZero) at GO. The procedure revealed an intrinsic vulnerability in these programs which, although it can be corrected, could arise again in another form. The authors of the article put it this way:
“Our adversaries do not win by playing GO well. Instead, they trick KataGo into making serious blunders... Our results demonstrate that even superhuman AI systems may harbor surprising failure modes.”
As I said at the beginning, calling these
programs Artificial Intelligence
is an abuse of language.
Thematic Thread about Natural and Artificial Intelligence
Manuel Alfonseca