Alan Turing
In 1950, in an article published in the journal Mind, Alan Turing wrote this:
I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.
Why precisely 70 percent? Because studies had been conducted in which some people tried to deceive another person, who could not see them, about their sex, and that was the result: in seventy percent of the cases, the person who had to guess whether they were being deceived found the correct answer. In other words, what Turing said was this:
If the machine were able to deceive human beings, posing as human, with
the same ease with which a human being can deceive another, it should be
considered intelligent.
For many years, well beyond the fifty foreseen by Turing, no program came close to passing the Turing test. The most interesting one was ELIZA, which posed as a psychotherapist talking with its supposed patients. Only the most naive patients were fooled: it was enough to exchange half a dozen sentences to discover that one was talking to a computer, because of its rigid questions, although on occasion those questions were surprising, as Carl Sagan pointed out in his book The Dragons of Eden.
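ELIZA's rigid questions came from simple keyword pattern matching. A minimal sketch in Python of that style of program (the rules here are invented examples, not Weizenbaum's original script) might look like this:

```python
import re

# ELIZA-style rules: each one pairs a pattern with a rigid canned
# response, optionally reusing part of the patient's own sentence.
# These three rules are illustrative inventions, not the real script.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your family."),
]
DEFAULT = "Please go on."

def reply(sentence: str) -> str:
    """Return the first matching canned response, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(reply("I am unhappy"))          # → How long have you been unhappy?
print(reply("My mother cooks well"))  # → Tell me more about your family.
```

A few exchanges are enough to expose the mechanism: any sentence that matches no rule gets the same neutral prompt, which is exactly the rigidity that gave ELIZA away.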
In a test conducted in 2014, Turing's prediction appeared to have been fulfilled, fourteen years late, when a chatbot (a program that takes part in a chat) called Eugene Goostman managed to convince 33% of its fellow chatters, after just five minutes of conversation, that it was a 13-year-old Ukrainian boy. However, some analysts are not convinced: the fact that the program posed as a foreign teenager, rather than as an adult from the same country, increased the credulity of the participants in the chat. Commenting on this result, Evan Ackerman wrote:
The problem with the Turing Test is that it's not really a test of
whether an artificial intelligence program is capable of thinking: it's a test
of whether an AI program can fool a human. And humans are really, really
dumb.
John Searle
That the Turing test is not enough to detect intelligence had already been pointed out in 1980 by the philosopher John Searle, with his thought experiment of the Chinese room. Let us see what it consists of:
- Assume we have a computer program able to pass the Turing test by dialoguing (for example) with a Chinese woman. In the conversation, both the woman and the computer communicate by means of Chinese characters through a teletype. The computer, which is inside a room so that the woman cannot see it, works so well that it deceives her: the woman believes that she is dialoguing with a human being who knows the Chinese language.
- Now Searle takes the computer out of the room and puts himself in its place. He does not know Chinese, but he takes with him the listing of the program the computer used to dialogue with the woman. In principle, by following that program, Searle would be able to dialogue with her in Chinese as well as the computer did (although obviously more slowly). Each time he receives a text written in Chinese, he follows the program listing and writes the signs of the answer the computer would have given.
- But in Searle’s case there is a difference. Since he does not know Chinese, he has not understood his conversation with the woman, even though that conversation has deceived her into thinking that she was dialoguing with a human being who knows the Chinese language.
- It is clear that the computer does not understand its conversation with the woman either, since its performance has been identical to Searle’s. But presumably the computer is not aware that it does not understand, while Searle is.
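The purely mechanical rule-following described above can be caricatured in a few lines of code: a lookup table maps each incoming string of symbols to a reply, and whoever executes it, a computer or Searle himself, needs no understanding of what the symbols mean. The Chinese phrases below are invented for the illustration:

```python
# A caricature of Searle's Chinese room: the "program listing" is a
# table of rules mapping incoming symbol strings to outgoing ones.
# The entries are invented examples; executing the table requires no
# understanding of the Chinese characters.
RULE_BOOK = {
    "你好": "你好，很高兴认识你",        # "Hello" -> "Hello, pleased to meet you"
    "你会说中文吗": "会，我说得很流利",  # "Do you speak Chinese?" -> "Yes, fluently"
}

def room(symbols: str) -> str:
    """Follow the rule book blindly, as Searle does inside the room."""
    return RULE_BOOK.get(symbols, "请再说一遍")  # fallback: "Please say that again"

print(room("你好"))  # the room answers correctly without understanding
```

A real program passing the test would need vastly more rules than a dictionary can hold, but the point of the thought experiment survives the simplification: nothing in the execution requires understanding.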
Therefore it is not enough for a computer to pass the Turing test for us to consider it as intelligent as a human being. Two more things are needed: it must understand at least some of what it is reading and writing, and it must be aware (conscious) of the situation. As long as those things do not happen, we cannot speak of artificial intelligence. And this is obviously much farther in the future (assuming it is possible at all) than passing the Turing test.
The same post in Spanish
Thematic thread on Natural and Artificial Intelligence: Preceding Next
Manuel Alfonseca