Von Neumann’s architecture, which underlies almost every computer ever built, divides computers into two clearly separate parts: the processing unit, where instructions are executed, and the memory, where data is stored. Consequently, almost all the programs we run on our computers are divided into two different sections: the algorithm (the executable instructions) and the data that provides the information the algorithm needs (its input).
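That split can be illustrated with a minimal sketch (the names and values here are mine, purely illustrative): the data section is inert information, and the algorithm section holds the instructions that act on it.

```python
# Illustrative sketch of the von Neumann split described above:
# a "data" part holding inert information, and an "algorithm" part
# holding the executable instructions that process it.

# Data: the input, pure information with no ability to act on its own.
grape_heights_m = [2.1, 1.8, 2.4]

# Algorithm: the instructions that manipulate that information.
def highest(heights):
    tallest = heights[0]
    for h in heights[1:]:
        if h > tallest:
            tallest = h
    return tallest

print(highest(grape_heights_m))  # prints 2.4
```

Neither half does anything by itself: the data is meaningless without instructions to process it, and the instructions are idle without input.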
Such a clear separation recalls the difference between the two concepts in the title of this post:
- Information: The set of data available. In biology, DNA is the primary storehouse of genetic information. In information theory, the term refers to the content of a message or text, in a journal or anywhere else.
- Intelligence: The ability to manipulate the available information and create new information. In this context we can speak of understanding, reasoning, planning, imagination, creativity, critical thinking, and problem solving, among other terms related to this concept, which is so difficult to define.
In real life, information and intelligence are often separated. Let’s look at an example:
- The United States Library of Congress is often cited as an example of a large amount of information. It contains about 32 million books, each of which, if digitized, would occupy on average over half a megabyte, so the total information contained in the books in that library is estimated at about 20 terabytes (20 trillion bytes). To this we should add many other sources of information, such as maps, manuscripts, newspapers, comics, sheet music, and image and sound recordings. But this great building, when empty (at night, for instance), does not contain intelligence. When it is open, things are different, for intelligence is present in the human beings who work there or come to consult the information in the library.
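The arithmetic behind that estimate can be checked with a quick back-of-the-envelope sketch (the figures are the post’s round numbers, not official Library of Congress data; the per-book size is an assumption):

```python
# Back-of-the-envelope check of the Library of Congress estimate.
books = 32_000_000               # about 32 million books
mb_per_book = 0.6                # "over half a megabyte" per digitized book (assumed)
total_mb = books * mb_per_book
total_tb = total_mb / 1_000_000  # 1 TB = 1,000,000 MB (decimal units)
print(f"about {total_tb:.0f} TB")  # prints: about 19 TB
```

About 19 TB from the books alone, which is consistent with the rounded 20 TB figure once the other collections are added.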
[Image: John von Neumann]
Let’s look at another example: consider Google Translate, which provides automatic translations between any two of 133 languages. In 2018 I put it to the test by having it translate a fable by Phaedrus, known as “The Fox and the Grapes,” which in the original Latin says this:
Fame coacta vulpes alta in vinea uvam appetebat summis saliens viribus. Quam tangere ut non potuit, discedens ait: Nondum matura est; nolo acerbam sumere. Qui facere quae non possunt verbis elevant, adscribere hoc debebunt exemplum sibi.
This is my translation of the fable:
Driven by hunger, the fox wanted some grapes that hung high in the vineyard and jumped with all its strength. As it could not reach them, it went away saying: they are not yet ripe; I don’t want to eat them bitter. He who, being unable to do something, belittles it with words, should apply this example to himself.
In 2018, Google Translate gave me a
totally absurd translation. It even included the name of Caesar, which does not
appear in the fable. I asked a Google expert and he told me that the artificial
neural network behind the translator had probably been trained with very few
Latin texts, and many of those that had been used contained the name of Caesar,
so this name would appear in almost all translations, whether relevant or not.
In October 2023 I asked Google Translate
again to translate the same text, and this was the result:
Compelled by hunger, the fox, high in the vineyard, sought the
grapes, leaping with all his might. As he was not able to touch it, he said as
he was leaving: It is not yet ripe; I don't want to take it bitter. Those who do what they cannot express in words, will have to ascribe this example to themselves.
Note that this translation is much better than the previous one; only the final sentence is mistranslated. Why has the result improved so much in five years? Is Google Translate smarter today than it was then?
Probably not. What has changed is the amount of information available to the translator. There must now be many more Latin texts on the Internet, which the translator has been able to use to expand its training data. In other words, what has improved is not its intelligence, but the information used to train it.
As I explained in another post, ChatGPT does not build the texts with which it answers our questions because it is intelligent, but because its artificial neural network has been trained with a large amount of information. As I said there, its “intelligence” can be compared to a program of just 18 instructions written in the APL language (i.e., very small), while the amount of information it handles is huge. And we should not forget that the instructions of a program are not intelligent either; the programmer is.