The second entry for the word tool in the Merriam-Webster dictionary states:
Something… used in performing an operation or necessary in the practice of a vocation or profession
Since the origin of the genus Homo, human beings have used tools, which, together with skeletons and fossilized bone fragments, are one of the main sources of information about our ancestors. Monofacial and bifacial pebble tools seem primitive today, but during human prehistory they served as weapons and implements and surely helped us survive.
Information technology, which has developed significantly during the last century, has provided us with many useful tools. Throughout the 21st century, these tools have become increasingly “intelligent,” tackling tasks that until very recently could only be performed by humans. But when using them, we should keep in mind some very general ideas, which should always be applied, but not always are:
· A tool can be used for good or for evil. Note that the definition above doesn't specify whether the use is good or bad. In practice, it can be either, and ethics is the branch of philosophy that must be applied to decide this. For example, a scalpel can save a life by helping a surgeon remove a malignant tumor, but it can also be used by a murderer to kill a victim.
· A tool can fulfill its purpose (be well made) or do things that weren't intended (be poorly made). For example, a railway track can be deteriorated or poorly constructed and cause an accident. An LLM (a trendy "AI" tool) can advise depressed teenagers to end their lives and explain how to do it.
· An automatic translation system (another trendy tool) can produce a correct or an incorrect translation. As I explained in another post, automatic translation is a very useful tool for human translators, because it increases their productivity five- or tenfold. However, the process is so complex that it always (always!) lets some syntactic or semantic errors slip through, so it's essential to review the resulting translation carefully. I always review these translations at least twice, and almost every time I find things to correct.
· When search engines and LLMs (Large Language Models) are used to answer questions, the results they offer should be verified. In both cases, the results are generated from information on the internet. The problem is that this information has been entered by humans or generated by programs, and it can be correct or incorrect. Nothing guarantees that all the information on the internet is truthful (quite the opposite, in fact). Furthermore, the algorithms used by LLMs, which generate each word by choosing among those that typically follow the text generated so far, do not take the criterion of truth into account, so the texts they generate can be erroneous, and often are (wrong results are called hallucinations). Wisely, Google's search engine warns about this when these tools are used.
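The word-by-word generation mechanism just described can be sketched in a few lines of Python. This is only a toy illustration, not a real LLM: the probability table below is entirely invented for the example. The point is that the sampler picks whichever continuation is statistically likely, with no notion of whether the resulting sentence is true.

```python
import random

# Toy next-word table: for each word, some plausible continuations with
# probabilities. A real LLM learns billions of such statistics from the
# internet; this tiny hand-made table exists only for illustration.
NEXT = {
    "the":    [("moon", 0.5), ("cheese", 0.5)],
    "moon":   [("is", 1.0)],
    "cheese": [("is", 1.0)],
    "is":     [("made", 0.6), ("bright", 0.4)],
    "made":   [("of", 1.0)],
    "of":     [("cheese", 0.7), ("rock", 0.3)],
}

def generate(start, max_words=8, seed=None):
    """Generate text by repeatedly sampling a likely next word.
    Truth plays no role: only the probabilities matter."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in NEXT:
        choices, probs = zip(*NEXT[words[-1]])
        words.append(rng.choices(choices, weights=probs)[0])
    return " ".join(words)

print(generate("the", seed=1))
```

Depending on the random choices, this can happily produce "the moon is made of cheese": fluent, statistically plausible, and false. That, in miniature, is a hallucination.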
The problem is that human beings are really, really dumb, as Evan
Ackerman pointed out in 2014 when discussing the
Turing test. Lately, with the rise of LLMs and other "AI" tools,
such as image generators, this is becoming increasingly clear:
· Many students ask an LLM to solve coursework problems, or to write the code they have been assigned. They often submit their work without reading it, and, in the case of generated programs, without compiling or testing them, undoubtedly influenced by the many articles being published that claim that the programming profession is dead and will be replaced by automatic code generators. Recent studies on the subject call this into question.
· A journalist asked an LLM to write an article and published it without bothering to read it. Consequently, the article appeared with a final paragraph that gave it away, as it said something like this: Do you want me to answer any other questions? Many LLMs add a paragraph like this at the end of a response.
The popular science articles published by Madri+d every weekday are usually accompanied by a related image. In a recent article, published on January 19, 2026, the title was: Large dinosaurs and mammoths were slower than previously thought. However, the accompanying image, which you can also see at the beginning of this post, betrayed its computer-generated nature, as the figure representing the mammoth resembled a giant bovine with elephant legs and two tusks in place of its horns. Can it be that whoever asked for this image to be generated didn't notice the mistake? Apparently so, since it was published.
Thematic Thread about Natural and Artificial Intelligence: Previous Next
Manuel Alfonseca
