ChatGPT and similar tools, known as LLMs (Large Language Models), are being used more and more frequently in our
daily lives. Google, for instance, has integrated its GEMINI tool with its
search engine. Sometimes, when the program behind the search engine deems it
appropriate, the question asked is sent to GEMINI, and the LLM's response appears
first, albeit with this warning at the end, in small print:
AI responses may include mistakes.
Of course they may include mistakes! These responses are not generated by understanding the question, but by drawing on information previously obtained from the Internet and applying an algorithm that predicts, one word at a time, which words typically follow the words already generated. See the post in this blog where I explained that algorithm. Since the information extracted from the Internet can be true or false, and the algorithm can introduce new falsehoods where none existed, the answers obtained may be correct, partially correct, or completely wrong; therefore, Google's warning is justified.
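To make the idea concrete, here is a toy sketch in Python. It is not how GEMINI or ChatGPT are actually implemented (real LLMs use neural networks trained on enormous amounts of text), but it illustrates the principle: count which words tend to follow which in some source text, then generate a "reply" by repeatedly choosing a plausible next word.

# Toy illustration of next-word prediction (not a real LLM).
import random
from collections import defaultdict, Counter

source_text = (
    "the doll talks to the child and the child talks to the doll "
    "the doll remembers what the child says"
)

# Count word -> following-word frequencies: a crude stand-in for the
# statistics a real LLM learns from huge amounts of Internet text.
follows = defaultdict(Counter)
words = source_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start_word, length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Pick the next word in proportion to how often it followed this one.
        candidates, counts = zip(*options.items())
        word = random.choices(candidates, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))

The output looks plausible, but there is no understanding behind it: if the source text contains falsehoods, the generated reply can repeat or even compound them.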
Despite all the warnings we experts try to issue,
some people are fooled by these programs and blindly believe everything they
are told. Perhaps the media is partly to blame, as this field of research is
often presented with great hype. Things have gotten to the point that OpenAI
(the company that launched ChatGPT) has even hired psychiatrists to study how
its LLM tools affect its users emotionally.
In June and July of this year (2025), the news
broke in the media that OpenAI and Mattel (the company that created Barbie) had
established a partnership
to build a Barbie model that connects to OpenAI's LLM tools, which would make it
the first toy equipped with artificial intelligence at the level this technology
has currently reached. This doll could do the following:
· Converse with the child. Nothing could be simpler: whatever the child says
is converted into text. That text is sent to ChatGPT or another equivalent
tool, the response is obtained, and the written response is transformed into
speech, with the corresponding pre-recorded voice. Technology is perfectly
equipped to perform all these operations.
· Save information about previous conversations. All one needs to do is provide the doll with an
electronic memory, which can store hundreds of gigabytes, and a program that
uses that memory to modify subsequent conversations. This is also within the
reach of our technology.
Barbie would thus be able to listen, respond,
remember, and adapt, along the lines of the sketch below.
What consequences could this have for our children?
In an article published in IEEE
Spectrum, Marc Fernandez analyzes the situation this would create and weighs
the possible advantages and disadvantages:
· Supporters claim that children will learn to create stories, that their learning skills will
improve, and that those who have trouble establishing social relationships will find
companionship in their toy. Mattel promises that the toy's interaction with the
child will be safe and age-appropriate.
· Opponents argue that children's relationship with these toys can be negative and
prevent them from having real human relationships. And here I quote Marc
Fernandez: For many parents, the fear is that an AI toy might say something
inappropriate. But the more subtle, and perhaps more serious, risk is that it
might say exactly the right thing, delivered with a tone of calm empathy and
polished politeness, yet with no real understanding behind it. Children,
especially in early developmental stages, are acutely sensitive to tone,
timing, and emotional mirroring. Children playing with [these] toys will
believe they’re being understood, when in fact, the system is only predicting
plausible next words.
The end of Marc Fernandez's article is truly
devastating:
We’re at a
point with AI where LLMs are affecting adults in profound and unexpected ways, sometimes
triggering mental health crises or reinforcing false beliefs or dangerous ideas…
This is uncharted technology, and we adults are still learning how to navigate
it. Should we really be exposing children to it?
Thematic Thread about Natural and Artificial Intelligence
Manuel Alfonseca