ChatGPT and similar tools, known as LLMs (Large Language Models), are being used with increasing frequency in our daily lives. Google, for instance, has integrated its GEMINI tool with its search engine. Sometimes, when the program behind the search engine deems it appropriate, the question asked is sent to GEMINI, and the LLM's response appears first, albeit with this warning at the end, in small print: "AI responses may include mistakes."
Of course, they may include mistakes! These responses are not generated by understanding the question, but by using information previously obtained from the Internet and applying an algorithm that extracts words that typically appear in that information after the previously generated words. See a post in this blog where I explained that algorithm. Since the information extracted from the Internet can be true or false, and the algorithm can introduce new falsehoods where none existed, the answers obtained may be correct, partially correct, or completely wrong. Therefore, Google's warning is valid.
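The word-extraction idea mentioned above can be illustrated with a toy sketch. This is not Gemini's actual algorithm (real LLMs use neural networks over vast corpora); it is only a minimal bigram model of my own devising, which records which words follow each word in a sample text and then generates a continuation by repeatedly sampling a word that typically appears after the last one:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Record, for each word, the words that follow it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8, seed=0):
    """Extend `start` by repeatedly sampling a word seen after the last word."""
    random.seed(seed)
    output = [start]
    for _ in range(length):
        candidates = model.get(output[-1])
        if not candidates:  # no known continuation: stop early
            break
        output.append(random.choice(candidates))
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

Note that such a generator can produce sentences never present in the training text ("the cat sat on the rug"), which is precisely how an algorithm of this family can introduce falsehoods that did not exist in its source material.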