![My hallucinations, by August Natterer]()
Since ChatGPT appeared in late 2022, I have been warning that the answers provided by Large Language Models (LLMs; I refuse to call these tools Artificial Intelligence) are unreliable and should be treated with the utmost caution. These answers often seem plausible and are linguistically well formed, yet they are false. Answers of this kind have come to be called hallucinations.
This is not surprising. It is a logical consequence of the algorithm these programs use, which I described in another post on this blog and simulated there with a program of only 18 instructions. The algorithm works by appending, one at a time, a word drawn from the most frequent continuations of the preceding words, as observed in billions of files taken from the Internet. It is evident (just think about it) that such an algorithm cannot guarantee that the answers these tools provide are true.
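To make the point concrete, here is a minimal sketch of this kind of frequency-driven word chaining, written in Python. It is not the 18-instruction program from the earlier post: the toy corpus and the names `build_bigram_counts` and `generate` are illustrative assumptions, and real LLMs operate at a vastly larger scale with far more sophisticated statistics. What the sketch shares with them is the essential limitation: it only asks which word tends to follow the previous one, never whether the resulting sentence is true.

```python
import random
from collections import defaultdict, Counter

# A toy corpus stands in for the "billions of files taken from the Internet".
# Everything here (text, function names) is illustrative only.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

def build_bigram_counts(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def generate(counts, start, length=10):
    """Extend the text by repeatedly picking a frequent continuation of the last word."""
    output = [start]
    for _ in range(length):
        followers = counts.get(output[-1])
        if not followers:
            break
        # Choose among continuations in proportion to their observed frequency;
        # nothing in this step checks whether the resulting sentence is true.
        words, freqs = zip(*followers.items())
        output.append(random.choices(words, weights=freqs)[0])
    return " ".join(output)

counts = build_bigram_counts(corpus)
print(generate(counts, "the"))
```

Run it a few times and it will produce different, grammatically plausible strings such as "the dog sat on the mat" or "the cat chased the cat": statements the corpus never made, generated purely because the word frequencies allow them.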