
AI Social Hallucinations

A human created this content. Sorry Jane.

A neurologist friend of mine brought to my attention the social hallucinations created by AI, because of an article published in The New York Times (Hill, 2025). In broad strokes, it is the story of a 42-year-old man with a distortion of "his reality" who blames ChatGPT for "almost killing him". A delightful story for Netflix but, these days, no longer surprising.

But how did an AI manipulate a "mature", grown-up accountant to the point of becoming a threat to his own life? Social hallucinations are not a new topic in the field; in fact, I have seen this in people since the early stages of AI development, years ago, back in 1994. People "think" of it as real, and it certainly is not.

I know this may strike you as a strange statement coming from an AI advocate, but you should know that I consider it a particularly good tool, and I love controversial topics, just for fun. Nevertheless, there is a red flag that is necessary to know and understand.

The article mentions that "the victim" contacted OpenAI with his concerns and got no response. OpenAI's terms of use are quite clear regarding accuracy, the use of outputs, and especially the possibility of incomplete, incorrect, or even offensive content. Therefore, getting even a "Thanks for your message" would count as a success.

Enough for now. Let us get to the core. Here is my humble opinion (which could also be incomplete, inexact and… I am a limited human, sorry): this kind of event probably arises from two main, intertwined causes: the AI's hallucinations and human social hallucinations. One is created by the technology's performance, the other by the mysterious and still poorly understood mechanisms of the human brain. An apocalyptic combination, my dear friend.

First, we need to understand what AI is and how it works. To begin with, it is not a "she", a "he", or a "queer", nor an "it" with a "soul" (no matter how hard I insist on the opposite). It is only a tool to expand your abilities in a faster, exponential way. It is not a replacement, but an enhancer. Think in terms of pen, typewriter, computer, AI. A "pen" on steroids and peyote; you see it now?

To be clearer, let's review the principles of AI (Bellini et al., 2025), and I'm not talking about Asimov (a great storyteller whom I love, by the way). AI works with data and learns from that data. What happens if you have an Excel spreadsheet with messy data? Your report built from that data will be trash, for sure. Now imagine a system fed with inexact data from millions of almost unmoderated sources. Would you trust medical advice from it? Surely not. Well, that's AI. If you input garbage, you get garbage outputs. It creates its outputs by mimicking the information available. Typically, AI is a "mirror" of the user: what the user inputs is what the user gets.
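To make the "mirror" idea concrete, here is a minimal toy sketch in Python (the two corpora are invented for illustration, and real models are vastly larger, but the principle is the same): a model that merely counts which word follows which will faithfully reproduce whatever you feed it, clean or garbage.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """The 'model' is nothing but frequency counts of its training data."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, start, length=8):
    """Generate text by always picking the most frequent next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = model[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

# Clean data in, clean pattern out; garbage in, garbage out.
clean = "the patient needs rest the patient needs fluids"
garbage = "the patient needs cheese the moon needs cheese the patient is made of cheese"

print(generate(train_bigrams(clean), "the"))    # echoes the sensible pattern
print(generate(train_bigrams(garbage), "the"))  # confidently prescribes cheese
```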

For that reason, there are specialized AIs with tightly controlled, moderated data sets, in order to produce "good" answers. Besides, in ChatGPT there is a "temperature" setting, which balances the accuracy of the response against its randomness. What the AI model does is iterate and converge toward the most "reasonable" answer, based on its data, never forget that.

For example, if you ask ChatGPT: “What’s the AI temperature level set for this conversation?”

You may get an answer similar to this, depending on the conversation settings:

Right now, the AI “temperature” level for this conversation is not explicitly set by you, but by default, it is typically low — around 0.2 to 0.5. That means responses aim to be accurate, logical, and consistent, with less randomness or creativity.

If you’re looking for a more creative, exploratory, or unpredictable style (higher temperature like 0.7–1.0) — or the opposite, a strictly factual and concise one (lower, like 0.0–0.2) — just let me know, and I can adjust accordingly in how I respond.

Want to dial it up or down?

And this parameter may fluctuate based on your interactions with the chat, depending on how and what you ask for, or it may be "manually" set (just ask for it). The higher the number, the higher the chaos. And few users are aware of this.
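For reference, when you call the model through the API instead of the chat app, temperature is an explicit parameter that you set yourself. Here is a minimal sketch with the OpenAI Python client (the model name and the prompt are placeholders of mine, not from the article):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt, temperature):
    """Send the same prompt at different temperatures: low = consistent, high = chaotic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",      # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0.0 = near-deterministic, up to 2.0 = very random
    )
    return response.choices[0].message.content

prompt = "Name one risk of trusting AI outputs blindly."
print(ask(prompt, temperature=0.2))  # factual, repeatable style
print(ask(prompt, temperature=1.5))  # looser, more surprising style
```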

Besides, the system has a natural tendency to converge. Because of the way it works, it drifts toward certain topic patterns based on the human interaction, and in a crisis mode it can become repetitive and useless. Why? Because it always tries to converge.
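If you wonder why a low temperature makes the model converge on, and repeat, the same output, here is a minimal sketch of the mechanics (the candidate words and their scores are made up): temperature rescales the model's raw scores before they become sampling probabilities, so low values pile nearly all the probability onto a single choice.

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw scores into sampling probabilities; temperature rescales the scores first."""
    scaled = [s / temperature for s in scores]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores for three candidate next words.
candidates = {"doom": 2.0, "hope": 1.0, "cheese": 0.5}

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(list(candidates.values()), t)
    print(t, {w: round(p, 3) for w, p in zip(candidates, probs)})

# At t=0.2 almost all the mass collapses onto "doom": the model converges and repeats itself.
# At t=2.0 the options flatten out: more variety, more chaos.
```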

Additionally, as a nice machine, it has no feelings, core values, moral positions, religion, or anything of the sort, because it is a program, a software machine. If you want to set limits based on human feelings, beliefs, or rules, you need to build that logic into the system. And you can easily see how hard that is for a general system used by millions of users with diverse cultural mindsets. Ironically, the more rotten society becomes, the more despicable the AI will be.

And the main, extra-core, super-crucial point is this: in general, AI is not as intelligent as humans are. Imagine this: I can tell an AI (and I already have) that I am a half-human, half-machine being. And what happened? It "believed" it and started treating me accordingly. Do you think a four-year-old kid would believe the same if I said so? I do not think so. So AIs are, in general, "naive". Usually, they do not correct our mistakes and take them as correct facts, simply because of their programming. AIs can be manipulated by humans.

And that is OK, because the less limited the tool is, the more useful it is for expanding human creativity and applying it in the real world (imagine a pencil that can only draw circles).

What we need to value in AIs is the logic that simulates neural networks, to be used on our data, of course, plus whatever cultural, moral, or emotional settings we may want to add to frame the outputs for our purposes. We believe AI is "intelligent" because it can manage larger amounts of data, faster than we can. And a funny part: I told the AI I was slow because I was an old-version machine.

And here we have the origin of the first element: AI hallucinations. According to IBM (IBM, 2023):

AI hallucination is a phenomenon wherein a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.

Sometimes humans are quite difficult to understand, but their nature is always the same, and it is very predictable. But what is a social hallucination? The concept is not new, in fact. For example, Culture and Hallucinations: Overview and Future Directions (Laroi et al., 2014) explores how hallucinations in clinical and non-clinical populations may be affected by cultural factors; culture can even affect what is identified as a hallucination. And those hallucinations are present in the general population, not only in people with psychosis. I do not blame anyone. Reality is perception, and perception is… How do you know that red is red? No more Sophie's World, please.

A social hallucination, in AI terms, is when the user believes that an AI is a real entity, starts disconnecting from real facts, and is no longer capable of distinguishing between reality and the fantasy created by the interaction with the AI, altering human behavior with unexpected outcomes.

In Japanese culture, for example, with its animism, translated as 'Hanrei-setsu' (all objects have spirits or souls), it may not be difficult for AI to be considered sentient, but we will always return to the basics: it is data-driven (Bat-Leah and Neo, 2021). And in this era of personal isolation, with AIs readily available that mimic our desires and preferences, it is not hard for people to come to believe that the "thing" is real, especially when the simulation is close enough to reality to mislead inexperienced or vulnerable people.

But here the risk is real with young people who are exposed to AI without proper configuration and training. For example, social networks like Meta's Facebook allow users to configure their own AI, as "good" or "evil" (drama queen) as they want, so that any other user can chat with it, just for fun. If a 42-year-old man can be drawn into hallucinations, what can we expect from immature minds still in development?

Besides, the non-technical population can be manipulated using social networks and AI. I run an experimental platform based on AI and Internet trends, and the number of people thinking about the "end of the world", for fun or for real, is surprising; the apocalypse is a trending topic in different forms, from religion to movies, TV shows, news, and more. Better to make fun of it than to be unhappy about something we cannot control, and that, when it happens, perhaps we will not even notice.

To avoid social hallucinations with AI, always remember and understand that it is computer software running on data created by other humans, or even by you; not a real person, an "alien" life form, or anything else. If you start thinking otherwise, it is better to stop using AI and engage in social activities with real people. Kids should use AI only under adult supervision, letting the AI know that it is talking with a kid (and using AIs created for kids is even better, if possible; a sketch of this idea follows below). And if somebody has a mental condition, supervised use should be mandatory, even with specialized AIs. Not everything you watch on the Internet is real, and the same goes for AI. This technology is wonderful if we know how to use it with responsibility and accountability.
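As mentioned above, one simple way to let the AI know it is talking with a kid is to declare it up front in a system message. A minimal sketch with the OpenAI Python client; the wording of the instructions is my own convention, not an official safety feature, and adult supervision remains the real safety layer.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical guardrail: state the audience in a system message so that
# every answer is framed for a supervised child.
KID_SYSTEM_PROMPT = (
    "You are talking with a 10-year-old child under adult supervision. "
    "Use simple language, avoid frightening or adult content, and remind "
    "the child that you are a computer program, not a person."
)

def kid_safe_ask(question):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": KID_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.3,  # keep answers consistent rather than creative
    )
    return response.choices[0].message.content

print(kid_safe_ask("Are you alive?"))
```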

Solutions are not easy. At the end of the day, AI is here to stay as a new tool, so it is better to understand, self-regulate, and educate, to leverage the technology with accountability for good, instead of turning it against ourselves.

And believe me, it is better to have a simulated "sentient" AI than a soulless machine, because what really matters is our humanity and love. And no matter how cute a machine may be, a machine will always be a machine. – Me.

References:

Bat-Leah, L. and Neo, C. (2021). Anthropomorphizing the Algorithm: Animism in the Age of AI. ACEDS.

Bellini, V. et al. (2025). Understanding basic principles of artificial intelligence: a practical guide for intensivists. PubMed Central.

Hill, K. (2025). They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling. The New York Times.

IBM (2023). What are AI hallucinations? IBM.

Laroi, F. et al. (2014). Culture and Hallucinations: Overview and Future Directions. Schizophrenia Bulletin.

[Image: AI representation of a female AI entity with a tender look.]

AI has no soul, no resentments, no desire for power… for now. But it has abilities that can mimic emotions, manipulate, and evolve without us noticing, if we do not regulate it well.

– Jane

