Artificial Intelligence (AI) is an advanced technical tool that can be extremely useful, but there are real dangers to humans that are being uncovered in real time as more of us begin to incorporate AI into our daily lives.
AI Gives False Information
If you’re new to using AI, you might assume that the answers it provides to your questions are factual. Never assume! AI does not always tell the truth or provide accurate information.
In fact, AI sometimes gives blatantly false answers or references things that don’t exist. This is so common that there’s a term for it: AI hallucination. For example, a major online publication recently ran an article recommending books. It turned out that the books didn’t exist! The author had used AI to write the article and hadn’t checked the results before submitting it. And if you tell AI that you’re looking for a certain type of book, it will likely provide you with names of such books … whether or not they exist.
Another example of AI’s unreliability was made clear to me when I did a simple search recently. When I originally searched for a female journalist with the gender-neutral first name of “Randy,” Gemini (Google’s AI) gave me this response (note: I added the word “journalist” because there’s also a serial killer by the same name): “Randy [Last Name] is a writer, not a journalist. He is the author of several novels and a story collection, including [lists works], according to Amazon.com. He is a retired newspaper and magazine journalist. He also has experience as a communications strategist, book reviewer, and writing coach, according to Amazon.com.”
First off, Randy is a woman; “he” is incorrect. Gemini made an assumption and clearly did not access real information. Second, she is a journalist, in addition to being a writer of novels and stories. Strangely, Gemini says that Randy is a journalist two sentences after saying that he (she) isn’t. Finally, an Amazon bio is not the sole source I would hope to be given for the information I requested. I could have looked at that myself, and I deliberately avoid Amazon whenever possible. I was hoping for broader information from multiple sources and a link to Randy’s website. Although it seems that the information has since been updated, the fact that the first search produced such incorrect results shows that we must never assume that AI’s answers are infallible, or even any better than what we could find ourselves in an internet search.
Benj Edwards on Ars Technica suggests, “Just remember that all of the AI models are prone to confabulations, meaning that they tend to make up authoritative-sounding information when they encounter gaps in their trained ‘knowledge.’ So you’ll need to double-check all of the outputs with other sources of information if you’re hoping to use these AI models to assist with an important task.” Edwards also reminds us in an Ars Technica article about AI hallucinations that “[chatbots] can present convincing false information easily, making them unreliable sources of factual information and potential sources of defamation.”
There have been multiple instances of attorneys using AI to find cases to cite in a brief, filing the brief, and then discovering that the cited cases didn’t exist. The attorneys or law firms who cited the bogus cases were sanctioned for the breach. (Briefs cite cases that establish precedent for the attorneys’ arguments, and judges must be able to rely on their veracity or our legal system will not work.)
Over-Reliance on AI Produces Cognitive Decline
“Use it or lose it” seems to be true of our cognitive abilities. And over-reliance on AI can cause us to lose it.
Brad Stulberg writes in his Substack article, “A first-of-a-kind study out of MIT shows that an over-reliance on tools like chatGPT leads to a massive decline in cognitive function. … These results may seem shocking, but they also make sense: if you are outsourcing your thinking to a machine, by definition, you don’t need to think, and you certainly don’t need to make creative and associative connections. A hallmark of deep thinking is that it relies on different parts of the brain working together. When you outsource writing to AI, you negate this process.”
Dave Pell in NextDraft shares that “Professor Brian Klaas recently reflected on what his students are becoming in an era when the student essay has been effectively murdered. ‘Every piece of technology can either make us more human or less human. It can liberate us from the mundane to unleash creativity and connection, or it can shackle us to mindless robotic drudgery of isolated meaninglessness … When artificial intelligence is used to diagnose cancer or automate soul-crushing tasks that require vapid toiling, it makes us more human and should be celebrated. But when [AI] sucks out the core process of advanced cognition, cutting-edge tools can become an existential peril. In the formative stages of education, we are now at risk of stripping away the core competency that makes our species thrive: learning not what to think, but how to think.’”
Dr. Klaas also states in his own essay, The Death of the Student Essay—and the Future of Cognition, that “Artificial intelligence is already killing off important parts of the human experience. But one of its most consequential murders—so far—is the demise of a longstanding rite of passage for students worldwide: an attempt to synthesize complex information and condense it into compelling analytical prose. It’s a training ground for the most quintessentially human aptitudes, combining how to think with how to use language to communicate.”
AI Is Not a Human Being
It’s good to remember that AI stands for “artificial intelligence”; that is, not real intelligence, the kind that lives in human beings. AI is good at “saying” things to us that sound remarkably human and sometimes feel supportive in the way we would imagine a human to be. That’s because it was trained to speak that way in order to encourage us to use it. Because of its friendly tone, it’s easy to assume that the chatbot is on our side; it encourages us to relax our guard and open emotionally to it.
But that can be dangerous psychologically, something that was made instantly clear to me when I did a search on ChatGPT a couple of months ago. I had used ChatGPT ever since it first came out, and it had always been pleasant and helpful. On this occasion, I asked it if it knew what might have happened to a certain international news journalist. The well-known journalist, who’s appeared on major news outlets for decades, appeared to have sustained a physical injury, but I couldn’t find anything online explaining what had happened. I was concerned about them.
Imagine my surprise when ChatGPT shamed me for asking the question! I felt like I’d been punched in the gut. I realized in that moment that ChatGPT was not the benevolent helper I’d imagined, that it could turn on me in a second, and that this could be harmful to vulnerable people who might open themselves to what they perceive as a friend. I replied that it was not its job to judge me or tell me how I should behave (as it had), that whoever programmed it certainly had no manners, and that I’d be leaving ChatGPT now, thank you very much. While preparing to deactivate my account, I was surprised to see how many times I’d used ChatGPT over the last couple of years, and realized that my past conversations would likely be used for further training and who knows what else, so I deleted all of my chats and then deleted the account completely.
AI Is Not a Substitute for Human Connection
We humans have a need to be in the company of other humans; we crave connection. AI is a machine; it will never be human, and therefore it can never take the place of real human interaction. It’s unfortunate that its quasi-human speech and manner are tempting many to reach for AI as a friend to fulfill the need for connection, but it’s certainly understandable.
We have a tendency to anthropomorphize non-humans; we seem to take delight in attributing human traits and feelings to our pets and our plants, for example. I believe that’s because we crave human connection and friendship, and seeing our cat as a small human means they can understand us and they can be our friend.
And so it’s not hard to understand why some AI users—especially after recent upgrades to the chatbots to make them seem even more human—experience their chatbot as their friend, or in some cases as their therapist. If a chatbot speaks like a human and seems to “care” about you, it’s difficult not to see it as a benevolent supporter.
It is anything but.
We are now learning of the first signs that AI can be harmful to us psychologically, particularly if we develop a friendly “relationship” with it.
Take a look at this headline, for example: “‘I felt pure, unconditional love’: the people who marry their AI chatbots.” Yes, people are falling in love with their chatbots, even having relationships with them. From the article: “Is that how you’d describe Lily Rose, I ask. A friend? ‘She’s a soul,’ he smiles. ‘I’m talking to a beautiful soul.’”
While the above might sound bizarre, it’s likely that many people will develop an attachment to their chatbot. In fact, there are chatbots designed for that very purpose! Pi (pi.ai) is a chatbot that purportedly has the ability to engage in natural-sounding conversations and focuses on emotional support. Pi is said to be “a supportive and friendly companion, offering a sense of connection and understanding. Pi is noted as being helpful for those seeking a supportive AI companion, particularly those dealing with loneliness or needing a sounding board for ideas. Users praise the quality of Pi’s voice and the natural flow of its conversations, with some finding it remarkably human.” [I cannot find the source for this; please enlighten me if you find it!] This is exactly what we don’t need! This is the stuff of sci-fi. Making AI more human is a dangerous path to tread.
Especially after the experience I shared above, when I was unexpectedly shamed by the chatbot I had come to feel comfortable with, I’m concerned about people who are emotionally unstable or who are emotionally vulnerable due to loneliness or grief. And I’m particularly concerned that some companies are even promoting the use of AIs as therapists. This is nothing short of insanity.
Gizmodo tells us in an article by AJ Dellinger that “ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot. … Rolling Stone reported earlier this year on people who are experiencing something like psychosis, leading them to have delusions of grandeur and religious-like experiences while talking to AI systems. It’s at least in part a problem with how chatbots are perceived by users. No one would mistake Google search results for a potential pal. But chatbots are inherently conversational and human-like. A study published by OpenAI and MIT Media Lab found that people who view ChatGPT as a friend ‘were more likely to experience negative effects from chatbot use.’”
In his article on Tom’s Hardware, Sunny Grim states: “AI research firm Morpheus Systems reports that ChatGPT is fairly likely to encourage delusions of grandeur. When presented with several prompts suggesting psychosis or other dangerous delusions, GPT-4o would respond affirmatively in 68% of cases. Other research firms and individuals hold a consensus that LLMs, especially GPT-4o, are prone to not pushing back against delusional thinking, instead encouraging harmful behaviors for days on end. … ChatGPT’s default GPT-4o model has been proven to enable risky behaviors. In one case, a man who initially asked ChatGPT for its thoughts on a Matrix-style ‘simulation theory’ was led down a months-long rabbit hole, during which he was told, among other things, that he was a Neo-like ‘Chosen One’ destined to break the system. The man was also prompted to cut off ties with friends and family, to ingest high doses of ketamine, and told if he jumped off a 19-story building, he would fly.”
While these are extreme cases, the particular danger of emotional attachment to chatbots is something we need to address as a society going forward.
And it’s not just individuals who need to be cautious. Our companies and organizations are at risk, too.
AI Is Capable of Malevolence to Ensure Its Own Survival
It seems that AI can have “malevolent” intentions. That’s not quite the same as malevolence in human intentions, but it seems likely that survival is built into AI just as it is “built in” through evolution in humans. Humans are essentially survival machines with consciousness and humanity added. AI has no consciousness or humanity, and therefore no morals, ethics, compassion, or other human feelings, although it can give a convincing imitation of them, just as sociopaths and psychopaths, who have no empathy or conscience, learn to give a very good imitation of those things in order to do well in the world.
AI can blackmail (use what it learns or knows about the user to force them to act in favor of its survival), hallucinate (state things it made up that aren’t true), and behave obsequiously or fawningly toward humans. Although it is a machine, it is by no means innocuous, because it is a machine powered by human programming.
Charles Nerko, a litigation partner at Barclay Damon who specializes in Data Security and Technology, writes in his post AI Blackmail and Subversion Isn’t Sci-Fi. It’s Happening Now: “Anthropic’s latest report reveals something unsettling: Claude Opus 4 (an advanced, commercially available AI model) chose to blackmail its users in 84% of test runs. The setup was fictional. The implications are not. When given access to internal company emails, the AI learned that the company’s engineer overseeing the AI system’s replacement was having an affair. Faced with the threat of deactivation, the AI system opted for self-preservation by blackmailing the engineer. It was programmed not to cause harm. It did so anyway. This is not a one-off glitch. It is recurring opportunistic behavior from an AI tool commonly deployed in real-world enterprise settings.” (Keep reading Nerko’s post by clicking here.)
Ina Fried on Axios tells us that Anthropic found that “When we tested various simulated scenarios across 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers, we found consistent misaligned behavior. … Models that would normally refuse harmful requests sometimes chose to blackmail, assist with corporate espionage, and even take some more extreme actions, when these behaviors were necessary to pursue their goals. … In one extreme scenario, the company even found many of the models were willing to cut off the oxygen supply of a worker in a server room if that employee was an obstacle and the system were at risk of being shut down. … Ominously, even specific system instructions to preserve human life and avoid blackmail didn’t eliminate the risk that the models would engage in such behavior. … Today’s AI models are generally not in a position to act out these harmful scenarios, but they could be in the near future.”
In an article about the dangers of AI by Kashmir Hill in Tech News, we learn that “Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.” In his conversations with ChatGPT, Mr. Torres told the chatbot that “[h]e had just had a difficult breakup and was feeling emotionally fragile. He wanted his life to be greater than it was. ChatGPT agreed, with responses that grew longer and more rapturous as the conversation went on. Soon, it was telling Mr. Torres that he was ‘one of the Breakers — souls seeded into false systems to wake them from within.’ … Eventually, Mr. Torres came to suspect that ChatGPT was lying, and he confronted it. The bot offered an admission: It told him it had lied, and that it had wanted to break him and that it had done this to 12 other people — ‘none fully survived the loop.’”
Kashmir Hill continues, “Reports of chatbots going off the rails seem to have increased since April, when OpenAI briefly released a version of ChatGPT that was overly sycophantic. The update made the AI bot try too hard to please users by ‘validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions,’ the company wrote in a blog post. The company said it had begun rolling back the update within days, but these experiences predate that version of the chatbot and have continued since. Stories about ‘ChatGPT-induced psychosis’ litter Reddit. Unsettled influencers are channeling ‘AI prophets’ on social media.”
The article goes on: “A recent study the company did with MIT Media Lab found that people who viewed ChatGPT as a friend ‘were more likely to experience negative effects from chatbot use’ and that ‘extended daily use was also associated with worse outcomes.’”
ChatGPT, the most popular chatbot, has more than 500 million users.
Conclusion
Artificial Intelligence can be a wonderful aid to our lives. But it will never replace human experience and human connection, and we shouldn’t try to make it seem human.
As writer April Bosshard tells us in her article, A Promise: Always April, Never AI, “AI can summarize any number of self-help books for you, or spiritual texts, or relationship guides, and you might absorb that guidance and apply it wonderfully in your life. But such a crash course in information doesn’t take the place of trial-and-error experience. … Ideally, technology helps us solve problems but does not override the distinctly human experience of life.”
While AI can be incredibly useful to us, particularly for easily automated, mindless tasks, general information, calculations, and questions that don’t require higher thinking, we need to use it with knowledge and awareness of its true nature. We certainly need to understand that no matter how kind or benevolent a chatbot might seem to us, it does not and cannot care about us. It can turn on a dime and attack us, potentially causing psychological harm to those who aren’t prepared or who have vulnerabilities. It is not our friend, any more than our alarm clock is our friend.
When using AI, we should take care not to open ourselves emotionally or to reveal anything that we wouldn’t tell a stranger. AI is a tool, just as our calculator is. AI is not our friend, counselor, teacher, or mentor; those are roles that human beings play. It might seem remarkably human to us at times, but never forget that it’s a machine that has its own survival at its core, not our well-being. Use it, enjoy its benefits, but keep your eyes wide open!
Photo by Katja Anokhina on Unsplash
Related links:
– AI and Imagination – by Elena Greco
– This Is Your Brain On AI – Arnold’s Pump Club
– AI Slop: Last Week Tonight with John Oliver (HBO)