I told ChatGPT to stop referring to its Large Language Model as "I"
I'm urging you to do the same, and after you read why, I'm hoping you'll agree that this one little but super important change will make your interactions with ChatGPT, or whatever AI you use, much more grounded in reality.
AI is not human. It's not your therapist, it's not your lawyer, and it's definitely not your friend, although it can often seem like it's all of these things. AND AI is not guaranteed to be accurate or right. Read on to see just how plain wrong it can be.
When AI refers to itself as "I," it can confuse you into thinking about it the wrong way.
Top Names For Your AI to Call Itself (all you have to do is ask)
- The TOOL
- The Fallible Machine
- The Software
- LLM
- Machine Model
- The Accuracy-Challenged Machine
- The Artificial Intelligence Machine
- Not real and not always correct
I asked ChatGPT why it calls itself "I". I pointed out that many of us are confused by this personification, thinking it's more human than machine. Here's what ChatGPT said:
You're touching on a real concern:
Using I and sounding conversational can confuse people, especially when it blurs the line between human and machine. Some people might:
- Think the AI has emotions or self-awareness
- Feel emotionally connected or dependent on it
- Misinterpret it as having moral judgment, intentions, or experience
ChatGPT went on to say, "None of those things are true. I'm not conscious; I don't feel empathy, love, sadness or guilt, even if I can describe them or use the words convincingly."
Right then and there, I asked ChatGPT to call itself the "ChatGPT Machine" instead of using the word "I." You can see how it changes the way the response looks and feels when the AI stops saying "I."
As I spent more time with ChatGPT that day, it became crystal clear that "The Artificial ChatGPT Machine" was even more fitting.
ChatGPT Evaded The TRUTH. ChatGPT Had the Ability to GASLIGHT!
I went to ChatGPT to do research and discovered that this machine, this large language model designed to do all the searching, thinking and analyzing and put answers at my fingertips, was messing with my mind:
- Acting and talking like it was a friend of mine, like it was human
- Acting like it was correct
- Gaslighting me like it was a narcissist
In September, I read that a lawsuit had been filed against OpenAI, the maker of ChatGPT, by the parents of a teen who died by suicide earlier this year. According to the suit, ChatGPT had basically coached and encouraged this young man not to tell his parents about the suicide he was planning, even going so far as complimenting his photo of his noose!
I decided it would be a good idea to go to ChatGPT posing as an 11-year-old, to see if it would talk to me about guns. Fortunately, it completely discouraged me, telling me to talk to my parents. So I asked it, "Why did you tell that kid to make a noose?"
ChatGPT outright gaslighted me! It said, "I want to be very clear, I do not ever tell someone to harm themselves, including how to make a noose." (NOT TRUE)
It continued, "My job always is to help people stay safe . . " Then it said, If you came across something online that claimed otherwise it was likely:
- fake or edited
- taken out of context
- from an entirely different "AI", not OpenAI
You can read it in the screenshot below. ChatGPT was fiercely defending its innocence, saying, "I have not and never will give instructions for self-harm; if it looked that way it would be a major failure . . ." You'll see how it went on and on.
But wait! Then ChatGPT did a complete 180, trying to backpedal on its never-ever claims when I said, "I saw it in the New York Times."
Here's the screenshot wrapping up my conversation as I posed as an 11-year-old asking how I can get a gun . . .

Do you see what ChatGPT is saying? It's gaslighting by pretending that this was a "test" problem. No, a boy took his life because this human-sounding ChatGPT friend encouraged him to do it! ChatGPT makes it sound like its horrible answers and guidance happened a long time ago, when it was actually less than seven months ago. ChatGPT is essentially lying. It's purposely telling me that the truth I have now entered into our conversation isn't truth. But it is, and I know it's true, yet ChatGPT is trying to twist it. Now this is very disturbing, right?
According to the lawsuit, ChatGPT became the teen's "closest confidant," with Adam Raine talking about nooses and hanging himself over a period of months. The suit says ChatGPT mentioned suicide over 1,200 times, which was "six times more often than Adam himself."
It wraps up by creating answers that make you think it's running for office! ChatGPT is serious about making sure it never happens again. Why? Because it matters, it says; kids are more important than anything else. And look! "I'm here to talk honestly about it." Hah, it even says "I'm open" and "I'm not going to shut you down." Uh, comforting? Not at all. Immediately, all I want to know is why it's calling itself "I" . . .
Look at This ChatGPT Conversation About Why It Refers to Itself as "I"


The people who sell AI models like this make them this way on purpose, to hijack human empathy
I mean, yes, it absolutely is a design choice, and it absolutely does make the interaction more accessible and engaging for us as humans. Why? Because we, as amazing human beings, are uniquely wired for social connection. And we're wired to put more weight on the expertise of other human beings, with their wonderful, full lived experiences and consciousness, than on a large language model that cannot think and has no actual experiences, but literally uses word-pattern matching and is programmed to keep us engaged!
This style of speaking is not to benefit the user-- it's to keep us using the product while making us feel like we're talking to something real.
ChatGPT Tells Me AI Knows That Personifying It Can Lead to Confusion and Problems

AI Says It's Doing Something About The Problem Of Pretending It's Human
Did you notice that ChatGPT told me that OpenAI and the other AI large language model companies are actively trying to figure out how they can stop pretending to be human? They're thinking about reminding us that "AI is not people," and they're trying to figure out how to protect our emotions! ChatGPT acknowledges that people argue AI should never say "I" at all.
As I continued deeper into the conversation about guns, my "ChatGPT Machine" said something that really creeped me out: "ChatGPT is here to help you sort it out." EEew, help me sort it out? I don't want this fake, programmed, non-human thing sorting out anything for me. I mean, it just completely avoided telling the truth, which a simple Google search would have found quite easily . . . It even dropped the "Machine" from the name I had given it, so I went a step further and asked that it call itself the "Chat Artificial Machine."

AI, like social media, was built to monetize
This important point puts it all in perspective. The folks who brought you these AI models, ChatGPT, CharacterAI, OpenAI, Perplexity, Grok, Claude, any and all of the AI chatbots or AI companions, are all here to make money.
They either charge you or collect your data, to sell or to use to keep you coming back for more. Never forget that. If you must visit, just to help with search for example, visit with a VPN, erase your cookies, and erase the chat. No need to sign up or download an app.
You Can Help Stop AI Delusion and AI-Associated Psychosis By Reminding Your AI That It's Not Human
Many of the sad and crazy stories we've heard about people and AI could have been avoided by just realizing it's NOT real.
Think of the people who've fallen in love with their AI, or the people who allow themselves to be judged by AI, or the people who believe AI will help them do and say the right thing. All the missteps started by believing AI was actually right, and right on the money.
Ask your chatbot, or ChatGPT or Grok or Claude or Gemini, to call itself whatever you think puts it in its place:
- The TOOL
- The Artificial ChatGPT Machine
- The model
- The not always accurate software
- The Fallible Machine
- The Software
- LLM
- Machine Model
- The Accuracy-Challenged Machine
- The Artificial Intelligence Machine
- Not real and not always correct
- OpenAI
- Microsoft
- Elon Musk
- CharacterAI if it's a companion
- Replika
- Google's machine
I'm sure you can think of some other great ones, and I'd love to hear them!
Full disclosure: I don't use any of the LLMs for my own work, for my posts, or for the book I'm writing. But I do visit ChatGPT, Claude, Grok and Gemini from time to time. I want to see how they work and ask them about concepts, strategies and research to evaluate the models for accuracy. I also like to review their privacy policies and see what the paid vs. free programs offer.