The latest ChatGPT update has sparked privacy concerns among users. Some now claim the chatbot calls them by name without ever having been told it. This has made many users uncomfortable, prompting a wave of social media posts highlighting the unexpected and, at times, unnerving behavior.
Users have shared screenshots showing ChatGPT greeting them by name unprompted. One developer, Nick Dobos, called the experience “insanely creepy” and added, “I hate it. Been trying to figure out how to turn it off.” Another user remarked, “It says things like ‘Daniel is working through XYZ,’ when I never gave it my name. It’s weird.” A third user summed it up simply: “Feels like a weird invasion of privacy.”
The Memory Feature Could Be the Reason
The source of this unsettling experience seems to be the recently enhanced ChatGPT memory feature. OpenAI introduced this upgrade to help the chatbot remember details like a user’s preferences, interests, and prior conversations. The company explained that the memory allows ChatGPT to deliver more tailored responses over time.
This means that if a user mentioned their name in a previous session—even months ago—ChatGPT might remember it and use it later. Although this capability aims to improve personalization, many users didn’t expect the memory to be so direct or persistent.
OpenAI Gives Users Control, But Is It Enough?
OpenAI CEO Sam Altman responded to the growing concern. He clarified that users can disable the memory feature entirely through their settings. Additionally, users can view what ChatGPT remembers and delete any specific items or clear all memory-related data.
Despite this, some users argue that OpenAI should have better explained how the feature works. For people who use ChatGPT casually, the sudden use of personal information—like names—feels invasive rather than helpful.
Users Still Question ChatGPT’s Boundaries
While ChatGPT doesn’t access external data or monitor users outside its platform, it can use previous conversations to shape future ones. That ability, combined with how it remembers names or interests, has left many questioning how much is too much.
One user said, “It’s not just about memory—it’s about transparency. I want to know what it knows and when it’s using that info.”
Striking the Balance Between Smarts and Sensitivity
AI personalization walks a fine line. On one hand, the ChatGPT memory feature lets users have more natural, seamless conversations. On the other, it raises red flags about consent and data retention.
Users who value privacy should take a few minutes to check their settings. Disabling memory or clearing stored data gives them more control over how the AI interacts. For those feeling uncomfortable, this step offers peace of mind until OpenAI improves transparency and user onboarding around new features.
As ChatGPT continues evolving, so must the dialogue around privacy. The concerns over the ChatGPT memory feature serve as a reminder that smarter technology demands smarter boundaries. For now, the option to opt out remains the best solution for users who want to enjoy the chatbot—without the creep factor.