ChatGPT remembers you, and I'm not sure that's a good thing
Hello again, old friend
Update (04/30/24): In an updated blog post, OpenAI have announced that the new Memory feature of ChatGPT is now live for all paying ChatGPT Plus users outside of Europe and Korea.
The new Memory feature has seen some minor tweaks since its testing stages, with the chatbot now making it clearer when information is stored by displaying a "Memory updates" label at the top of responses.
Clicking this new label will also allow you to quickly manage saved memories, choosing which to keep and which to make OpenAI's LLM 'forget.' You can even give ChatGPT the Men in Black Neuralyzer treatment if you wish, clearing ChatGPT's entire memory with a single button to start anew with your AI assistant.
ChatGPT makers OpenAI have announced that they're testing a new feature for the AI chatbot that allows it to remember things mentioned in past conversations. The chatbot is poised to start banking memories in two different ways: passively and by instruction.
The latter method is, as it sounds, instigated by the user. Think, "Hey ChatGPT, from now on, explain everything to me like I'm 5." This primes the AI to keep that instruction in mind whenever you ask a question, ensuring it talks down to you with as many monosyllables as it can.
Passive memories will see ChatGPT absorb information into its knowledge bank intuitively. For example, you may mention the name of your partner or children, speak of a preferred hobby, or divulge information relating to future travel plans. ChatGPT will pick up on these details and stash them away for future recall and contextual replies.
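To make that distinction concrete, here's a minimal, purely hypothetical sketch of what a "passive memory" layer sitting between you and a chatbot could look like. Nothing here reflects OpenAI's actual implementation; the `extract_facts` helper, the patterns it scans for, and the in-memory store are all illustrative assumptions.

```python
# A toy, purely hypothetical sketch of "passive memory" in a chatbot.
# This is NOT OpenAI's implementation; every name here is invented
# for illustration only.
import re

MEMORY_STORE: list[str] = []  # stands in for per-user, server-side storage


def extract_facts(message: str) -> list[str]:
    """Naively spot personal details a chatbot might quietly remember."""
    patterns = [
        r"my (?:partner|wife|husband|son|daughter) is (\w+)",
        r"i(?:'m| am) travell?ing to [\w ]+",
        r"my (?:hobby|favourite hobby) is [\w ]+",
    ]
    return [
        match.group(0).strip()
        for pattern in patterns
        for match in re.finditer(pattern, message, re.IGNORECASE)
    ]


def handle_message(message: str) -> None:
    # Passive mode: stash details away without ever being asked to.
    MEMORY_STORE.extend(extract_facts(message))


handle_message("My partner is Alex, and I'm traveling to Lisbon in June.")
print(MEMORY_STORE)
# ['My partner is Alex', "I'm traveling to Lisbon in June"]
```

The instructed mode would be the same pipeline, only triggered by an explicit "remember this" request rather than by pattern-spotting, and that difference is exactly why the passive variety is the more unsettling of the two.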
On the surface, this seems like a completely obvious progression as companies seek to develop AI agents that can mimic human engagement to ever closer degrees. OpenAI sees this as a way to improve the user experience, and no doubt foster a more personalized environment for ChatGPT users where the AI is a more familiar presence — unlike its current presentation as a blank slate, disconnected from every other interaction.
And that's a problem
There's a somewhat unexplored issue with AI that few of us properly address: trust. The anthropomorphizing of AI algorithms into Large Language Models (LLMs) like ChatGPT has been fascinating to see, and even more impressive to experience. But, at times, more than a little jarring.
Chatbots like OpenAI's have captivated users, often blurring the lines between software and "someone." LLMs are so good at interacting via the medium of human communication that some people treat them as such. Myself included.
When I talk to ChatGPT, for whatever reason, I say please and thank you. Yet I don't offer the same pleasantries to my Google Assistant, nor do I thank my oven after I remove a cooked meal from it. Somehow, LLMs garner far more trust from users by default and even some level of acknowledgement as something more than a "thing."
This could be a result of generations of people having shifted a large portion of their socializing to text interfaces: SMS, instant messengers, forums, chatrooms, and social media. It's far easier to comprehend that a real person could be at the end of every message, especially when it doesn't come across like it was written by the Horse eBooks Twitter/X account.
Algorithms aren't people
It would seem that this digital communication adoption has primed us to register anything human-like in the digital space as "human-enough" to at least give the benefit of the doubt. It's similar to how many people end up responding to those spam messages from attractive women in their DMs, or blindly clicking links in dodgy phishing emails. But with LLMs like ChatGPT, that plausibility is multiplied tenfold.
This isn't something constrained to the easily misled and gullible, either; just last year, a Google engineer claimed that the company's LaMDA AI was sentient. Even Oxford philosopher Nick Bostrom believes AI is taking steps towards sentience.
There are people in my life right now with whom 80-90% of my interactions have been text-only, a few of whom I'd even call close friends. These are people I may have revealed personal information to, told secrets to, and shared honest opinions with, all while comfortable and confident that those conversations are kept safe by the unwritten contract of a friendship. It would be easy for many to fall into the trap of embracing AI as something similar.
But AI isn't your friend. It can't be. Not in its current form, at least. Not by any metric. No matter how convincing or emergent a chatbot might seem, it's an algorithm and a tool, nothing more. Something more and more people are willing to overlook as they seek out digital waifus, programmed psychologists, generative boyfriends, and even my own quest for a chatbot sidekick.
The reason people mustn't overlook such a blatant and obvious truth is simple. What may have started off as a tool to promote natural language interactions and provide helpful answers and information could very easily be converted into a tool used to gather information about you and provide helpful answers to others.
Today AI is a tool, but tomorrow, that tool could be used to harvest incredible amounts of your personal data.
You can't trust an algorithm, nor the people that created it
Whether you think that the "Alt" in OpenAI CEO Sam Altman's name stands for altruism or not, the fact that ChatGPT will soon be able to record slices of information about you intuitively bears a striking resemblance to how companies like Facebook collect data on their users. The difference is that you're far more likely to over-share with a friendly, human-like chatbot in a supposedly private space than you are in the public comment section of a Facebook page.
Add to this AI's unique ability to glean personal information about you simply from what you type and how you type it, and you have one of the most valuable marketing tools ever conceived. Bigger still, you have a fount of information about you sitting on OpenAI's servers, which have already been subject to credential theft resulting in breached user data, and have also seen ChatGPT regurgitate sensitive information from other people's chats.
In the digital age, data equals dollars, and maintaining an AI like ChatGPT is a costly business. That's not to say OpenAI are guaranteed to eventually cash in on this swell of user data. They may well continue in the manner they have been, though hopefully with improved security measures.
However, ChatGPT isn't the only AI chatbot out there, nor are OpenAI the only ones creating them. Other chatbots make absolutely no bones about their desire to profit from your data and time as they inject ads or recycle conversations for training material. What's to stop them building profiles of users to pass on to marketing companies and earning a few more bucks along the way?
As one astute internet user once put it: "If you are not paying for it, you're not the customer; you're the product being sold."
Outlook
OpenAI have made it clear that you can always ask their chatbot to "forget" information, with much of the stored data manageable from ChatGPT's settings page. You can also make use of a Temporary Chat for an incognito-like fix of your AI assistant, without risk of it making a note of some of your more candid details along the way.
The company has also stated that it will not use information from these chats to train its future models, and that the feature can be disabled entirely from the aforementioned ChatGPT settings panel.
However, is this the start of a trend that will be adopted by other chatbots, and will those companies offer the same level of privacy-supporting settings? That remains to be seen. But if we aren't careful, we could end up selling ourselves down the river just because we let a chatbot do the Googling for us.
More from Laptop Mag
- What is AI? LLMs, GPTs, and chatbots explained
- AI is boring — How to jailbreak ChatGPT
- No, AI isn't coming for your job — it's coming for your salary
Rael Hornby, potentially influenced by far too many LucasArts titles at an early age, once thought he’d grow up to be a mighty pirate. However, after several interventions with close friends and family members, you’re now much more likely to see his name attached to the bylines of tech articles. While not maintaining a double life as an aspiring writer by day and indie game dev by night, you’ll find him sat in a corner somewhere muttering to himself about microtransactions or hunting down promising indie games on Twitter.