The DeepSeek mania proves it's finally — finally! — time to talk about AI privacy

DeepSeek whale logo in the style of the TikTok logo.
(Image credit: DeepSeek / Rael Hornby)

It's been precisely one week since a ChatGPT rival surprised the AI world.

DeepSeek, a China-based company, unleashed its R1 model and is suddenly at the center of the AI world's attention — for good reason.

On top of being seemingly as sophisticated as OpenAI's o1 model, DeepSeek's R1 is free. Yes, that's right... zero dollars.

That proposition, coupled with the fact that it was seemingly developed for a small fraction of the cost of other splashy LLMs, has sent ripples throughout the tech world, especially from the financial side.

In just one day, Nvidia saw nearly $600 billion knocked off its market cap, the largest single-day loss in stock market history — that's less of a ripple and more of a tidal wave.

And while all of that is impressive, DeepSeek might be having another unintentional impact that's just as big, and it's got almost nothing to do with the stock market.

A TikTok problem

While DeepSeek's ability and price have dominated the conversation, there's another throughline that's decidedly less flattering for the company — privacy.

One of the main differences between DeepSeek and the rest of the LLM field right now is that it's not American; more specifically, it's based in China.

For obvious TikTok-related reasons, that's a bad thing for an app in America to be right now. The idea of an app that hoovers up your data and sends it to a server in China is problematic for many reasons.

One is that privacy — whether for an American app or a foreign one — is essential. The more your personal data is collected, the higher the risk is to your digital safety.

DeepSeek AI chatbot on a phone

DeepSeek sent OpenAI and Microsoft scrambling for answers (Image credit: Getty Images)

The second is that there's little difference between a private company and the government in China, which raises the question of exactly what user data could be used for.

That argument is for another day. Still, it doesn't take much extrapolation to see how a potentially adversarial government could use massive data troves for purposes that aren't in Americans' interest.

And in many ways, none of that is surprising. Of course, a non-American company coming in and wiping hundreds of billions of dollars off the value of a titan like Nvidia will make some waves, and, of course, that will dredge up skepticism.

But what I find more interesting isn't what DeepSeek says about Chinese AI companies; it's what it says about AI broadly.

A moment for AI privacy

It didn't take long for DeepSeek's R1 to find controversy.

Just a day after its ascension into the public conversation, Bloomberg and the Financial Times reported that Microsoft and OpenAI are investigating whether DeepSeek used less-than-scrupulous methods to train R1.

Bloomberg reports:

“Microsoft Corp. and OpenAI are investigating whether data output from OpenAI’s technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence startup DeepSeek, according to people familiar with the matter.”

Naturally, OpenAI seems less than enthusiastic about the idea that DeepSeek copied the company's homework to build its R1 model. As many have pointed out, that objection is blatant hypocrisy.

OpenAI notoriously trains its models on copyrighted material and other people's intellectual property. It is currently locked in a legal battle with The New York Times over the use of the paper's content to train ChatGPT.

That's not even addressing privacy concerns with whatever data OpenAI stores about inputs from its users or anyone who signs up to use its platform.

Granted, there's a distinction between data scraped by a private company in the U.S. and one in China, where the lines between private and public entities are incredibly blurry. Still, all of that is almost irrelevant, because DeepSeek, whether it means to or not, is making us talk about AI privacy.

A chance to set things straight

Every new digital platform has some privacy pitfalls. Consider the rise of social media, voice assistants, and e-commerce.

However, the difference between those platforms and the AI platforms being built before our eyes is that we now have years of history to draw from. That means, at least theoretically, things could be different.

I say theoretically because history shows that you have to fight to protect your privacy. For example, voice assistants like Alexa, Siri, and Google Assistant didn't make human review of recordings opt-in until scandals emerged over programs that inadvertently captured sensitive conversations.

After whistleblowers leaked details of those programs, tech giants quickly rolled them back or at least gave users the option to opt out. We aren't there yet with AI, but moments like this one bring us closer to putting privacy on the map.

However, we'll likely have to learn the hard way. On Thursday, security researchers found that DeepSeek was storing more than a million log lines in an unsecured database accessible without authentication. That information could theoretically give bad actors access to DeepSeek's internal systems, which is not great.

But if it could happen to DeepSeek, it could happen to anyone. Therefore, it's more apparent than ever that the time to have an AI privacy conversation is now.

Whether that happens is anyone's guess, but when all of our data ends up in the hands of someone who doesn't have users' well-being in mind, we'll at least know that we should have seen it coming.

James Pero
Senior News Editor

James is Senior News Editor for Laptop Mag. He previously covered technology at Inverse and Input. He's written about everything from AI to phones to electric mobility, and he likes to make unlistenable rock music with GarageBand in his downtime. Outside of work, you can find him roving New York City on a never-ending quest to find the cheapest dive bar.