Dr. Know

How Private Are My Private Conversations With AI Chatbots Like ChatGPT?

We invite you to ponder the concept of “automated surveillance.”

How private are my private conversations with AI chatbots like ChatGPT? Are such communications subject to subpoenas in criminal investigations? How about evidentiary discovery in civil cases? Is there a way to permanently delete them so they’re no longer discoverable? —Curious

The bad news, Curious, is that all your worst fears about chatbot-related privacy protections are about to be confirmed. The good news is that AI’s real threat to privacy is something much worse—something, in fact, that will make your money-laundering, meth-cooking, body-hiding ChatGPT inquiries seem as quaint and irrelevant as demerits in middle school gym class.

For the record, conversations with ChatGPT are just like emails, text messages or (for you freaks) actual letters: They’re nominally private, but—as you suspected—they’re fully vulnerable to subpoenas, discovery orders, or anything else signed by a judge.

Deleting them is a more complex question. Until recently, your ability to destroy the record of a ChatGPT conversation depended on your subscription level: The free version kept everything, the basic paid version let you delete content manually, and spendier versions could be configured not to store user conversations at all. (Even then, the company warned users against trusting it with sensitive data.)

A recent court order related to The New York Times' lawsuit against OpenAI has muddied the waters, however. OpenAI is now required to keep copies of all ChatGPT conversations hosted on its servers, even those deleted by users. This was done in the name of preserving evidence, not policing content, but you can't blame folks with chat titles like "Foolproof Embezzling Schemes" for being a little nervous.

Still, let’s not forget that none of this is new. Google has your entire search history, Siri has a map of every place you’ve been—we’ve all been generating massive data trails for years. If anything, the sheer size of those trails has kept us safer. Who could possibly have the time to sift through it all?

Who indeed? I invite you to ponder the concept of “automated surveillance.” Briefly, the problem is not that someone might come along and subpoena our secret cache of celebrity AI deepfakes, but that AI itself may soon have the bandwidth to connect the dots in our publicly available data, revealing anything from our breakfast cereal preferences to our potential for sedition. Almost makes you wonder if the ability to create limitless Steve Buscemi tentacle porn is even worth it—but nah, that’s crazy talk.


Questions? Send them to dr.know@wweek.com.

Marty Smith

Marty Smith is the brains (or lack thereof) behind Dr. Know and skirts the fine line between “cultural commentator” and “bum” on a daily basis. He may not have lived in Portland his whole life, but he’s lived in Portland your whole life, so don't get lippy. Send your questions to dr.know@wweek.com and find him on Twitter at @martysmithxxx.
