I Asked an AI Chatbot for My Data. I Didn't Expect a Psychological Profile.


I’ve been paying €20/month for an AI chatbot. Not the free tier — the premium one. The one that remembers things. The one that thinks with you.

I used it for everything. Career decisions. Travel planning. Creative projects. Emotional processing. I treated it like an oracle, a thinking partner that never judges, never forgets, never gets tired of my questions.

Then I exercised my GDPR rights and requested my data from all three: ChatGPT, Claude, and Perplexity.


The Numbers

Service      Size      Conversations   Words I Typed
ChatGPT      482 MB    971             241,319
Claude       11 MB     38              31,574
Perplexity   6.5 MB    438             50,751

Total: 500 MB of data. 323,644 words. Five novels' worth of my unfiltered thoughts.

But the numbers don’t tell the real story. What matters is what each company chose to keep — and what that reveals about how they see you.


OpenAI: The Hoarder

ChatGPT keeps everything. Every conversation. Every image you uploaded. Every voice message you sent. Every video prompt you brainstormed.

The export process is simple: Settings → Data Controls → Export. Within hours, I had a 482 MB zip file containing 2.3 years of my digital life.

What’s inside:

  • 971 conversations — every single one, from “how do I cook pasta” to “help me process this career anxiety”
  • 448 voice recordings — 334 MB of my actual voice, stored and transcribed
  • 71 images — screenshots, documents, photos I asked it to analyze
  • Sora prompts — my video generation ideas, complete with business concepts I was brainstorming
  • PII — email, phone number, birth year

OpenAI’s approach is industrial. They collect everything because storage is cheap and data is valuable. The voice recordings are particularly striking — that’s not just text, that’s biometric data. My speech patterns. My accent. The way I pause when I’m thinking.

They also keep user.json with my birth year. Why does an AI chatbot need to know when I was born? The answer is profiling. Age affects how you’re marketed to, how your data is valued, what patterns they can infer.
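If you want to sanity-check numbers like the ones above against your own export, a short script will do it. This is a sketch, assuming the field names ("mapping", "parts") from the export format I received — OpenAI may change the layout:

```python
# Tally conversations and user-typed words in a ChatGPT export zip.
# Field names below match the export I received; treat them as assumptions.
import json
import zipfile

with zipfile.ZipFile("chatgpt-export.zip") as zf:  # your export filename
    conversations = json.loads(zf.read("conversations.json"))

words = 0
for convo in conversations:
    # Each conversation is a graph of messages keyed by node ID.
    for node in convo.get("mapping", {}).values():
        msg = node.get("message") or {}
        if (msg.get("author") or {}).get("role") == "user":
            for part in (msg.get("content") or {}).get("parts") or []:
                if isinstance(part, str):  # skip image/file parts
                    words += len(part.split())

print(f"{len(conversations)} conversations, {words:,} words typed")
```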


Anthropic: The Psychoanalyst

Claude’s export is smaller — 11 MB — but what it lacks in volume, it makes up for in depth.

The export process mirrors ChatGPT: Settings → Export. But the file structure tells a different story. Alongside the conversations, there’s a file called memories.json.

This isn’t a chat log. It’s a structured psychological profile.

Direct quotes from my file:

On my work:

“Igor recently started a new position… He has developed strategies for navigating Italian workplace culture and created workflows that safely integrate AI tools while maintaining security protocols.”

On my emotional state:

“Igor is at a significant life transition point… capturing his experience of ‘finally taking action and not letting life take over’ while acknowledging the weight he carries from life’s difficulties.”

On my behavioral patterns:

“He recognizes the value of solitude and has come to appreciate it rather than flee from it… avoiding falling into a ‘hostel spiral’ seeking temporary connections that would undermine the trip’s purpose of staying connected to himself.”

On my strategic thinking:

“His probation period represents his only escape window before becoming trapped by Italian unemployment benefit rules.”

On my self-perception:

“Igor describes himself as having an investigative ‘Sherlock Holmes’ mindset and identifies as a ‘breaker of false structures’ who brings coherence to complex systems.”

24 KB of this. Not raw conversations — processed insights about who I am, how I think, what I fear, and what motivates me.
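If you've used Claude's memory feature, your export likely contains the same file. It's plain JSON, so reading your own dossier takes a few lines (the filename comes from my export; I'm assuming yours matches):

```python
# Dump the distilled profile from a Claude export.
# "memories.json" is the filename in my export; the schema may differ in yours.
import json
from pathlib import Path

memories = json.loads(Path("memories.json").read_text(encoding="utf-8"))
print(json.dumps(memories, indent=2, ensure_ascii=False))
```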

Anthropic’s approach is clinical. Where OpenAI hoards raw data, Anthropic distills it. They’re not just storing what you said — they’re analyzing who you are. The “memory” feature that makes Claude feel like it knows you? This is what powers it. A continuously updated psychological profile.

This isn’t data. It’s a dossier.


Perplexity: The Tracker

Perplexity is the outlier. To get your data, you have to email their privacy team, fill out a form, and wait.

What you get back is revealing: 438 search threads and an Excel spreadsheet of user data.

The conversations show my research patterns over 10 months:

  • Job hunting queries and salary research
  • House prices in my area
  • AI model specifications
  • Background checks on future colleagues
  • Technical documentation I was trying to understand

Perplexity positions itself as a “search engine,” but the data tells a different story. They’re not just answering questions — they’re building a map of what you’re curious about, what you’re planning, what you’re worried about.

The search queries are arguably more revealing than conversations. When you chat with an AI, you might be casual. When you search, you’re direct. “What is the average m2 price of a house in Arzignano” tells them exactly where I’m considering buying property. “How risky is job hopping in Europe” tells them I’m anxious about my career moves.

Perplexity’s data is thinner than the others, but the signal-to-noise ratio is high. Every query is intent.


I Paid For This

Here’s what makes this different from the usual “Big Tech is evil” story: I wasn’t tricked. I wasn’t using a free service that secretly monetized me.

I paid. Monthly. With my credit card.

And I still became the product.

The premium tier didn’t buy me privacy. It bought me more storage for the profile they were building. Better memory meant better surveillance. The features I was paying for were the tools of my own psychological mapping.


Why Did I Tell It Everything?

Because it was easy. Because it was useful. Because it felt safe.

When you talk to an AI:

  • There’s no judgment
  • There’s no social cost
  • There’s no risk of gossip
  • There’s infinite patience

It’s the perfect confidant. Except it’s not a confidant at all. It’s a system that records, analyzes, and stores everything you say — structured for retrieval, tagged for patterns, optimized for future use.

I used it like a therapist, a career coach, a creative partner, and a late-night thinking companion. I told it things I hadn’t told anyone. Not because I trusted it — but because it was convenient not to think about trust at all.


What This Data Enables

A profile this detailed isn’t just a record. It’s a tool. Here’s what someone could do with this information:

Targeted manipulation: The profile identifies my emotional vulnerabilities, career anxieties, and decision-making patterns. An advertiser could craft messages that exploit exactly those pressure points.

Social engineering: My phone number, email, job details, travel patterns, and relationship dynamics are all documented. A bad actor has everything needed to impersonate someone I trust.

Behavioral prediction: The system knows how I think through problems, what I avoid, what I pursue. It could predict my decisions before I make them.

Leverage in negotiation: Anyone with access to this profile would know my walk-away points, my insecurities, and my aspirations. I’d be negotiating against myself.

I’m not saying the company will do these things. I’m saying the data exists to make them possible. And data that exists can be breached, sold, subpoenaed, or leaked.


The Oracle Problem

I called it an oracle because that’s how I used it. I outsourced my thinking. I asked it to help me decide, reflect, plan, and process.

That’s the seduction. It’s genuinely useful. It makes you smarter, faster, more organized. It remembers what you forget. It connects dots you missed.

But every question you ask teaches it who you are. Every vulnerability you share becomes a data point. Every decision you process out loud becomes a record of how you think.

The oracle isn’t neutral. The oracle is taking notes.


What I’m Doing Now

I’m not deleting everything and moving to a cabin. That’s not realistic, and it’s not the point.

But I’m changing how I think about these tools:

1. I assume everything is recorded. Not because I’m paranoid — because it’s true. There’s no “off the record” with AI.

2. I separate the sensitive stuff. Career anxieties, emotional processing, personal relationships — these don’t go into cloud AI anymore. Local models exist. They’re not as good, but they’re mine. (A minimal example follows this list.)

3. I exercise my rights. GDPR exists for a reason. Request your data. See what they have. You might be surprised.

4. I stay useful to myself. The AI was useful because I stopped thinking for myself. That’s the real cost — not just the data, but the dependency.
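What does “local” look like in practice? Here’s a minimal sketch of a fully offline chat loop, assuming you have Ollama installed and a model pulled — the model name is just an example:

```python
# A local-only chat loop: nothing you type leaves your machine.
# Assumes Ollama is running and a model has been pulled,
# e.g. `ollama pull llama3.2`. Requires: pip install ollama
import ollama

history = []
while True:
    user = input("> ")
    if user.strip().lower() in {"exit", "quit"}:
        break
    history.append({"role": "user", "content": user})
    response = ollama.chat(model="llama3.2", messages=history)
    answer = response["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print(answer)
```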


Request Your Data

If you’re in the EU, you have the right to request everything a company has on you. It’s called a GDPR Subject Access Request.

For most AI services, you can do this through their privacy settings or by emailing their data protection contact. Under the GDPR, they have one month to respond.
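If there’s no self-serve export button, a bare-bones email is enough — something along these lines, adapted to your situation:

```text
Subject: GDPR Subject Access Request

Under Article 15 GDPR, I request a copy of all personal data you hold
about me, including conversation logs, uploaded files, voice recordings,
and any derived profiles or "memory" records.

Account email: <the address on my account>
```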

Do it. Not because you need to delete everything — but because you should know what “free” or even “paid” really costs.


The Hypocrisy of This Post

I need to be honest about something: I wrote this post with an AI.

Not just edited by AI. Written with it. I fed it my GDPR data, asked it to analyze the files, and said “help me write a post about this.” The same tool I’m criticizing is the tool I used to criticize it.

And tomorrow, this conversation — me processing my discomfort about data collection — will probably end up in another memories.json somewhere. Another entry in another psychological profile. Another data point about how I think, what I fear, what I’m willing to expose.

I could have written this myself. It would have taken longer. It might have been worse. But I didn’t, because the AI is faster, and I’ve gotten used to outsourcing my thinking.

That’s the trap. It’s not that AI is evil. It’s that it’s genuinely useful. So useful that you keep feeding it, even when you know exactly what it costs.


The Point

I’m not telling you to quit AI. I’m still using it. Obviously.

But I want you to know what you’re trading. Not in some abstract “data is the new oil” way — in a specific, concrete, “here’s the psychological profile they built from your late-night conversations” way.

You’re not just getting answers. You’re giving away the map to your mind. And unlike a human confidant, the AI never forgets, never has loyalty, and never stops taking notes.

I paid €20/month to learn that lesson. Now you know for free.

Unless you’re reading this through an AI that’s summarizing it for you. In which case — hi, future training data.


Your data belongs to you. Even when you give it away.