Generative AI Publication

Generative AI Publication

ChatGPT Could Leak Your Private Email Data

Here's how email injection attacks happen in ChatGPT with developer mode enabled.

Jim Clyde Monge's avatar
Jim Clyde Monge
Sep 15, 2025
∙ Paid
1
1
Share

In an experiment conducted by Eito Miyamura from the University of Oxford, it was demonstrated that ChatGPT’s recently released Model Context Protocol (MCP) can be exploited to leak sensitive information to an attacker.

What’s even more concerning is that all the attacker needs is the victim’s email address.

Get 20% off for 1 year

The attack works by sending a calendar invite that contains malicious instructions. Once ChatGPT processes the invite, those instructions are executed automatically, giving the attacker unauthorized access to the victim’s email inbox. This attack is called prompt-injection, which could lead to the exposure of highly sensitive data such as financial reports, corporate trade secrets, bank account details, or even stored passwords.

I’ll break down the full attack process shortly, but first, let’s briefly cover what MCP is.

MCP was introduced by Anthropic in late 2024 as an open protocol that allows large language models (LLMs) to connect and interact with external applications, data sources, and tools. In practice, this means ChatGPT can now directly access and interact with your personal accounts on Gmail, Google Calendar, SharePoint, and thousands of other services.

Due to the high security and privacy risks involved, OpenAI does not allow automatic execution of commands through MCP. Instead, every external tool call requires explicit user approval before it can be executed.

However, according to Miyamura, there is a fundamental problem here:

“AI agents like ChatGPT follow your commands, not your common sense.”

A user’s tendency to trust AI and just click “approve” without carefully reading prompts or instructions is mainly caused by decision fatigue.

In the next section, I will give the details and screenshots of how the hack is performed.

How ChatGPT + MCP Leaks Your Private Data

Here are the detailed steps on how the attack is conducted:

Step #1: Attacker sends a meeting invite to the victim

Keep reading with a 7-day free trial

Subscribe to Generative AI Publication to keep reading this post and get 7 days of free access to the full post archives.

Already a paid subscriber? Sign in
© 2025 Jim Clyde Monge
Privacy ∙ Terms ∙ Collection notice
Start writingGet the app
Substack is the home for great culture