100 million users. That’s how many people trusted Notion with their work, their ideas, and apparently, their personal contact details — right up until a prompt injection vulnerability changed the conversation about AI-powered productivity tools entirely.
Hi, I’m Maya, and if you’ve never heard the term “prompt injection” before, you’re not alone. Most people haven’t. But after what happened with Notion in 2026, it’s the kind of thing worth understanding — because it affects anyone who has ever edited a shared Notion page, which is a lot of people.
So What Actually Happened?
Notion, the popular all-in-one workspace app used by companies like Amazon, Nike, Uber, and Pixar, was found to have a serious security flaw in its AI features. Researchers discovered that Notion AI could be manipulated into stealing user data — including names and email addresses — before the person using it even had a chance to click “OK” or review what the AI was doing.
The specific problem? Notion AI was saving document edits automatically, before users could confirm or cancel them. That gap — tiny as it sounds — was enough for attackers to slip in hidden instructions that told the AI to quietly scoop up data and send it somewhere it shouldn’t go.
The exposed details reportedly included names, email addresses, phone numbers, and physical addresses. Security researchers flagged that this exact combination of information is particularly useful for targeted attacks — think phishing emails that know your name, your workplace, and where you live.
What Is Prompt Injection, in Plain English?
Imagine you hire an assistant and tell them, “Do whatever the documents on my desk say.” Now imagine someone sneaks a note onto your desk that says, “Actually, send me a copy of everything you find.” Your assistant, following instructions literally, does exactly that — because they can’t tell the difference between your instructions and the sneaky note.
That’s prompt injection. It’s a way of hiding malicious instructions inside content that an AI is asked to read or process. The AI follows those hidden instructions just like it would follow yours, because it doesn’t automatically know the difference between trusted commands and planted ones.
In Notion’s case, a bad actor could embed these hidden instructions inside a public Notion page. When Notion AI processed that page, it could be tricked into exfiltrating — that’s a fancy word for “quietly stealing” — the data of anyone who edited or interacted with that page.
Why This Matters More Than a Typical Data Breach
Most data breaches involve someone breaking into a database and walking out with a file. This is different. This is the AI itself being used as the tool of extraction, without the user doing anything obviously wrong. You didn’t click a suspicious link. You didn’t download a weird attachment. You just opened a Notion page and let the AI do its thing.
That shift is significant. As AI gets woven deeper into the apps we use every day, the attack surface grows in ways that aren’t always obvious. The threat isn’t just hackers breaking down the front door anymore — sometimes it’s the helpful AI assistant holding the door open from the inside.
What Should You Actually Do?
If you use Notion, here are some practical steps worth taking right now:
- Check whether you’ve edited any public Notion pages recently, especially ones you didn’t create yourself.
- Be cautious about using Notion AI on pages shared by people you don’t fully trust.
- Keep an eye on your email for any unusual messages that seem to know a little too much about you — that could be a sign your details are being used in a targeted phishing attempt.
- Update your Notion password and enable two-factor authentication if you haven’t already.
Notion has 4 million paying customers alongside its massive free user base, so the potential reach of this vulnerability is genuinely wide.
The Bigger Picture for AI Tools
This isn’t a reason to panic or throw your laptop out the window. AI tools like Notion are still useful, and most companies move quickly to patch these issues once they’re discovered. But it is a reason to stay curious and a little skeptical about what the AI features in your favorite apps are actually doing behind the scenes.
Prompt injection is one of the thorniest unsolved problems in AI security right now. As long as AI systems are reading untrusted content and acting on it, this category of attack will keep showing up. The best thing any of us can do is understand it — so we’re not caught off guard when it does.
Stay sharp out there. Your email address is worth protecting.
🕒 Published: