Do you actually read the policy updates from the software tools your team uses every day? If you’re like most people, the answer is no. And that’s exactly how a quiet but significant change from Atlassian — the company behind Jira and Confluence — is flying under the radar for millions of users right now.
What Atlassian Is Doing
Starting August 17, 2026, Atlassian will automatically collect data from its cloud products — including Jira and Confluence — to train its AI models. We’re talking about the tickets your team files, the meeting notes your colleagues write, the project documentation your company has built up over years. All of it potentially feeding into Atlassian’s AI system, called Rovo.
The key word here is “automatically.” Atlassian isn’t asking for your permission upfront. Instead, it’s flipping the default switch to “on” and leaving it up to you to turn it off — if you even can.
The Opt-Out Window (And Why It Matters)
Between now and May 19, 2026, Atlassian is gradually rolling out new settings inside Atlassian Administration. That’s your window to find the controls and make a choice before the August 17 deadline. If you do nothing, data collection begins automatically on that date.
But here’s where it gets more complicated. Not everyone gets the same opt-out options. If your organization is on a Free or Standard plan, you’re opted in by default — and your ability to change that is limited. Users on higher-tier paid plans have more control. So in a very real sense, your privacy options depend on how much your company is paying Atlassian each month.
What Kind of Data Are We Talking About?
Atlassian has described the collection as covering “customer metadata and in-app content.” For tools like Jira and Confluence, that could include:
- Issue titles and descriptions
- Comments and status updates on tickets
- Project names and workflow structures
- Confluence pages, notes, and documentation
For many companies, this isn’t just boring admin data. Jira and Confluence often hold sensitive internal information — product roadmaps, client project details, internal strategy discussions, bug reports tied to unreleased features. The kind of stuff most organizations would prefer to keep internal.
Why Is Atlassian Doing This?
The honest answer is that training AI models requires enormous amounts of real-world data, and Atlassian has access to some of the richest workplace data on the planet. Millions of teams use Jira and Confluence daily. That data is genuinely valuable for building smarter AI features — and Rovo, Atlassian’s AI assistant, is clearly the product they’re betting on.
From a business perspective, this move makes sense. Every major software company is racing to build AI features that feel useful and natural. To do that well, you need data that reflects how real teams actually work. Atlassian has that data. Using it to improve their own AI is a logical step.
That doesn’t mean users have to be comfortable with it.
What Should You Actually Do?
If you’re a regular user — not an admin — your first move is to flag this to whoever manages your company’s Atlassian account. They’re the ones with access to the settings that matter here.
If you are an admin, log in to Atlassian Administration as the new settings roll out (they’re due by May 19, 2026) and look for the data contribution controls. Review what your current plan allows you to change, and make a deliberate choice rather than letting the default decide for you.
If your organization handles sensitive client data, legal documents, or anything regulated by privacy laws, this is worth a conversation with your legal or compliance team. Some regions have data protection rules that may affect how this policy applies to you — and Atlassian has acknowledged that legal requirements can override their default collection in certain cases.
The Bigger Picture for AI Users
This situation with Atlassian is a clear example of something that’s becoming more common across the software tools we all use. AI features don’t appear from nowhere — they’re built on data, and increasingly, that data is yours. The companies building these tools are finding ways to use what they already have access to, often by making data sharing the default rather than the exception.
That’s not automatically a bad thing. Better AI features can genuinely make work easier. But the shift toward opt-out rather than opt-in puts the responsibility on users to stay informed and take action — which most people simply don’t do.
So consider this your nudge. Check your settings. Talk to your admin. Make the choice yourself, before August 17 makes it for you.