Is ChatGPT secure? What about Claude?
Updated: May 2026. We refresh this page regularly to keep pace with fast-moving AI platforms and policies.
The boardroom question has moved on
Two stories, eight months apart, frame the new reality.
In August 2025, the acting director of America’s Cybersecurity and Infrastructure Security Agency uploaded sensitive government documents – marked “For Official Use Only” – to the public version of ChatGPT. CISA’s automated systems flagged the upload immediately, triggering a Department of Homeland Security investigation. The head of America’s cyber agency.
Eight months earlier, in December 2024, Italy’s data protection authority – the Garante – had fined OpenAI EUR15 million for GDPR violations linked to ChatGPT. It has since fined Replika’s parent company EUR5 million and opened an investigation into DeepSeek, continuing a pattern of high-profile enforcement against AI chatbot providers. European regulators have teeth, and they’re using them. UK organisations should assume the ICO is watching closely.
Both stories tell us something important. The major platforms themselves are now broadly secure. The risk has migrated – to how people use them, what they’re connected to, and how organisations are governing the parts they can’t see.
So the boardroom question has shifted. It’s no longer “is ChatGPT safe?”. It’s “how do we govern AI use across our organisation – including the parts we don’t directly approve?”
The platforms aren’t the main problem any more
There’s a real difference between consumer accounts and enterprise tiers. The major providers all offer enterprise-grade controls now:
- ChatGPT Enterprise and Team offer admin-controlled data retention, encryption at rest and in transit, ISO 27001 certification, and don’t use customer prompts for training.
- Claude for Work similarly excludes customer data from training by default, with configurable retention, admin audit logging, and SOC 2 compliance.
- Microsoft 365 Copilot keeps prompts and responses inside your Microsoft tenant, covered by your existing data-loss-prevention policies.
- Gemini for Workspace offers comparable enterprise controls within Google’s tenant model.
The technical security of the major platforms isn’t where most organisations are getting hurt. The risk has moved.
Where the real risk now sits in 2026
According to IBM’s 2025 Cost of a Data Breach report, 97% of organisations that experienced an AI-related security incident lacked proper AI access controls, and 63% had no AI governance policy at all. Three patterns dominate.
Shadow AI
The biggest single risk is the gap between the platform you’ve approved and the platforms your team actually uses.
Research from LayerX in 2025 found that around 18% of enterprise users regularly paste data into AI tools, and over half of those events include corporate information – often via personal accounts on consumer tiers that have neither the retention controls nor the audit logs of the enterprise versions. IBM found shadow AI was a factor in 20% of breaches, adding around $670,000 to the average cost.
This is the CISA story repeated tens of thousands of times a day across organisations that haven’t approved AI use, have banned it ineffectively, or simply haven’t made the rules clear.
Supply chain and integrations
Even when your team is using approved tools properly, data moves through more places than you might think.
In November 2025, OpenAI confirmed a security incident at its analytics provider Mixpanel, exposing names, email addresses and user IDs of API platform users. OpenAI’s own systems weren’t breached – but its users’ data still was. European observers flagged potential GDPR data-minimisation concerns with the analytics data Mixpanel had been collecting in the first place.
The Salesloft-Drift breach in August 2025 was bigger. Attackers compromised an AI chatbot embedded in over 700 organisations’ sales stacks – including Cloudflare, Palo Alto Networks and Zscaler – then used the chatbot’s legitimate integrations to exfiltrate Salesforce data. The chatbot itself was the vector.
And in March 2026, McKinsey’s internal chatbot Lilli was found to have 22 unauthenticated endpoints, exposing more than 700,000 private files and 46 million chat logs. The detail that should give every leader pause: an AI security agent identified the target and breached it in two hours, for $20 of API credits.
Agents
As AI tools graduate from chatbots to agents – with permissions to read mail, browse, and act on your behalf – the attack surface expands materially. We’ll explore this in more depth in our forthcoming piece on AI agents. For now, the principle is simple: every permission you grant an agent is a permission an attacker could exploit if the agent is compromised.
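To make that principle concrete, here is a minimal, framework-agnostic sketch of least privilege for agents: an explicit allow-list of tools and scopes, with everything else refused by default. The tool names, scopes and the request_action helper are illustrative assumptions, not any vendor’s actual API.

```python
# Minimal sketch of least-privilege tool scoping for an AI agent.
# Illustrative only: tool names, scopes and request_action() are assumptions,
# not any particular agent framework's real interface.

from dataclasses import dataclass

# Explicit allow-list: each tool the agent may call, with the narrowest
# scope that still supports the workflow.
ALLOWED_TOOLS = {
    "calendar": {"read"},            # can check availability, not create events
    "crm_contacts": {"read"},        # can look up contacts, not edit or export them
    "email_draft": {"write_draft"},  # can prepare drafts, never send
}

@dataclass
class AgentAction:
    tool: str
    scope: str
    argument: str

def authorise(action: AgentAction) -> bool:
    """Return True only if the tool and scope are explicitly allow-listed."""
    return action.scope in ALLOWED_TOOLS.get(action.tool, set())

def request_action(action: AgentAction) -> str:
    if not authorise(action):
        # Denials are the interesting events: log and review them.
        return f"DENIED: {action.tool}/{action.scope} is not in the allow-list"
    return f"ALLOWED: {action.tool}/{action.scope} on {action.argument!r}"

if __name__ == "__main__":
    print(request_action(AgentAction("calendar", "read", "next week")))
    print(request_action(AgentAction("email_draft", "send", "all customers")))  # refused
```

The design choice matters more than the code: permissions the agent never holds are permissions an attacker can never abuse.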
A 2026 governance checklist for leaders
The principles haven’t changed much. The priorities have.
Know your data. Map what types of information employees might feed into AI tools. Classify what must stay internal – customer PII, trade secrets, sensitive financials – and set clear “never share” rules.
Choose the right tier. Use enterprise licences, not consumer accounts. Verify encryption at rest and in transit, retention controls, and whether prompts are excluded from training.
Address shadow AI head-on. Banning consumer ChatGPT or Claude doesn’t eliminate the demand – it eliminates your visibility. Provide approved alternatives, monitor browser-level usage where appropriate, and make the rules unambiguous.
Vet your integrations and agents. Every chatbot integration, plugin or agent expands your data perimeter. Review them with the same care you’d give any third-party access – token scoping, IP allow-listing where possible, and proper deprovisioning when contracts end.
Control access and retention. Single sign-on, role-based permissions, short retention windows, and DLP scanning on both inputs and outputs (a minimal example of input scanning follows this checklist).
Align with the EU AI Act. The penalty regime activated on 2 August 2025; high-risk system obligations apply from 2 August 2026. Maximum fines reach EUR35 million or 7% of global turnover for prohibited practices. UK organisations operating in the EU are in scope.
Train for “careful curiosity.” Pasting an internal deck into an unsecured chatbot is the digital equivalent of leaving it on a cafe table. Culture completes the control set.
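On the DLP point specifically, below is a minimal sketch of scanning inputs before they leave your perimeter. It is illustrative only: the patterns and the send_to_approved_ai stub are assumptions, and in practice this logic lives in your existing DLP or secure-gateway tooling rather than a handful of regexes in application code.

```python
# Minimal sketch of DLP-style scanning on AI inputs before they are sent.
# The patterns and send_to_approved_ai() stub are illustrative assumptions;
# a real deployment would reuse your organisation's classification rules.

import re

# Illustrative "never share" patterns: card numbers, UK NI numbers, and an
# internal classification marking.
BLOCK_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "classification": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked data types found in the prompt."""
    return [name for name, pattern in BLOCK_PATTERNS.items() if pattern.search(prompt)]

def send_to_approved_ai(prompt: str) -> str:
    findings = scan_prompt(prompt)
    if findings:
        # Block and explain rather than silently redact, so users learn the rules.
        return f"BLOCKED before sending: found {', '.join(findings)}"
    return "SENT: prompt passed input scanning"  # stub for the real API call

if __name__ == "__main__":
    print(send_to_approved_ai("Summarise this INTERNAL ONLY board deck for me"))
    print(send_to_approved_ai("Draft a polite reply to this public enquiry"))
```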
Keep risk in proportion
The picture isn’t as bleak as the headlines suggest. IBM’s 2025 report found the average cost of a data breach actually fell 9% globally to $4.44 million – the first decline in five years – driven largely by AI-powered defences identifying and containing incidents faster. Organisations using AI extensively in their security stacks saved nearly $1.9 million per breach on average.
So AI is now part of the threat and part of the defence. Banning it avoids one set of risks but introduces another – lost productivity, lost capability, and shadow AI growing in the dark.
A more useful stance is to treat AI use as a managed service with defined guardrails, then measure and improve. The organisations that get this right won’t just avoid headlines. They’ll build the kind of trusted AI capability that compounds over time.
If you spot a change in the platforms or the regulatory landscape that affects this guidance, tell us. We keep this page updated so it stays practical and current.
Last updated: May 2026