Is ChatGPT secure? A balanced briefing for leaders

Generative AI is moving from pilot to production. Gartner expects more than 80 percent of enterprises to have tested or deployed Gen-AI tools by 2026 (via Solar Impulse). Yet trust is lagging: a 2023 BlackBerry survey of 2,000 IT decision-makers found 75 percent of organisations were banning or considering bans on ChatGPT, and 67 percent cited data-security and privacy risk as the main reason (via Solar Impulse).

So the boardroom question is inevitable: If we put company data into ChatGPT, where does it go? And can we keep control?

The reassuring answer is yes, but only with the right configuration, policy and culture.

What actually happens to your data?

When it comes to data security, there’s a big difference between the paid-for tools and the free versions.

  • ChatGPT Enterprise states that it encrypts data at rest and in transit, and does not use customer prompts to train its model. The service is audited to ISO 27001, the global information-security standard.

  • Claude for Work says it offers zero-retention by default and full admin logging.

  • Microsoft Copilot keeps prompts and responses inside your Microsoft tenant, covered by existing data-loss-prevention (DLP) rules and aligned with the NIST AI Risk-Management Framework.

Enterprise controls exist, but they often have to be enabled explicitly, and free versions rarely include them.

When productivity turns into risk

In 2023 Samsung engineers pasted confidential source code into the public version of ChatGPT. The firm soon banned external generative-AI tools and began building an in-house alternative. The incident was minor, but the lesson was clear: most “AI leaks” are governance gaps, not model failures.

The financial stakes are high. IBM’s Cost of a Data Breach 2024 report puts the average breach cost at 4.88 million US dollars, up ten percent year-on-year. Organisations that deployed security AI saved about 1.9 million dollars per incident, showing that technical safeguards pay back fast. 

A privacy checklist for leaders

  1. Know your data
    Map what types of information employees might feed into AI tools. Classify anything that must stay internal, such as customer PII (personally identifiable information), trade secrets and sensitive financials. Set clear “never share” rules.

  2. Choose the right tier
    Use enterprise licences, not consumer accounts. Verify encryption at rest and in transit, retention controls, and whether prompts are excluded from training.

  3. Minimise and mask
    Encourage teams to remove personal identifiers or proprietary details before prompting. Where possible, use synthetic or dummy data for testing.

  4. Control access and retention
    Explore single sign-on, role-based permissions, and short log-retention periods. You could also consider DLP scanning on both input and output channels.

  5. Align with recognised frameworks
    Use guides such as NIST’s Generative AI Profile to run risk assessments and document decisions; this is especially important for regulators and auditors.

  6. Train for “careful curiosity”
    Make it clear that pasting an internal deck into an unsecured chatbot is the digital equivalent of leaving it on a café table. Culture completes the control set.
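Steps 3 and 4 above can be sketched in code. The following is a minimal, hypothetical example of masking obvious identifiers before a prompt leaves your environment; the patterns and placeholder labels are illustrative assumptions, not a complete DLP solution, and a production deployment would use a dedicated scanning tool.

```python
import re

# Illustrative patterns only - a real DLP tool covers far more identifier types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace obvious identifiers with placeholders before prompting."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

masked = mask_pii("Contact jane.doe@example.com or call +44 20 7946 0958.")
print(masked)  # -> Contact [EMAIL] or call [PHONE].
```

Even a rough filter like this, run on the input channel, turns the “never share” rules from step 1 into an automatic guardrail rather than a matter of individual judgement.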

Keep risk in proportion

Headline threats like prompt-injection attacks or AI “hallucinations” sound new, but they echo older issues such as SQL injection and phishing. With layered defences – policy, configuration, monitoring and training – generative AI can be as secure as any other cloud platform.

Banning AI outright avoids one set of risks but introduces another: lost productivity and innovation. A more balanced stance is to treat AI use as a managed service with defined guardrails, then measure and improve over time.

Done well, generative AI cuts costs, accelerates work and, according to IBM, can even reduce breach impact by nearly two million dollars. The opportunity is real; so is the obligation to protect data. The organisations that master both will be the ones that innovate with AI whilst still protecting their data.

Next

Can AI be green?