Do we own AI outputs?

Brilliant Noise 10 January 2025

Updated: May 2026. We refresh this page regularly to keep pace with fast-moving AI platforms and policies.

When your teams feed brand data into generative AI tools, who owns what comes out? It’s a live question across boardrooms – and one the courts have started answering, case by case.

The reassuring answer? Usually you do. But only if your inputs are clean, your contracts are watertight, and a human is making the creative decisions that count.

The cases shaping the law

Several rulings since we first published this piece have started to draw the lines.

Bartz v. Anthropic (US, settled September 2025): the headline case for training-data risk. Three authors sued Anthropic over the use of their books to train its Claude model. In June 2025, Judge William Alsup ruled that training on lawfully acquired works was “exceedingly transformative” and qualified as fair use – but that downloading pirated copies was not. Anthropic settled for $1.5 billion, the largest copyright settlement in US history. The principle that should travel into every boardroom conversation: how training data was sourced matters.

Getty Images v. Stability AI (UK High Court, November 2025): the first major UK ruling on generative AI and copyright. Most of Getty’s copyright claims failed – the court held that the Stable Diffusion model itself doesn’t store the training images, so isn’t an “infringing copy” under UK law. Limited trademark infringement was found where outputs reproduced the Getty watermark. The wider significance: territoriality matters, and the underlying question of whether UK-based scraping for training would infringe remains open. The UK government is due to publish its full position on AI and copyright by March 2026.

Disney, Universal and Warner Bros. v. Midjourney (filed 2025, ongoing): the new front. Three of the “Big Five” Hollywood studios allege Midjourney’s image generator produces unauthorised copies of their copyrighted characters – Homer Simpson, Darth Vader, Shrek and others. The legal arguments aren’t novel; the weight of the plaintiffs is. The studios have the resources to take this all the way. Watch this one.

Thaler v. Perlmutter (US, March 2025): the DC Circuit confirmed that purely AI-generated works can’t be copyrighted in the United States. If no human made a creative decision, the output belongs to the public domain.

Each case turns on the same underlying question: who controlled what. Whose data went in. Who edited the outputs. Whether a human was meaningfully in the loop.

Rulebooks are catching up

Regulatory frameworks are evolving fast.

The EU AI Act (general-purpose AI obligations apply from August 2025): providers of general-purpose AI models must publish public summaries of their training data. High-risk system obligations follow from August 2026, with maximum fines reaching €35 million or 7% of global turnover for prohibited practices.

The US Copyright Office (updated guidance throughout 2025): outputs are copyright-protected only when a human makes “discernible creative choices”. Autonomously generated content doesn’t qualify.

The UK position remains unsettled following the Getty ruling. Government guidance is due in 2026 and may include a text-and-data-mining exception similar to the EU’s.

The takeaway: Human involvement is the legal anchor of any ownership claim you can make.

A four-point playbook for protecting your IP

Generative AI can scale your creative output without putting your rights at risk – but only if you treat IP like the asset it is. Four things every board should demand:

1. Scrub the inputs. No customer data. No trade secrets. No third-party content unless cleared. If you wouldn’t put it on a billboard, don’t paste it into an AI tool.

2. Contract for ownership. Define background, foreground and joint IP. Agree who owns what before the work begins – with the agency, the platform, and any contractors using AI in the chain. Get your AI vendor terms reviewed by counsel.

3. Co-create, don’t delegate. Use AI to draft, suggest or remix – but make sure human authorship is obvious and traceable. The Copyright Office and the Thaler ruling both point in the same direction: no human creativity, no copyright.

4. Audit and adapt. Review your AI usage regularly. Run IP checks on outputs that go to market. Update contracts as models, vendors and risks evolve. Diligence your vendors’ training-data practices where you can – Bartz showed how sloppily sourced data can turn into liability, even when the training itself was found to be fair use.

IP governance is a board-level responsibility. If AI is now part of your creative process, the controls need to sit at that level too – and protecting your outputs starts with how you govern your inputs.

If you spot a ruling or regulatory change that affects this guidance, tell us. We keep this page updated so it stays practical and current.

Last updated: May 2026