Can AI be green?

Updated: 3 September 2025. We refresh this page regularly to keep pace with fast-moving AI platforms and policies.

AI’s electricity appetite is rising fast.

In North America, data-centre demand roughly doubled between late 2022 and late 2023 to about 5 gigawatts (MIT News, 2024). Think of that as roughly five million 1 kW microwaves running at once. Training the original GPT-3 used about 1,287 MWh of electricity and emitted about 552 tonnes of CO₂; that 1,287 MWh is roughly the annual electricity use of about 120 US homes. Those emissions would have been around 85 percent lower if the run had been done on a hydro-dominant grid such as Canada’s (Solar Impulse Foundation, 2025).
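A quick back-of-envelope check of those figures. This is a sketch under stated assumptions: the ~10.7 MWh per-home figure is an assumed US average, and the 60 g CO₂/kWh hydro-grid intensity is illustrative, not a figure from the sources above.

```python
# Back-of-envelope check of the GPT-3 training figures quoted above.
training_mwh = 1287
training_tonnes_co2 = 552
mwh_per_home_per_year = 10.7  # assumed average US home usage

homes_equivalent = training_mwh / mwh_per_home_per_year
grid_intensity_g_per_kwh = training_tonnes_co2 * 1e6 / (training_mwh * 1000)

print(f"~{homes_equivalent:.0f} homes for a year")       # ~120 homes
print(f"~{grid_intensity_g_per_kwh:.0f} g CO2 per kWh")  # ~429 g/kWh

# On a hydro-dominant grid at an assumed ~60 g CO2/kWh, the same run:
hydro_tonnes = training_mwh * 1000 * 60 / 1e6
reduction = 1 - hydro_tonnes / training_tonnes_co2
print(f"~{reduction:.0%} lower emissions")               # ~86% lower
```

The arithmetic lines up with the sources quoted above: the implied grid intensity of the original run was roughly 430 g CO₂/kWh, versus well under 100 g on hydro-dominant grids.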

If the sector stays on its current trajectory, analysts warn global data-centre use could double again by 2030.

The scale is real, but so are the levers

Until recently, you could pick a smaller or “lite” model for routine jobs. Many consumer interfaces, ChatGPT among them, now default to a single flagship model, so the lever has shifted from which model you pick to how you use it. In enterprise set-ups you can still choose regions, retention and deployment options, which are meaningful controls.

You can’t move an LLM’s servers. And you don’t control the local grid mix. You can, however, control usage, vendor choice and timing.

AI providers don’t all operate the same way. The carbon intensity of an AI workload varies with where it runs and how the facility is designed. For example, Google’s data centre in Finland reports that it uses 97% carbon-free energy and supplies excess heat to the local network. The same job on a coal-heavy grid can emit many times more. The environmental impact of AI is not only about what you use. It’s also about where and how it happens.

Green prompting: cut tokens, cost and carbon

Small behaviour changes add up. Treat tokens like a budget.

Green prompting is about getting more from less. Every token (or query) you send or generate consumes compute, so keeping prompts lean and outputs right-sized cuts energy use and speeds up results. This isn’t austerity for its own sake; it’s about clarity. Give the model the minimum it needs to do good work, reuse context instead of rewriting it, and constrain formats so the answer is exactly as long as it needs to be. At scale, these habits reduce thousands of unnecessary words the model would otherwise churn through for no added value.

Green prompting principles:

  • Keep prompts concise; avoid repeating long context; stay in one chat as long as possible so context is retained

  • Cap output length, ask for bullets or tables when you only need a summary

  • Batch related asks in one well-structured prompt

  • Reuse and refine previous outputs rather than starting from scratch

  • Use text-only when you do not need images, vision or file parsing

These habits reduce compute, speed up work and lower energy use without sacrificing quality.
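One way to make “treat tokens like a budget” concrete is a lightweight guardrail before a prompt is sent. This is a minimal sketch: the 4-characters-per-token ratio is a rough heuristic for English text, and `MAX_PROMPT_TOKENS` is an assumed house rule, not a provider limit.

```python
# Rough token-budget guardrail for prompts (a sketch, not a provider API).
CHARS_PER_TOKEN = 4        # heuristic: ~4 characters per token in English
MAX_PROMPT_TOKENS = 500    # assumed house rule for routine tasks

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def check_budget(prompt: str, context: str = "") -> dict:
    """Flag prompts that exceed the house token budget before sending."""
    total = estimate_tokens(prompt)
    if context:
        total += estimate_tokens(context)
    return {"tokens": total, "within_budget": total <= MAX_PROMPT_TOKENS}

report = check_budget("Summarise the attached minutes as five bullets.")
print(report)
```

A real deployment would use the provider’s own tokenizer for accurate counts; the point is that a budget check is cheap to add and makes prompt length visible to the team.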

Make the workload more efficient

Scheduling and reuse matter as much as prompting.

Once your prompts are tidy, the next gains come from how you run the work. Treat AI jobs like any other operational workload: batch similar tasks, avoid regenerating what you can store and reuse, and prefer retrieval over re-writing when facts are stable. If your tool exposes a setting for reasoning effort, choose the lowest level that still meets the brief. These small operational choices compound into shorter run times, lower latency for users and a meaningful reduction in energy and cost.

Efficient AI workflows:

  • Batch heavy runs and schedule non-urgent jobs in off-peak windows

  • Cache stable answers and retrieved facts so you do not regenerate them

  • Use retrieval and embeddings for look-ups, not full regeneration every time

  • If your tool exposes a reasoning setting, choose the lowest level that still meets the brief
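Caching stable answers can be as simple as keying on a normalised prompt. A minimal sketch follows; the in-memory cache and `fake_model` stand-in are illustrative placeholders for whatever model call and cache store your stack actually uses.

```python
import hashlib

# Simple prompt-keyed cache so stable answers are not regenerated (sketch).
_cache: dict = {}

def cache_key(prompt: str) -> str:
    """Normalise whitespace and case, then hash, so trivial edits still hit."""
    normalised = " ".join(prompt.lower().split())
    return hashlib.sha256(normalised.encode()).hexdigest()

def cached_answer(prompt: str, answer_fn) -> str:
    """Return a cached answer if present; otherwise call the model once."""
    key = cache_key(prompt)
    if key not in _cache:
        _cache[key] = answer_fn(prompt)  # the only compute-heavy step
    return _cache[key]

calls = 0
def fake_model(prompt: str) -> str:
    """Placeholder for a real model call; counts how often it runs."""
    global calls
    calls += 1
    return f"answer to: {prompt}"

cached_answer("What is our PUE target?", fake_model)
cached_answer("what is our  PUE target?", fake_model)  # hits the cache
print(calls)  # 1 model call served both requests
```

For facts that change, add an expiry to each entry; the design choice is simply that a cache lookup costs almost nothing compared with a full generation.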

Pick transparent partners

You can’t control a provider’s grid mix, but you can choose vendors who make their impact visible and give you controls that matter. When you’re deciding which AI provider to use, look for details about their energy mix, power and water usage, and for evidence of heat reuse or other mitigation. In enterprise settings, insist on clear options for data retention, regional deployment and auditability. Publishing these metrics is not just good PR; it shows the provider is measuring what counts and improving it year on year.

Find out these details, or ask your AI provider for the basics, and make them part of procurement:

  • Carbon-free energy share (sometimes shown as CFE percent)

  • Power usage effectiveness (PUE)

  • Water use and any heat-reuse scheme

  • Data retention options and regional deployment choices for enterprise use

Prefer vendors who publish these metrics and show year-on-year progress.
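Once a vendor discloses PUE and CFE, you can roughly compare where a job runs. A sketch under simplifying assumptions: the carbon-free share is applied as a linear discount, and the grid intensities and PUE values are illustrative, not any provider’s official figures.

```python
# Rough comparison of deployment regions using disclosed PUE and CFE (sketch).
def job_emissions_g(it_energy_kwh: float, pue: float,
                    grid_g_per_kwh: float, cfe_share: float) -> float:
    """Facility energy = IT energy x PUE; the CFE share discounts grid carbon."""
    facility_kwh = it_energy_kwh * pue
    return facility_kwh * grid_g_per_kwh * (1 - cfe_share)

# The same 100 kWh job in two hypothetical regions:
clean = job_emissions_g(100, pue=1.1, grid_g_per_kwh=100, cfe_share=0.97)
coal_heavy = job_emissions_g(100, pue=1.6, grid_g_per_kwh=700, cfe_share=0.30)

print(f"clean region:      {clean:.0f} g CO2")       # 330 g
print(f"coal-heavy region: {coal_heavy:.0f} g CO2")  # 78400 g
```

Even with rough inputs, the gap is two orders of magnitude, which is why regional deployment choices belong in procurement conversations.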

Three questions to ask before your next AI task or project

Use these questions as a pre-flight check. They help teams decide how much compute the job genuinely needs, which vendor settings or disclosures to consider, and whether simple scheduling and caching will spare cycles without hurting quality. Build them into briefs, templates and retrospectives so they become muscle memory. Over time, you will see cleaner prompts, fewer re-runs and clearer procurement conversations – exactly the behaviours that reduce both cost and carbon.

  1. How many tokens do we really need for this job?
    Set house rules for prompt length and output caps. Reuse context. Treat tokens like a cost and an impact.

  2. What does our vendor disclose and what controls do we have?
    Ask for CFE percent, PUE and water metrics. In enterprise, use regional deployments and retention controls.

  3. Can we schedule, cache or route this workload?
    Batch tasks, plan non-urgent jobs for lower-carbon windows, and cache stable results.

Intent and impact

Net zero claims ring hollow if AI growth cancels the savings. You can’t pick the grid for consumer tools, but you can control usage, vendor transparency and timing. Useful, lower-carbon AI is possible when efficiency is designed in from the start and when you choose partners who publish the numbers.

If you spot a change in platform options that affects this guidance, tell us. We keep this page updated so it stays practical and current.

