Can AI be green?
Updated: 27th October 2025. We refresh this page regularly to keep pace with fast-moving AI platforms and policies.
The cost of intelligence: why AI’s footprint matters
AI is no longer niche. It shapes the tools we use every day — from chatbots and creative assistants to logistics systems and boardroom decision models.
That ubiquity brings growing scrutiny. People are increasingly aware that AI systems run on vast infrastructure: data centres, cooling systems, and thousands of GPUs consuming electricity around the clock.
This awareness sits within a wider public anxiety about climate change and resource use.
Regulators, academics, and NGOs are calling for AI companies to publish transparent data on energy consumption, carbon emissions, and water use.
The narrative is complex:
On one hand, AI is seen as energy-hungry — a potential climate liability.
On the other, it’s also viewed as a tool for decarbonisation — helping optimise power grids, transport systems and energy-intensive industries.
If the sector stays on its current trajectory, analysts warn global data-centre electricity demand could double again by 2030.
For businesses, the reputational risk is real. The organisations that design, train or deploy AI models are increasingly judged on how responsibly they do it.
The numbers are huge, but they don’t have to stay that way
In North America, data-centre power demand roughly doubled between late 2022 and late 2023 to about 5 gigawatts, roughly five million microwaves running at once. Training the original GPT-3 used around 1,287 MWh of electricity, about the annual consumption of 120 homes, and emitted an estimated 552 tonnes of CO₂.
But there is hope in how AI models are designed and trained. One of the most promising advances so far, 4-bit training, shows that smarter AI doesn't have to mean more carbon emissions.
Meet 4-bit training: The efficiency breakthrough that could change everything
Every time an AI model learns, it performs vast numbers of mathematical calculations. These have traditionally been carried out at 16- or 32-bit precision, with the newest systems cutting that to 8 bits; precision defines how finely the model stores and manipulates numbers.
NVIDIA, the company behind most of the world's AI chips, recently announced NVFP4, a method that makes it possible to train large models using only 4 bits of precision: half the data for each number compared with 8-bit formats. That may sound trivial, but in computing every bit counts.
Lowering precision means:
Faster training (more calculations per second).
Lower memory use (less data to store and move).
Lower energy use (less time and power per training run).
Until recently, this approach was too unstable: rounding errors would build up and derail training. NVIDIA's team addressed that with new scaling and rounding techniques, and reports that massive language models can now train with almost no accuracy loss (a difference of just 0.04 % on reasoning benchmarks).
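To make low-precision storage concrete, here is a toy sketch in Python. It stores values as simple 4-bit integers with one shared scale per block; the real NVFP4 format is a 4-bit floating-point type with finer-grained scaling, so treat this as an illustration of the idea, not NVIDIA's method.

```python
# Toy 4-bit quantisation: store values as small integers plus a shared
# scale factor. Illustrative only -- not NVIDIA's NVFP4 format.
import numpy as np

def quantise_4bit(block: np.ndarray) -> tuple[np.ndarray, float]:
    """Map a block of floats to signed 4-bit codes (-8..7) plus one scale."""
    scale = float(np.max(np.abs(block))) / 7.0
    scale = scale if scale > 0 else 1.0
    codes = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
    return codes, scale

def dequantise(codes: np.ndarray, scale: float) -> np.ndarray:
    return codes.astype(np.float32) * scale

weights = np.random.randn(16).astype(np.float32)
codes, scale = quantise_4bit(weights)
restored = dequantise(codes, scale)

# Each code needs only 4 bits (real hardware packs two per byte), an 8x
# saving over 32-bit floats; the price is the rounding error shown here.
print("max rounding error:", float(np.max(np.abs(weights - restored))))
```

Low-bit training makes this trade at enormous scale: far less data to store and move per number, at the cost of rounding error that the new techniques keep in check.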
The result is extraordinary:
Up to 2–3× faster training.
Half the energy and carbon cost for the same capability.
This doesn’t make AI carbon-neutral overnight, but it could mark a turning point. Smarter model design can now decouple progress from emissions, shrinking AI’s footprint at the source.
What are the biggest names in AI doing to cut their carbon?
4-bit training is one example of a wider movement towards sustainable AI engineering.
Across the industry:
OpenAI is experimenting with model distillation — training smaller networks to perform nearly as well as their much larger parents.
Google DeepMind is using neural architecture search to design energy-efficient model structures.
Meta is deploying post-training quantisation — compressing trained models for faster, lighter inference.
Microsoft is reportedly investing heavily in renewable-powered data centres and liquid cooling systems.
This is all good news if we're to flatten the curve of AI's energy growth. The crucial shift is that progress no longer has to come with a proportional increase in power and emissions.
What you can do right now
These innovations are encouraging, but most of us can't redesign a chip or relocate a data centre. What you can influence is the impact of your own AI use: through usage habits, vendor choice and timing.
The carbon intensity of an AI workload depends on where it runs and how the data centre is designed. For instance, Google’s facility in Finland runs on 97 % carbon-free energy and supplies its excess heat to the local district network. The same workload on a coal-heavy grid would emit many times more CO₂.
So the question isn’t just what you use — it’s where and how it happens.
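A back-of-the-envelope comparison makes the point. The grid intensities and job size below are illustrative assumptions, not measured figures:

```python
# The same AI job emits very different amounts of CO2 depending on
# where it runs. All numbers here are assumed for illustration.
JOB_ENERGY_KWH = 50.0  # hypothetical energy used by one workload

# Approximate grid carbon intensities in grams of CO2 per kWh (assumed).
GRID_INTENSITY = {
    "mostly carbon-free grid": 30,
    "average mixed grid": 300,
    "coal-heavy grid": 800,
}

for grid, g_per_kwh in GRID_INTENSITY.items():
    kg_co2 = JOB_ENERGY_KWH * g_per_kwh / 1000
    print(f"{grid:>25}: {kg_co2:6.1f} kg CO2")
```

On these assumed figures, the identical workload emits more than twenty-five times as much CO₂ on the coal-heavy grid as on the carbon-free one.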
What is green prompting? How to cut tokens, cost and carbon
Small behavioural changes add up. Think of tokens — the units of text an AI reads or writes — as your energy budget. Every token costs compute, so keeping prompts short and outputs right-sized saves power and time.
Green prompting principles
Keep prompts concise and reuse context rather than restating it.
Stay in one chat so the model remembers prior context.
Limit output length; ask for bullets or tables where possible.
Batch related requests together.
Reuse previous outputs rather than starting from scratch.
Use text-only when you don’t need image or file analysis.
These small optimisations reduce compute load without hurting quality.
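If you want to make the token budget concrete, here is a minimal sketch using tiktoken, OpenAI's open-source tokenizer. The 500-token cap is an example house rule for a team, not a platform limit:

```python
# Check your "energy budget" before sending a prompt.
import tiktoken

PROMPT_BUDGET = 500  # illustrative team cap, in tokens

enc = tiktoken.get_encoding("cl100k_base")

def check_prompt(prompt: str) -> int:
    """Count tokens and warn when a prompt exceeds the agreed budget."""
    n_tokens = len(enc.encode(prompt))
    if n_tokens > PROMPT_BUDGET:
        print(f"Warning: {n_tokens} tokens exceeds the {PROMPT_BUDGET}-token budget.")
    return n_tokens

check_prompt("Summarise the attached report in five bullet points.")
```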
How to design efficient AI workflows
Once prompts are tidy, look at how you run the work. Treat AI jobs like any other workload: batch, schedule, cache.
Efficient AI workflows:
Batch heavy runs and schedule non-urgent jobs during off-peak energy windows.
Cache stable answers and retrieved data instead of regenerating them.
Use retrieval methods for look-ups rather than full re-generation.
If your tool has a reasoning depth or effort setting, pick the lowest level that meets the brief.
Operational discipline translates directly into faster performance, lower costs and a smaller footprint.
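As a sketch of the caching idea: the function below serves repeated questions from a local store, so identical requests never trigger a second generation. `call_model` is a stand-in for whatever client your stack actually uses:

```python
# Minimal response cache: pay for compute only on a cache miss.
import hashlib

_cache: dict[str, str] = {}

def call_model(prompt: str) -> str:
    """Placeholder for a real API call -- assumed, not a specific SDK."""
    return f"<model answer to: {prompt}>"

def cached_generate(prompt: str) -> str:
    # Normalise the prompt so trivial differences still hit the cache.
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # compute happens only here
    return _cache[key]

cached_generate("What is our refund policy?")  # hits the model
cached_generate("What is our refund policy?")  # served from cache, zero compute
```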
Work with AI providers who show their numbers
You can’t always choose where a model runs — but you can choose who runs it.
When comparing vendors, ask for transparency on these metrics (and here’s what they mean):
Carbon-Free Energy Share (CFE %) — the percentage of electricity a data centre sources from carbon-free power such as wind, solar or hydro. The higher the better.
Power Usage Effectiveness (PUE) — a measure of data-centre efficiency. A perfect score is 1.0 (all power goes to computing, none to cooling). Most modern facilities score between 1.1 and 1.3.
Water use and heat-reuse schemes — look for cooling systems that recycle or minimise water and capture waste heat for nearby buildings.
Model efficiency techniques — ask whether the provider uses low-precision methods like 4-bit training or quantisation to cut compute per generation. These can dramatically reduce energy intensity.
Prefer vendors who publish these metrics and show steady year-on-year progress. Transparency signals genuine accountability.
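To see why these metrics matter together, here is some rough arithmetic. It assumes total site energy is IT energy multiplied by PUE, and that only the non-carbon-free share of that energy carries grid emissions; all figures are illustrative:

```python
# How PUE and CFE % combine to set the carbon cost of the same workload.
IT_ENERGY_KWH = 1000.0   # energy used by the servers themselves (assumed)
GRID_G_PER_KWH = 400.0   # assumed grid carbon intensity, gCO2 per kWh

def emissions_kg(pue: float, cfe_share: float) -> float:
    site_energy = IT_ENERGY_KWH * pue            # servers plus cooling overhead
    fossil_energy = site_energy * (1 - cfe_share)  # share not carbon-free
    return fossil_energy * GRID_G_PER_KWH / 1000

print(emissions_kg(pue=1.6, cfe_share=0.30))  # older facility, low CFE
print(emissions_kg(pue=1.1, cfe_share=0.97))  # efficient, mostly carbon-free
```

On these assumptions the efficient, mostly carbon-free facility emits a small fraction of the older one's CO₂ for exactly the same computing work.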
Ask these questions before you hit “generate”
How many tokens do we really need?
Set rules for prompt length and output caps. Reuse context wherever possible.
What does our vendor disclose — and what controls do we have?
Ask about CFE %, PUE, water metrics and model-efficiency techniques.
Can we schedule, cache or route this workload better?
Batch tasks, plan non-urgent jobs for low-carbon windows, and reuse results instead of rerunning.
Building these into your team’s workflow turns sustainability from a principle into a habit.
The next advantage is low-carbon intelligence
AI’s growth curve once looked incompatible with climate goals. But that picture is shifting.
Engineering breakthroughs like 4-bit training show that intelligence doesn’t have to mean intensity — and that efficiency can now scale as fast as capability.
For users and organisations, the same rule applies: choose partners who measure what matters, build literacy around impact, and design efficiency in from the start.
Useful, lower-carbon AI is not a contradiction — it’s the next competitive advantage.
If you spot a change in platform options that affects this guidance, tell us. We keep this page updated so it stays practical and current.