The editor was already committed to developing a data center in his home for its privacy benefits when it occurred to him that a small but powerful device like a Mac mini, which runs on house current and needs no cooling system, might have a smaller carbon footprint than submitting the same queries to a powerful large model in the cloud. These CO2 savings sounded like the energy savings from installing solar panels on a house. So the editor’s first question was, “What is ChatGPT’s carbon footprint per query?”
The editor’s previous experience had led him to believe that Anthropic’s Claude was the best at reasoning, Google’s Gemini was still the best at search, and OpenAI’s ChatGPT fell somewhere in between. So he employed a “mixture of chatbots” for his investigation.
B1. ChatGPT’s cloud hallucinations
When the editor posed the question to ChatGPT two months ago, it produced what seemed to be credible estimates … until the editor checked the sources of those estimates and found that the chatbot had made up the numbers. So he set the question aside for a while.
B2. Claude’s cloud estimates
A few days ago, the editor posed the question to Claude and received the following three estimates plus sources.
- "What is the CO2 emission per ChatGPT query?", Smartly.AI, 6/7/24 ➡ 4.32 grams per query
- "What's the carbon footprint of using ChatGPT?", Hannah Ritchie, Sustainability By Numbers, 5/5/25 ➡ 2 to 3 grams per query.
- "Ask an AI: We asked ChatGPT about its carbon footprint", Anders Lorenzen, A Greener Life, A Greener World, 6/27/25 ➡ 1.07 grams per query
Claude concluded that ChatGPT’s footprint ranged from 2 to 4 grams of CO2 per query. In other words, Claude dropped the lowest and most recent estimate. But the editor noted that the highest estimate had been made a year ago. OpenAI’s cloud operation had probably become more efficient during that year, which made the editor more inclined to drop the earlier, highest estimate. Furthermore, he had never heard of any of these sources. So he posed the question to Gemini.
B3. Gemini’s cloud estimates
- "The Carbon Footprint of a ChatGPT Query: Less Than a Cup of Coffee", Abdellaoui Mehdi, Medium, 4/12/25 ➡ 1.5 to 4.3 grams per query.
- "What's the carbon footprint of using ChatGPT?", Hannah Ritchie, Sustainability By Numbers, 5/5/25 ➡ 2 to 3 grams per query.
- "Ask an AI: We asked ChatGPT about its carbon footprint", Anders Lorenzen, A Greener Life, A Greener World, 6/27/25 ➡ 1.07 grams per query
All three estimates were recent, and the last two also appeared on Claude's list. The editor was troubled by the estimate from Medium, the first on Gemini's list: the upper end of its range was as large as the highest estimate on Claude's list, the one made a year earlier. Therefore the editor decided to exclude this source.
Having dismissed the highest estimate because it was too high, the editor then dismissed the third source on Gemini's list because it was the lowest of all the estimates on both lists. As a consequence, his working estimate became the middle-range estimate that appeared on both lists ➡ 2 to 3 grams per query.
C. Carbon footprint of Apple’s Mac Mini with M4 chip
Local AI (Mac Mini M4 setup):
Running at 33-65 watts for inference ➡ roughly 0.03-0.07 grams of CO2 per query (a worked estimate follows below)
-- "Energy Efficiency of M4 Mini", Waloshin, MacRumors, 11/09/24
D. Range of estimated CO2 benefits
Cloud / local = (lowest cloud / highest local) up to (highest cloud / lowest local) = 2/0.07 up to 3/0.03 = 200/7 up to 300/3 ≈ 28 up to 100. In other words, the cloud's CO2 emissions per query are roughly 28 to 100 times the emissions of a local Mac Mini.
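The same comparison in a few lines of Python, using the working estimates established above:

    # Ratio of cloud to local CO2 per query, using the working estimates above.
    CLOUD_G = (2.0, 3.0)     # grams CO2 per query, cloud (section B working estimate)
    LOCAL_G = (0.03, 0.07)   # grams CO2 per query, local Mac Mini M4 (section C)

    ratio_low = CLOUD_G[0] / LOCAL_G[1]    # lowest cloud / highest local
    ratio_high = CLOUD_G[1] / LOCAL_G[0]   # highest cloud / lowest local
    print(f"cloud emissions are {ratio_low:.1f}x to {ratio_high:.0f}x the local emissions")
    # prints: cloud emissions are 28.6x to 100x the local emissions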

- No Data Center Overhead: Data centers now consume 4.4% of all US electricity, with a carbon intensity 48% higher than the US grid average
-- "We did the math on AI’s energy footprint. Here’s the story you haven’t heard.", James O'Donnellarchive pageCasey Crownhartarchive page, Tech Review, 5/20/25
- Dedicated Efficiency: Mac Mini M4 idles at just 3 watts - comparable to a Raspberry Pi - versus massive server farms running 24/7
-- "M4 Mac mini's efficiency is incredible", Jeff Geerling, 11/12/24
- No Network Infrastructure: Zero energy for data transmission, routing, and internet backbone
- Optimized Hardware: Apple Silicon’s unified memory architecture is particularly efficient for AI inference
-- "DeepSeek-V3 on M4 Mac: Blazing Fast Inference on Apple Silicon", DigiAlps
F. iPhone vs Mac Mini
An iPhone is held back by:
- Battery life (can’t run larger models continuously)
- Thermal limits (gets hot, throttles performance)
- Storage (limited space for larger models)
- Expensive bleeding edge chips
Meanwhile, your Mac Mini:
- Runs 24/7 on wall power
- Can use larger, more capable models
- Can serve your whole household (or a staff, if installed in an office)
- Can be upgraded as needed, e.g., by adding external solid state storage (SSD)
- Costs less than a top-tier iPhone
Your comments will be greatly appreciated ... Or just click the "Like" button above the comments section if you enjoyed this blog note.