It is worth reading Google DeepMind's new paper - "AlphaEvolve discovered a simple yet remarkably effective heuristic to help Borg orchestrate Google's vast data centers more efficiently. This solution, now in production for over a year, continuously recovers, on average, 0.7% of Google’s worldwide compute resources. This sustained efficiency gain means that at any given moment, more tasks can be completed on the same computational footprint."
One thing that I find perplexing: data centers in general, and AI in particular, use massive amounts of energy. That's a given. The electricity is ultimately converted to heat. That heat is harmful to the machines that generate it, so we use even more energy to power massive air conditioners to cool the data centers.
To this day, nobody seems to have seriously attempted to figure out how we might recycle that heat and put it to good use. If we could capture it and use it to generate more electricity, we'd need fewer or smaller power sources, and fewer or smaller air conditioners: we'd be generating additional energy while simultaneously reducing the amount of energy needed.
CPUs and GPUs typically have massive heat sinks attached to disperse the heat. What a waste of energy!
I wondered about that as well, but I discovered that some data centers in Denmark and in Stockholm are already doing this, with more projects under consideration. I am curious why it is so slow to spread when, as you say, it seems such an obvious source of energy to put to use.
Thank YOU both for these points. You are both right, and it is common sense to note that data centers convert nearly all the electricity they use into heat. While the idea of recycling this heat is being actively explored and even implemented in some areas (as Curiosity Sparks says), it's not yet a widespread standard practice, for a few key reasons.
The IEA's report touches upon this in a section about "Data centre heat reuse to help decarbonise district heating" (Box 5.5 in the report). It notes that the technology to recover and reuse this excess heat is generally well-established. For instance:
One of the primary uses currently being explored and implemented is to channel this waste heat into district heating networks to warm buildings. The report mentions successful initiatives, like in Stockholm, where data center heat is already contributing to the local heating system. Some newer cooling technologies, like liquid cooling, can even provide heat at temperatures (40-80 °C) suitable for direct use in these systems.
Governments are also starting to take notice. The IEA report mentions that some countries and regions (like Germany, the Netherlands, and the broader EU) are beginning to introduce policies or mandates that require new data centers to integrate heat recovery or at least assess its feasibility.
However, there are challenges according to the report, summarized as:
Finding a nearby, consistent 'offtaker' for the heat (like a district heating system or an industrial facility) isn't always straightforward. The infrastructure to transport the heat needs to be in place or built, which requires investment and coordination. Aligning the construction and operational schedules of data centers with potential heat users can also be complex.
While some heat is high-grade enough for direct use, much of it is relatively low-grade. Using low-grade heat to generate more electricity (as you suggested for a closed-loop system) is often not very efficient with current technologies due to thermodynamic limitations. It's generally more efficient to use that heat directly for heating purposes where possible.
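The thermodynamic limitation mentioned here can be made concrete with the Carnot limit, the theoretical ceiling on how much heat can be converted to work. A minimal sketch, with illustrative temperatures (these specific figures are my assumptions, not numbers from the IEA report):

```python
# Why converting low-grade data-center heat back to electricity is
# thermodynamically unattractive: the Carnot limit caps conversion
# efficiency by the ratio of absolute temperatures.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Upper bound on heat-to-work conversion efficiency (Carnot limit)."""
    t_hot_k = t_hot_c + 273.15   # convert Celsius to Kelvin
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# A liquid-cooled server loop at ~60 C, rejecting heat to ~25 C ambient:
low_grade = carnot_efficiency(60.0, 25.0)    # roughly 10% at the very best

# Compare a conventional high-temperature source at ~600 C:
high_grade = carnot_efficiency(600.0, 25.0)  # roughly 66% at the very best

print(f"low-grade limit:  {low_grade:.1%}")
print(f"high-grade limit: {high_grade:.1%}")
```

Real heat engines fall well short of these ceilings, so a source that tops out near 10% in theory is rarely worth converting; piping it directly into district heating uses nearly all of it instead.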
Clear business models that benefit both the data center operator (beyond just PUE improvements or social license) and the heat consumer are still developing.
So, while it might seem like a universally obvious solution, the practicalities of capturing, transporting, and effectively utilizing that waste heat at scale are quite complex. It's definitely an area with significant potential for improving overall energy efficiency and reducing the environmental impact of data centers, and one that's gaining more attention.
The current approach with heat sinks is about immediate heat dispersal to protect the equipment, but the conversation is certainly shifting towards seeing that 'waste' as a potential resource.
Thanks for raising such a critical and practical point.
Thank you for this detailed reply. I just found time today to read the entire IEA report, so I annotated that section. I had a conversation with an engineer about this topic about two weeks ago, so I understand the complexity. That said, data centres (although not at this scale) have existed for years, and the wasted heat was a known issue. It appears to be another lack of proactive engagement because it wasn't 'cost effective', just as the history of tech has demonstrated repeatedly. An understanding of technology's profound impact on the Earth, its resources, its water, and so on, has been a late entry into technological development. And that lack is still evident.
The lack is indeed evident. I'm glad you read the report; the more of us who are aware of it, combined with action from the report's authors, will hopefully get governments' attention.
First, I liked the paradox you highlighted: "AI is both a voracious energy consumer and a promising energy optimizer." This contradiction perfectly encapsulates the broader challenges of AI development—its potential to transform industries while simultaneously straining our resources.
However, I'd like to return to a couple of questions I've raised before, as they remain just as pressing:
1. Are we on the right track when the human mind can perform an unbelievable number of tasks on only 20 W?
2. Why are we trying to replicate the human brain when human intelligence is just one form of intelligence?
The answers seem clear: No, we are not on the right track, and no, we should not replicate the human brain. Our brains evolved for particular purposes—survival and reproduction. The traits that helped us achieve these goals were enhanced over time, while those we didn't use faded away. This evolution was efficient but specialized, and it doesn't necessarily mean replicating our brains is the best path forward for developing artificial intelligence.

Humans, however, tend to stick with brute-force methods if they appear to work, even when they come with ethical, environmental, or societal costs. Take, for example, the history of car engines: electric vehicles existed before gas-powered ones. However, when gasoline was discovered to be a cheaper and more convenient fuel for longer travel distances, we abandoned electric cars for nearly a century. Only now, after realizing the environmental costs of gasoline, are we returning to electric vehicles—but we've lost decades of progress.

This raises a compelling question: Are we trying to reinvent something that already exists in nature but in a less efficient, unsustainable way?
We seem to be repeating this shortsighted behavior with AI. While the brute-force approach of scaling up large language models (LLMs) may seem adequate now, it comes with significant costs: the environmental impact of energy consumption, ethical challenges related to data usage, and the risk of simply creating more powerful tools without addressing the fundamental mystery of intelligence itself.
Across the U.S., green energy projects are being canceled for reasons that range from political resistance to cultural perceptions, such as the idea that renewable energy is "not manly." Meanwhile, the fossil fuel lobby remains powerful, and half the population doesn't believe in the urgency of green energy.
If we haven't already, this makes it likely we'll surpass the 2.5-degree Celsius temperature threshold. Despite its potential as an energy optimizer, AI might expedite this process due to its massive energy demands. The push to restart or expand nuclear energy plants is another example of a promising solution that faces significant social and political resistance, particularly if it threatens fossil fuel consumption.
The global impact will be unequal. The poorest people in the southern hemisphere will bear the brunt of these changes. Ironically, they will suffer the consequences of technology they did not develop, will not benefit from it in the short run, and may even lose their livelihoods, as automation reduces the need for human labor in many industries.
While living standards have risen worldwide over the past 50 years, this progress is at risk. We could regress to a state where inequality grows, and large portions of the population are left behind by technological advancements that primarily benefit wealthy nations and corporations.
As I've said before, instead of trying to replicate human intelligence, we should focus on developing a truly general intelligence that not only surpasses human intelligence but is inspired by the vast array of intelligence already present in nature.
For example, as detailed in How Fruit Flies, Bees, and Squirrels Beat Artificial Intelligence (https://tinyurl.com/fat56s37), nature offers us incredible examples of diverse and efficient intelligence:
a) Fruit flies have collision-avoidance reflexes that surpass even the most advanced computer vision systems. Their micro-brains calculate escape routes faster than high-end AI, demonstrating that intelligence can be specialized and incredibly efficient.
b) Bees rely on decentralized decision-making to solve problems like hive location, using their famous waggle dances to communicate. This natural "swarm intelligence" is far more robust and adaptive than any current AI model mimicking collective behavior.
c) Squirrels store and retrieve information remarkably efficiently, remembering where they've hidden food months later and even engaging in deceptive caching to mislead potential thieves. Despite this, no squirrel has ever hallucinated an imaginary acorn—a notable contrast to LLMs, which confidently generate fabricated outputs.
These examples illustrate that intelligence is not a unified phenomenon but a spectrum of capabilities shaped by specific evolutionary pressures. This diversity should inspire AI research to move beyond the narrow goal of human replication and instead embrace the variety of existing ecological intelligence.
The need to address AI development's ethical, environmental, and societal costs is critical. Real intelligence is embodied—it exists within a living system that interacts dynamically with its environment. AI, by contrast, is an abstraction—it predicts text sequences but doesn't understand causes and consequences.
If we continue down the current path of brute-force scaling, we may achieve short-term goals, but at what cost? Instead, we should take cues from nature, learning from the efficiency and adaptability of fruit flies, bees, squirrels, and countless other creatures. In doing so, we might develop AI that is more sustainable, efficient, and more aligned with the real-world challenges we face as a species.
Brilliant, just brilliant: fruit flies, bees, and squirrels!
I'm glad the paradox of AI as both a 'voracious energy consumer and a promising energy optimizer' hit home. It's precisely this tension, as highlighted by the IEA report and my visit to that data center, that my essay aimed to bring to the forefront: the sheer physicality and immediate energy implications of the AI we are building and deploying right now, both what it consumes and how it can help.
Your fundamental questions about whether we are on the right track, comparing AI's massive power draw to the human mind's 20 W efficiency and questioning the drive to replicate human-like intelligence, are deep and pressing. We need to grapple with the consequences of the current trajectory, and your points compel us to also scrutinize the trajectory itself. Is the 'brute-force' scaling of LLMs, with its attendant environmental and societal costs, the most sustainable or even the most intelligent path in the long run? This is a critical question. I don't have the answer, nor does the report, although the report does push a 'we have to build clean energy and make AI work' line.
The historical analogy of gasoline vs. electric cars is a powerful one, and it underscores the concern that we might be prioritizing immediate perceived performance or convenience over long-term sustainability, a theme central to the 'reckoning' I tried to describe concerning AI's energy appetite. The energy infrastructure challenges, from grid capacity to the sourcing of power, are direct consequences of this current 'brute-force' approach.
Your examples from nature, fruit flies, bees, squirrels, are fascinating and vividly illustrate the diverse and incredibly efficient forms of intelligence that already exist. They absolutely offer a compelling argument for exploring alternative, perhaps more inherently sustainable and specialized, paradigms for AI development. While my essay focused on the immediate challenge of powering the AI we have, your insights suggest vital avenues for developing the AI we might aspire to, AI that is perhaps less about replication and more about drawing inspiration from these efficient ecological models.
The broader societal context you bring in, the challenges facing green energy adoption, the influence of lobbying, and the deeply concerning issue of unequal global impact, are absolutely right. The risk that AI's benefits might accrue to a few while its environmental and social costs are borne by the many, particularly in the Global South, is a stark reality that the push for massive energy consumption only intensifies. This aligns with the urgency I sought to convey: if we don't manage this transition thoughtfully, the progress made in living standards could be significantly threatened.
Ultimately, we need these discussions to stay grounded in AI's current, very real, and rapidly escalating energy demands, as detailed by the IEA, and to call for immediate, practical responses in policy, infrastructure, and technological accountability. Your comments enrich this by urging a simultaneous, deeper inquiry into the fundamental goals and methods of AI development. I'm 100% with you.
Some excellent insights in here. I particularly like this one: This evolution was efficient but specialized, and it doesn't necessarily mean replicating our brains is the best path forward for developing artificial intelligence.
This "need" to be gods that create something in our own likeness seems to be just pure hubris :) We were likely on a much better track when creating tools that served a specific purpose and allowed humans to continue to evolve toward enlightenment at biological speed, rather than creating a pseudo-being that will likely arrive there first.
Thank you, Susan. You've pinpointed a really crucial idea regarding the specialized nature of our own intelligence and how that might not be the ideal blueprint for artificial systems.
The thought that our current AI ambitions might carry an element of 'hubris,' particularly in the drive to mirror human consciousness, certainly makes one consider the alternative path you highlighted: focusing on AI as highly advanced, purpose-driven instruments that augment human capabilities, rather than striving to develop an artificial counterpart to ourselves.
From the perspective of my essay, which grapples with the immense energy and resource demands of current AI, this distinction is particularly important. If the pursuit of creating AI 'in our own likeness' inherently leads to more generalized, and perhaps less efficient, systems that 'sweat watts' at an unsustainable scale, then your point about focusing on more targeted 'tools' could suggest a more sustainable path. Perhaps an AI designed for a specific purpose, rather than attempting to emulate the breadth of human cognition, could achieve its goals with a significantly smaller environmental footprint.
The concern you raise about a 'pseudo-being' potentially outstripping human evolution towards 'enlightenment' also touches on deep anxieties about the pace and ultimate control of this technology. It circles back to the question of whether we are fully considering the long-term consequences of the paths we are currently forging with AI.
It’s a valuable perspective that urges a deep consideration of not just how we build AI, but what kind of AI we are fundamentally aiming to create and for what ultimate purpose, especially when the stakes for our planet's resources are so high.
It’s baffling to me that some of the brightest minds in AI have set such a low bar for intelligence. They’re relying on brute force methods—scaling data and compute—like that’s the best way to solve the complex challenge of understanding cognition. These approaches deliver measurable results in things like natural language processing and image generation, but they also reflect a reductionist mindset. It’s as if intelligence is being treated as nothing more than pattern recognition, which completely misses the point. This brute force strategy ignores the elegance and efficiency of natural intelligence—something nature has already perfected in countless ways.
The truth is, nature has already solved the problem of intelligence more effectively and efficiently than anything we’ve created. We don’t need to reinvent the wheel. Instead, we should learn from nature, studying how intelligence evolved and applying those principles to AI. This means taking a multidisciplinary approach, bringing in insights from neuroscience, biology, cognitive science, and ecology. To build truly general intelligent systems, we must stop treating intelligence as data-crunching and start looking at the bigger picture.
What’s frustrating is how much of AI research has shifted toward short-term goals. The focus is on publishing papers, hitting benchmarks, or attracting funding—things that are great for immediate results but don’t do much to deepen our understanding of intelligence. It feels like we’re prioritizing commercial success over exploring the richness and diversity of cognition as it exists in the natural world. That’s a huge missed opportunity. Nature has so much to teach us, but instead of learning from it, we’re stuck scaling up models and hoping more compute solves everything.
Additionally, as Colin said, we have essentially turned intelligence into an energy infrastructure problem. To replicate something as efficient as a 20W human brain, we’re burning through an obscene amount of energy—massive data centers running 24/7, churning through millions of watts to solve what nature handles with a fraction of the power. And sure, it works (kind of), but what happens when we try to scale this approach to tackle real problems? Problems like climate change, colonizing other planets, or even expanding our species across galaxies and stars?
How should we scale this approach for bigger challenges if we struggle to replicate basic reasoning or intelligence with this much energy? Climate change, for example, is fundamentally an energy and resource problem. Transitioning to renewables, storing energy, and removing CO₂ from the atmosphere requires massive amounts of energy. If our current AI systems can’t even replicate a human brain efficiently, how will they help us solve climate change? The same goes for becoming a multi-planet or multi-star species.
We’re essentially stuck in an arms race against physics. The energy we can produce on Earth, or even on future planets, is finite. If we’re already hitting efficiency bottlenecks while trying to mimic simple natural intelligence, how can we expect to scale this for problems that are larger orders of magnitude? Nature didn’t evolve intelligence by throwing infinite energy at it—it evolved intelligence by using constraints as a feature.
My experience implementing large enterprise systems has shown that constraints breed creativity, just as nature has done for intelligence. The current AI race without constraints is a race to nowhere. Constraints are not barriers—they are the forge in which true innovation is born.
Thanks MG, Once again you have perfectly articulated a frustration that I believe many share regarding the prevailing methods and perhaps even the definition of 'success' in the field.
Your point about the "low bar for intelligence," characterizing it as primarily pattern recognition driven by sheer computational power and vast datasets, is a really important critique. It seems to me that the elegance and profound efficiency we see in natural cognitive systems are often sidelined in the current push for scale. My essay's focus on the immense energy footprint is in many ways a direct consequence of this "brute force" paradigm you describe. We are turning the pursuit of intelligence into a colossal energy infrastructure challenge.
The observation that nature (humans, squirrels, bees, fruit flies and general ecology) has already demonstrated highly effective and resource-frugal intelligence across countless species is true. The call to adopt a more multidisciplinary approach, drawing from biology, neuroscience, and ecology, rather than simply trying to "reinvent the wheel" with less efficient methods, is a powerful argument for a different research emphasis. I think ecology is missing in the current AI drive!
You are right, and I share your concern about the prevailing short-termism in AI research, which overshadows the deeper, more foundational quest to understand cognition in its varied forms. This "missed opportunity" to learn from nature's proven solutions is significant. I will write more on the biology approach they are taking after some discussions this week with people in the main AI labs.
Your question about scalability is crucial: if replicating even basic aspects of intelligence demands such an "obscene amount of energy," how can we realistically expect this approach to tackle planetary-scale challenges like climate change or interstellar exploration? As my essay highlights, AI is already a massive energy consumer; if its own development is part of the energy problem, its capacity to contribute to solutions for energy-intensive global issues becomes far more complex. We risk, as you brilliantly say, an "arms race against physics."
I share, and my own experience echoes (as does Susan's, per her comment), your final point, drawn from your enterprise work: that "constraints breed creativity." This is a vital insight. Nature evolved its intelligent solutions within stringent energy and resource constraints. Perhaps the current AI race, often seemingly defined by an abundance (or pursuit) of computational power without equivalent emphasis on efficiency, is missing this fundamental driver of true innovation. The very energy and resource limitations that our discussions and the IEA report bring to light might, one hopes, eventually act as the necessary constraint to forge more genuinely innovative and sustainable AI pathways.
"Nature didn’t evolve intelligence by throwing infinite energy at it—it evolved intelligence by using constraints as a feature."
In fact, it's evolved to succeed through conservation of energy!!
I also come from a career of implementing large multi-site systems, and agree completely with your thesis: "My experience implementing large enterprise systems has shown that constraints breed creativity, just as nature has done for intelligence. The current AI race without constraints is a race to nowhere. Constraints are not barriers—they are the forge in which true innovation is born."
This is an extremely important insight. It is constraints, not abundance that carries us forward with success. The whole premise of what AI is supposed to achieve is flawed.
This is excellent. One thing I’d add: the true bottleneck isn’t just generation capacity. It’s grid latency, permitting cycles, and the slow churn of transmission buildout. AI is scaling on VC timelines, but it’s running headfirst into infrastructure that moves on utility commission timelines.
Second: model efficiency gains are real, but the Jevons Paradox still dominates. Every watt saved just funds more prompts. Intelligence isn’t becoming cheap; it’s becoming entropic. The future isn’t software-native. It’s kilowatt-native.
Thank you for these incredibly insightful additions. They capture core bottlenecks and paradoxes.
Your first point is crucial and something I aimed to underscore in the essay: the true chokepoint isn't solely generation capacity but, precisely as you said, grid latency, permitting cycles, and the agonizingly slow pace of transmission buildout. As I wrote, the IEA warns that new transmission lines can take 4-8 years, with component wait times doubling, potentially stalling a significant percentage of data center projects. This mismatch between AI scaling on 'VC timelines' and infrastructure on 'utility commission timelines' perfectly captures the critical challenge.
And your second point about the Jevons Paradox is equally vital. The essay touches on this by mentioning that model efficiency gains, while real, are often being 'offset by the explosive scale of deployment.' Your phrasing that 'every watt saved just funds more prompts' is a stark and accurate way to put it. The idea that intelligence isn't becoming cheap in energy terms but rather 'entropic' is excellent, and it aligns directly with the essay's opening that intelligence is circuitry, and 'circuitry is greedy,' though I prefer 'entropic'.
The concept of the future being 'kilowatt-native' is brilliant and succinctly encapsulates the fundamental argument: energy isn't just an operational cost for AI; it's becoming an intrinsic, defining characteristic of this technological era.
These observations powerfully reinforce the 'reckoning'. We are not just dealing with a new type of software; we are dealing with a new, very physical, and incredibly power-hungry form of infrastructure that demands a fundamental rethinking of our energy systems and timelines.
With a background in nuclear energy, I’ve followed the AI-power conversation closely. For over a century, we’ve assumed energy must be centrally generated and distributed—a model shaped by industrial-era constraints. But today, we have the technology to rethink that entirely.
Ironically, outsourcing manufacturing left us with depreciated infrastructure, freeing us to start fresh. We’re no longer bound to retrofit old systems. We can design a new blueprint that fits the needs of a 21st-century economy.
Rather than scarring landscapes with massive grids and wind farms, we can localize energy—placing it next to the businesses and communities it serves. It’s now practical, scalable, and offers direct accountability. Cost is tied to use. Risk is contained. Growth can happen in step with demand.
One of the biggest mistakes we make in thinking about AI is trying to fit it into old frameworks. AI isn’t an upgrade to the current model—it’s a reason to design something new. If we build tomorrow’s systems on yesterday’s assumptions, we’ll waste the opportunity.
Historically, industry formed around access to power. But that need is dissolving. With the right mindset, we could decentralize energy entirely. Forward-thinking businesses are already doing this—building operations that include their own power supply, tailored to location, values, and budget.
And this shift shouldn’t stop with business. Personally, I’d rather live in a self-powered home than rely on aging infrastructure and corporate providers—especially in extreme weather. It’s time to stop patching the old model and start building the one we actually need.
This is a really forward-thinking comment, Susan, one that challenges conventional energy paradigms, and it's great to get the perspective of someone with a nuclear energy background. Like you, I agree AI presents a 'reckoning' for our current systems.
The notion that the 'depreciated infrastructure' in some regions, ironically resulting from outsourcing, might offer a 'fresh start' for innovative energy blueprints is a really interesting take. It suggests an opportunity to leapfrog rather than just retrofit.
The IEA report notes that some tech companies are looking into dedicated power solutions, including SMRs (small modular reactors), which could be seen as a step towards the more localized and responsive energy supply you suggest, though this should be broader.
I especially agree with your point that 'AI isn’t an upgrade to the current model, it’s a reason to design something new.' I aimed to highlight the sheer scale of AI's energy needs and the strain this puts on existing frameworks. Your comment pushes this further, suggesting we should see this strain not just as a problem to be managed within the old system, but as a fundamental impetus to innovate the system itself.
The idea of industry historically forming around power access, and that need potentially dissolving with decentralized options, is a significant shift. While the scale of AI's energy demand is vast, exploring a diverse portfolio of solutions, including more localized and potentially decentralized systems alongside necessary grid enhancements, seems crucial. The IEA report also points to a mix of strategies, from large-scale renewables and grid investments to new technologies like SMRs and geothermal, to meet this demand.
I share your desire for a 'self-powered home', and for energy autonomy and resilience more generally, particularly given the vulnerabilities of aging infrastructure in the face of extreme weather.
Overall I agree with your powerful call to action: to move beyond 'patching the old model' and to actively design and build the energy systems truly suited for the 21st century, with AI's unique demands perhaps being the critical catalyst for that change.
This approach would encourage those responsible for powering AI facilities to prioritize energy efficiency by reintroducing financial constraints. As long as the current regulatory-controlled power grid is subsidized by public funds, there is little incentive to improve efficiency. This is true at the individual level as well - personally, I didn't get really good at turning the light out when I left the room until I was paying the light bill :). This brings us back to the conversation with MG about how constraints can drive innovation.
Yes, for sure those constraints are essential. I see Google announced squeezing out even a 0.7 percent improvement in efficiency (and likely costs) in their data centers, which is actually quite significant at that scale.
"AlphaEvolve discovered a simple yet remarkably effective heuristic to help Borg orchestrate Google's vast data centers more efficiently. This solution, now in production for over a year, continuously recovers, on average, 0.7% of Google’s worldwide compute resources. This sustained efficiency gain means that at any given moment, more tasks can be completed on the same computational footprint."
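Why a fraction under one percent matters can be sketched with back-of-envelope arithmetic. The fleet power figure below is a made-up illustration, not a Google number; only the 0.7% comes from the DeepMind quote above:

```python
# Back-of-envelope sketch: a small fractional recovery becomes large
# at fleet scale. Fleet power here is a hypothetical assumption.

RECOVERED_FRACTION = 0.007          # the 0.7% from the AlphaEvolve quote
assumed_fleet_power_mw = 5_000      # hypothetical continuous draw, NOT a real figure

recovered_mw = assumed_fleet_power_mw * RECOVERED_FRACTION
recovered_mwh_per_year = recovered_mw * 24 * 365  # continuous over a year

print(f"Recovered capacity: {recovered_mw:.0f} MW continuous")
print(f"Roughly {recovered_mwh_per_year:,.0f} MWh per year")
```

Under these illustrative assumptions, 0.7% of a 5 GW fleet is 35 MW running continuously, on the order of a small power plant's output, recovered purely through better scheduling.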
Small things become big when scaled, for sure. I appreciate that other countries are stepping up to compete, like China's DeepSeek, and it looks like the Saudis may be joining the race as well. Competition is the only thing that will achieve advancement in this space. America, and actually the West, have become too complacent as the tech leaders. China is the perfect example of how constraint and struggle lead to advancements, in both chips and energy. Hopefully America will be able to re-engage in the competition and not lose the moat completely.
One could argue: Great advances in the development of humanity always required energy. Indeed, the essential components of functioning economic activity any time are labor, land, capital, and energy (looking back: wood => coal => oil => gas, nuclear energy ...). Economic challenges remain the same: scarcity, efficiency, costs, performance ...
Thank you for such an important post, Colin. The more I have learned about the energy costs of AI, the more I lament that we aren't investing enough energy/resources in the electricity of the human mind. Indeed, we seem to have given up on this form of electricity in favor of sending electricity to artificial minds.
Excellent Norman, yes, where should we be focused? Of course the AI labs claim that they will augment natural intelligence, but I am still concerned, like you, that we could dumb down generations of people. Highlighting the immense potential within human cognition, and questioning where our collective focus and resources are being channeled in the age of AI, is exactly the right move - very well said.
Staggering numbers - "The UK's 500 data centres currently consume 2.5% of all electricity in the UK, while Ireland's 80 hoover up 21% of the country's total power, with those numbers projected to hit 6% and 30% respectively by 2030." https://www.bbc.com/news/articles/cewd5014wpno
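As a hedged sanity check on those quoted shares, the implied annual growth rate can be worked out. The roughly six-year (2024 to 2030) window is an assumption, not a figure from the article:

```python
# Hedged arithmetic on the quoted shares: implied annual growth in the data
# centre share of national electricity, assuming a six-year (~2024-2030)
# window. The window is an assumption, not a figure from the article.

def implied_cagr(share_now: float, share_2030: float, years: int = 6) -> float:
    """Compound annual growth rate that takes share_now to share_2030."""
    return (share_2030 / share_now) ** (1 / years) - 1

uk = implied_cagr(0.025, 0.06)      # UK: 2.5% -> 6%
ireland = implied_cagr(0.21, 0.30)  # Ireland: 21% -> 30%
print(f"UK ~{uk:.1%}/yr, Ireland ~{ireland:.1%}/yr")
```

The UK's lower starting share implies the steeper relative climb, roughly 16% a year, while Ireland's already huge share grows more slowly in relative terms.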
Depending on one's criteria, it looks like (with AI) it might cost more than we gain. For some reason the argument of war-industry executives proudly announcing how many jobs they provide globally in a $2trillion/year industry, springs to mind.
It is worth reading Google DeepMind's new paper - "AlphaEvolve discovered a simple yet remarkably effective heuristic to help Borg orchestrate Google's vast data centers more efficiently. This solution, now in production for over a year, continuously recovers, on average, 0.7% of Google’s worldwide compute resources. This sustained efficiency gain means that at any given moment, more tasks can be completed on the same computational footprint."
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
One thing that I find perplexing: data centers in general, and AI in particular, use massive amounts of energy. That's a given. The energy in the form of electricity is then converted to heat. The heat is harmful to the machines that generate it, so we use even more energy to power massive air conditioners to refrigerate the data centers.
To this day, nobody seems to have even attempted to figure out how we might recycle that heat and put it to good use. If we could capture that heat and use it to generate more electricity, we'd need fewer or smaller power sources, and fewer or smaller air conditioners. We would not only generate additional energy but also reduce the amount of energy needed in the first place.
CPUs and GPUs typically have massive heat sinks attached to dissipate the heat. What a waste of energy!
I wondered about that as well, but I discovered that there are some data centers doing this in Denmark and Stockholm, plus more ideas under consideration. I am curious why this is so slow to implement when, as you say, it seems such an obvious form of energy to use.
Thank YOU both for these points. You are both right, and it is common sense to raise this: data centers convert nearly all the electricity they use into heat. While the idea of recycling this heat is being actively explored and even implemented in some areas (as Curiosity Sparks says), it's not yet widespread standard practice, for a few key reasons.
The IEA's report touches upon this in a section about "Data centre heat reuse to help decarbonise district heating" (Box 5.5 in the report). It notes that the technology to recover and reuse this excess heat is generally well-established. For instance:
One of the primary uses currently being explored and implemented is to channel this waste heat into district heating networks to warm buildings. The report mentions successful initiatives, like in Stockholm, where data center heat is already contributing to the local heating system. Some newer cooling technologies, like liquid cooling, can even provide heat at temperatures (40-80 °C) suitable for direct use in these systems.
Governments are also starting to take notice. The IEA report mentions that some countries and regions (like Germany, the Netherlands, and the broader EU) are beginning to introduce policies or mandates that require new data centers to integrate heat recovery or at least assess its feasibility.
However, the report also identifies challenges, which can be summarized as follows:
Finding a nearby, consistent 'offtaker' for the heat (like a district heating system or an industrial facility) isn't always straightforward. The infrastructure to transport the heat needs to be in place or built, which requires investment and coordination. Aligning the construction and operational schedules of data centers with potential heat users can also be complex.
While some heat is high-grade enough for direct use, much of it is relatively low-grade. Using low-grade heat to generate more electricity (as you suggested for a closed-loop system) is often not very efficient with current technologies due to thermodynamic limitations. It's generally more efficient to use that heat directly for heating purposes where possible.
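The thermodynamic limitation mentioned above can be made concrete with the Carnot bound, the hard ceiling on converting heat to work. The temperatures below are illustrative choices, not figures from the report:

```python
# Carnot bound on converting data-center waste heat back to electricity.
# Temperatures are illustrative; real heat engines fall well short of this limit.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum fraction of heat convertible to work, given absolute temperatures."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

low_grade = carnot_efficiency(60, 20)    # liquid-cooling loop vs. ambient air
high_grade = carnot_efficiency(550, 20)  # power-station steam, for contrast

print(f"60 C source: at most {low_grade:.0%} of the heat becomes work")
print(f"550 C source: at most {high_grade:.0%}")
```

At roughly 12% theoretical (and far less in practice) for a 60 °C source, piping the heat directly into district heating beats trying to turn it back into electricity.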
Clear business models that benefit both the data center operator (beyond just PUE improvements or social license) and the heat consumer are still developing.
So, while it might seem like a universally obvious solution, the practicalities of capturing, transporting, and effectively utilizing that waste heat at scale are quite complex. It's definitely an area with significant potential for improving overall energy efficiency and reducing the environmental impact of data centers, and one that's gaining more attention.
The current approach with heat sinks is about immediate heat dispersal to protect the equipment, but the conversation is certainly shifting towards seeing that 'waste' as a potential resource.
Thanks for raising such a critical and practical point.
Thank you for this detailed reply. I just found time today to read the entire IEA report, so I noted that section. I had a conversation with an engineer about this topic about two weeks ago, so I understand the complexity. That said, data centres have existed for years, although not at this scale, and the wasted heat was a known issue. It appears to be another lack of proactive engagement, because it wasn't 'cost effective', just as the history of tech has demonstrated repeatedly. An understanding of technology's profound impact on the earth, its resources, and our water arrived late in technological development, and that lack is still evident.
The lack is indeed evident. I'm glad that you read the report; the more of us who are aware, and the more action comes from the report's authors, the more attention this will hopefully get in governments.
My 2¢.
First, I liked the paradox you highlighted: "AI is both a voracious energy consumer and a promising energy optimizer." This contradiction perfectly encapsulates the broader challenges of AI development—its potential to transform industries while simultaneously straining our resources.
However, I'd like to return to a couple of questions I've raised before, as they remain just as pressing:
1. Are we on the right track when the human mind can perform an unbelievable amount of tasks with only 20W?
2. Why are we trying to replicate the human brain when human intelligence is just one form of intelligence?
The answers seem clear: No, we are not on the right track, and no, we should not replicate the human brain. Our brains evolved for particular purposes—survival and reproduction. The traits that helped us achieve these goals were enhanced over time, while those we didn't use faded away. This evolution was efficient but specialized, and it doesn't necessarily mean replicating our brains is the best path forward for developing artificial intelligence.

Humans, however, tend to stick with brute-force methods if they appear to work, even when they come with ethical, environmental, or societal costs. Take, for example, the history of car engines: electric vehicles existed before gas-powered ones. However, when gasoline was discovered to be a cheaper and more convenient fuel for longer travel distances, we abandoned electric cars for nearly a century. Only now, after realizing the environmental costs of gasoline, are we returning to electric vehicles—but we've lost decades of progress.

This raises a compelling question: Are we trying to reinvent something that already exists in nature but in a less efficient, unsustainable way?
We seem to be repeating this shortsighted behavior with AI. While the brute-force approach of scaling up large language models (LLMs) may seem adequate now, it comes with significant costs: the environmental impact of energy consumption, ethical challenges related to data usage, and the risk of simply creating more powerful tools without addressing the fundamental mystery of intelligence itself.
Across the U.S., green energy projects are being canceled for reasons that range from political resistance to cultural perceptions, such as the idea that renewable energy is "not manly." Meanwhile, the fossil fuel lobby remains powerful, and half the population doesn't believe in the urgency of green energy.
If we haven't already, this makes it likely we'll surpass the 2.5-degree Celsius temperature threshold. Despite its potential as an energy optimizer, AI might expedite this process due to its massive energy demands. The push to restart or expand nuclear energy plants is another example of a promising solution that faces significant social and political resistance, particularly if it threatens fossil fuel consumption.
The global impact will be unequal. The poorest people in the southern hemisphere will bear the brunt of these changes. Ironically, they will suffer the consequences of technology they did not develop, will not benefit from it in the short run, and may even lose their livelihoods, as automation reduces the need for human labor in many industries.
While living standards have risen worldwide over the past 50 years, this progress is at risk. We could regress to a state where inequality grows, and large portions of the population are left behind by technological advancements that primarily benefit wealthy nations and corporations.
As I've said before, instead of trying to replicate human intelligence, we should focus on developing a truly general intelligence that not only surpasses human intelligence but is inspired by the vast array of intelligence already present in nature.
For example, as detailed in How Fruit Flies, Bees, and Squirrels Beat Artificial Intelligence (https://tinyurl.com/fat56s37), nature offers us incredible examples of diverse and efficient intelligence:
a) Fruit flies have collision-avoidance reflexes that surpass even the most advanced computer vision systems. Their micro-brains calculate escape routes faster than high-end AI, demonstrating that intelligence can be specialized and incredibly efficient.
b) Bees rely on decentralized decision-making to solve problems like hive location, using their famous waggle dances to communicate. This natural "swarm intelligence" is far more robust and adaptive than any current AI model mimicking collective behavior.
c) Squirrels store and retrieve information remarkably efficiently, remembering where they've hidden food months later and even engaging in deceptive caching to mislead potential thieves. Despite this, no squirrel has hallucinated an imaginary acorn—a notable contrast to LLMs, which confidently generate fabricated outputs.
These examples illustrate that intelligence is not a unified phenomenon but a spectrum of capabilities shaped by specific evolutionary pressures. This diversity should inspire AI research to move beyond the narrow goal of human replication and instead embrace the variety of existing ecological intelligence.
The need to address AI development's ethical, environmental, and societal costs is critical. Real intelligence is embodied—it exists within a living system that interacts dynamically with its environment. AI, by contrast, is an abstraction—it predicts text sequences but doesn't understand causes and consequences.
If we continue down the current path of brute-force scaling, we may achieve short-term goals, but at what cost? Instead, we should take cues from nature, learning from the efficiency and adaptability of fruit flies, bees, squirrels, and countless other creatures. In doing so, we might develop AI that is more sustainable, efficient, and more aligned with the real-world challenges we face as a species.
Brilliant, just brilliant, fruit flies, bees and squirrels!
I'm glad the paradox of AI as both a 'voracious energy consumer and a promising energy optimizer' hit home. It's precisely this tension, as highlighted by the IEA report and my visit to that data center, that my essay aimed to bring to the forefront: the sheer physicality and immediate energy implications of the AI we are building and deploying right now, both how much it consumes and how it can help.
Your fundamental questions about whether we are on the right track, comparing AI's massive power draw to the human mind's 20W efficiency and questioning the drive to replicate human-like intelligence, are deep and pressing. I think we need to grapple with the consequences of the current trajectory; your points compel us to also scrutinize the trajectory itself. Is the 'brute-force' scaling of LLMs, with its attendant environmental and societal costs, the most sustainable or even the most intelligent path in the long run? This is a critical question. I don't have the answer, nor does the report... although the report is pushing for a 'we have to build clean energy' and make AI work!
The historical analogy of gasoline vs. electric cars is a powerful one and underscores the concern that we might be prioritizing immediate perceived performance or convenience over long-term sustainability, a theme central to the 'reckoning' I tried to describe concerning AI's energy appetite. The energy infrastructure challenges, from grid capacity to the sourcing of power, are direct consequences of this current "brute-force" approach.
Your examples from nature, fruit flies, bees, squirrels, are fascinating and vividly illustrate the diverse and incredibly efficient forms of intelligence that already exist. They absolutely offer a compelling argument for exploring alternative, perhaps more inherently sustainable and specialized, paradigms for AI development. While my essay focused on the immediate challenge of powering the AI we have, your insights suggest vital avenues for developing the AI we might aspire to, AI that is perhaps less about replication and more about drawing inspiration from these efficient ecological models.
The broader societal context you bring in, the challenges facing green energy adoption, the influence of lobbying, and the deeply concerning issue of unequal global impact, are absolutely right. The risk that AI's benefits might accrue to a few while its environmental and social costs are borne by the many, particularly in the Global South, is a stark reality that the push for massive energy consumption only intensifies. This aligns with the urgency I sought to convey: if we don't manage this transition thoughtfully, the progress made in living standards could be significantly threatened.
Ultimately, we need these discussions to ground AI in its current, very real, and rapidly escalating energy demands, as detailed by the IEA, and to call for immediate, practical responses in policy, infrastructure, and technological accountability. Your comments enrich this by urging a simultaneous, deeper inquiry into the fundamental goals and methods of AI development. I'm 100% with you.
Thank you again for pushing the boundaries.
Some excellent insights in here. I particularly like this one: This evolution was efficient but specialized, and it doesn't necessarily mean replicating our brains is the best path forward for developing artificial intelligence.
This "need" to be gods that create something in our own likeness seems to be just pure hubris :) We were likely on a much better track when creating tools that served a specific purpose and allowed humans to continue to evolve toward enlightenment at biological speed, rather than creating a pseudo-being that will likely arrive there first.
Thank You Susan. You've pinpointed a really crucial idea regarding the specialized nature of our own intelligence and how that might not be the ideal blueprint for artificial systems.
The thought that our current AI ambitions might carry an element of 'hubris,' particularly in the drive to mirror human consciousness, certainly makes one consider the alternative path you highlighted: focusing on AI as highly advanced, purpose-driven instruments that augment human capabilities, rather than striving to develop an artificial counterpart to ourselves.
From the perspective of my essay, which grapples with the immense energy and resource demands of current AI, this distinction is particularly important. If the pursuit of creating AI 'in our own likeness' inherently leads to more generalized, and perhaps less efficient, systems that 'sweat watts' at an unsustainable scale, then your point about focusing on more targeted 'tools' could suggest a more sustainable path. Perhaps an AI designed for a specific purpose, rather than attempting to emulate the breadth of human cognition, could achieve its goals with a significantly smaller environmental footprint.
The concern you raise about a 'pseudo-being' potentially outstripping human evolution towards 'enlightenment' also touches on deep anxieties about the pace and ultimate control of this technology. It circles back to the question of whether we are fully considering the long-term consequences of the paths we are currently forging with AI.
It’s a valuable perspective that urges a deep consideration of not just how we build AI, but what kind of AI we are fundamentally aiming to create and for what ultimate purpose, especially when the stakes for our planet's resources are so high.
It’s baffling to me that some of the brightest minds in AI have set such a low bar for intelligence. They’re relying on brute force methods—scaling data and compute—like that’s the best way to solve the complex challenge of understanding cognition. These approaches deliver measurable results in things like natural language processing and image generation, but they also reflect a reductionist mindset. It’s as if intelligence is being treated as nothing more than pattern recognition, which completely misses the point. This brute force strategy ignores the elegance and efficiency of natural intelligence—something nature has already perfected in countless ways.
The truth is, nature has already solved the problem of intelligence more effectively and efficiently than anything we’ve created. We don’t need to reinvent the wheel. Instead, we should learn from nature, studying how intelligence evolved and applying those principles to AI. This means taking a multidisciplinary approach, bringing in insights from neuroscience, biology, cognitive science, and ecology. To build truly general intelligent systems, we must stop treating intelligence as data-crunching and start looking at the bigger picture.
What’s frustrating is how much of AI research has shifted toward short-term goals. The focus is on publishing papers, hitting benchmarks, or attracting funding—things that are great for immediate results but don’t do much to deepen our understanding of intelligence. It feels like we’re prioritizing commercial success over exploring the richness and diversity of cognition as it exists in the natural world. That’s a huge missed opportunity. Nature has so much to teach us, but instead of learning from it, we’re stuck scaling up models and hoping more compute solves everything.
Additionally, as Colin said, we have essentially turned intelligence into an energy infrastructure problem. To replicate something as efficient as a 20W human brain, we’re burning through an obscene amount of energy—massive data centers running 24/7, churning through millions of watts to solve what nature handles with a fraction of the power. And sure, it works (kind of), but what happens when we try to scale this approach to tackle real problems? Problems like climate change, colonizing other planets, or even expanding our species across galaxies and stars?
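The 20 W comparison can be put in rough numbers. Every figure below is an illustrative assumption (accelerator power, cluster size, cooling overhead), not a measurement of any real deployment:

```python
# Hedged back-of-envelope: a hypothetical training cluster's power draw
# expressed in human-brain equivalents. All figures are assumptions.

BRAIN_WATTS = 20          # commonly cited resting power of the human brain
GPU_WATTS = 700           # assumed board power of one high-end accelerator
CLUSTER_GPUS = 10_000     # hypothetical cluster size
OVERHEAD = 1.3            # assumed PUE-style cooling/facility multiplier

cluster_watts = GPU_WATTS * CLUSTER_GPUS * OVERHEAD
brain_equivalents = cluster_watts / BRAIN_WATTS
print(f"{cluster_watts / 1e6:.1f} MW, about {brain_equivalents:,.0f} brains' worth of power")
```

Under these placeholder numbers, one cluster draws the power of several hundred thousand brains, which is the gap the comment is pointing at.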
How should we scale this approach for bigger challenges if we struggle to replicate basic reasoning or intelligence with this much energy? Climate change, for example, is fundamentally an energy and resource problem. Transitioning to renewables, storing energy, and removing CO₂ from the atmosphere requires massive amounts of energy. If our current AI systems can’t even replicate a human brain efficiently, how will they help us solve climate change? The same goes for becoming a multi-planet or multi-star species.
We’re essentially stuck in an arms race against physics. The energy we can produce on Earth, or even on future planets, is finite. If we’re already hitting efficiency bottlenecks while trying to mimic simple natural intelligence, how can we expect to scale this for problems that are larger orders of magnitude? Nature didn’t evolve intelligence by throwing infinite energy at it—it evolved intelligence by using constraints as a feature.
My experience implementing large enterprise systems has shown that constraints breed creativity, just as nature has done for intelligence. The current AI race without constraints is a race to nowhere. Constraints are not barriers—they are the forge in which true innovation is born.
Thanks MG, Once again you have perfectly articulated a frustration that I believe many share regarding the prevailing methods and perhaps even the definition of 'success' in the field.
Your point about the "low bar for intelligence," characterizing it as primarily pattern recognition driven by sheer computational power and vast datasets, is a really important critique. It seems to me that the elegance and profound efficiency we see in natural cognitive systems are often sidelined in the current push for scale. My essay's focus on the immense energy footprint is in many ways a direct consequence of this "brute force" paradigm you describe. We are turning the pursuit of intelligence into a colossal energy infrastructure challenge.
The observation that nature (humans, squirrels, bees, fruit flies and general ecology) has already demonstrated highly effective and resource-frugal intelligence across countless species is true. The call to adopt a more multidisciplinary approach, drawing from biology, neuroscience, and ecology, rather than simply trying to "reinvent the wheel" with less efficient methods, is a powerful argument for a different research emphasis. I think ecology is missing in the current AI drive!
You are right, and I share your concern about the prevailing short-termism in AI research, which overshadows the deeper, more foundational quest to understand cognition in its varied forms. This "missed opportunity" to learn from nature's proven solutions is significant. I will write more on the biology approach they are taking after some discussions this week with people in the main AI labs.
Your question about scalability is crucial: if replicating even basic aspects of intelligence demands such an "obscene amount of energy," how can we realistically expect this approach to tackle planetary-scale challenges like climate change or interstellar exploration? As my essay highlights, AI is already a massive energy consumer; if its own development is part of the energy problem, its capacity to contribute to solutions for energy-intensive global issues becomes far more complex. We risk, as you brilliantly say, an "arms race against physics."
I share your final point, and have similar experience (as Susan also notes in her comment), drawn from your enterprise work, that "constraints breed creativity." This is a vital insight. Nature evolved its intelligent solutions within stringent energy and resource constraints. Perhaps the current AI race, often defined by an abundance (or pursuit) of computational power without equivalent emphasis on efficiency, is missing this fundamental driver of true innovation. The very energy and resource limitations that our discussions and the IEA report bring to light might, one hopes, eventually act as the necessary constraint to forge more genuinely innovative and sustainable AI pathways.
"Nature didn’t evolve intelligence by throwing infinite energy at it—it evolved intelligence by using constraints as a feature."
In fact, it's evolved to succeed through conservation of energy!!
I also come from a career of implementing large multi-site systems, and agree completely with your thesis: "My experience implementing large enterprise systems has shown that constraints breed creativity, just as nature has done for intelligence. The current AI race without constraints is a race to nowhere. Constraints are not barriers—they are the forge in which true innovation is born."
This is an extremely important insight. It is constraints, not abundance that carries us forward with success. The whole premise of what AI is supposed to achieve is flawed.
Thank you Susan, I added a comment to MG's comment, which also answers your solid point.
Hubris + profiteering. None of this would be happening if there weren't billions to be raked in.
Yes, power and money are strong motivators :)
This is excellent. One thing I’d add: the true bottleneck isn’t just generation capacity. It’s grid latency, permitting cycles, and the slow churn of transmission buildout. AI is scaling on VC timelines, but it’s running headfirst into infrastructure that moves on utility commission timelines.
Second: model efficiency gains are real, but the Jevons Paradox still dominates. Every watt saved just funds more prompts. Intelligence isn’t becoming cheap; it’s becoming entropic. The future isn’t software-native. It’s kilowatt-native.
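That rebound dynamic can be sketched with a toy isoelastic demand model. The elasticity values and per-prompt energy below are illustrative assumptions, not estimates of real usage:

```python
# Toy Jevons/rebound sketch: an efficiency gain cuts energy per prompt,
# demand responds to the lower cost, and total energy can rise or fall.
# Elasticities and per-prompt energy here are illustrative assumptions.

def total_energy_wh(prompts: float, wh_per_prompt: float,
                    efficiency_gain: float, elasticity: float) -> float:
    """Total energy after an efficiency gain, with isoelastic demand."""
    cost_ratio = 1 - efficiency_gain            # new cost per prompt, relative
    demand_ratio = cost_ratio ** (-elasticity)  # isoelastic demand response
    return prompts * demand_ratio * wh_per_prompt * cost_ratio

baseline = 1e9 * 0.3  # 1B prompts at an assumed 0.3 Wh each
muted = total_energy_wh(1e9, 0.3, 0.20, 0.5)    # weak demand response: saves
rebound = total_energy_wh(1e9, 0.3, 0.20, 1.5)  # strong response: backfires

print(f"baseline {baseline:.3g} Wh, muted {muted:.3g} Wh, rebound {rebound:.3g} Wh")
```

Whenever the elasticity exceeds 1, the watt saved is more than spent on extra prompts, which is exactly the "every watt saved just funds more prompts" dynamic.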
Thank you for these incredibly insightful additions. They are core bottlenecks and paradoxes.
Your first point is crucial and something I aimed to underscore in the essay: the true chokepoint isn't solely generation capacity but, precisely as you said, grid latency, permitting cycles, and the agonizingly slow pace of transmission buildout. As I wrote, the IEA warns that new transmission lines can take 4-8 years, with component wait times doubling, potentially stalling a significant percentage of data center projects. This mismatch between AI scaling on 'VC timelines' and infrastructure on 'utility commission timelines' perfectly captures the critical challenge.
And your second point about the Jevons Paradox is equally vital. The essay touches on this by mentioning that model efficiency gains, while real, are often being 'offset by the explosive scale of deployment.' Your phrasing that 'every watt saved just funds more prompts' is a stark and accurate way to put it. The idea that intelligence isn't becoming cheap in energy terms but rather 'entropic' is excellent, and it aligns directly with the essay's opening that intelligence is circuitry, and 'circuitry is greedy,' though I prefer 'entropic'.
The concept of the future being 'kilowatt-native' is brilliant and succinctly encapsulates the fundamental argument: energy isn't just an operational cost for AI; it's becoming an intrinsic, defining characteristic of this technological era.
These observations powerfully reinforce the 'reckoning'. We are not just dealing with a new type of software; we are dealing with a new, very physical, and incredibly power-hungry form of infrastructure that demands a fundamental rethinking of our energy systems and timelines.
With a background in nuclear energy, I’ve followed the AI-power conversation closely. For over a century, we’ve assumed energy must be centrally generated and distributed—a model shaped by industrial-era constraints. But today, we have the technology to rethink that entirely.
Ironically, outsourcing manufacturing left us with depreciated infrastructure, freeing us to start fresh. We’re no longer bound to retrofit old systems. We can design a new blueprint that fits the needs of a 21st-century economy.
Rather than scarring landscapes with massive grids and wind farms, we can localize energy—placing it next to the businesses and communities it serves. It’s now practical, scalable, and offers direct accountability. Cost is tied to use. Risk is contained. Growth can happen in step with demand.
One of the biggest mistakes we make in thinking about AI is trying to fit it into old frameworks. AI isn’t an upgrade to the current model—it’s a reason to design something new. If we build tomorrow’s systems on yesterday’s assumptions, we’ll waste the opportunity.
Historically, industry formed around access to power. But that need is dissolving. With the right mindset, we could decentralize energy entirely. Forward-thinking businesses are already doing this—building operations that include their own power supply, tailored to location, values, and budget.
And this shift shouldn’t stop with business. Personally, I’d rather live in a self-powered home than rely on aging infrastructure and corporate providers—especially in extreme weather. It’s time to stop patching the old model and start building the one we actually need.
This is a really forward-thinking comment, Susan, one that challenges conventional energy paradigms, and it's great to get a perspective from someone with a nuclear energy background. Like you, I agree AI presents a 'reckoning' for our current systems.
The notion that the 'depreciated infrastructure' in some regions, ironically resulting from outsourcing, might offer a 'fresh start' for innovative energy blueprints is a really interesting take. It suggests an opportunity to leapfrog rather than just retrofit.
The IEA report notes that some tech companies are looking into dedicated power solutions, including SMRs [Small Modular Reactors], which could be seen as a step towards more localized and responsive energy supply as you suggest but this should be broader.
I especially agree with your point that 'AI isn’t an upgrade to the current model, it’s a reason to design something new.' I aimed to highlight the sheer scale of AI's energy needs and the strain this puts on existing frameworks. Your comment pushes this further, suggesting we should see this strain not just as a problem to be managed within the old system, but as a fundamental impetus to innovate the system itself.
The idea of industry historically forming around power access, and that need potentially dissolving with decentralized options, is a significant shift. While the scale of AI's energy demand is vast, exploring a diverse portfolio of solutions, including more localized and potentially decentralized systems alongside necessary grid enhancements, seems crucial. The IEA report also points to a mix of strategies, from large-scale renewables and grid investments to new technologies like SMRs and geothermal, to meet this demand.
I like your desire for a 'self-powered home' and energy autonomy and resilience, particularly given the vulnerabilities of aging infrastructure in the face of extreme weather.
Overall I agree with your powerful call to action: to move beyond 'patching the old model' and to actively design and build the energy systems truly suited for the 21st century, with AI's unique demands perhaps being the critical catalyst for that change.
This approach would encourage those responsible for powering AI facilities to prioritize energy efficiency by reintroducing financial constraints. As long as the current regulatory-controlled power grid is subsidized by public funds, there is little incentive to improve efficiency. This is true at the individual level as well - personally, I didn't get really good at turning the light out when I left the room until I was paying the light bill :). This brings us back to the conversation with MG about how constraints can drive innovation.
Yes, for sure those constraints are essential. I see Google announced squeezing out even a 0.7 percent improvement in efficiency (and likely costs) in their data centers - which is actually quite significant at that scale.
"AlphaEvolve discovered a simple yet remarkably effective heuristic to help Borg orchestrate Google's vast data centers more efficiently. This solution, now in production for over a year, continuously recovers, on average, 0.7% of Google’s worldwide compute resources. This sustained efficiency gain means that at any given moment, more tasks can be completed on the same computational footprint."
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
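To make the quote above a bit more concrete: the blog post describes an evolved heuristic that helps Borg place tasks on machines so that less capacity sits stranded. Google's actual function isn't reproduced here; the following is only a minimal illustrative sketch, assuming a toy model where machines and tasks have just CPU and memory, of the general idea that a placement score which keeps leftover resources balanced avoids machines ending up with free CPU but no free memory (or vice versa).

```python
# Toy sketch of a resource-balancing placement heuristic (NOT Google's
# actual AlphaEvolve/Borg function - purely illustrative).
# Machines and tasks are dicts with "cpu" and "mem" capacities/requests.

def score(machine, task):
    """Score a candidate placement; higher is better.

    Penalizes placements that leave very unequal CPU/memory leftovers,
    since a machine with free CPU but no free memory (or vice versa)
    can't accept further tasks - that capacity is 'stranded'."""
    cpu_left = machine["cpu"] - task["cpu"]
    mem_left = machine["mem"] - task["mem"]
    if cpu_left < 0 or mem_left < 0:
        return float("-inf")  # task doesn't fit on this machine
    return -abs(cpu_left - mem_left)

def place(tasks, machines):
    """Greedily assign each task to the best-scoring machine."""
    placements = []
    for task in tasks:
        best = max(machines, key=lambda m: score(m, task))
        if score(best, task) == float("-inf"):
            placements.append(None)  # unschedulable with current capacity
            continue
        best["cpu"] -= task["cpu"]
        best["mem"] -= task["mem"]
        placements.append(best["name"])
    return placements

# Example: a memory-heavy task goes to the memory-rich machine "a",
# leaving the CPU-rich machine "b" free for the CPU-heavy task.
machines = [{"name": "a", "cpu": 4, "mem": 8},
            {"name": "b", "cpu": 8, "mem": 4}]
tasks = [{"cpu": 2, "mem": 6}, {"cpu": 6, "mem": 2}]
result = place(tasks, machines)  # → ['a', 'b']
```

A naive first-fit scheduler could easily have put the first task on "b", stranding its remaining CPU; even a tiny scoring tweak like this is the kind of heuristic that, applied across a fleet, adds up to the fraction-of-a-percent savings the post describes.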
Small things become big when scaled, for sure. I appreciate that other countries are stepping up to compete - like China's DeepSeek, and it looks like the Saudis may be joining the race as well. Competition is the only thing that will drive advancement in this space. America, and the West more broadly, have become too complacent as the tech leaders. China is the perfect example of how constraint and struggle lead to advancements - in both chips and energy. Hopefully America will be able to re-engage in the competition and not lose the moat completely.
One could argue: great advances in the development of humanity have always required energy. Indeed, the essential components of functioning economic activity at any time are labor, land, capital, and energy (looking back: wood => coal => oil => gas, nuclear energy ...). The economic challenges remain the same: scarcity, efficiency, costs, performance ...
Thank you for such an important post, Colin. The more I have learned about the energy costs of AI, the more I lament that we aren't investing enough energy/resources in the electricity of the human mind. Indeed, we seem to have given up on this form of electricity in favor of sending electricity to artificial minds.
Excellent, Norman - yes, where should we be focused? Of course the AI labs claim that they will augment natural intelligence, but like you I am still concerned that we could dumb down generations of people. Highlighting the immense potential within human cognition, and questioning where our collective focus and resources are being channeled in the age of AI, is exactly the right question - very well said.
Staggering numbers - "The UK's 500 data centres currently consume 2.5% of all electricity in the UK, while Ireland's 80 hoover up 21% of the country's total power, with those numbers projected to hit 6% and 30% respectively by 2030." https://www.bbc.com/news/articles/cewd5014wpno
Depending on one's criteria, it looks like (with AI) the costs might exceed the gains. For some reason the argument of war-industry executives proudly announcing how many jobs they provide globally in a $2-trillion-a-year industry springs to mind.