Essay · Technology & Climate

The Debt
We Don't Count

On software, wasted energy, and why the industry that could lead the fight against climate change is instead making it worse

By Anonymous

"The most powerful way to reduce your carbon footprint is not to eat less meat or fly less. It's to change the systems that generate waste invisibly, at scale, without anyone noticing."

— paraphrased from Paul Hawken, Drawdown (2017)

Every program ever written runs on electricity. Most of that electricity comes from burning something. The software industry has understood this for decades and, with impressive consistency, has chosen not to care. The result is not merely technical debt. It is climate debt — accumulated invisibly, charged to the atmosphere, owed by everyone and acknowledged by almost no one.

The Information and Communication Technology sector currently accounts for somewhere between four and ten percent of global electricity consumption, depending on which subsystems you include and which year's measurements you trust.1 That range is itself an indictment: we cannot agree on the size of the problem because almost nobody is systematically measuring it. What the estimates do agree on is the direction of travel. Ericsson's analysis placed the sector at four percent of global electricity use in 2020. Enerdata, drawing on a wider perimeter, estimates the figure at between five and nine percent. Projections to 2030, made before the current wave of generative AI had fully arrived, suggested the share could reach twenty percent if demand continued to compound without efficiency improvements. The AI wave arrived. The efficiency improvements did not. The atmosphere absorbs the difference.

This essay is not about AI, although AI is downstream of everything it describes. It is about the foundational choices software teams make every day before a prompt is typed: which language runs the program, where the computation happens, what sits idle on a developer's machine for eight hours while someone edits a config file. These choices are made as though energy were free. It is not free. It is costing us the climate, and the ledger is not metaphorical.

· · ·

Begin with the simplest possible program. Not a production system, not a microservice cluster, not a machine learning pipeline — just the first thing any new programmer writes in any language, the program whose entire purpose is to print thirteen characters to a screen: Hello, World!. In C, this program compiles to a handful of machine instructions. On a modern processor it executes in microseconds. Its memory footprint is measured in kilobytes. The task it performs and the resources it consumes are, for once in software's history, roughly proportional.

Now write the same program in Java. The thirteen characters still appear. But before they do, the Java Virtual Machine must be loaded: a runtime environment that carries with it a class loader, a JIT compiler, a garbage collector with its attendant metadata structures, a thread scheduler, a code cache, a metaspace to store information about every class that has been loaded, and a minimum of several hundred classes that must be initialised before the JVM will permit a single line of application code to execute. A Swing-based "Hello World" alone is reported to require the initialisation of approximately eight hundred classes before the greeting appears.2 A Spring Boot microservice doing nothing more interesting — just listening on a port and returning a fixed string — will consume in the region of two hundred to three hundred megabytes of RAM at rest, with the JVM overhead alone accounting for roughly half of that even when the heap is capped at a modest ceiling.3 The memory consumed by the runtime environment is, in many configurations, larger than the application it is hosting. The runtime exists to serve the program. In a simple program, the program exists to justify loading the runtime.

The .NET Common Language Runtime carries the same structural obligation. The framework is, like the JVM, a remarkable engineering achievement: it provides garbage collection, cross-language interoperability, a rich standard library, just-in-time compilation, and a set of safety guarantees that have prevented entire categories of bug from ever being written. These are not nothing. But they are not free. Every .NET process that starts must load the runtime, initialise the framework, resolve its assemblies, and stand up a managed execution environment — all before doing whatever the program was actually written to do. The standard defence at this point is that complex enterprise applications justify the overhead. It is worth examining what complex enterprise applications actually do. The overwhelming majority of them accept data entered into a form, write it to a database, and subsequently retrieve and display it in a report. That is the function. That is what the insurance claims system does, and the HR management platform, and the procurement workflow engine, and the customer relationship database, and the financial ledger, and the regulatory reporting tool, and the logistics tracking application, and the vast majority of the internal line-of-business software that runs the operational infrastructure of the developed world. They take input. They store it. They retrieve it. They present it. The computation involved is, in most cases, a handful of SQL queries and a loop over the results. The idea that this requires a managed runtime environment carrying tens or hundreds of megabytes of framework machinery before the first row is read from the database is not a technical necessity. It is a habit — one that has been industrialised, packaged into frameworks, and normalised by an ecosystem with a strong commercial interest in its own complexity. The runtime cost is paid regardless. It is paid thousands of times a second, across millions of servers, in data centres drawing power from grids that are still overwhelmingly fossil-fuelled, to execute programs whose actual computational work would fit comfortably in a kilobyte of native code.

The framework is loaded whether the task needs it or not. The runtime cost is fixed. What varies is the work performed inside it — and for most programs, most of the time, that work is almost nothing.

The question of how much energy different programming languages consume has attracted serious academic attention. A widely cited study by Pereira and colleagues, which ran benchmark solutions across twenty-seven languages while measuring energy consumption via Intel's RAPL interface, placed C at the top of the efficiency ranking, followed closely by Rust and C++.4 Interpreted and virtual-machine languages trailed significantly: Python, Ruby, and Perl clustered at the bottom. Java occupied an interesting middle position — better than the interpreted languages on many benchmarks, but carrying the structural overhead of the JVM that no amount of JIT optimisation can fully amortise on short-lived or low-complexity workloads. A more recent paper by van Kempen and colleagues, submitted to arXiv in October 2024, importantly cautions against naive causal interpretations of this data: the relationship between language choice and energy consumption is entangled with implementation quality, the number of active cores, and memory access patterns in ways that earlier studies did not adequately control for.5 This is a genuine methodological refinement. But it does not dissolve the core observation. It clarifies it. The choice of language, combined with the choice of implementation approach, combined with the choice of runtime architecture, produces energy profiles that differ by orders of magnitude across the same computational task. Those differences compound across billions of program executions. They are not rounding errors. They are the shape of the problem.

· · ·

The server, at least, is a single point of consumption. A data centre running on renewable energy — and the large hyperscalers have made serious investments in wind and solar, however imperfect the accounting — can convert the same computation into a fraction of the carbon cost of the same computation running on a coal-backed grid. There is a path, in principle, from today's data centre to a net-zero one: build where the wind blows, buy where the sun shines, improve the PUE, turn off what is idle. It is difficult and expensive and only partially achieved, but it exists as a coherent goal.6

Now consider what the web development industry did with this geography. It took computation off servers — where it might eventually be powered by renewables, where it could at least be centralised and optimised — and distributed it across every device in every user's pocket and on every user's desk, burning battery and grid power under conditions that no operator controls and no renewable target covers. It called this innovation. It called it the Single Page Application. And it deployed it at civilisational scale.

The architecture of modern web development, built around frameworks like React, Angular, and Vue, operates on a straightforward principle: download a large JavaScript bundle to the client's device, execute it there, and allow the client's processor to perform the rendering, state management, virtual DOM reconciliation, and event handling that, in an earlier era of web development, a server would have performed once and delivered as finished HTML. The stated justifications are real: richer interactivity, faster perceived navigation between pages, offline capability, reduced server load. These are not fabricated benefits. But the energy accounting of the trade is almost never performed, because the cost of client-side computation is invisible to the person paying the server bill.

Research published in 2021 comparing React-based and vanilla JavaScript web stacks found that between sixty-nine and seventy-four percent of the total energy consumed by running a web application was consumed by the user's device — parsing HTML and CSS, compiling JavaScript, executing the framework, painting the screen, handling transitions.7 The server, even running Node.js on the backend, accounted for less than two percent of the total energy budget. This ratio is not a flaw in the study. It is the architecture working as designed. The server has been deliberately freed from rendering work. That work has been pushed to the client. The client is everywhere: a phone in a pocket, a laptop on a lap, a tablet running on a battery charged from a socket that may be powered by anything at all. The renewable data centre at the edge of a wind farm has been bypassed. The computation has been moved to the device on the kitchen table in a country where the grid is still sixty percent coal.

When a server renders a page, it renders it once and serves it to a thousand users. When a client-side framework renders a page, it renders it a thousand times, on a thousand devices, burning a thousand separate pools of energy to produce an identical result.

The aggregate consequence of this architectural preference is not small. The top million websites by traffic, the majority of which now deploy some form of client-side JavaScript framework, collectively download and execute their bundles billions of times per day. Each execution burns CPU cycles on a device drawing power. The JavaScript framework itself — React alone, in a typical production bundle — ships tens of kilobytes of framework code that must be parsed, compiled, and executed before any application logic runs at all. This is the Hello World problem again, expressed at the level of architecture rather than language: a fixed overhead, paid regardless of how simple the underlying task, paid again and again for each user on each page load, never amortised across instances because each instance is independent. A server-rendered equivalent would compute the result once. A static pre-rendered page would compute it once at build time. The client-side framework computes it, on user hardware, indefinitely, for every visitor who ever arrives.
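The amortisation argument can be made concrete with a toy calculation. Everything in this sketch is an illustrative assumption — the joules-per-render and joules-per-delivery figures are placeholders chosen for arithmetic clarity, not measurements — but the structure of the comparison is the point: one model pays the rendering cost once, the other pays it on every load.

```python
# Back-of-envelope comparison of where rendering energy is spent.
# All energy figures below are illustrative assumptions, not measurements.

def client_side_energy_j(page_loads, joules_per_render):
    """Client-side rendering: every page load re-parses and re-executes
    the bundle on the visitor's device, so the cost scales with loads."""
    return page_loads * joules_per_render

def server_side_energy_j(page_loads, joules_per_render, joules_per_delivery):
    """Server-side rendering with caching: render once, then pay only
    a small delivery cost for each subsequent load."""
    return joules_per_render + page_loads * joules_per_delivery

loads = 1_000_000      # assumed daily page loads
render_j = 5.0         # assumed joules to parse and execute a framework bundle
deliver_j = 0.05       # assumed joules to serve a cached HTML page

client = client_side_energy_j(loads, render_j)
server = server_side_energy_j(loads, render_j, deliver_j)
print(f"client-side: {client / 3.6e6:.2f} kWh, server-side: {server / 3.6e6:.2f} kWh")
```

Under these assumed figures the client-side model consumes roughly a hundred times more energy for an identical result; the exact ratio depends entirely on the placeholder inputs, but the linear-versus-constant scaling does not.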

· · ·

Step back from the web browser for a moment and look at the actual shape of consumer computing. The vast majority of the world's digital interactions now occur on one of two surfaces: a smartphone screen or a web browser on a laptop. This is not an approximation. Mobile devices account for well over half of all global web traffic, a share that has been growing for a decade and shows no sign of reversing.8 The remainder is overwhelmingly browser-based. The installed native application — the program that runs locally, uses local storage, and requires local compute in ways that are architecturally irreducible — is a shrinking fraction of what most people actually do with their devices most of the time. They open a browser. They open an app that is, functionally, a browser with a custom chrome. They read, they type, they submit forms, they watch video, they send messages. The computation required to support these activities is, in almost every case, occurring somewhere else — on a server handling authentication, a database returning query results, a CDN delivering a video stream encoded once and replayed identically for every viewer. What the device contributes is, primarily, a display, an input mechanism, and enough local processing to receive a stream of pixels or a parcel of HTML and render it to the screen.

This is an almost perfect description of a thin client. The concept is not new. Remote Desktop Protocol, developed by Microsoft and first shipped with Windows NT 4.0 Terminal Server Edition in 1998, does exactly this: it runs a full computing session on a remote server and transmits only the compressed screen output to the local device, receiving back only keystrokes and mouse events. Citrix had built a commercial empire on the same idea years earlier. The dumb terminal was the standard model of enterprise computing through the 1970s and much of the 1980s: a screen and keyboard connected to a mainframe that did everything, centralised, maintainable, efficient. The industry then spent thirty years reversing this arrangement — distributing fat clients, shipping operating systems that assumed local storage and local CPU, building frameworks that pushed computation progressively further from the data centre and closer to the user's lap — for reasons that made sense in an era of expensive network bandwidth, limited server capacity, and the novelty value of local interactivity. Those reasons have substantially expired. Bandwidth is cheap. Server capacity is elastic and can be colocated with renewable generation. The novelty of local computation has been entirely absorbed by the expectation of it. What remains is the energy cost of an architecture whose original justifications have been quietly withdrawn.

The thin client was abandoned not because it was wrong but because bandwidth was expensive and local processors were impressive. Bandwidth is now cheap. The processors are now the problem.

Consider what a fully server-side model would mean in practice. A consumer device in this model is a screen, a network interface, an input subsystem, and just enough local processing to decode a compressed display stream and encode input events for transmission. The computation — the JavaScript execution, the framework reconciliation, the application logic, the database queries, the rendering — happens once, on a server in a data centre that the provider has chosen to locate next to a wind farm or a solar installation or a hydroelectric plant, running on processors whose energy consumption is measured, managed, and attributed. The device itself becomes, in the language of the data centre, a zero-client: something closer to a television than a computer, drawing a few watts at full attention rather than the five to ten watts a smartphone SoC can draw under sustained load, or the fifteen to forty-five watts that a laptop CPU draws processing a heavy web application locally.
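The fleet-level arithmetic behind this trade can be sketched in a few lines. The wattages here are assumptions chosen for illustration, and the per-user server share assumes the efficient multiplexing of many users onto shared, well-utilised hardware that the essay describes:

```python
# Toy model of total fleet energy: local computation on every device
# versus thin clients backed by shared servers. All wattages are
# assumed figures for illustration, not measurements.

def fleet_kwh_per_hour(devices, device_watts, server_watts_per_user=0.0):
    """Total draw of a fleet of devices, plus each device's share of
    any server capacity, expressed in kWh per hour of use."""
    return devices * (device_watts + server_watts_per_user) / 1000.0

users = 1_000_000
fat = fleet_kwh_per_hour(users, device_watts=30.0)        # local compute
thin = fleet_kwh_per_hour(users, device_watts=5.0,        # display + decode only
                          server_watts_per_user=3.0)      # shared server slice
print(f"fat clients: {fat:,.0f} kWh/h, thin clients: {thin:,.0f} kWh/h")
```

The model is deliberately crude — it ignores network equipment, idle draw, and manufacturing carbon — but even so it shows why the location of the computation, not just its amount, dominates the accounting.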

The objection will be latency. RDP and its successors — protocols like QUIC, codecs like H.265 and AV1, the proprietary display compression used by NVIDIA GeForce Now, Amazon Luna, and Microsoft Xbox Cloud Gaming — have made the latency objection substantially weaker than it was in 1998. Game streaming, the most latency-sensitive consumer application imaginable, is now commercially viable at scale. If a first-person shooter can be rendered on a server and played on a screen with acceptable input lag, a spreadsheet can. A word processor can. A web browser serving the forms-and-database applications that constitute the overwhelming majority of what enterprise software actually does can — and in fact already does, since most enterprise web applications are already being rendered in a browser that is already receiving HTML from a server. The thin client model for enterprise software is not a vision of the future. It is a description of what already happens, seen clearly. The only missing element is the removal of the local computation that the browser performs after receiving that HTML: the JavaScript framework execution, the virtual DOM, the client-side routing, the bundle that must be downloaded and parsed before any content appears. Strip those away — return to server-rendered HTML delivered to a display protocol, or to a browser so stripped of local execution as to be functionally equivalent — and the device can be simpler, cooler, lighter, and much cheaper to manufacture.

Which brings the argument to a place that the technology industry has every commercial reason to find interesting, and has thus far shown almost no interest in at all. If the device is a dumb terminal, it is a commodity. It costs almost nothing to manufacture relative to the sophisticated multi-core SoC, the high-density NAND storage, the carefully engineered power management subsystem, and the radio stack that a modern smartphone or laptop requires. A display, a network interface, a low-power ARM chip capable of decoding a video stream, a battery: the bill of materials for such a device, at scale, is perhaps twenty to thirty dollars. The company providing the server infrastructure, by contrast, is running processors whose usage can be metered to the microsecond, billed to the consumer at a rate per CPU-hour or per gigabyte-transferred, and priced at whatever the market will sustain — which, given that the alternative is the consumer buying their own four-hundred-dollar phone and bearing the full capital cost themselves, is considerably more than the marginal cost of the compute. The provider absorbs the hardware cost of the terminal — effectively gives the device away, as mobile network operators have given away handsets tied to service contracts for thirty years — and recoups it through the service relationship, which is now a genuine compute relationship rather than a nominal one dressed up as a data plan.

The phone manufacturer sells you a processor you own and a battery you replace and a chassis you drop. The compute provider gives you a screen and charges you for thinking. The economics are not even close.

The model is not hypothetical. It exists in partial form in every cloud gaming service, in every Chromebook running applications in a browser backed by Google's infrastructure, in every enterprise Citrix deployment where a thin client on a call centre desk runs a full Windows session from a rack three floors below. What does not yet exist is the consumer version of this argument made honestly — not as a lock-in strategy disguised as convenience, but as an explicit trade: your device is free, or nearly free, because its simplicity is what makes the economics work; your compute is metered, because metered compute at scale, co-located with renewable generation and shared efficiently across millions of simultaneous users, is an order of magnitude less carbon-intensive than the same computation scattered across a billion privately-owned processors that are idle eighty percent of the time, drawing standby power continuously, manufactured from rare materials with a carbon cost that begins before the first instruction executes.

The objection to this model is not technical. It is political. It requires the consumer electronics industry to accept that the device is not the product — that the device is the loss-leader for a service relationship, and that its value proposition is simplicity rather than capability. Every company whose revenue depends on selling increasingly powerful consumer hardware has an obvious interest in the current arrangement. Every company whose revenue depends on selling cloud compute has an obvious interest in the alternative. The second group is now considerably larger and more profitable than the first. The climate has an unambiguous preference. What is missing is not the technology, the economics, or the infrastructure. What is missing is the decision to treat the current arrangement as a choice rather than as a fact of nature — and to ask, plainly, whether it is the right one.

· · ·

Here is an exercise in deliberate absurdity, offered not as a proposal but as a measuring instrument. Suppose that every program currently running on every server and device in the world were rewritten in hand-optimised assembly language. Not compiled from C with aggressive flags, not generated by a sufficiently clever compiler, but written by hand, instruction by instruction, by programmers who understood the target architecture with the depth that was once required to write operating systems for machines with forty-eight kilobytes of RAM. The energy savings would be extraordinary. A carefully hand-written assembly routine for a common operation — sorting, hashing, string comparison — can run in a fraction of the cycles that a JVM-compiled Java method or a CPython bytecode interpreter requires. Where a Python script might consume seventy-five times more energy than its C equivalent on a computation-bound benchmark, a hand-optimised assembly implementation of the same algorithm might consume perhaps half as much as the C version, or less, on a processor whose instruction set it has been written to exploit precisely.9 Apply that across the ICT sector's current electricity budget of roughly nine hundred terawatt-hours per year and the arithmetic is staggering. Even a fifty percent reduction — achievable in principle, grotesque in practice — would represent four hundred and fifty terawatt-hours annually: more than the entire electricity consumption of Spain. A ninety percent reduction, which is not physically impossible on certain workloads if you are willing to abandon every abstraction that makes software maintainable, would eliminate electricity demand equivalent to the combined consumption of Germany and France.
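The scale arithmetic in that thought experiment is simple enough to state explicitly. The nine hundred terawatt-hour sector figure and the national comparison are the essay's own rough numbers; every value should be read as an order-of-magnitude estimate rather than a measurement:

```python
# The thought experiment's arithmetic, stated explicitly.
# Both totals are rough, order-of-magnitude figures from the essay.

ICT_TWH = 900.0      # assumed annual ICT-sector electricity use, TWh
SPAIN_TWH = 250.0    # approximate annual electricity consumption of Spain, TWh

savings_50 = ICT_TWH * 0.50   # the "grotesque in practice" halving
savings_90 = ICT_TWH * 0.90   # the abandon-every-abstraction ceiling

assert savings_50 > SPAIN_TWH  # 450 TWh exceeds Spain's annual consumption
print(f"50% reduction: {savings_50:.0f} TWh/yr; 90% reduction: {savings_90:.0f} TWh/yr")
```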

The point of this calculation is precisely its absurdity. Nobody is going to rewrite the world's software in assembly. The productivity cost would collapse the industry that produces software in the first place. The maintenance burden would be incomprehensible. The first security vulnerability in a hand-optimised memory allocation routine on a production server would cost more to fix than the lifetime energy savings of fixing it. The exercise is not a policy recommendation. It is a scale indicator. It says: the gap between what we are doing and what is physically possible is enormous. It is not merely the gap between good engineering and bad engineering. It is the gap between an industry that has treated energy as an externality — someone else's problem, absorbed by the atmosphere on our behalf — and an industry that has taken its physical footprint seriously. Most of that gap will never be closed, and most of it does not need to be. But a meaningful portion of it can be closed by choices that are not absurd at all: by preferring compiled languages over interpreted ones where the performance difference is significant and the developer productivity cost is manageable, by building server-side rendering paths for content that does not require client-side interactivity, by choosing data centres powered by renewable energy for workloads that can tolerate geographic flexibility, by writing less code, by depending on fewer frameworks, by treating megabytes of memory as something that must be justified rather than as a resource to be assumed.

· · ·

While the programs run, the programmers write them. And the environments in which programmers write code have themselves become extraordinary consumers of resources, in ways that are almost never discussed because the cost, again, is invisible — paid by the developer's laptop battery or the office electricity bill, categorised as overhead, and never attributed to the software being produced.

Visual Studio, Microsoft's flagship development environment for .NET and C++ development, is among the most feature-rich pieces of software ever shipped. It provides real-time syntax analysis, integrated debugging, build tooling, version control, test running, code generation, refactoring tools, profiling, database connectivity, and a plugin ecosystem of several thousand extensions. On a Windows machine in active use, it regularly consumes enough power to leave a laptop chassis noticeably warm to the touch. Users have documented sustained high CPU utilisation even when no build is running and no file is being edited — the IDE performing background indexing, language server operations, and telemetry collection on behalf of capabilities that may never be invoked in the current working session.10 Research comparing IDE energy consumption found that IntelliJ IDEA and Eclipse, the two dominant Java development environments, each consumed significantly more energy than Visual Studio Code, itself not a lightweight application, when performing equivalent tasks — with the heavier IDEs drawing roughly fifteen percent more energy over the course of a development session for a simple Java program.11

Fifteen percent sounds modest. But consider what is being compared. IntelliJ and Eclipse are being used, in most cases, to write programs that could be written in a plain text editor with syntax highlighting. The additional capabilities — the project structure analysis, the framework-aware refactoring, the integrated build graph visualisation, the live template engine — exist primarily because the languages and frameworks they support are complex enough to make those capabilities necessary. Java's verbosity generates demand for code generation tools. Maven's configuration complexity generates demand for IDE-integrated build management. Spring Boot's annotation-heavy wiring generates demand for framework-aware inspection and injection mapping. The IDE grew to accommodate the complexity of the language and framework ecosystem. The complexity grew, in part, because the IDE was there to manage it. The energy cost sits at the intersection of all of this: paid not just when the program runs, but while it is being written, in an environment that consumes continuously in order to support a workflow whose underlying complexity was itself a choice.

For a large development team — a hundred engineers, each running an IDE for eight hours a day, five days a week — the electricity consumed by development tooling alone is not negligible. It is also entirely upstream of any line of production code. It is the cost of the process, not the product. And the process has become, over the past two decades, substantially more energy-intensive not because the programs being produced are more computationally ambitious, but because the frameworks and languages chosen to produce them demand infrastructure to manage their own complexity.

The IDE grew heavy to accommodate frameworks that had grown heavy to accommodate languages that had grown heavy because nobody was paying the energy bill. The cost was always there. It was simply charged to the atmosphere.

· · ·

There is an instrument that sits at the precise intersection of every problem described in this essay, and which does not yet exist in any mainstream toolchain. A compiler that can be instructed to optimise for energy consumption rather than, or in addition to, execution speed. The concept is not exotic: modern compilers already accept dozens of flags that trade one dimension of performance against another. -O2 and -O3 in GCC and Clang produce binaries that run faster at the cost of longer compilation and larger code that leans harder on the instruction cache. -Os optimises for binary size. Profile-guided optimisation trades a measurement run for better branch prediction on the hot paths. The infrastructure for multi-dimensional optimisation is already there. What is missing is the energy dimension entirely.

What such a flag — call it -Oenergy for the sake of argument — would actually do is not straightforward, because energy consumption is not simply a function of instruction count. It depends on memory access patterns, cache behaviour, vector unit utilisation, and the frequency at which the processor can be permitted to idle between bursts of work. But none of these are unmeasurable. Intel's RAPL interface, which underpins the academic benchmarks cited in this essay, already exposes per-package and per-core energy consumption to software running with appropriate privileges. ARM equivalents exist. The measurement infrastructure is there. What does not exist is a compiler that uses it — that, during profile-guided compilation, measures the energy consumed by different code generation strategies for the same function, selects the strategy that minimises joules rather than nanoseconds, and records its choice.
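A minimal sketch of what such a measurement harness might look like, assuming the Linux powercap interface that exposes RAPL counters (the sysfs path below reports cumulative package energy in microjoules and typically requires elevated privileges; paths and availability vary by platform). The counter reader is injected as a parameter so the same harness works with any energy source, and counter wraparound is ignored for brevity:

```python
# Sketch of joule-level measurement of a code path, in the spirit of the
# RAPL-based benchmarks cited above. Assumes Linux with Intel RAPL exposed
# through the powercap sysfs interface; not a production tool.

def read_rapl_uj(path="/sys/class/powercap/intel-rapl:0/energy_uj"):
    """Read the cumulative package energy counter, in microjoules."""
    with open(path) as f:
        return int(f.read())

def measure_joules(func, read_energy_uj=read_rapl_uj):
    """Run func and return (result, joules consumed between the two
    counter reads). Counter wraparound is ignored for brevity."""
    before = read_energy_uj()
    result = func()
    after = read_energy_uj()
    return result, (after - before) / 1_000_000.0
```

A profile-guided, energy-aware compiler would in effect run a harness like this over candidate code generation strategies and keep the one that minimises joules; the point of the sketch is only that the measurement side already exists.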

More consequential than the optimisation flag is what would accompany it: an energy estimate emitted as part of the build output. Not at runtime, not as a profiling exercise requiring specialist tooling, but as a standard compiler artefact — a figure, expressed in expected millijoules per thousand invocations under a reference workload, printed alongside the binary size and compilation time that build systems already report. This would be, for energy, what static analysis is for correctness: a way of making an invisible property visible at the moment of creation rather than years later when the electricity bill has already been paid. A team that could see, on every build, that their new service was expected to consume forty percent more energy per request than the version it replaced would have information they currently do not have. They might choose to optimise. They might choose to accept the cost. What they cannot currently do is choose at all, because the number does not exist in any form they ever encounter.

The build system already tells you how long compilation took and how large the binary is. It does not tell you how much electricity the program will consume. There is no technical reason for this omission. There is only the habit of not asking.

The Green Software Foundation's Software Carbon Intensity specification has begun to formalise some of this thinking at the application level — a standard for measuring and reporting the carbon cost of running software in production.12 It is a meaningful step. But it operates downstream of the decisions that matter most: the language chosen, the framework adopted, the rendering architecture selected. By the time a production system is emitting SCI metrics, the structural energy cost has been baked in by choices made months or years earlier, in design documents and architecture meetings where nobody had a joule estimate to consult. The compiler flag and the build-time energy estimate would move the accounting to where the choices are actually made. They would make energy a first-class output of the software development process rather than an afterthought discovered during a sustainability audit.
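The SCI specification's core relation is compact enough to transcribe directly: a software carbon intensity score is ((E × I) + M) per R, where E is the energy the software consumed (kWh), I is the carbon intensity of the grid that supplied it (gCO₂eq/kWh), M is the share of embodied hardware emissions attributed to the measurement window, and R is the functional unit of the system (per request, per user, and so on). The example inputs below are purely illustrative:

```python
# Direct transcription of the Software Carbon Intensity relation,
# SCI = ((E * I) + M) per R. Example inputs are illustrative only.

def sci(energy_kwh, intensity_g_per_kwh, embodied_g, functional_units):
    """Grams of CO2-equivalent per functional unit of the software."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# e.g. 2 kWh consumed on a 400 gCO2/kWh grid, with a 200 g embodied
# share for the window, across 10,000 served requests:
print(f"{sci(2.0, 400.0, 200.0, 10_000):.2f} gCO2eq per request")
```

The formula itself makes the essay's structural point: E is fixed long before production by the language, framework, and rendering choices that the specification measures only after the fact.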

· · ·

None of this is accidental. The software industry has made a series of coherent choices across several decades, and those choices have consistently traded energy efficiency for developer productivity, startup speed, and ecosystem richness. Those trades solved real problems in the conditions that produced them. Java's write-once-run-anywhere promise was a genuine answer to platform fragmentation in the 1990s. React's component model was a genuine response to the unmanageable complexity of stateful DOM manipulation at scale. Visual Studio's integration was a genuine response to the difficulty of navigating large codebases without mechanical assistance. The problem is not that these tools were built. It is that the conditions in which they were built — energy cheap, atmosphere treated as a free dumping ground, enterprise applications assumed to require complexity proportional to their cost — no longer hold, and the industry has declined to notice. The tools remain. The assumptions that justified them have been voided by the climate. The energy bill, meanwhile, keeps arriving, and it is charged to the whole world, not to the industry.

The specific failure is a failure of accounting. If a company ships a web application built on a client-side JavaScript framework, it saves money on servers. The server costs are on the balance sheet. The energy consumed by several billion client-side JavaScript executions per year is not on any balance sheet. It is externalised to the users' electricity bills and, through the carbon intensity of the grids those bills describe, to the atmosphere. The same externalisation applies to developer tooling: the electricity consumed by an IDE is paid by the employer or the developer, categorised as an operational expense, and never connected to the software being produced. The same externalisation applies to the choice of language and runtime: a Java application running on a JVM consumes more energy than a C application performing the same computation, but the energy is paid by whoever runs the servers, categorised as infrastructure cost, and never surfaced as a consequence of the language choice made years earlier in a meeting where nobody mentioned watts.

What the climate requires is not an end to Java or React or integrated development environments. It requires that the industry stop treating energy as someone else's problem. It requires that architectural decisions — server-side rendering versus client-side, compiled versus interpreted, lightweight tooling versus heavyweight IDE — be made with the energy implications visible rather than hidden. It requires that the enormous renewable energy investments being made by cloud providers not be squandered by frameworks that multiply computational work across client devices operating on uncontrolled grids. It requires, at minimum, that the people who design programming languages and web frameworks and development environments understand that they are making energy policy, whether or not they have chosen to think of it that way.

The software industry has a structural advantage that no other sector of comparable scale possesses: its product is made of logic, and logic can be changed. A steel mill cannot choose to produce less heat. A cement plant cannot choose to emit less CO₂ without changing its chemistry. A software team can choose, tomorrow, to render on the server instead of the client, to ship less JavaScript, to write the performance-critical path in a language that respects the cost of a CPU cycle, to open their code in a text editor with syntax highlighting instead of an IDE that performs continuous background indexing. These are not sacrifices. They are practices. The industry adopted them once, before the frameworks arrived, and can adopt them again — this time with the understanding of what the alternative is costing, and who is paying.
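The server-side choice described above can be sketched in a few lines. This is a deliberately toy illustration in Python — the data, markup, and function names are all invented for the example — contrasting the same content delivered once as finished HTML with the same content delivered as JSON that every client device must separately transform into a page:

```python
import json

# Hypothetical article data, invented for illustration.
ARTICLES = [
    {"title": "The Debt We Don't Count", "author": "Anonymous"},
    {"title": "Energy Efficiency across Programming Languages", "author": "Pereira et al."},
]

def render_server_side(articles):
    """Return finished HTML: the client device only parses and paints,
    and the templating work runs once, on a grid the operator controls."""
    items = "\n".join(
        f"  <li><strong>{a['title']}</strong> by {a['author']}</li>" for a in articles
    )
    return f"<ul>\n{items}\n</ul>"

def render_payload_for_client(articles):
    """Return raw JSON: every client device must then execute framework
    code to turn this into DOM nodes, repeating the work per device."""
    return json.dumps(articles)
```

The two functions do comparable work; the difference is where the remaining work happens, and how many times.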

The atmosphere does not send invoices. This has been the industry's longest-running and most expensive subsidy. The bill exists nonetheless. It is being paid in heat records and displaced populations and ecosystem failures that no software update will fix. The question is not whether the industry will eventually be held to account for the energy its choices consume. The question is whether it will choose to account for that energy itself, before the accounting is done for it by events it cannot optimise away.

1The range in estimates reflects genuine disagreement about how to define the ICT sector's perimeter — whether to include user devices, whether to count manufacturing and embodied carbon, whether to include entertainment and media systems. The most commonly cited figures are those from Ericsson (approximately 4% of global electricity in the use stage in 2020), the UK Parliamentary Office of Science and Technology (4–6% for 2020, excluding televisions), and higher estimates from Enerdata placing the figure at 5–9%. The trajectory toward 10–20% by 2030, absent major efficiency improvements, comes from Enerdata's executive briefing on digitalization and electricity demand. These figures cover electricity consumption in the use stage and do not include manufacturing and supply chain emissions, which are substantial.

2The figure of approximately eight hundred classes being initialised before a Swing-based Hello World program displays its output was reported in a widely circulated technical discussion on OSnews in the mid-2000s, attributed to Werner Randelshofer. The precise number varies by JVM version and configuration, but the underlying point — that the runtime cost of a JVM application bears no relationship to the complexity of the application itself — is structurally correct and uncontested. A simple Java application, regardless of what it does, must load the JVM, which loads its class hierarchy, which loads its just-in-time compiler, which loads its garbage collector, before any user code is executed. The application's complexity determines the work it performs; the runtime's complexity is constant.

3The Spring Boot memory figures cited here reflect observed measurements from production and near-production configurations. Work by Parth Mistry, published on Medium in 2024, documented a simple Spring Boot application with a 128MB heap cap consuming approximately 309MB of total memory — roughly 181MB of overhead above and beyond the application's heap allocation, attributable to JVM internals including the code cache, metaspace, thread stacks, and garbage collector metadata. The Spring documentation on JVM memory footprint, published via the Spring.io blog, documented similar patterns. These numbers are not constant: GraalVM native image compilation, CRaC, and various JVM tuning flags can reduce them materially. But they reflect the default conditions under which the majority of Java applications are deployed.

4The foundational study is Pereira et al., "Energy Efficiency across Programming Languages: How Do Energy, Time, and Memory Relate?", published in Science of Computer Programming in 2021, extending earlier work from 2017. The study used Intel's RAPL (Running Average Power Limit) interface to measure energy consumption across twenty-seven programming languages running benchmarks from the Computer Language Benchmarks Game and Rosetta Code. The headline finding — that C, Rust, and C++ were the most energy-efficient, while Python, Ruby, and Perl were among the least — has been widely reproduced in the popular press and widely debated in the technical literature. The finding that interpreted languages can consume up to seventy-five times more energy than C on compute-bound benchmarks is from this work; practical assessments suggest the real-world differential for typical applications is smaller, perhaps four to ten times, because real programs spend significant time on I/O and system calls where the language choice matters less. The differential is real nonetheless.
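The measurement idea behind this line of research can be sketched briefly. On Linux, the powercap interface exposes RAPL as a cumulative energy counter in microjoules; measuring a workload means reading the counter before and after and handling the counter's wraparound. The sysfs paths below are the standard powercap locations, but availability depends on hardware and permissions, so treat this as an illustration of the method rather than a benchmarking tool:

```python
# Standard Linux powercap paths for the first RAPL package domain;
# these exist only on supported Intel/AMD hardware with suitable permissions.
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"
RAPL_RANGE = "/sys/class/powercap/intel-rapl:0/max_energy_range_uj"

def read_uj(path):
    with open(path) as f:
        return int(f.read())

def energy_delta_joules(before_uj, after_uj, max_range_uj):
    """Energy consumed between two counter readings, in joules.
    The counter wraps at max_range_uj, so a smaller 'after' reading
    means one wraparound occurred during the measured interval."""
    if after_uj >= before_uj:
        delta = after_uj - before_uj
    else:
        delta = (max_range_uj - before_uj) + after_uj
    return delta / 1_000_000  # microjoules -> joules

def measure(workload):
    """Run a zero-argument callable and return (result, joules).
    The joules are consumed by the whole package, not just this process."""
    max_range = read_uj(RAPL_RANGE)
    before = read_uj(RAPL_ENERGY)
    result = workload()
    after = read_uj(RAPL_ENERGY)
    return result, energy_delta_joules(before, after, max_range)
```

Note that RAPL reports package-level energy, so isolating a single process requires the kind of controlled benchmark conditions the studies describe.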

5Van Kempen, Kwon, Nguyen, and Berger, "It's Not Easy Being Green: On the Energy Efficiency of Programming Languages," arXiv:2410.05460, submitted October 2024. This paper directly addresses the causal interpretation problem in earlier work, arguing that associations between language choice and energy consumption identified by Pereira et al. were misread as causal by subsequent commentators, when in fact the relationship is mediated by implementation quality, memory activity patterns, and core utilisation in ways the earlier methodology did not separate. The paper does not argue that language choice is energy-irrelevant; it argues that the relationship is more complex than a simple ranking implies, and that distinguishing between language implementations (for instance, CPython versus PyPy for Python, or different JVM implementations for Java) is essential to drawing valid conclusions.

6The renewable energy commitments of the major hyperscale cloud providers — Google, Microsoft, Amazon — are real but require careful interpretation. Google reports matching its global electricity use with renewable energy purchases, but this is a market-based accounting approach that does not guarantee that any given computation is powered by renewable electrons at any given moment. The more meaningful figure is the hourly carbon-free energy percentage, which Google has also begun reporting and which is substantially lower than the annual average match. The point about centralising computation in renewable-powered data centres is structurally valid: a server running in a data centre with a high carbon-free energy percentage will produce lower emissions for the same computation than a user device running on an average grid. The argument for server-side rendering over client-side rendering is not merely about aggregate energy consumption; it is about where that consumption occurs and who controls its carbon intensity.
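The gap between the two accounting approaches reduces to simple arithmetic. The hourly profile below is invented for illustration: a consumer whose annual certificate purchases exactly match consumption can still have a much lower hourly carbon-free percentage, because the clean hours overproduce and the dirty hours are not covered:

```python
# Invented hourly profile: (consumption_mwh, carbon_free_fraction_that_hour).
hourly = [
    (10, 1.0),  # midday: grid mix fully carbon-free
    (10, 1.0),
    (10, 0.2),  # evening: renewables mostly gone
    (10, 0.1),
]

consumed = sum(c for c, _ in hourly)         # 40 MWh
carbon_free = sum(c * f for c, f in hourly)  # 23 MWh

# Market-based annual match: buy 40 MWh of certificates, report "100% renewable".
annual_match_fraction = 1.0

# Hourly carbon-free energy: the fraction that was actually clean when consumed.
hourly_cfe = carbon_free / consumed          # 0.575
```

Both numbers describe the same consumption; only the second says anything about what was burning while the code ran.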

7The figures on client-versus-server energy distribution come from the Marmelab "Argos" study published in March 2021, comparing React and vanilla JavaScript web stacks running the RealWorld benchmark application. The finding that 69–74% of total energy consumption occurred on the client device, with the server accounting for less than 2%, reflects the architecture of a simple data-driven web application with minimal server-side processing. The authors caution that their results come from a small application and would benefit from validation on more complex systems; a ResearchGate-hosted study on "The Ecological Impact of Server-Side Rendering" (2023) takes up this direction, finding "substantial carbon emission savings potential" from server-side rendering approaches even with conservative estimates. The architectural logic is independent of the precise numbers: client-side computation is distributed across user devices operating on uncontrolled grids; server-side computation is centralised and can be optimised or relocated.
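The architectural point can be restated as a back-of-envelope emissions model. All figures below are invented for illustration, not Marmelab's measurements: the same page view split differently between client and server, with the client share priced at an average grid intensity and the server share at a cleaner data-centre grid:

```python
def emissions_g(views, client_wh, server_wh, client_grid, server_grid):
    """Total grams of CO2e for a number of page views, given per-view
    energy (Wh) and grid intensity (gCO2e/kWh) on each side."""
    client = views * client_wh / 1000 * client_grid
    server = views * server_wh / 1000 * server_grid
    return client + server

# Client-heavy SPA: most energy spent on user devices, average grid (~440 gCO2e/kWh).
spa = emissions_g(1_000_000, client_wh=0.7, server_wh=0.02,
                  client_grid=440, server_grid=100)
# Server-rendered: work shifted to a data centre with a cleaner supply.
ssr = emissions_g(1_000_000, client_wh=0.2, server_wh=0.3,
                  client_grid=440, server_grid=100)
```

With these assumed numbers the server-rendered variant emits roughly a third as much, even though its total energy is only modestly lower — the saving comes from moving work onto a grid whose intensity the operator controls.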

8Mobile's share of global web traffic has exceeded fifty percent since approximately 2017 and stood at around sixty percent as of 2024, according to aggregated data from StatCounter Global Stats. The figure varies significantly by region — in South Asia and sub-Saharan Africa the mobile share exceeds eighty percent — and by application type, with social media, messaging, and video skewing more heavily mobile. The argument here does not depend on a precise figure; it depends on the structural observation that the dominant consumer computing surface is already a device whose primary function is to display content retrieved from a remote server, and whose local computation is therefore largely redundant overhead rather than architectural necessity. The thin client economics described in this section are consistent with the existing business model of mobile network operators, who have subsidised handset hardware against service contracts for three decades. The specific claim that a device capable only of decoding a compressed display stream and transmitting input events could be manufactured for twenty to thirty dollars at scale is consistent with the bill-of-materials analysis of existing low-cost Android devices and Raspberry Pi-class single-board computers, neither of which represents the engineering floor for a purpose-built zero-client terminal.

9The comparison between hand-optimised assembly and higher-level language implementations is deliberately illustrative rather than precisely documented, because systematic benchmarks of hand-written assembly against compiler-generated code are rare in the open literature — in part because the compilers are often surprisingly competitive, and in part because the exercise requires specialist knowledge that few researchers possess. The Python-to-C energy ratio of approximately seventy-five times on compute-bound benchmarks is from Pereira et al. The claim that hand-optimised assembly can outperform compiler-generated C by a factor of roughly two on suitable workloads is consistent with published results from domains where such optimisation is routinely performed: cryptography, signal processing, video codec development. The estimate that a fifty percent reduction in ICT energy consumption would represent approximately 450 TWh annually is based on Ericsson's 2020 figure of approximately 915 TWh for ICT use-stage electricity consumption. Spain's total electricity consumption in 2022 was approximately 250 TWh; Germany's was approximately 490 TWh; France's approximately 460 TWh. The arithmetic is correct. The conclusion it points to is a matter of proportion, not precision.
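The proportion argument in this note, restated as arithmetic using the approximate figures given above (all values in TWh per year):

```python
# Approximate figures from the text, TWh per year.
ict_use_stage_2020 = 915            # Ericsson's estimate for ICT use-stage electricity
halved = ict_use_stage_2020 * 0.5   # 457.5 TWh saved by a 50% reduction

spain_2022 = 250
france_2022 = 460
germany_2022 = 490

# The saving would exceed Spain's entire annual consumption and fall
# within a few TWh of France's.
```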

10Complaints about Visual Studio's background power consumption are extensively documented in Microsoft's Developer Community forums, where multiple threads from 2019 onward document users observing "Very High Power Usage" in macOS Activity Monitor or Windows Task Manager even when the IDE is open but idle. The background processes responsible include the Roslyn language service (continuous background compilation for IntelliSense), background indexing of the solution, telemetry collection, and various extension processes. These are not bugs; they are the IDE providing its feature set. The energy cost of providing that feature set continuously, for an engineer who may be in a meeting or reading documentation, is real and undisclosed.

11The IDE energy comparison is from a study published on the Sustainable Software Engineering course site at TU Delft (luiscruz.github.io), measuring energy consumption of IntelliJ IDEA Community Edition, Eclipse, and Visual Studio Code running a simple Java program using the EnergiBridge measurement framework. The finding of approximately fifteen percent lower energy consumption for VS Code compared to the most efficient IDE (Eclipse) in this study is for a simple task; the researchers note that for complex projects requiring the extended capabilities of a full IDE, the comparison may not hold in the same direction. The broader point — that tool complexity carries an energy cost that is rarely visible to the teams choosing those tools — stands independently of the precise differential.

12The Software Carbon Intensity (SCI) specification is published by the Green Software Foundation and defines a methodology for calculating a rate of carbon emissions for a software system — expressed as grams of CO₂ equivalent per unit of functional output — covering both operational emissions from energy consumption and a proportional share of embodied emissions from the hardware the software runs on. The specification is available at greensoftware.foundation. The SCI's limitation, noted here, is that it is a measurement of running systems rather than a design-time instrument: it tells you what a deployed system is costing, not what a design decision will cost before the code is written. The compiler energy estimate proposed in this essay would complement rather than replace the SCI by moving the accounting upstream to the point where architectural choices are still available to be made.
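For concreteness, the SCI rate itself is a short formula: the specification defines it as ((E * I) + M) per R, where E is operational energy (kWh), I is the carbon intensity of that energy (gCO₂e/kWh), M is the embodied emissions amortised to the measured period (gCO₂e), and R is the functional unit. The example values below are invented:

```python
def sci(energy_kwh, intensity_g_per_kwh, embodied_g, functional_units):
    """Software Carbon Intensity: grams of CO2e per functional unit,
    following the Green Software Foundation formula ((E * I) + M) / R."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Hypothetical service: 10,000 requests served on 2 kWh at 400 gCO2e/kWh,
# with 50 g of embodied emissions amortised to the same period.
rate = sci(2, 400, 50, 10_000)  # 0.085 gCO2e per request
```

The formula makes the essay's point visible in miniature: E and I are fixed by decisions made long before the metric is computed.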