Essay · Software & History

There Is
Nothing New

On software's infinite regress, and how the industry sells the same seven ideas to each generation under a different name

By Anonymous

"The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun."

— Ecclesiastes 1:9

The software industry has a creation myth it returns to compulsively: the idea that this time, the new thing is genuinely new. That what is being unveiled at this conference, in this blog post, in this breathless framework announcement, is a departure from what came before rather than a redecoration of it. The myth is necessary. It sells conference tickets, generates consulting revenue, moves job advertisements, and provides the psychological fuel that keeps engineers excited enough to rewrite, for the fifth time in their careers, something that was already working. Without the myth, the industry would have to confront what a clear-eyed reading of its own history reveals: that it has been solving the same six or seven problems since the 1960s, that it has solved several of them correctly at least once, and that it has an extraordinary talent for forgetting that it has done so.

This is not an argument against progress. Genuine progress has occurred. Processors are faster by many orders of magnitude. Storage is essentially free. Networks have become so fast and cheap that entire architectures built around the assumption of their scarcity have had to be rethought. The tools for writing, testing, deploying, and monitoring software have improved substantially. These are real. The argument here is narrower: that at the level of ideas — the conceptual structures that the industry uses to organise computation, distribute work, isolate components, pass messages, and manage state — almost everything celebrated as a breakthrough in the last twenty years was already described, implemented, deployed, and in several cases deprecated, before most of the people celebrating it were born.

· · ·

Begin with the most fundamental reinvention of the modern era: the virtual machine. When VMware shipped its first server product in 2001 and the technology press responded as though a new concept had arrived in the world, IBM's mainframe division had been running production virtual machines for twenty-nine years.1 VM/370, announced on the 2nd of August 1972, implemented what IBM called a Control Program — a thin layer of software sitting on bare metal that presented each user with a complete, isolated virtual copy of the underlying System/370 hardware, capable of running any operating system that could run on the physical machine. The term for what this software did — hypervisor, a supervisor of supervisors — was coined precisely then. The concept itself predates even VM/370: IBM's Cambridge Scientific Center had been running CP-40, the direct ancestor of VM/370, in daily production use since April 1967, two years before the moon landing and three decades before VMware's Series A funding round.

The reason VMware felt like a discovery was that the x86 architecture, through an accident of its original design, was notoriously difficult to virtualise — seventeen of its instructions were sensitive to the privilege level at which they ran but did not trap when executed in user mode, which made naive virtualisation impossible and had led to the conventional wisdom, sustained for decades, that x86 simply could not be virtualised efficiently. VMware's innovation was a binary translation technique that worked around this constraint dynamically. It was a genuine engineering achievement. What it was not, in any meaningful sense, was the invention of virtualisation. The idea was thirty years old. IBM's engineers had solved every conceptual problem it presented, and z/VM, the direct descendant of VM/370, was running hundreds of thousands of virtual machines on production mainframes throughout the period when the trade press was announcing VMware as a revolution.

Docker arrived in 2013 and was received with the same quality of astonishment. Containers — isolated process environments sharing a kernel but separated by namespaces and control groups — were presented as a fundamental breakthrough in how software was deployed and run. The chroot system call, which creates an isolated filesystem root for a process and is the conceptual ancestor of everything that Docker does, was introduced in Unix in 1979. FreeBSD jails, which extended this isolation to include process and network namespaces, shipped in 2000. Solaris Zones, which provided full OS-level virtualisation including resource controls and network isolation, shipped in 2004 — nine years before Docker. Linux Containers (LXC), the direct technological substrate on which the first versions of Docker were built, were available from 2008. The innovation Docker contributed was a packaging format and a workflow that made existing container primitives accessible to developers who had never heard of LXC or Solaris Zones. The idea was not new. The user experience was. These are different things, and the industry has a persistent habit of celebrating the latter as though it were the former.2

Every generation discovers containers. Every generation believes it has invented them. The kernel does not share the excitement.
· · ·

Kubernetes arrived in 2014, presented by Google as an open-source container orchestration system derived from their internal Borg infrastructure, and received by the industry as the definitive solution to the problem of running large numbers of distributed services reliably across many machines. It was, in the terms of its own moment, genuinely impressive. It was also a precise reimplementation of capabilities that Microsoft's COM+ and its predecessor Microsoft Transaction Server had provided for Windows-based distributed applications in 1999, that the Open Software Foundation's Distributed Computing Environment had provided across Unix platforms from 1993, and that CORBA — the Common Object Request Broker Architecture, published by the Object Management Group — had specified as an open standard from 1991.3

The capabilities in question are not subtle. Service discovery: the ability for one component to find another by name, without hardcoding a network address. Health monitoring: the ability to detect that a component has stopped responding and route traffic away from it. Load balancing: the ability to distribute requests across multiple instances of the same service. Lifecycle management: the ability to start, stop, restart, and replace service instances under policy control. Secrets management: the ability to pass credentials to a service without embedding them in its configuration. These are the things Kubernetes does. They are also the things DCE did, the things COM+ did, the things CORBA's Naming Service and Trading Service and Lifecycle Service did — all of which were themselves responses to problems that had been articulated in the distributed systems literature since at least the early 1980s, when Sun Microsystems was inventing Network File System and remote procedure calls and the engineers involved were already writing papers about the difficulties of location transparency and partial failure that Kubernetes users would rediscover, with an air of novelty, thirty years later.
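The capabilities in question are concrete enough to fit in a sketch. The toy registry below (an illustrative Python fragment, not the API of Kubernetes, DCE, or any real system; every name in it is invented) compresses three of them into a few dozen lines: discovery by name, health tracking, and round-robin load balancing.

```python
import itertools

class ServiceRegistry:
    """Toy registry: discovery by name, health tracking, round-robin balancing."""

    def __init__(self):
        self._instances = {}   # service name -> {address: healthy?}
        self._cursors = {}     # service name -> round-robin counter

    def register(self, name, address):
        # Service discovery: components are found by name, not hardcoded address.
        self._instances.setdefault(name, {})[address] = True

    def mark_unhealthy(self, name, address):
        # Health monitoring: a failed probe routes traffic away from the instance.
        self._instances[name][address] = False

    def resolve(self, name):
        # Load balancing: hand out the next healthy address in rotation.
        healthy = [a for a, ok in self._instances.get(name, {}).items() if ok]
        if not healthy:
            raise LookupError(f"no healthy instance of {name!r}")
        cursor = self._cursors.setdefault(name, itertools.count())
        return healthy[next(cursor) % len(healthy)]

registry = ServiceRegistry()
registry.register("billing", "10.0.0.1:8080")
registry.register("billing", "10.0.0.2:8080")
first = registry.resolve("billing")
second = registry.resolve("billing")
registry.mark_unhealthy("billing", "10.0.0.1:8080")
after_failure = registry.resolve("billing")
```

CORBA's Naming Service, DCE's directory service, and a Kubernetes Service object are, at this level of abstraction, elaborations of the same three methods.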

The defensive response to this comparison — and it arrives reliably whenever the comparison is made — is that Kubernetes is simpler, more open, more cloud-native, and easier to operate than its predecessors. This is largely true. CORBA's Interface Definition Language was a bureaucratic nightmare. DCOM was so difficult to configure securely across a network firewall that entire consulting careers were built on the attempt. The Enterprise Service Bus implementations of the SOA era created central dependency structures so fragile and so expensive to modify that teams who had adopted them sometimes found it easier to route around them entirely than to add a new service endpoint. The new implementations are, in many respects, improvements on the old ones. The point is not that the improvements are fake. The point is that the industry chose to describe them as inventions rather than refinements — chose to erase the prior work, rename the concepts, and sell the result to each new cohort of engineers as though the field had, this time, finally found its way out of the wilderness. It had not. It had found a nicer path through the same wilderness it had been lost in for forty years.

· · ·

The pattern runs through every layer of the stack. Remote procedure calls — the mechanism by which one program invokes a function in another program running on a different machine, receiving a result as though the call were local — were formalised by Sun Microsystems as ONC RPC in 1984, having been conceptually present in the distributed systems literature since the late 1970s.4 CORBA's IDL-based remote invocation built on this in 1991. Microsoft DCOM extended it across Windows networks in 1996. SOAP wrapped it in XML and called it a Web Service in 1998. REST stripped the envelope off and called SOAP overcomplicated in 2000, using the same HTTP transport that SOAP used but declining to specify a schema. JSON-RPC did what XML-RPC had done, but with a lighter serialisation format. gRPC, released by Google in 2015, reintroduced a contract-first interface definition language, binary serialisation, and bidirectional streaming — which is to say it reintroduced the core features of CORBA's IDL, CORBA's CDR wire format, and CORBA's event service respectively, having spent fifteen years establishing that REST was the right answer, discovering that REST at scale requires all the things REST had abandoned, and rebuilding them. The wheel, reshaped slightly at each iteration, turned.
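Every turn of that wheel keeps the same three-step shape: marshal a named call, dispatch it on the far side, unmarshal the result. A minimal sketch (illustrative Python; in-process rather than networked, with JSON standing in for whichever serialisation format the current cycle prefers) shows the skeleton that Sun RPC, CORBA, SOAP, and gRPC each re-dressed:

```python
import json

# "Server" side: a dispatch table standing in for the remote process.
def add(a, b):
    return a + b

HANDLERS = {"add": add}

def server_receive(wire_bytes):
    # Unmarshal the request, dispatch by name, marshal the result --
    # the same three steps in every RPC generation since 1984.
    request = json.loads(wire_bytes)
    result = HANDLERS[request["method"]](*request["params"])
    return json.dumps({"result": result}).encode()

# "Client" side: a stub that makes the remote call look local.
def remote_call(method, *params):
    wire = json.dumps({"method": method, "params": params}).encode()
    reply = server_receive(wire)   # a real stub would cross the network here
    return json.loads(reply)["result"]

total = remote_call("add", 2, 3)   # looks like a local call; conceptually is not
```

What changes between generations is the envelope (XML, JSON, protocol buffers) and whether the contract is declared up front; what never changes is the line marked "would cross the network here", which is where every fallacy of distributed computing lives.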

The message queue followed an identical arc. IBM MQ — originally called MQSeries — shipped in 1993 and provided exactly what every subsequent messaging system would provide: durable, ordered queues of messages between producers and consumers, decoupled by the queue so that neither side needed to be available simultaneously. The Java Message Service standardised the programming model in 1998. AMQP published an open wire protocol in 2006. RabbitMQ implemented it. Apache Kafka arrived in 2011, built at LinkedIn, and was received as a paradigm shift — the discovery that messages could be retained in an ordered, replayable log rather than being consumed and discarded. The log is persistent. Consumers maintain their own offsets. Topics can be replayed. This was called event streaming, and then event sourcing, and both names implied that something fundamentally new was happening to the concept of a message queue.5 IBM's CICS, which had been processing transactional messages on mainframes since 1969, maintained a recoverable log of every transaction it processed. The concept of the replayable, ordered, durable message log is as old as double-entry bookkeeping. Kafka implemented it efficiently on commodity hardware at a scale that earlier systems could not have reached. But the idea that the log is the database — that the true record of a system's state is the ordered sequence of events that produced it, not the current snapshot — was not invented in 2011. It is the architecture of every accounting system ever designed.
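The log-as-database pattern fits on a page. The following is an illustrative sketch, not Kafka's API: an append-only log that retains events after consumption, lets each consumer track its own offset, and derives current state by replaying the history, which is precisely the ledger discipline.

```python
class Log:
    """Append-only, ordered, replayable log -- the pattern shared by a Kafka
    topic, a CICS transaction log, and a double-entry ledger."""

    def __init__(self):
        self._events = []      # retained; never discarded on consumption
        self._offsets = {}     # each consumer tracks its own position

    def append(self, event):
        self._events.append(event)
        return len(self._events) - 1   # the event's offset

    def read(self, consumer):
        # Consuming advances only this consumer's offset; events remain.
        offset = self._offsets.get(consumer, 0)
        batch = self._events[offset:]
        self._offsets[consumer] = len(self._events)
        return batch

    def replay(self):
        # The log is the database: state is a fold over the event history.
        return list(self._events)

log = Log()
log.append(("deposit", 100))
log.append(("withdraw", 30))
log.read("audit-service")                 # consumes, but discards nothing
balance = 0
for kind, amount in log.replay():         # current state derived by replay
    balance += amount if kind == "deposit" else -amount
```

The current balance is a snapshot computed from the event sequence, never stored as the primary record — which is all that "event sourcing" names.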

Event sourcing is what accountants have called a ledger for five hundred years. The innovation was calling it event sourcing.
· · ·

Consider the trajectory of microservices, the dominant architectural fashion of the 2010s. The term was coined at a workshop of software architects near Venice in May 2011, and popularised by Martin Fowler and James Lewis in a widely read blog post in 2014.6 The idea: decompose a large application into small, independently deployable services, each owning its own data store, communicating over lightweight protocols, deployable and scalable without coordinating with the rest of the system. This was presented as a break from Service-Oriented Architecture, which had been the dominant architectural fashion of the 2000s and had manifestly failed to deliver on its promises — largely because it had been implemented via Enterprise Service Buses, heavyweight XML schemas, and shared databases in ways that produced all the coordination overhead of a monolith with none of its efficiency. The microservices movement positioned itself as the correction of these errors. In the words of observers who were present during both cycles, microservices were SOA done right — or, in the more pointed formulation from the conference circuit: SOA is a superset of microservices, microservices is a restatement of SOA principles, and the reason we needed a new name was that the old name had been ruined by bad implementations rather than by a bad idea.7

SOA, in turn, was the correction of the failures of CORBA and DCOM, which had attempted the same decomposition at the object level rather than the service level but had produced such impenetrable infrastructure that development teams could not use it without a dedicated middleware team to manage the broker, the registry, and the security configuration. CORBA and DCOM were themselves refinements of the RPC paradigm that had already been through Sun RPC, Apollo's Network Computing Architecture, and Xerox PARC's Courier protocol before any of them arrived. The lineage is unbroken. Each generation of distributed architecture encountered the same set of problems — how to find services, how to handle partial failure, how to pass data across a process boundary, how to manage versioning when client and server must evolve independently — and each generation produced a solution that solved the problems its predecessor had most visibly failed at, while planting the seeds of the failures that the following generation would inherit and rebrand.

The database followed its own version of the same cycle. IBM's Information Management System, built originally to track the two million parts of a Saturn V rocket for NASA in 1966, was a hierarchical database: data stored in a tree structure, accessed by navigating from parent to child along predefined paths.8 The relational model, proposed by Edgar Codd in 1970, replaced navigation with query — instead of traversing the tree, you described the data you wanted and the database found it — and the SQL-based RDBMS became the dominant storage paradigm for forty years. Then, in the 2000s, NoSQL arrived. MongoDB stored data as JSON documents — nested, hierarchical, accessed by navigating into a document structure that had been predefined at schema-less design time. When a commenter on a technical blog noted, in 2017, that MongoDB was essentially IMS with JSON instead of segments, they were not being glib. The observation was structurally precise: the move from relational to document-oriented storage was, at the data model level, a partial reversion to the navigational model that the relational model had been invented to replace. It solved the problems the relational model had accumulated at scale — impedance mismatch, JOIN overhead, schema rigidity — by accepting back the problems the navigational model had been abandoned for: the difficulty of querying relationships that were not anticipated when the data was written. The wheel, again. A different spoke facing upward. The same wheel.
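The structural point can be made with a toy example (illustrative Python; the data is invented): the same facts stored as nested documents and as flat rows. The anticipated parent-to-child query is trivial in the document shape; the unanticipated one requires inverting the hierarchy by hand, which is exactly the constraint Codd's relational model was invented to remove.

```python
# Document (hierarchical) shape: the relationships are frozen into one access path.
orders_as_documents = [
    {"order": 1, "customer": "NASA",
     "lines": [{"part": "F-1 engine", "qty": 5}, {"part": "gyroscope", "qty": 2}]},
    {"order": 2, "customer": "NAA",
     "lines": [{"part": "gyroscope", "qty": 8}]},
]

# Anticipated query -- navigate parent to child, as in IMS or MongoDB: easy.
parts_in_order_1 = [line["part"] for doc in orders_as_documents
                    if doc["order"] == 1 for line in doc["lines"]]

# Unanticipated query -- "which customers ordered gyroscopes?" -- must walk
# every document and invert the hierarchy by hand.
gyro_customers = [doc["customer"] for doc in orders_as_documents
                  if any(l["part"] == "gyroscope" for l in doc["lines"])]

# Relational shape: the same facts as flat rows. Either query is a filter
# over the relation, with no privileged direction of access.
order_lines = [(1, "NASA", "F-1 engine", 5), (1, "NASA", "gyroscope", 2),
               (2, "NAA", "gyroscope", 8)]
gyro_customers_relational = sorted({cust for (_, cust, part, _) in order_lines
                                    if part == "gyroscope"})
```

At three rows the difference is cosmetic; at three billion, the hand-written traversal becomes the full-collection scan that document databases answer with secondary indexes — rebuilt, one query shape at a time, as the need arises.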

· · ·

Cloud computing — the idea that computation should be purchased as a service from a shared infrastructure rather than owned as dedicated hardware — was described as the defining architectural shift of the early twenty-first century. It was the mainframe model with better marketing. IBM's time-sharing systems of the 1960s offered exactly this: computing power delivered over a network, billed by consumption, maintained by a provider, accessed from a terminal that contributed nothing to the computation. The Multics project, begun in 1964 as a collaboration between MIT, Bell Labs, and General Electric, had as its explicit ambition the provision of computing as a utility — the phrase "computing utility" appears in the original 1965 project plan — in which users would be billed for the resources they consumed in the same way they were billed for electricity or water.9 This vision was technically realised, proved commercially premature, and then abandoned as the microcomputer revolution made local processing cheap enough to displace the timesharing model. The cloud restored it, fifty years later, under a different name, in the hands of Amazon, which had discovered that the spare capacity on its retail infrastructure could be sold as a service and had the engineering talent to build the automation layer that the 1960s time-sharing vendors had lacked. The idea was recovered, not invented. The recovery was valuable. It is not the same as invention.

Serverless computing — the abstraction of computation to the level of individual functions, with no server management, billed by invocation — was announced as the final stage of this evolution, the point at which the developer was fully insulated from infrastructure concerns and could focus entirely on business logic. The mainframe operator of 1968, submitting a job to the IBM 360 job queue and receiving results without knowing or caring which physical processor executed the work or how the memory was allocated, was doing serverless computing. The concept of function-as-a-service is the concept of the batch job, refactored for the HTTP era and priced by the millisecond rather than the CPU minute. AWS Lambda, which popularised it in 2014, added genuine innovations in automation, pricing granularity, and integration with the wider AWS ecosystem. None of those innovations were the idea itself. The idea was batch processing, observed from a different angle.

The mainframe operator submitting a job to the queue in 1968 was doing serverless computing. She was not invited to the re:Invent keynote.
· · ·

The process reinventions have followed the same pattern. Agile software development, formalised in the Manifesto published in February 2001 by seventeen software practitioners at a ski resort in Utah, was presented as a response to the failures of waterfall methodology — the heavyweight, document-driven, phase-sequential approach to software development that had dominated enterprise IT since the 1970s.10 The Agile Manifesto itself was careful: it positioned itself as a set of values, not a method. But the practices that followed — Scrum's two-week sprints, Kanban's continuous flow, XP's pair programming and test-first development — were framed as departures from prior practice. They were, more precisely, returns to how software had been written before waterfall arrived. The iterative, feedback-driven, small-team approaches that Agile canonised were recognisable to anyone who had read descriptions of how software was developed at MIT, at Bell Labs, or at Xerox PARC in the 1970s. Waterfall was the anomaly: an attempt to apply manufacturing process models to a knowledge work activity whose fundamental characteristics made such models inapplicable. Agile was the correction. The correction was named, certified, and sold as a methodology. Consultants charged for it. It was, in the end, iterative development, which had always been how software was actually written when the people writing it were left to their own devices.

DevOps — the movement to close the organisational gap between software development and infrastructure operations — was described as a cultural transformation, a new way of thinking about the relationship between building and running software. What it described was the normal working arrangement of every small software organisation that had ever existed, in which the person who wrote the code was also the person who deployed and operated it, and who had therefore developed a direct and consequential interest in whether it worked. The organisational separation of development from operations was an artefact of scale: as teams grew, specialisation was imposed, and the gap that DevOps spent a decade trying to close was a gap that had been created, deliberately, by the same organisational growth that made closing it difficult. DevOps was the attempt to recover, at scale and under a banner, what had always been the natural working arrangement of software at human scale. The Puppet and Chef configuration management tools that enabled it were themselves reinventions of the shell scripts and Makefiles that administrators had been using to automate server configuration since the early Unix era. Infrastructure as Code — the principle that server configuration should be version-controlled, reviewed, and deployed with the same discipline as application code — was the principle that good system administrators had always applied, now formalised as a movement.
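The principle at the heart of Infrastructure as Code is idempotence: a configuration step describes a desired state and converges toward it, rather than blindly performing an action twice. It is a small idea, sketchable in a few lines (an illustrative Python fragment, not Puppet's or Chef's actual resource model; the "resource" here is a single line in a config file):

```python
import pathlib
import tempfile

def ensure_line(path, line):
    """Idempotent configuration step: converge the file toward the desired
    state. Running it again changes nothing -- the property that careful
    shell scripts always had and that Puppet and Chef formalised."""
    p = pathlib.Path(path)
    existing = p.read_text().splitlines() if p.exists() else []
    if line in existing:
        return False                      # already converged; do nothing
    p.write_text("\n".join(existing + [line]) + "\n")
    return True

conf = pathlib.Path(tempfile.mkdtemp()) / "sshd_config"
first_run = ensure_line(conf, "PermitRootLogin no")
second_run = ensure_line(conf, "PermitRootLogin no")   # safe to run twice
```

Put that function under version control, review its changes, and apply it from a pipeline, and you have the whole movement in miniature.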

· · ·

The mechanism by which this cycle sustains itself is not cynical, exactly. The people who build Kubernetes are not pretending that DCE never existed; many of them have genuinely never encountered it, because the education of software engineers is extraordinarily present-focused, because the systems that preceded the current generation are largely gone from production use or hidden behind layers of abstraction, and because the industry has powerful economic incentives to promote novelty over continuity. A new framework requires new training, new certification, new consulting engagements, new tooling vendors, new job advertisements with new skill requirements. The cycle of reinvention is the cycle of business development. This is not a conspiracy. It is an emergent property of a market that commoditises knowledge faster than any individual can acquire it, and therefore must continuously generate new knowledge to sell.

The cost is architectural. Each generation inherits not the wisdom of the previous one but its wreckage — the failed implementations, the abandoned middleware, the deprecated SDKs, the Stack Overflow answers that reference versions of tools that no longer exist in the forms described. Each generation must rediscover the hard constraints of distributed systems — that under a network partition you cannot preserve both consistency and availability; that network latency cannot be made zero by abstraction; that a service that fails unpredictably is harder to handle than a service that fails reliably; that shared mutable state between distributed components will produce races and corruptions regardless of how the components are named or what protocol connects them — and discovers them not from the literature that documented them, which it has not read, but from production incidents, which are more expensive teachers.11

The computer scientist who could have provided continuity is rarely present in the room where architectural decisions are made. The room contains engineers whose knowledge runs deep in the current generation of tools and thin everywhere else, product managers whose frame of reference is the conference circuit of the last three years, and vendors whose interest is in the adoption of their particular implementation of the current cycle's dominant concept. Nobody is paid to say: we solved this in 1993, here is the paper, here is the constraint that made the original solution fail, here is which part of that constraint still applies and which has been dissolved by the changed environment. The room is, structurally, amnesiac. The industry has not built the institutional memory that would allow it to learn from itself. The conferences at which it gathers are organised around the new, not the continuous. The blog posts that circulate are about what is being adopted, not what was abandoned and why. The curriculum through which new engineers are formed is oriented toward employability in the current cycle, not literacy in the complete history of the field.

The industry does not have amnesia by accident. Amnesia is the product. Each cycle sells better to someone who has never heard of the previous one.
· · ·

None of this means the reinventions are without value. VMware made virtualisation accessible on hardware that previously resisted it. Docker made container workflows usable by engineers who had never touched LXC. Kubernetes solved operational problems at a scale that COM+ was never required to address. Kafka made the replayable log practical at internet scale. MongoDB made document storage accessible to developers who found relational schema design opaque. These are genuine contributions. The critique is not that they should not have been built. It is that they were described — and received, and taught, and certified — as discontinuities when they were continuities; as revolutions when they were iterations; as inventions when they were recoveries. The description matters because it determines whether the next generation will have access to the conceptual history of the thing they are building, or will instead build it in the dark, lacking the vocabulary to recognise its lineage, and therefore lacking the compressed wisdom of every previous cycle's failure built into the names they give to its components.

A programmer who knows that Kubernetes is DCE with a better user experience can look at DCE's documented failure modes — the fragmentation that came from vendor customisation, the firewall-hostility of its wire protocol, the central directory service as a single point of failure — and ask which of those failure modes Kubernetes has structurally avoided and which it has merely deferred. A programmer who has been told only that Kubernetes is a new solution to a new problem has no such resource. They must discover the failure modes by encountering them, which they will, and by that time the next cycle will have arrived with a new name for the thing that fixes what Kubernetes broke. The wheel will have turned. The new spoke will be presented as a new wheel. The press release will go out. The conference circuit will engage. The amnesia will be renewed.

There is a sentence in the literature on distributed systems that has the quality of prophecy, except that it is not prophecy but description — a description of a constraint that has not changed and will not change regardless of what the current cycle's tools are named. Peter Deutsch compiled it at Sun in 1994 (James Gosling supplied the eighth entry a few years later), as a list of assumptions that programmers new to distributed computing reliably make and that the network reliably refutes: the network is reliable; latency is zero; bandwidth is infinite; the network is secure; topology does not change; there is one administrator; transport cost is zero; the network is homogeneous.12 These are the eight fallacies of distributed computing. The list was accurate in 1994. It was accurate when CORBA was being specified. It was accurate when DCE was being deployed. It is accurate now, while Kubernetes is being operated. Every generation of distributed systems architecture has been, at its core, an attempt to manage these eight falsehoods — to build systems that behave correctly in spite of them, or that fail gracefully when they cannot. The attempts improve. The falsehoods do not.
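The first fallacy has a countermeasure that every generation also reinvents: the retry with backoff. A minimal sketch (illustrative Python; the flaky service is simulated in-process, since the point is the shape of the defence, not the transport):

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.0):
    """The first fallacy -- 'the network is reliable' -- answered in code:
    every remote call must be written on the assumption that it can fail."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError as exc:
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))   # exponential backoff
    raise last_error

calls = {"n": 0}
def flaky_service():
    # Stand-in for a remote endpoint that fails twice, then answers.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("the network is not, in fact, reliable")
    return "ok"

result = call_with_retries(flaky_service)
```

Wrappers of exactly this shape exist in DCE client stubs, in CORBA interceptors, in Kubernetes controllers, and in every HTTP client library written since; only the names change.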

The industry's history is not a story of progress toward a correct answer. It is a story of the same questions being asked repeatedly, with improving tools and deteriorating institutional memory, by successive cohorts who each believe they are the first to have asked them. The questions are good ones. They deserve answers that build on what has been learned rather than answers that begin again from zero, wearing the prior cycles' most fashionable vocabulary as a costume. The costume fools the generation wearing it. It does not fool the questions.

1IBM's VM/370 was announced on 2 August 1972, but its conceptual and practical lineage extends to CP-40, developed at IBM's Cambridge Scientific Center and in daily production use from April 1967. The term "hypervisor" — a supervisor of supervisors — was coined in this context. VMware's first commercial product, VMware Virtual Platform (later VMware Workstation), shipped in 1999; its first server products, GSX Server and ESX Server, in 2001. The claim that VMware "invented" virtualisation, or that it represented the arrival of a new concept rather than the extension of an existing one to a previously resistant architecture, does not survive contact with this timeline. z/VM, IBM's direct descendant of VM/370, celebrated fifty years of continuous production operation in 2022.

2The chroot system call was introduced in Unix Version 7 in 1979. FreeBSD Jails shipped in FreeBSD 4.0 in 2000. Solaris Zones (also called Solaris Containers) shipped in Solaris 10 in 2004. Linux Containers (LXC), using the kernel namespaces and cgroups that Docker would later adopt, were available from 2008. Docker was released in March 2013. The company's own documentation acknowledges that Docker uses Linux kernel features for containerisation; its contribution was the image format, the registry, and the developer workflow that made these kernel features accessible without requiring knowledge of their direct configuration. The distinction between "accessible implementation of an existing idea" and "new idea" is significant and is systematically elided in technology marketing.

3CORBA 1.0 was published by the Object Management Group in 1991. The Open Software Foundation's Distributed Computing Environment (DCE), which provided many of the same capabilities — remote procedure calls, a directory service, a time service, security services — was published in 1993 and implemented by major Unix vendors. Microsoft Transaction Server shipped in 1996; its successor COM+ provided service registration, lifecycle management, distributed transactions, and connection pooling for Windows-based distributed applications, shipping as part of Windows 2000, which was released to manufacturing in December 1999. The comparison to Kubernetes is not the essay's own invention: a widely circulated piece on Medium titled "Is Kubernetes the New DCE?" made the structural comparison explicit in 2023, noting that DCE provided "all the necessary services like directory, security, time services to the applications that connected to each other" — an accurate description of Kubernetes's service discovery, secrets management, and health-checking capabilities.

4Sun Microsystems' ONC RPC (Open Network Computing Remote Procedure Call), also known as Sun RPC, was published in 1984 and formed the basis of NFS. Earlier and contemporaneous RPC systems include Xerox PARC's Courier protocol (1981) and Apollo's Network Computing Architecture, implemented as the Network Computing System in the mid-1980s. The conceptual paper "Implementing Remote Procedure Calls" by Birrell and Nelson at Xerox PARC was published in 1984 and remains one of the most-cited papers in distributed systems. gRPC, released by Google in 2015, reintroduces protocol buffer IDL (comparable to CORBA's IDL), binary wire encoding (comparable to CDR), and bidirectional streaming (comparable to CORBA's event service). The acknowledgement of this lineage in gRPC's own documentation is notably absent.

5IBM MQSeries was released in 1993 and provided durable, ordered, transactional message queuing. The Java Message Service (JMS) API standardised the programming model in 1998. AMQP, the open wire protocol, was published in 2006; RabbitMQ, its most widely-deployed implementation, followed. Apache Kafka was created at LinkedIn and open-sourced in 2011. The central innovation attributed to Kafka — the retention of messages in a replayable, ordered log rather than discarding them on consumption — is structurally identical to the transaction log model used by CICS on IBM mainframes since 1969 and described as a general architectural pattern by Pat Helland and others in the distributed systems literature. Jay Kreps' 2013 essay "The Log: What every software engineer should know about real-time data's unifying abstraction" gave the pattern its contemporary framing; the pattern itself predates the essay by several decades.

6. The term "microservice" was first used at a software architects' workshop near Venice in May 2011. James Lewis and Martin Fowler's defining blog post, "Microservices," was published at martinfowler.com in March 2014 and became the canonical reference. Fowler himself acknowledged that the pattern had been called "fine-grained SOA" before the Venice workshop, and that the core idea — autonomous, independently deployable services communicating over lightweight protocols — was continuous with SOA practice rather than discontinuous from it.

7. The phrase "SOA done right" as a characterisation of microservices appears in multiple sources in the 2014–2017 period; Jim Webber's "guerrilla SOA" formulation predates even the Venice workshop. Gartner Group's Alexander Pasik is credited with coining the term "Service-Oriented Architecture" in 1994, naming a pattern that the analyst firm acknowledged had already been in practice since the early 1980s. The claim at the API World conference in 2023 that "microservices are simply a restatement of SOA principles" is representative of a now-widespread acknowledgement within the architecture community: the conceptual content of the two movements is largely identical, and the differences lie in deployment technology and organisational context rather than in the underlying ideas.

8. IBM's Information Management System (IMS) was developed beginning in 1965 at the request of NASA and North American Aviation to manage the parts inventory for the Saturn V programme. It was commercially released in 1968 and remains in production use: IBM estimates that approximately two thousand companies, including ninety-five percent of the Fortune 1000 and all five of the largest US banks, use IMS in some capacity as of 2022. The comparison between IMS's hierarchical data model and MongoDB's document model was made explicitly in a 2017 essay on twobithistory.org: "If you choose to store some entity inside of another JSON record, then in effect you have created something like the IMS hierarchy." The observation is architecturally precise. The relational model was invented in 1970 specifically to provide query flexibility that the navigational/hierarchical model could not; the move to document-oriented NoSQL storage in the 2000s partially relinquished that query flexibility in exchange for horizontal scale and schema freedom, recreating the navigational model's primary constraint in the process.
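The constraint is easy to see with a concrete record. Embedding children inside a parent document makes traversal from the root fast, but any query entering from the "wrong" side must walk the whole hierarchy — the navigational problem Codd's relational model was designed to dissolve. A minimal sketch with invented data (plain dictionaries standing in for IMS segments or JSON documents):

```python
# Document/hierarchical style: line items live inside their order,
# as an IMS child segment lives under its parent.
orders = [
    {"order_id": 1, "items": [{"part": "bolt", "qty": 100}, {"part": "nut", "qty": 100}]},
    {"order_id": 2, "items": [{"part": "washer", "qty": 50}]},
]

# Entering via the hierarchy's root is natural and cheap...
def items_for_order(order_id):
    return next(o["items"] for o in orders if o["order_id"] == order_id)

# ...but entering from the child's side means scanning every document:
def orders_containing(part):
    return [o["order_id"] for o in orders
            if any(i["part"] == part for i in o["items"])]

print(items_for_order(2))        # [{'part': 'washer', 'qty': 50}]
print(orders_containing("bolt")) # [1] -- found only by walking the whole hierarchy
```

A relational decomposition — an `order_items` table keyed on both `order_id` and `part` — answers either question symmetrically; the embedded form privileges one access path at design time, which is exactly the trade IMS made and the document stores remade.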

9. The Multics project — Multiplexed Information and Computing Service — began in 1964 at MIT and explicitly used the phrase "computing utility" in its original planning documents to describe the intended service model. The 1965 paper by Corbató and Vyssotsky describing the project's goals is the canonical source. Multics was technically ambitious beyond the engineering capacity of its era and is better remembered for what it inspired — Dennis Ritchie and Ken Thompson wrote Unix partly as a reaction to Multics' complexity — than for what it delivered. The phrase "cloud computing" in its contemporary sense is generally attributed to a 2006 speech by Google's Eric Schmidt; the underlying service model it names is the Multics vision, implemented forty years later on hardware that made it practical.

10. The Manifesto for Agile Software Development was signed at the Snowbird ski resort in Utah in February 2001 by seventeen software practitioners. Its four values and twelve principles were presented as a response to "heavyweight" process methodologies, principally those derived from Winston Royce's 1970 paper "Managing the Development of Large Software Systems" — the paper from which the waterfall diagram was derived, and which Royce himself presented as a description of a flawed process rather than a recommended one, a context that was almost universally ignored by those who adopted the diagram as a model. The iterative, small-team practices that Agile canonised were documented throughout the 1970s and 1980s in descriptions of software development at research institutions, and were never absent from practice in well-functioning small teams. The movement's primary contribution was the political legitimacy it provided for resisting heavyweight process in large organisations, not the novelty of the practices it endorsed.

11. The observation that distributed systems must be rediscovered by each generation from production incidents rather than from the literature is offered here as an empirical claim rather than argued; it is consistent with the pattern of major distributed systems failures documented in public post-mortems, many of which trace to constraints identified in the academic literature of the 1980s and 1990s. The specific constraints referenced — the CAP theorem (Brewer, 2000; Gilbert and Lynch, 2002), the impossibility of perfect failure detection in asynchronous systems (Fischer, Lynch, and Paterson, 1985), the eight fallacies of distributed computing — are all available in the public literature and routinely omitted from the curricula and bootcamps through which software engineers are presently formed.

12. The eight fallacies of distributed computing are commonly attributed to Peter Deutsch, who codified the first seven while at Sun Microsystems in 1994 (the first four are often traced further back to Sun's Bill Joy and Tom Lyon); James Gosling added the eighth ("the network is homogeneous") later. They are: the network is reliable; latency is zero; bandwidth is infinite; the network is secure; topology does not change; there is one administrator; transport cost is zero; the network is homogeneous. The fallacies have been in print for thirty years. They describe constraints that cannot be engineered away by any combination of service mesh, container orchestration, consensus protocol, or eventual consistency model. Each generation of distributed architecture has been surprised by them. They will surprise the next one.