On object-orientation's biological promise, its philosophical betrayal, and why the industry is quietly dismantling what it spent thirty years building
"I invented the term Object-Oriented, and I can tell you I did not have C++ in mind."
— Alan Kay, 2003
"I'd leave out classes."
— James Gosling, inventor of Java, asked what he would change if he could do it again
Object-oriented programming is the most successful idea in the history of software that nobody agrees on the meaning of. Its inventor disowns the mainstream interpretation of it. The creator of its most widely deployed language would, given the chance, excise its central mechanism. A generation of programmers learned it as doctrine, built careers inside it, and are now watching the languages that will define the next decade refuse to include it. And the paradigm itself — the promise of software as biology, programs as living cells, complexity managed through encapsulation, messaging, and late binding — was genuinely visionary, genuinely beautiful, and almost completely wrong about how complexity in software actually behaves.
To understand what went wrong requires understanding what was actually proposed, which is something most practising programmers have never been told, because what they were taught as object-oriented programming was not what Alan Kay meant when he coined the phrase. The gap between the original vision and the industrial implementation is not a matter of detail. It is structural. The industry took a biological metaphor and implemented an organisational chart. It took a proposal about the autonomy of communicating components and turned it into a framework for building rigid taxonomies. The name survived the translation. The idea did not.
Alan Kay began developing his ideas about programming in the late 1960s, in the environment of Simula — the Norwegian language that had introduced the concept of objects and classes as a way of modelling physical simulations — and biology. The biological metaphor was not decorative. Kay was genuinely drawing on cell biology, and the insight he drew from it was precise: a human body contains something in the order of a hundred trillion cells, each one an independent computing unit, each maintaining its own internal state, communicating with its neighbours by releasing and receiving chemical signals, with no global coordinator, no central registry, no shared memory accessible to all. And yet from this profusion of isolated, message-passing automata, organisms of extraordinary complexity and reliability emerge — capable of correcting their own errors, healing their own damage, operating continuously without scheduled downtime.1
The programming insight Kay extracted from this was equally precise: the failures of large software systems were, at their root, failures of interconnection. Programs failed because they shared state promiscuously — because one part of a program could reach into another and change its data directly, because everything was accessible from everywhere, because there was no boundary around anything. The solution was to make the boundaries the primary structure: small, independent units of computation, each owning its own state entirely, incapable of being modified from outside, communicating only by sending messages that could be acted upon or ignored at the receiver's discretion. The receiver decided what to do with the message. The sender did not command; it asked. Nothing shared memory. Nothing coupled directly. The program would be, as Kay later said, a collection of cellular computers connected by a network, each one a closed world, the whole composed from the interactions of autonomous parts.
In a 2003 email exchange, asked to define what he had meant, Kay wrote: "OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things." Note what is absent from this definition. There are no classes. There is no inheritance. There is no subtyping, no class hierarchy, no taxonomy of objects arranged in a tree with a root. These are the features that every Java textbook presents as the pillars of object-orientation. They are not in Kay's definition. They are not what he was trying to build. The features he considered essential — autonomous state, message passing, late binding that defers commitment to specific implementations as long as possible — are precisely the features that the mainstream implementation of OOP most thoroughly abandoned.2
What happened between the vision and the implementation is traceable. Simula, the language Kay acknowledged as a catalysing influence, was built around an abstract data type model rather than a messaging model — it organised computation around classes as type definitions, and it was this branch of the forking path that the mainstream followed.3 Bjarne Stroustrup, designing C++ in the early 1980s as an extension of C, incorporated Simula's class and inheritance model into the most widely used systems programming language in the world. When Sun Microsystems introduced Java in 1995, it explicitly positioned the language as C++ simplified and made safe — and it took C++'s class-and-inheritance model with it, carrying the Simula lineage intact into the language that would define enterprise software development for the next two decades. Every textbook printed for a generation of computer science students presented this lineage as the definition of object-oriented programming: the class as the fundamental unit, inheritance as the primary mechanism of reuse, the class hierarchy as the primary structure of a well-designed system.
The promise, as taught, was compelling. Encapsulation: the bundling of data and the methods that operate on it, protecting internal state from arbitrary external modification, hiding implementation details behind a clean interface. Inheritance: the ability to define a new type by extending an existing one, inheriting its behaviour and modifying only what differed, enabling reuse without duplication. Polymorphism: the ability to treat objects of different types uniformly if they shared a common interface, writing code that worked on abstract shapes without caring whether the specific shape was a circle or a rectangle. These three properties were the catechism. They were taught in every university course, embedded in every certification examination, repeated in every job interview. And in the simple examples — the shape hierarchies, the animal taxonomies, the bank account base classes — they appeared to demonstrate exactly what they promised: clarity, reuse, extensibility, the world modelled in software as it presented itself to the human mind.
The world did not cooperate with the model.
The first failure was inheritance. The promise was code reuse: define behaviour once in a base class, let subclasses inherit it, avoid duplication. The reality was coupling. When a subclass inherits from a parent class, it takes not just the behaviour it wants but every assumption embedded in the parent's implementation — every field, every internal state transition, every invariant the parent maintains and the subclass does not know about. Changing the parent class to fix a bug or add a feature can break every subclass silently, because the subclass was depending on behaviour the parent was never documented as guaranteeing. This fragility was identified early enough that it has a name: the fragile base class problem. It was documented in the academic literature before Java was two years old, and it has been reproduced, in production systems, millions of times since.
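The fragility can be shown in a few lines of Java. This is a minimal sketch of the widely cited InstrumentedHashSet example from Joshua Bloch's Effective Java; the class and method names are illustrative:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;

// A subclass that tries to count insertions by extending HashSet. It breaks
// because it silently depends on whether the parent's addAll delegates to add,
// an implementation detail the parent never documented as guaranteed.
class InstrumentedHashSet<E> extends HashSet<E> {
    private int addCount = 0;

    @Override
    public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override
    public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();
        return super.addAll(c);   // HashSet's addAll happens to call add()
    }

    public int getAddCount() { return addCount; }
}

public class FragileBase {
    public static void main(String[] args) {
        InstrumentedHashSet<String> s = new InstrumentedHashSet<>();
        s.addAll(List.of("a", "b", "c"));
        // Expected 3, prints 6: super.addAll delegates to the overridden add,
        // so every element is counted twice. If a future HashSet version
        // stopped delegating, the count would silently become 3 instead.
        System.out.println(s.getAddCount()); // 6
    }
}
```

Either behaviour of the parent is defensible; the subclass is wrong under one of them, and nothing in the type system says which.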
Joe Armstrong, the principal designer of Erlang, gave the most vivid formulation of the structural problem. Asked about the claim that object-oriented languages promote reuse, he replied: "The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle." The gorilla is the parent class, pulled in to get access to the banana — the one method you needed. The jungle is everything the parent class depends on in turn: its own parent class, the objects it holds references to, the objects those hold references to, the sprawling implicit context that must be present for any part of the structure to function.4 The inheritance hierarchy promised to organise complexity. It delivered a different kind of complexity: deep, hidden, unchartable by the programmer who arrived after the hierarchy had been established, obligatory rather than chosen.
The Gang of Four design patterns book, published in 1994, is the canonical text of object-oriented design at industrial scale. It is a book of twenty-three solutions to recurring problems. What is less often observed is that the majority of those solutions are workarounds for the problems that inheritance creates. The Decorator pattern exists because inheritance cannot add behaviour to a single instance — it can only add behaviour to all instances of a subclass. The Strategy pattern exists because inheritance hard-codes algorithms into type hierarchies, making them impossible to swap at runtime without restructuring the hierarchy. The Composite pattern exists because inheritance cannot naturally express recursive part-whole structures. The Observer pattern exists because objects with direct references to each other create exactly the coupling that inheritance was supposed to prevent. The book's opening chapter states, as one of its two foundational principles: favour object composition over class inheritance.5 It was published eight years after object-oriented programming became the dominant paradigm in software education. The manual for using the paradigm correctly recommended, as its first principle, not using its most celebrated feature.
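The composition move the book recommends can be sketched briefly in Java, using a hypothetical Notifier interface. The Decorator shape adds behaviour to one instance by wrapping it, which subclassing cannot do, since a subclass changes all instances of the new type rather than a single existing object:

```java
// Illustrative names throughout; a sketch of the Decorator idea, not any
// particular library's API.
interface Notifier {
    String send(String msg);
}

class EmailNotifier implements Notifier {
    public String send(String msg) { return "email:" + msg; }
}

// The decorator wraps any Notifier instance rather than extending a class.
// It can be applied to one object, stacked, or chosen at runtime.
class LoggingNotifier implements Notifier {
    private final Notifier inner;
    LoggingNotifier(Notifier inner) { this.inner = inner; }
    public String send(String msg) { return "log(" + inner.send(msg) + ")"; }
}

public class DecoratorSketch {
    public static void main(String[] args) {
        Notifier n = new LoggingNotifier(new EmailNotifier());
        System.out.println(n.send("hi")); // log(email:hi)
    }
}
```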
The second failure was encapsulation — not the concept, but the way it was implemented and then immediately circumvented. The concept is sound: a module should own its state, expose only what it must, hide everything else. The implementation, in most mainstream OOP languages, was the class with its public and private fields and methods. The immediate circumvention was getters and setters: the convention of providing a getX() method and a setX() method for every private field, wrapping the field in a thin method veneer while exposing it just as freely as if it had been made public in the first place. The practice became so universal that modern IDEs generate getter and setter pairs automatically, at the press of a button, as a standard step in class creation. The encapsulation was theatrical. The state was not hidden; it was given a costume. Any caller could reach any field; the only difference was that the reach now went through a method that could be individually overridden by a subclass — which returned the problem to inheritance, where it had started.
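The theatrical quality is easy to demonstrate. In this sketch (class and field names are illustrative), the "encapsulated" version exposes exactly as much as the public-field version; only the syntax of the reach differs:

```java
// Version one: the field is simply public.
class ExposedPoint {
    public int x;
}

// Version two: the field is private, wrapped in the conventional
// getter/setter pair that IDEs generate automatically.
class WrappedPoint {
    private int x;
    public int getX() { return x; }
    public void setX(int x) { this.x = x; }
}

public class GetterTheatre {
    public static void main(String[] args) {
        ExposedPoint a = new ExposedPoint();
        a.x = 5;            // direct mutation
        WrappedPoint b = new WrappedPoint();
        b.setX(5);          // the same mutation, through a method veneer
        // Any caller can read and write either field at will; nothing about
        // the object's state is actually hidden in the second version.
        System.out.println(a.x == b.getX()); // true
    }
}
```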
The deeper encapsulation failure was not the getter-setter problem but the sharing problem. Object-oriented programs, as typically written, do not consist of isolated cells that communicate by message. They consist of objects that hold references to other objects — direct handles to the other object's location in memory — and that call methods on those references directly. This is not messaging in Kay's sense. It is direct invocation of a function on a shared memory address. The object on the receiving end of the call does not get to decide whether to respond; the calling object reaches into it and runs its code. Two objects that hold references to the same third object are sharing state — not explicitly, not visibly, but structurally, in the only way that matters: either one can change the third object, and the other will see the change. The program has no global variables in the sense of the procedural programming it replaced, but it has something functionally equivalent: a web of shared, mutable, reference-linked state that the programmer must track through their understanding of which objects hold references to which, and what mutations are permitted at which times. This web grows with the size of the codebase. In a large system, it is unchartable.
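The structural sharing can be made concrete with two hypothetical classes. Neither Holder declares any shared state, yet both alias the same Account, so a mutation through one is visible through the other:

```java
// Illustrative names; a sketch of reference-linked shared state.
class Account {
    int balance = 100;
}

class Holder {
    final Account account;
    Holder(Account account) { this.account = account; }
    void withdraw(int amount) { account.balance -= amount; }
}

public class AliasingSketch {
    public static void main(String[] args) {
        Account shared = new Account();
        Holder h1 = new Holder(shared);
        Holder h2 = new Holder(shared);

        h1.withdraw(30);
        // h2 did nothing, yet its view of the balance changed underneath it.
        // Nothing in either class's interface reveals that this can happen.
        System.out.println(h2.account.balance); // 70
    }
}
```

In a codebase of two classes the aliasing is visible at a glance; in a codebase of two thousand, tracking who holds a reference to what is the unchartable web the paragraph describes.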
The third failure was the one that the industry could paper over until 2005, and then could not. Shared mutable state is survivable in a single-threaded program. It is catastrophic in a concurrent one. When Intel released its first mainstream dual-core processor in 2005, and when the industry acknowledged that the era of single-core performance scaling was over — that the only path to faster programs was parallelism, and parallelism meant multiple threads of execution operating simultaneously on the same program's memory — the structural weakness of object-oriented programs became an engineering emergency rather than an academic concern.6
In an OOP program, assignment copies a reference rather than a value. When you write b = a where both are objects, you do not get two copies of the object; you get two names pointing at the same object. This is convenient in a single-threaded context, where only one piece of code is running at any moment and the identity of the object is stable between any two operations on it. In a multi-threaded context, another thread may be modifying the same object between any two operations. The standard responses — locks, mutexes, synchronized methods — are not solutions to this problem. They are serialisation mechanisms: they prevent two threads from accessing the same object simultaneously by making them take turns, which is to say by preventing the parallelism that multi-threading was introduced to achieve. A lock-heavy OOP program running on eight cores may run more slowly than a single-threaded program running on one, because the lock contention serialises execution more completely than any benefit of parallelism offsets. The architecture that promised to model complex systems in maintainable ways had built shared mutable state into its assignment semantics and its reference model, and could not remove it without becoming a different kind of language entirely.
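The reference-copy semantics can be seen directly in a few lines of Java:

```java
import java.util.ArrayList;
import java.util.List;

// b = a copies a reference, not a value: afterwards both names denote the
// same object, and a mutation through one is observable through the other.
public class ReferenceSemantics {
    public static void main(String[] args) {
        List<Integer> a = new ArrayList<>();
        List<Integer> b = a;          // no copy of the list is made
        b.add(42);
        System.out.println(a.get(0)); // 42: a sees b's mutation
        System.out.println(a == b);   // true: one object, two names
    }
}
```

In a single-threaded program the two names are merely a convenience; add a second thread mutating through b between any two reads through a, and every read becomes a race.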
Luca Cardelli, a computer scientist at Microsoft Research, wrote with formal precision what practitioners were discovering in production: that OOP languages have "extremely poor modularity properties with respect to class extension and modification," and that they tend toward extreme complexity as they grow.7 John Ousterhout, the designer of Tcl, observed that implementation inheritance produces "the same intertwining and brittleness that have been observed when goto statements are overused" — and the comparison to the goto, the mechanism whose overuse structured programming had been invented to eliminate, was not accidental. Both goto and inheritance create long-distance, non-local dependencies that prevent a programmer from understanding a piece of code by reading only that piece of code. Both were adopted for their expressiveness in small programs. Both become maintenance nightmares in large ones.8
The response within the paradigm was SOLID — a set of five principles assembled by Robert Martin in the early 2000s to guide the design of object-oriented systems. Single responsibility: each class should do one thing. Open-closed: classes should be open to extension but closed to modification. Liskov substitution: subtypes should be substitutable for their base types. Interface segregation: prefer small, specific interfaces to large general ones. Dependency inversion: depend on abstractions, not on concretions. These are not bad principles. They are, in fact, good principles. The problem with them is that they are a repair manual for the paradigm rather than a feature of the paradigm — that a programmer who follows SOLID faithfully produces a system that looks less like the class-hierarchy model taught in every introductory course and more like a system of small, composable functions connected by interfaces, in which class inheritance is rare and composition is the primary mechanism of reuse. The principles, followed rigorously, produce code that would be natural in a functional or protocol-oriented language and awkward in the classical OOP model they are supposed to improve. They are the inheritance paradigm correcting itself toward something it was not designed to be.9
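A sketch of where rigorous SOLID tends to land, using hypothetical names: a small injected abstraction (dependency inversion, interface segregation), behaviour supplied as lambdas rather than inherited, and no class hierarchy anywhere:

```java
// A one-method interface: small enough to satisfy interface segregation,
// abstract enough to satisfy dependency inversion. Names are illustrative.
interface PriceRule {
    double apply(double price);
}

// Checkout depends on the abstraction and receives it by injection; it is
// closed to modification but open to extension through composition.
class Checkout {
    private final PriceRule rule;
    Checkout(PriceRule rule) { this.rule = rule; }
    double total(double price) { return rule.apply(price); }
}

public class SolidSketch {
    public static void main(String[] args) {
        // Swapping algorithms needs no subclass and no hierarchy edit;
        // behaviour is composed at the call site, exactly as a functional
        // language would pass a function as an argument.
        Checkout discounted = new Checkout(p -> p - 10.0);
        Checkout surcharged = new Checkout(p -> p + 20.0);
        System.out.println(discounted.total(100.0)); // 90.0
        System.out.println(surcharged.total(100.0)); // 120.0
    }
}
```

Nothing here extends anything; the "object-oriented" design discipline has produced, structurally, a higher-order function.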
The pattern books followed suit. After the Gang of Four established composition over inheritance as a design principle, subsequent literature progressively eroded the case for the classic OOP model. The advice became: use interfaces, not base classes; inject dependencies, do not inherit them; prefer small, stateless functions where possible; avoid mutable state in objects that will be shared. Each piece of advice was sound. Each moved the recommended practice away from the paradigm as originally taught. By the time a thoughtful Java programmer in 2015 was following current best practice — using dependency injection containers to wire objects together, programming to interfaces throughout, keeping data objects immutable wherever possible, using streams and lambdas for data transformation — they were writing code whose structure had more in common with functional programming than with the class-hierarchy model that had been the face of OOP for twenty years. They were still calling it object-oriented. The label had outlasted the model.
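The style the paragraph describes might look like this in modern Java (the record and field names are illustrative): immutable data as data, transformed by a stream pipeline rather than mutated in place:

```java
import java.util.List;

public class ModernJava {
    // An immutable value carrier: fields, value equality, no behaviour to
    // misbehave. This is the record the paragraph's "data classes" refers to.
    record Order(String item, double amount) {}

    public static void main(String[] args) {
        List<Order> orders = List.of(
            new Order("book", 20.0),
            new Order("pen", 5.0),
            new Order("desk", 150.0));

        // A pipeline of pure transformations: no object is modified anywhere,
        // and the structure is a functional fold, not a class hierarchy.
        double total = orders.stream()
            .filter(o -> o.amount() < 100.0)
            .mapToDouble(Order::amount)
            .sum();

        System.out.println(total); // 25.0
    }
}
```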
The generation of languages designed after 2007 drew the architectural conclusions that the OOP paradigm's internal critics had been pointing toward for a decade, and drew them as language design decisions rather than as style guides. Go, designed at Google by Ken Thompson, Rob Pike, and Robert Griesemer and released in 2009, contains no classes and no inheritance. It has structs and interfaces; a struct implements an interface by possessing the required methods, not by declaring that it does so. There is no extends keyword, no class hierarchy, no base class to be fragile or to carry a jungle. Composition is the only mechanism of reuse: if you want a type to have the capabilities of another, you embed it. Rust, released in 2015, makes the same choice more emphatically: no inheritance, no classes in the classical sense, structs and traits. Where Go made its concurrency safe by communicating via channels rather than sharing memory — recovering, in its own way, something closer to Kay's original message-passing vision — Rust made its memory safety a compile-time guarantee, using an ownership and borrowing system that prevented shared mutable state from existing rather than merely discouraging it.10
Neither Go nor Rust is a functional language in the academic sense. But both make the functional insight — that shared mutable state is the primary source of complexity in programs, and that restricting or eliminating it produces programs that are easier to reason about, test, and run correctly in parallel — a structural feature of the language rather than a recommendation in a style guide. The Rust compiler refuses to compile code that would create a data race. It does not warn you; it does not suggest you add a lock; it rejects the program. The shared mutable state that OOP programs carry in their reference semantics and their object graphs — the state whose management requires locks in Java, discipline in C++, and an understanding of which objects are aliased to which in every language in the family — is simply not a state that Rust can be persuaded to construct. The problem is eliminated at the definition level, not the discipline level.
The irony is that this brings the story back to where it began. The language design most hostile to classical OOP's shared reference semantics is, in its communication model, closest to what Alan Kay described in the 1960s. Erlang — the language designed by Armstrong, who gave us the gorilla and the banana — has no shared state between its processes. Each process is genuinely isolated, owning its state entirely, communicating only by sending messages that are copied rather than referenced, incapable of modifying another process's state by any means. Armstrong said, without irony, that Erlang might be the only language that actually implements what Kay described: the program as a community of genuinely isolated, genuinely message-passing cells. The languages that called themselves object-oriented did not implement it. The language that explicitly rejected object-orientation in its classical form came closest to the original vision.11
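The Erlang shape can be approximated even in a class-based language, as a rough sketch of the idea rather than a claim about Erlang's semantics (Erlang additionally copies message payloads and gives each process its own heap; all names here are illustrative). State is owned by one thread and is reachable only through a mailbox of messages, which the owner acts on in its own time:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

public class ActorSketch {
    interface Msg {}
    record Increment() implements Msg {}
    record Get(CompletableFuture<Integer> reply) implements Msg {}

    public static void main(String[] args) throws Exception {
        BlockingQueue<Msg> mailbox = new ArrayBlockingQueue<>(16);

        // The "process": its count is a local variable no other thread can
        // reach by any means. Callers do not invoke its code; they leave a
        // message and the process decides what to do with it.
        Thread counter = new Thread(() -> {
            int count = 0;
            try {
                while (true) {
                    Msg m = mailbox.take();
                    if (m instanceof Increment) count++;
                    else if (m instanceof Get g) g.reply().complete(count);
                }
            } catch (InterruptedException e) { /* process exits */ }
        });
        counter.setDaemon(true);
        counter.start();

        mailbox.put(new Increment());
        mailbox.put(new Increment());
        CompletableFuture<Integer> reply = new CompletableFuture<>();
        mailbox.put(new Get(reply));
        System.out.println(reply.get()); // 2
    }
}
```

No locks guard the count, because nothing shares it; the mailbox serialises access by construction, which is the messaging discipline Kay described rather than the lock discipline Java usually requires.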
The practitioner reckoning has been slower than the language design reckoning, for the obvious reason that decades of OOP codebases do not become functional or Rust programs on the day their authors reconsider their principles. Java, C++, C#, Python, Ruby — the languages in which the majority of production software is written — remain deeply committed to the class model, and the millions of programmers who use them daily do not have the option of replacing their tools because a better paradigm has been identified. But the direction is legible in the smaller choices: the preference for immutable value objects over mutable stateful ones; the adoption of functional idioms — streams, lambdas, higher-order functions — into Java, C#, and Python over the past decade; the rise of pattern matching as a language feature that makes it natural to work with data as data rather than as the behaviour-bundle that OOP insists it must be; the increasing use of records and data classes, which are objects stripped of their capacity for misbehaviour, carrying only fields and value equality.
These migrations are happening language by language and codebase by codebase, without announcements. Nobody has convened a workshop and written a manifesto against object-orientation. The correction is quieter than that — a preference expressed in new APIs, in code review comments that ask why this needs to be a class rather than a function, in the gradual shrinkage of the hierarchy diagrams that used to fill the whiteboards of enterprise design meetings. The paradigm is not collapsing. It is contracting. Its features are being evaluated rather than accepted wholesale, and the evaluation has been going against inheritance, against shared mutable state, against the class hierarchy as the primary structure of programs, for long enough that the shape of what comes after it is becoming visible in the languages that declined to inherit its mistakes.
What remains of object-orientation in the post-OOP settlement is, precisely, what Alan Kay described in 1967 and what the mainstream implementation discarded: encapsulation as genuine information hiding rather than theatrical access control; composition as the primary structure of programs; late binding as a value, expressed through interfaces and traits and protocols rather than through class hierarchies; and, in the most forward-looking systems, message passing as the communication model — isolated processes, owned state, no sharing except by explicit and controlled transfer. The biological vision was not wrong. The cells that Kay imagined — autonomous, isolated, communicating by message, composing into systems of extraordinary complexity and reliability — are the right model. The version of that vision that C++ and Java industrialised, with its fragile hierarchies and its reference aliasing and its shared mutable state serialised by locks, was a different thing that borrowed the name. The industry is now, slowly and without ceremony, reverting to the original.
1. Alan Kay's biological metaphor for objects is documented extensively in his own accounts of the development of Smalltalk. The human body's approximately one hundred trillion cells as a model for autonomous, message-passing computation is described in multiple interviews and keynotes, including his 1997 OOPSLA talk, "The Computer Revolution Hasn't Happened Yet." Kay attributes his original insight to a combination of Simula, Ivan Sutherland's Sketchpad, and cell biology, and is explicit that messaging — not classes or inheritance — was the core idea from the beginning. His 2003 email to Stefan Ram is the most concise and direct statement of what he meant: "OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things."
2. Kay's 2003 email to Stefan Ram is widely quoted and available via the purl.org archive of the correspondence. The observation that his three essential features — messaging, local state protection, extreme late-binding — are precisely the features most neglected by mainstream OOP implementations is not a revisionist reading; Kay makes it directly, noting that subsequent Smalltalk versions "backslid towards Simula" and that the CS establishment "pretty much did ADT and wanted to stick with the data-procedure paradigm." His remark that he invented the term Object-Oriented and did not have C++ in mind is widely attributed and consistent with the documented historical record.
3. Simula 67, designed by Ole-Johan Dahl and Kristen Nygaard at the Norwegian Computing Center, introduced classes, objects, and inheritance in the context of discrete event simulation. Dahl and Nygaard received the ACM Turing Award in 2001 specifically for this work. The 1981 Byte Magazine special issue on Smalltalk stated explicitly that "the fundamental ideas of objects, messages, and classes came from SIMULA," and Smalltalk's own documentation acknowledged Simula as a major influence. The distinction Kay draws is between Simula's abstract data type path — which C++ and Java followed — and his own "bio/net non-data-procedure route," which led to Smalltalk's message-passing model. The two paths diverged in the late 1960s and the mainstream took the Simula path.
4. Joe Armstrong's banana/gorilla/jungle formulation appears in Peter Seibel's 2009 book Coders at Work and in various interviews and conference talks. Armstrong is quoted in multiple secondary sources: "I think the lack of reusability comes in object-oriented languages, not functional languages. Because the problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle." The observation is consistent with his earlier essay "Why OO Sucks," in which he argues from first principles that the three legitimate tenets of OOP — message passing, isolation between objects, and polymorphism — are better satisfied by Erlang than by any class-based OOP language.
5. Gamma, Helm, Johnson, and Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1994. The two foundational design principles stated in Chapter 1 are: "Program to an interface, not an implementation" and "Favor object composition over class inheritance." The observation that the majority of the twenty-three patterns in the book exist as workarounds for the limitations of inheritance is the author's extension of a widely-noted structural observation about the book — that it functions as a catalogue of solutions to the problems that a naïve application of OOP creates, rather than as a catalogue of features the paradigm provides.
6. Intel's Pentium D, released in 2005, was the first mainstream dual-core processor. By 2007, quad-core CPUs were common in consumer hardware. The CPU clock speed ceiling had been reached at approximately 3–4 GHz, and the industry consensus shifted to the view that further performance gains would require multi-threaded parallelism rather than single-core frequency scaling. The connection between this hardware shift and OOP's structural reliance on shared mutable state through references is argued in detail in a 2026 DEV Community piece by Dayna Blackwell, "How Multicore CPUs Killed Object-Oriented Programming," and is consistent with the motivation given by the designers of Go and Rust for their language design choices. Rob Pike's stated intention that "Go is designed for the multicore world" is directly traceable to the concurrency problems that OOP reference semantics create.
7. Luca Cardelli's observation about OOP's "extremely poor modularity properties with respect to class extension and modification" is cited in the Wikipedia article on object-oriented programming and in multiple academic discussions of OOP's limitations. Cardelli spent many years at Digital Equipment Corporation's Systems Research Center and later at Microsoft Research, working on type theory and programming language design; his criticisms are technical rather than polemical.
8. John Ousterhout's comparison of implementation inheritance to the goto statement appears in his 1998 IEEE Computer paper "Scripting: Higher Level Programming for the 21st Century." The parallel he draws is structural: both goto and inheritance create non-local dependencies — execution paths or behavioural dependencies that cannot be traced by reading the local code — and both produce systems that become progressively harder to modify as they grow. Edsger Dijkstra's 1968 letter "Go To Statement Considered Harmful," published in Communications of the ACM, established the case against goto; the observation that inheritance has the same structural property is Ousterhout's, and has been widely repeated.
9. Robert Martin assembled the SOLID principles from earlier work by various authors — the open-closed principle was first formulated by Bertrand Meyer in 1988; the Liskov substitution principle by Barbara Liskov in her 1987 keynote "Data Abstraction and Hierarchy"; the dependency inversion principle by Martin himself. The principles were given the SOLID acronym and systematised in Martin's work in the early 2000s. The observation that following SOLID rigorously produces code more characteristic of functional or protocol-oriented design than of classical OOP is the author's interpretation, consistent with the progressive erosion of class hierarchy recommendations in mainstream OOP guidance over the same period.
10. Go was designed at Google by Ken Thompson, Rob Pike, and Robert Griesemer, with development beginning in 2007 and the first stable release in 2012. Rust was developed at Mozilla Research, with Graydon Hoare beginning the language design in 2006; Rust 1.0 was released in 2015. Neither language includes class-based inheritance. Go's interface system requires no explicit declaration of implementation; a type satisfies an interface by possessing the required method signatures, with no inheritance relationship. Rust's trait system works similarly. Both languages were designed explicitly in response to the concurrency and complexity problems associated with C and C++, and both decline to implement classical OOP class hierarchies as a deliberate design choice.
11. Joe Armstrong's observation that Erlang may be "the only object oriented language" because it satisfies the three tenets of OOP as he understood them — message passing, isolation between processes, and polymorphism — is made in various interviews and is consistent with his essay "Why OO Sucks," where he argues that class-based OOP languages fail on the isolation criterion because their objects share memory through references. Erlang processes share no memory; state is owned entirely by the process and can only be influenced by sending a message that the process will act on in its own time. This is closer to Kay's biological model than any class-based language achieves. That the language most hostile to the commercial OOP mainstream is also the closest to OOP's original vision is an irony that Armstrong acknowledged explicitly.