On the meeting, the touch base, the calendar that ate the working day, and what the psychological literature says about the cost of the interrupt
"We found that it takes an average of twenty-three minutes and fifteen seconds to return to the original task after an interruption."
— Gloria Mark, UCI Department of Informatics, 2005
"Flow is the state in which people are so involved in an activity that nothing else seems to matter; the experience itself is so enjoyable that people will do it even at great cost, for the sheer sake of doing it."
— Mihaly Csikszentmihalyi, Flow: The Psychology of Optimal Experience, 1990
"Just a moment of your time."
— Everyone, always, before destroying an afternoon
Consider the phrase itself. Touch base. It arrives by calendar invitation or by instant message or by the head appearing around the partition wall, always with the implication of brevity, always with the vocabulary of sport — a momentary grounding, a quick tap of the bag, and then back to the game. The phrase is doing a great deal of work for its syllable count. It is simultaneously minimising the imposition, implying reciprocity, and constructing the fiction that the communication about to occur is not so much a meeting as a casual coordination, something that reasonable people in a functioning organisation do naturally, without cost, in the margins of their day. The phrase is grammatically modest and substantively dishonest, and it has colonised the calendar of the modern knowledge worker with such thoroughness that its dishonesty has become invisible, which is the most successful thing a dishonesty can achieve.
The touch base lasts thirty minutes. It was scheduled for fifteen. It begins with three minutes of waiting for the third participant to join the call, two minutes of small talk about nothing in particular, and then arrives at its substance, which is typically one of three things: a status update that could have been a written message, a question that could have been answered asynchronously, or an anxiety on the part of the person who called it — an unease about not knowing what is happening, a need to feel connected to work they are not performing — that the meeting addresses by providing the sensation of progress without any of its content. The meeting ends. The participants return to what they were doing. Something has been consumed that cannot be returned, and the consumption has been recorded nowhere.
The psychological literature on the cost of interruption is not recent, not obscure, and not ambiguous. It has been accumulating for three decades with a consistency that stands in striking contrast to the industry's consistent refusal to act on it. Its findings are not complicated. They are simply inconvenient for everyone whose job title does not include the word "engineer."
In 2005, Gloria Mark and her colleagues at the University of California, Irvine, conducted the study that produced the figure that has since made its way, in simplified form, into countless productivity articles and management presentations and been ignored, in its full implications, by the organisations that most needed to understand it. The finding was precise: it takes an average of twenty-three minutes and fifteen seconds to return to the original task after an interruption. Not to complete the task. Not to recover full cognitive immersion in the task. Simply to return to it — to get back to the screen, to reopen the document, to remember where the thread was. The return itself is followed by a period of degraded performance that the twenty-three-minute figure does not fully capture, because the figure measures resumption, not restoration.1
The same research programme found, in subsequent studies, that workers interrupted by others were not simply set back by the duration of the interruption. They were set back by the interruption's cognitive trace. The interrupted task does not pause cleanly. It bleeds. It leaves what the researcher Sophie Leroy, in a 2009 paper that deserves to be read by every person who has ever put a recurring meeting in someone else's calendar, named attention residue: the portion of working memory that remains allocated to the interrupted task while the interrupting task is being performed. The person who has been asked to join a quick call is not, during the call, fully present in the call. Part of their cognitive resource — unmeasured, unacknowledged, paid for by the employer and consumed by the meeting — is still holding the state of the problem that was interrupted. The problem does not pause. It persists in partial form, badly, inefficiently, consuming working memory that neither the interrupted task nor the interrupting task can fully use. The meeting gets less than a whole person. The work gets less than a whole person. Neither party knows this is happening. The calendar shows only the meeting.2
Leroy's attention residue concept maps onto a body of working memory research that had been developing independently for decades. Alan Baddeley's model of working memory, developed through the 1970s and 1980s and extended substantially in subsequent decades, described working memory not as a simple buffer but as a set of interacting subsystems — the phonological loop, the visuospatial sketchpad, the episodic buffer, the central executive — whose collective capacity is limited, whose contents are subject to interference and decay, and whose effective functioning depends critically on the absence of competing demands for the central executive's attentional resources. An interruption is, in Baddeley's framework, a competing demand for the central executive. The framework does not accommodate the concept of a "quick" interruption whose cost is proportional to its duration. The cost of the demand on the central executive is not a function of how long the interruption lasts. It is a function of the depth of the cognitive state that the interruption displaces.3
This is the place at which the general literature on interruption and the specific situation of the software developer diverge from each other, because the cognitive state that a software developer builds and inhabits while working on a hard problem is among the deepest and most structurally complex states that any professional occupation requires of a human mind. The claim is not rhetorical. It follows from what the work actually demands.
A software developer working on a non-trivial problem — a concurrency issue in a distributed system, a performance regression across an abstraction boundary, a security vulnerability whose surface is exposed three layers below its cause — is not consulting a document or executing a procedure. They are constructing and maintaining, in working memory, a live model of an abstract system: its current state; its desired state; the causal chain between them; the constraints imposed by the language runtime, the framework, the operating system, and the hardware; the history of decisions that produced the current structure; and the implications of the proposed change for every other part of the system that touches it. This model does not exist anywhere outside the developer's mind. It cannot be saved to disk and resumed. It cannot be delegated to a colleague mid-construction. It is built, expensively, over a period of time that varies by problem but is rarely less than fifteen minutes and is often measured in hours, and it is the only instrument that can produce the diagnosis, the design, or the solution that the work requires. Without it, the work cannot be done. With it interrupted, the work must be started again.4
Tom DeMarco and Timothy Lister measured this in 1984, in what they called the Coding War Games — a productivity study involving more than six hundred developers from ninety-two companies, in which participants completed a benchmark programming task under their normal working conditions and the results were analysed against environmental and organisational variables. The variable that best predicted individual performance was not experience, not programming language, not compensation, not company size. It was the degree to which the developer had been able to work without interruption. Developers in the top performance quartile reported, on average, significantly more uninterrupted working time than those in the bottom quartile — not slightly more, not marginally more, but roughly twice as much. The study was published in Peopleware in 1987, a book whose recommendations about the working environment for software developers have been continuously confirmed by subsequent research and continuously ignored by the organisations that employ software developers, which is its own form of finding.5
DeMarco and Lister described what they called the "flow state" — borrowing the term from Csikszentmihalyi, who had developed it through decades of psychological research — as the cognitive condition necessary for productive work on complex problems: a state of total immersion in the task, characterised by the dissolution of self-consciousness, the compression of time perception, the experience of effortless concentration, and the alignment of all cognitive resources with the problem at hand. Csikszentmihalyi's empirical research, which had begun in the 1960s and produced a sustained programme of study across dozens of disciplines, found that flow states require clear goals, immediate feedback, a balance between challenge and skill, and — critically — freedom from distraction. Flow is not a luxury. It is the operating condition under which difficult creative and analytical work is performed. It is not achieved accidentally. It is achieved only when the conditions that produce it are deliberately maintained. The meeting destroys those conditions by design.6
DeMarco and Lister introduced the E-Factor — the "environmental factor," calculated as the ratio of uninterrupted hours to elapsed hours in a working day — as a measure of the degree to which a working environment actually permitted the work it was ostensibly organised to produce. The average E-Factor they measured was approximately 0.38, meaning that the average developer, in an average environment, had access to roughly thirty-eight percent of their working day as uninterrupted time. The rest was meetings, interruptions, conversations, coordination, the various ceremonies of organisational life. In the four decades since that measurement was taken, the E-Factor of the average software developer has almost certainly worsened, because the period has seen the invention of email, the instant message, the corporate Slack workspace, the daily standup, the sprint ceremony cycle, and the video call — each of which was introduced as a productivity tool and each of which functions, in aggregate, as an interruption engine running continuously and in parallel with whatever the developer is attempting to build.7
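The E-Factor is simple enough to compute for any team willing to track its calendar honestly. A minimal sketch in Python, where the three-of-eight split is illustrative, chosen only to reproduce the roughly 0.38 average that DeMarco and Lister reported:

```python
def e_factor(uninterrupted_hours: float, elapsed_hours: float) -> float:
    """DeMarco and Lister's environmental factor: the fraction of time
    spent at work that is actually available for uninterrupted work."""
    if elapsed_hours <= 0:
        raise ValueError("elapsed_hours must be positive")
    return uninterrupted_hours / elapsed_hours

# An illustrative day consistent with the measured average of ~0.38:
# roughly three of eight body-present hours free of interruption.
print(round(e_factor(3.0, 8.0), 2))  # → 0.38
```

The ratio is deliberately crude: it does not weight when the uninterrupted hours occur or whether they are contiguous, which is why, as the passage above notes, even this measure understates the damage done by fragmentation.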
The specific arithmetic of a developer's day in a contemporary technology organisation is worth constructing, because organisations rarely construct it, preferring instead to think of meetings as gaps in the working day rather than as the primary structure that the working day is organised around. Begin with eight hours. Remove the daily standup — fifteen minutes, scheduled at nine-thirty, which means that no serious cognitive work begins before nine-thirty and that the work that begins after nine-thirty does so with the interruption already on the horizon, which itself imposes an attentional cost, because a deadline — even a trivial social deadline, even a fifteen-minute check-in — is a background process. Remove the sprint ceremonies, recurring weekly, averaged across the working week: thirty minutes a day. Remove one touch-base, scheduled by a manager who has not asked whether the scheduling was convenient: thirty minutes. Remove the notification that arrives during the period between the touch-base and the afternoon — a Slack message, marked urgent, requiring an immediate response that could have been asynchronous — and remove the cognitive reset it requires: twenty-three minutes of recovery that the research has measured, plus the attention residue that the research has also measured and that the organisation has not. What remains of eight hours is not six hours. It is, in practice, closer to four — and those four hours are not contiguous. They are distributed across the day in fragments between the meetings, and a fragment is not a flow state. A fragment is the beginning of a flow state, halted before it achieves the immersion that makes it useful.8
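The paragraph's arithmetic can be made explicit. In the sketch below, the meeting durations and the twenty-three-minute recovery figure come from the text; the five-minute message-handling time, the four-fragment count, and the thirty-minute ramp-up cost per fragment (the time a fragment spends climbing toward immersion before any deep work occurs) are assumptions introduced for illustration, not measured values:

```python
DAY_MINUTES = 8 * 60  # the nominal working day

# Scheduled and unscheduled losses, in minutes.
losses = {
    "daily standup": 15,
    "sprint ceremonies, averaged per day": 30,
    "touch-base": 30,
    "urgent message and its handling": 5,   # illustrative duration
    "measured recovery after the message": 23,  # Mark et al.
}

nominal_remainder = DAY_MINUTES - sum(losses.values())

# The remainder is not contiguous: the interruptions split it into
# fragments, and each fragment pays a ramp-up cost before immersion.
# Four fragments and a 30-minute ramp-up are illustrative assumptions.
fragments = 4
RAMP_UP = 30
usable = nominal_remainder - fragments * RAMP_UP

print(f"nominal remainder: {nominal_remainder / 60:.1f} h")  # → 6.3 h
print(f"usable deep-work time: {usable / 60:.1f} h")         # → 4.3 h
```

The gap between the two printed figures is the essay's point: the calendar records only the first subtraction, while the second, larger one appears in no ledger at all.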
Chris Parnin and Spencer Rugaber, researching programmer task management and interruption specifically — rather than the broader population of knowledge workers that most interruption research addressed — found that programmers interrupted during programming tasks required on average fifteen minutes to begin making progress again on a complex task, and that they could recall, on return, only a fraction of the mental state they had been holding before the interruption. The mental model of the system — the live representation of the code's structure, the state of the debugging session, the chain of reasoning about the failure mode — degraded rapidly and unevenly, with certain categories of context (variable state, call stack position, hypothesis under investigation) decaying faster than others (high-level goals, general approach). The return to the task was not a continuation. It was a reconstruction, partial and imperfect, of a structure that had taken considerable effort to build and had been evacuated in seconds.9
The Zeigarnik effect, documented by the Soviet psychologist Bluma Zeigarnik in 1927 and subsequently replicated in numerous variations, describes the finding that uncompleted tasks occupy working memory more persistently than completed ones — that the mind, confronted with an interrupted task, continues to allocate cognitive resources to it involuntarily, producing intrusive thoughts, background rehearsal, and a diffuse attentional pull toward the unfinished business that persists until either the task is completed or the mind is given some other equivalent resolution. Zeigarnik's original study used simple laboratory tasks. The subsequent literature extended the effect to complex, open-ended problems of the kind that professional work typically involves. For a software developer interrupted mid-investigation, the Zeigarnik effect does not politely wait until after the meeting. It operates during the meeting, filling the silence between the meeting's sentences with fragments of the interrupted model, consuming the attentional resources that the meeting requires to be useful and producing the characteristic cognitive dissociation of the person who is visibly present and substantially elsewhere.10
Daniel Kahneman's distinction between System 1 and System 2 thinking — the automatic, associative, effortless cognition of System 1, and the deliberate, effortful, serially focused cognition of System 2 — provides another framework for understanding the asymmetry between what the meeting costs and what the meeting produces. The meeting, as a social and communicative event, is largely a System 1 activity: it involves language comprehension, social inference, attentional tracking of a conversation, the rapid pattern-matching of faces and tones that allows us to infer mood and intention. These are not trivial cognitive activities, but they are ones that the human mind has evolved to perform automatically, in parallel, without significant depletion of the attentional resources that System 2 demands. The software problem, by contrast, is a System 2 activity in its purest form: it requires deliberate, sustained, serial attention of a kind that cannot be multitasked, cannot be time-sliced without loss, and depletes the cognitive resources available for further System 2 work as those resources are consumed. Moving a developer from the software problem to the meeting moves them from System 2 to System 1 and back. The transition cost in each direction is not zero, and it is not symmetric: entering System 2 from System 1 requires effortful re-immersion; exiting System 2 to System 1 is instantaneous and involuntary. The meeting can be joined with a single click. The flow state cannot.11
The asymmetry between who calls the meeting and who bears its cost is the structural feature of the modern organisation's use of developer time that the research illuminates and the organisation declines to examine. The meeting is called by a manager, a product owner, a Scrum Master, a stakeholder, a business analyst, any of the population of roles whose job is characterised by coordination and communication and whose working day is therefore largely composed of synchronous interaction — people for whom the meeting is not an interruption of work but is itself the primary medium of work. For these roles, the meeting is productive in exactly the way that a coding session is productive for a developer: it is the environment in which the role's primary cognitive activities occur, in which information is gathered and synthesised and distributed, in which decisions are made and commitments are recorded. The meeting is their flow state, to the extent that their work admits one. It is entirely rational for them to call more of them.
Paul Graham articulated this asymmetry in a 2009 essay that named it with unusual precision: the maker's schedule and the manager's schedule are not the same schedule. The manager's schedule is divided into one-hour blocks, each of which can be allocated independently to any task or meeting without significant loss, because the manager's cognitive activities do not require the deep immersion that hour-blocks interrupt. The maker's schedule is divided into half-days at minimum — ideally whole days — because the maker's cognitive activities require the time to build the state of immersion that makes them possible. A single meeting placed in the middle of a maker's day does not consume one hour. It consumes the day, because it divides the day into two fragments, each of which is too short to achieve the immersion that the work requires, and replaces a working day with two anxious approximations of one. The manager who schedules a ten o'clock meeting in a developer's calendar has not removed one hour from the developer's day. They have, in practical terms, removed most of it — and the calendar, which shows only the meeting's duration, does not record this. The cost is real. The account is empty.12
The open plan office is the architectural implementation of this asymmetry, scaled to the entire working environment. It is a space designed for the communication and coordination needs of managerial and collaborative roles — roles for whom visibility, accessibility, and spontaneous interaction are productive — and applied universally to all roles, including the ones for whom those same properties are actively destructive. The developer in the open plan office is not merely subject to the meetings on their calendar. They are subject to the continuous low-level interruption of the environment itself: the nearby conversation, the ringing phone, the colleague who stops to ask a question, the ambient noise of a room full of people performing the communicative activities that the space was designed to facilitate. Each of these is a smaller version of the meeting — briefer, softer, less formally scheduled — and each of them carries the same cognitive tax: the forced exit from System 2, the collapse of the mental model, the reset timer, the slow re-ascent toward the immersion that the next interruption will dissolve before it is complete.13
The Microsoft Research team of Mary Czerwinski, Edward Cutrell, and Eric Horvitz studied the timing of desktop notification interruptions in the early 2000s and found that the cost of an interruption was not constant but was a function of its timing relative to the interrupted task's cognitive state. Interruptions that arrived at natural breakpoints in a task — at the completion of a subtask, at the end of a defined step — were significantly less costly to recover from than interruptions that arrived mid-subtask, because the breakpoint interruption did not require the evacuation of a partially constructed cognitive structure. The finding has a direct implication for the design of working environments, and specifically for the synchronous notification systems that have become the primary coordination medium of the contemporary software organisation: a system that delivers notifications at arbitrary moments, at the volume and velocity of Slack or Microsoft Teams in a medium-sized engineering organisation, is a system that is overwhelmingly likely to deliver each notification at the worst possible moment — because the worst possible moment is any moment at which the recipient is doing something, and people who are paid to build software spend most of their day, or should spend most of their day, doing something.14
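The finding suggests a delivery policy that mainstream messaging tools mostly do not implement: queue at arrival, deliver at breakpoints. The sketch below is a hypothetical illustration of that policy; the class name and interface are invented for this example and drawn from no real product:

```python
from collections import deque


class BreakpointInbox:
    """Hypothetical notification queue that defers delivery until the
    recipient reaches a natural task boundary, following the finding
    that breakpoint-timed interruptions are far cheaper to recover from."""

    def __init__(self) -> None:
        self._pending: deque[str] = deque()

    def notify(self, message: str) -> None:
        # Arrival is silent: nothing pops up mid-subtask, so no
        # partially constructed cognitive structure is evacuated.
        self._pending.append(message)

    def at_breakpoint(self) -> list[str]:
        # Called when the recipient signals a boundary (a commit, a
        # passing test run, the end of a debugging hypothesis); the
        # whole batch is delivered at the one moment it is cheapest.
        delivered = list(self._pending)
        self._pending.clear()
        return delivered


inbox = BreakpointInbox()
inbox.notify("quick question about the deploy")
inbox.notify("standup moved to 10:00")
print(inbox.at_breakpoint())  # both messages arrive together, at a boundary
```

The design choice follows directly from the research: since the cost of an interruption is set by its timing, the cheapest system is one in which the recipient, not the sender, chooses the moment of delivery.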
Slack, to its considerable credit, published research in partnership with academic institutions on meeting patterns and communication overhead in technology organisations. The findings, which its own product contributes to generating, confirmed what the academic literature had been documenting for years: that the proliferation of real-time messaging as the primary organisational communication medium was associated with increased interruption frequency, decreased periods of sustained focus, and self-reported decreases in the ability to complete complex work. The product that generated the problem also generated the data confirming the problem and sold, as a premium feature, the ability to configure focus modes that reduced the problem's most acute manifestations. The irony is not subtle. It does not seem to have been commercially inconvenient.15
Anders Ericsson's decades of research on expert performance — the empirical foundation of the ten-thousand-hour argument, though the popular formulation is a considerable simplification of the actual finding — documented that the cognitive work required to develop expert-level performance in complex domains is not merely time-consuming but qualitatively specific: it demands what Ericsson called deliberate practice, characterised by intense, focused engagement with problems at the edge of current competence, immediate feedback, and the explicit goal of improving performance rather than merely executing it. Deliberate practice, in Ericsson's data, was reliably associated with extended periods of uninterrupted concentration; expert musicians, chess players, and athletes limited individual deliberate practice sessions to roughly ninety minutes, and their total daily deliberate practice to around four hours, not because longer sessions were unavailable but because the cognitive intensity of genuine deliberate practice was not sustainable beyond those limits and because interruption within a session destroyed the state required to make the practice deliberate rather than merely effortful. The implication for software development — where expert performance consists precisely in the ability to hold and manipulate complex abstract models under conditions of uncertainty — is not difficult to draw. The organisation that fragments a developer's day with hourly coordination ceremonies is not providing the conditions under which expertise develops or operates. It is providing the conditions under which something else happens, something that looks like development from a distance and is not, quite, on closer inspection.16
The content of the meetings is its own subject, related but distinct. The cost of the meeting in attentional terms is real regardless of what the meeting produces. But the meetings' outputs compound the injury because they are so systematically modest relative to what was sacrificed to obtain them.
The touch base, examined honestly, is almost always one of two things: a status update or a reassurance. The status update — who is doing what, how far along they are, when it will be done — is information that exists in the written record of any well-functioning project and that does not require thirty minutes of synchronous voice communication to transmit. The sender of the touch-base invitation knows this, at some level. The decision to call the meeting rather than to read the written record reveals the true purpose, which is not informational. It is relational. The manager who does not know what their team is working on has an anxiety that the written record does not fully relieve, because the written record does not perform reassurance. The meeting performs reassurance. The developer says that the work is going well and that it will be done by Thursday, and the manager hears this in real time, reads the face, registers the confidence or its absence, and the anxiety is addressed — not because new information was produced, but because the social act of producing it, synchronously, in the manager's presence, creates the experience of connection that the written record withholds. The developer has paid for this in the currency of the afternoon.17
The reassurance function of the meeting is not trivial. It reflects a genuine difficulty in the management of invisible work. Software development produces nothing visible until it produces something complete, and the intermediate states — the design being considered, the problem being debugged, the architecture being revised — are not accessible to anyone who cannot read code, which is typically everyone in the room at the touch base. The manager who cannot read the codebase cannot evaluate the work's progress from the work itself, which means they must evaluate it from the person performing it, which means they must meet with that person, which means they must interrupt the work to assess the work, which means the assessment costs more than it can possibly return, because the cost of the interruption is paid from the same account as the assessment's subject. The meeting is a tax on the work it is meant to monitor, levied at precisely the moment when the work is most in need of the conditions the tax destroys.18
The argument being made here is not that meetings are without value, that coordination is dispensable, or that software development should proceed in hermetic isolation from the people it serves and the organisations that commission it. These claims cannot be sustained, and making them would require ignoring the genuine failures of the opposite pathology — the developer who disappears into a problem for six weeks and emerges with an impeccably constructed solution to the wrong question. Communication matters. The argument is more specific: that the form of communication the modern organisation has defaulted to — synchronous, frequent, scheduled in blocks small enough to guarantee inadequacy, populated by people who were not all necessary and who will leave without being certain what was decided — is the worst available form for the purposes of coordinating complex technical work, that its costs are disproportionately borne by the people doing the work rather than the people calling the meeting, and that the psychological research on this point is not ambiguous or recent or obscure. It is several decades old, comprehensively replicated, and universally ignored.
The alternative is not isolation. It is the recognition that synchronous communication is expensive, that its expense is cognitive rather than financial and therefore invisible in every ledger the organisation keeps, and that it should be used with the same deliberateness that any expensive resource demands. The written update, the asynchronous question, the documented decision — these are not inferior substitutes for the meeting. For the majority of coordination tasks that meetings currently perform, they are superior substitutes: they produce a record, they permit the recipient to engage at a natural breakpoint rather than at the moment of the sender's convenience, and they do not require twenty-three minutes of recovery. The meeting that genuinely requires real-time synchronous exchange — the design session in which ambiguity must be resolved in dialogue, the crisis in which information is moving faster than written communication can track — is a real category. It is not the category that most of the calendar contains. Most of the calendar contains the touch base, the quick sync, the status check, the standup. These are not meetings. They are the anxiety of the organisation about its own invisible work, made visible in the form of a calendar event, and distributed to the people doing the work as a thirty-minute cost that will be recorded nowhere and that no ledger will ever find.
1Gloria Mark, Daniela Gudith, and Ulrich Klocke, "The Cost of Interrupted Work: More Speed and Stress," Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), 2008. The twenty-three-minute-and-fifteen-second figure is frequently cited from Mark's earlier 2005 work, documented in a University of California, Irvine press release and discussed in subsequent publications; the precise experimental protocol is described in the 2008 CHI paper, which extended the methodology. Mark's subsequent book Attention Span (Hanover Square Press, 2023) synthesises two decades of field research on interruption in the workplace, including diary studies, ESM (Experience Sampling Method) studies, and computer monitoring studies. The research methodology used — direct workplace observation combined with computer log analysis — is notably more ecologically valid than laboratory interruption studies, because it measures interruption in the actual environments where work occurs rather than in conditions designed to simulate them.
2Sophie Leroy, "Why Is It So Hard to Do My Work? The Challenge of Attention Residue When Switching Between Work Tasks," Organizational Behavior and Human Decision Processes, Volume 109, Issue 2, July 2009. Leroy's experiments demonstrated that participants who had not completed a prior task before switching performed less well on the subsequent task than participants who had either completed the prior task or been given no prior task, and that the performance deficit was mediated by the degree to which the incomplete prior task occupied working memory. The attention residue effect was distinct from simple workload effects — it was specifically about the incomplete task's continued claim on attentional resources, not merely about the volume of work. Leroy's subsequent research extended the finding to organisational contexts and examined interventions, including structured mental transitions and explicit "task closure" procedures, that could partially mitigate the residue effect.
3Alan Baddeley and Graham Hitch proposed the multicomponent model of working memory in "Working Memory," published in The Psychology of Learning and Motivation, Volume 8, edited by Gordon Bower, Academic Press, 1974. The model has been revised and extended substantially in subsequent decades; Baddeley added the episodic buffer as a fourth component in 2000, described in "The Episodic Buffer: A New Component of Working Memory?" Trends in Cognitive Sciences, Volume 4, Issue 11, November 2000. The central executive's role as the attentional control system — responsible for focusing and switching attention, coordinating the subsidiary systems, and managing the interface between working memory and long-term memory — is the component most directly implicated in the interruption cost literature. The limited capacity of the central executive, and its inability to simultaneously manage two System 2 tasks without cost to both, is one of the most robust findings in cognitive psychology and the direct mechanistic explanation for the attention residue effect Leroy documented at the organisational level.
4The characterisation of a software developer's mental model as a complex, volatile, in-memory structure is not merely metaphorical. It has been studied empirically in the psychology of programming literature, beginning with the foundational work of Ben Shneiderman and others in the 1970s and developed substantially by Elliot Soloway, Kathy Ehrlich, and their colleagues in the 1980s. The construct of the "program model" — the developer's internal representation of the code's structure, behaviour, and state — has been studied through think-aloud protocols, eye-tracking, debugging performance experiments, and code comprehension tasks. The research consistently finds that constructing a program model is a time-consuming, effortful process that requires integrating information across many levels of abstraction simultaneously, and that the model degrades rapidly when not actively maintained. Ruven Brooks's 1983 paper "Towards a Theory of the Comprehension of Computer Programs" in International Journal of Man-Machine Studies provides an early but still relevant theoretical framework.
5. Tom DeMarco and Timothy Lister, Peopleware: Productive Projects and Teams (Dorset House, 1987; 2nd ed. 1999; 3rd ed. 2013). The Coding War Games methodology is described in Chapter 5. The dataset of 600+ programmers from 92 companies is one of the largest empirical datasets on individual programmer productivity collected in conditions approximating ecological validity. The finding that the top-quartile performers had roughly twice the uninterrupted working time of the bottom-quartile performers has been cited widely and replicated in smaller studies. DeMarco and Lister's recommendation — private offices, or at minimum high acoustic privacy for developer workspaces — was commercially inconvenient for the real estate and facilities management functions of the companies that most needed to hear it, and was ignored accordingly. The open plan office, whose adoption in technology companies accelerated substantially in the 2000s and 2010s under the influence of Silicon Valley campus design, is the architectural anti-implementation of the book's central recommendation.
6. Mihaly Csikszentmihalyi, Flow: The Psychology of Optimal Experience (Harper & Row, 1990). Csikszentmihalyi's empirical programme began with his doctoral work at the University of Chicago in the 1960s and included Experience Sampling Method studies of thousands of participants across multiple cultures and occupations. The flow state's neurological correlates have been studied in subsequent research using EEG and fMRI; work by Ulrich Keller and colleagues at the University of Ulm and by neuroscientist Arne Dietrich has associated flow states with transient hypofrontality — a reduction in prefrontal cortex activity associated with reduced self-monitoring and the experience of effortless performance. The neurological substrate of the flow state is consistent with the phenomenological description: a state of deep engagement that feels effortless precisely because the inhibitory and evaluative functions of the prefrontal cortex are temporarily attenuated. Ending this state with a calendar notification is not a minor event in the developer's cognitive life. It is the termination of a neurologically distinct state of operation.
7. DeMarco and Lister's E-Factor measurement and the benchmark figure of approximately 0.38 are discussed in Peopleware, Chapter 10. Subsequent attempts to measure the E-Factor in contemporary environments have produced lower figures: a widely cited 2017 analysis by the software productivity company RescueTime, drawing on anonymised data from tens of thousands of knowledge workers, found that the average knowledge worker had fewer than three hours per day of genuinely focused work time, and that software developers specifically averaged approximately two hours and forty minutes of "focused work" as measured by continuous blocks of single-application use exceeding twenty minutes. The reduction from Peopleware's 1980s baseline to the RescueTime figure is consistent with the hypothesis that the proliferation of synchronous communication tools has materially worsened the E-Factor over the intervening decades, though the measurement methodologies are not directly comparable.
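The E-Factor itself is a simple ratio of uninterrupted time to body-present time, which makes the comparison between the two eras easy to state. A minimal sketch — the function name is illustrative, and the example inputs are the figures cited above (three focused hours in an eight-hour day for the Peopleware-era benchmark, two hours forty minutes for the RescueTime-era developer):

```python
def e_factor(uninterrupted_hours: float, body_present_hours: float) -> float:
    """Peopleware's E-Factor: uninterrupted hours / body-present hours."""
    if body_present_hours <= 0:
        raise ValueError("body-present time must be positive")
    return uninterrupted_hours / body_present_hours

# Roughly the 1980s benchmark of ~0.38:
print(round(e_factor(3.0, 8.0), 2))        # 0.38
# The RescueTime-era developer (2h40m focused in an 8-hour day):
print(round(e_factor(2 + 40 / 60, 8.0), 2))  # 0.33
```

The point of expressing it as a ratio is that it normalises across differing workday lengths: the decline from 0.38 to roughly 0.33 is a decline in the quality of the day, not its duration.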
8. The arithmetic of the developer's day presented here is an approximation derived from combining the meeting cadence typical of a Scrum-organised team (as described in the Scrum Guide's ceremony requirements), Gloria Mark's 23-minute interruption recovery figure, and RescueTime's baseline productivity data. The calculation is conservative: it does not account for Slack notification interruptions, ad-hoc questions from colleagues, or the attentional cost imposed by the knowledge of upcoming meetings. Research by Jackson, Dawson, and Wilson at Loughborough University's Department of Computer Science, published in a 2003 BCS conference paper, found that email interruptions alone resulted in average recovery times of 64 seconds for low-complexity tasks and substantially longer for complex tasks — and that was in an era predating the real-time messaging norm. The point of the arithmetic is not its precision but its direction: every element of the calculation moves in the same direction, and the direction is away from the conditions that productive software development requires.
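The shape of that arithmetic can be reconstructed in a few lines. The meeting list below is an invented example of a plausible Scrum day (the chapter does not specify one); only the 23-minute recovery figure is taken from Mark's research, applied once per meeting:

```python
# Illustrative reconstruction of the "arithmetic of the day".
# The meeting durations are assumptions; the recovery cost is Gloria Mark's
# 23-minute average, charged once per interruption.
WORKDAY_MIN = 8 * 60
RECOVERY_MIN = 23

meetings_min = [15, 30, 60]  # e.g. standup, a touch-base, a planning session

meeting_time = sum(meetings_min)
recovery_time = RECOVERY_MIN * len(meetings_min)
remaining = WORKDAY_MIN - meeting_time - recovery_time
print(f"{meeting_time} min in meetings, {recovery_time} min recovering, "
      f"{remaining} min left for actual work")
```

Even this deliberately mild schedule surrenders nearly three of the eight hours before a single Slack notification arrives, which is the sense in which the calculation is conservative.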
9. Chris Parnin and Robert DeLine's "Evaluating the Effectiveness of Workplace Interventions for Programmer Productivity," presented at a 2010 International Conference on Software Engineering (ICSE) workshop on cooperative and human aspects of software engineering, together with their related work "Resuming Programming Tasks," addresses the specific mechanics of how developers lose and rebuild their mental context after interruptions. Parnin's broader research programme at Georgia Institute of Technology examined programmer cognition and task management in naturalistic settings; his 2010 paper "Programmer, Interrupted" documented through instrumented IDE observation that programmers required an average of ten to fifteen minutes to resume work on the subgoal of an interrupted task, and that most programmers developed personal "scaffolding" strategies — comments, TODO markers, deliberate naming of in-progress variables — to preserve fragments of their mental model across interruptions. The existence of these scaffolding strategies is itself evidence of the magnitude of the interruption cost: developers would not invest effort in preserving mental state if the cost of reconstructing it were negligible.
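The flavour of such scaffolding is familiar to anyone who has been pulled out of a half-finished change. The fragment below is invented for illustration, not drawn from Parnin's data; it shows the three strategies named above — a resume comment, a TODO marker, and a variable name that flags unfinished state:

```python
# Invented illustration of interruption "scaffolding": the developer is
# externalising mental state before attending a meeting.
def merge_accounts(primary: dict, duplicate: dict) -> dict:
    # TODO(resume): transactions copied below; still need to
    #   1. reassign duplicate's subscriptions to primary
    #   2. mark duplicate as merged (NOT before step 1 -- audit double-counts)
    primary["transactions"] = (primary.get("transactions", [])
                               + duplicate.get("transactions", []))
    # Deliberate name to flag in-progress work on return:
    unfinished_subscription_reassign = duplicate.get("subscriptions", [])
    return primary
```

Every line of that commentary is effort spent not on the program but on insuring the programmer's mental model against the next calendar notification.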
10. Bluma Zeigarnik's original study, conducted under Kurt Lewin's supervision at the University of Berlin, was published in 1927 as "Das Behalten erledigter und unerledigter Handlungen" (The Retention of Completed and Incompleted Actions) in Psychologische Forschung. The English-language summary and extension appear in her 1938 monograph. The effect's application to complex professional tasks was examined by Kenneth McGraw and John Fiala in "Undermining the Zeigarnik Effect: Another Hidden Cost of Reward," Journal of Personality and Social Psychology, 1982, and has since been extended to goal-pursuit contexts by Arie Kruglanski, E. Tory Higgins, and colleagues in the goal systems literature. The implication for interruption in knowledge work — that incomplete tasks continue to consume working memory involuntarily, regardless of whether the interrupted person wishes them to — is the mechanistic complement to Leroy's attention residue finding: Leroy measured the performance cost, and the Zeigarnik literature provides the cognitive mechanism that produces it.
11. Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011). The System 1 / System 2 terminology, which Kahneman borrows from psychologists Keith Stanovich and Richard West, organises a large body of dual-process theory research. The characterisation of software development as an archetypally System 2 activity follows from the properties Kahneman ascribes to System 2: it is slow, effortful, sequential, capacity-limited, and explicitly conscious. The property that the transition from System 2 to System 1 is involuntary and instantaneous while the transition from System 1 to System 2 is effortful and gradual reflects the asymmetry in cognitive resource allocation: System 1 is always running and reclaims attention automatically when System 2's demands are interrupted; System 2 must be deliberately engaged, which requires the very attentional resources that the interruption has redirected.
12. Paul Graham, "Maker's Schedule, Manager's Schedule," published at paulgraham.com, July 2009. Graham's essay, while not an academic paper, articulates a distinction that is implicit in the DeMarco/Lister and Csikszentmihalyi literature and makes it operationally specific. The observation that a single meeting in the middle of a maker's day functions as a day-splitter rather than a one-hour subtraction is consistent with the flow state literature: if a flow state requires fifteen to twenty minutes to achieve and the meeting is scheduled at a point that leaves less than that on either side, the meeting does not consume one hour. It consumes everything. The managerial schedule's hour-block granularity is appropriate for managerial cognitive work, which involves many short engagements and benefits from synchronous interaction. Its imposition on maker work is an architectural category error — the scheduling of one type of work in the time units appropriate for a different type of work.
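The day-splitter observation is itself a small piece of arithmetic: a meeting does not subtract its own length, it invalidates every surrounding block too short to reach flow and charges a ramp-up cost against every block that survives. A sketch, using the twenty-minute ramp-up figure from the flow literature cited above; the schedule is an invented example:

```python
# Sketch of the "day-splitter" arithmetic. FLOW_RAMP_MIN follows the
# 15-20 minute figure cited in the text; times are minutes from day start.
FLOW_RAMP_MIN = 20

def usable_flow_time(day_start: int, day_end: int, meetings: list) -> int:
    """Minutes of the day in blocks long enough to enter and sustain flow."""
    usable = 0
    cursor = day_start
    for start, end in sorted(meetings):
        block = start - cursor
        if block > FLOW_RAMP_MIN:
            usable += block - FLOW_RAMP_MIN  # charge the ramp-up cost once
        cursor = end
    block = day_end - cursor
    if block > FLOW_RAMP_MIN:
        usable += block - FLOW_RAMP_MIN
    return usable

# A single half-hour meeting four hours into an eight-hour day:
print(usable_flow_time(0, 480, [(240, 270)]))  # 410, vs. 460 with no meeting
```

Place that same half-hour meeting badly — say, two of them splitting the day into three short segments — and the usable figure collapses far faster than the sixty minutes nominally consumed, which is Graham's point.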
13. The productivity cost of open plan offices for knowledge workers has been studied by Kim and de Dear at the University of Sydney, whose 2013 paper "Workspace Satisfaction: The Privacy-Communication Trade-off in Open-Plan Offices," published in the Journal of Environmental Psychology, Volume 36, found that open plan environments imposed significant costs in noise distraction, loss of privacy, and interrupted concentration that exceeded the communication and collaboration benefits that justified their adoption. The study, based on a dataset of over 42,000 respondents from the US General Services Administration's workplace satisfaction survey, found that workers in fully enclosed private offices reported the highest satisfaction with their ability to concentrate and the lowest frequency of disruption, while workers in open plan environments reported significantly lower scores on both measures. The technology industry's preference for the open plan office, rationalised as a collaboration and culture investment, is in direct tension with its stated interest in developer productivity.
14. Mary Czerwinski, Eric Cutrell, and Eric Horvitz, "Instant Messaging and Interruption: Influence of Task Type on Performance," Proceedings of OZCHI, 2000; and Eric Horvitz, Andy Jacobs, and David Hovel, "Attention-Sensitive Alerting," Proceedings of the Conference on Uncertainty in Artificial Intelligence, 1999. The finding that interruption timing relative to task state is a major determinant of recovery cost was extended in Czerwinski's subsequent work with colleagues at Microsoft Research, including the 2004 CHI paper "A Diary Study of Task Switching and Interruptions," which documented in a field study what the earlier laboratory work had found under controlled conditions. The implication for notification system design — that notification delivery should be sensitive to the recipient's current task state — led to research prototypes at Microsoft Research (the Notification Platform) and later to commercial focus mode features in Microsoft Teams and Windows, features whose existence acknowledges the problem that the underlying product creates.
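The design principle can be stated as a deferral policy: queue notifications while the recipient is mid-task and flush them at a task boundary. The sketch below is an invented minimal illustration of that policy, not the Notification Platform's actual architecture; all class and method names are hypothetical:

```python
# Minimal sketch of attention-sensitive alerting: defer delivery until
# the recipient signals a task boundary. Names are invented for illustration.
class AttentionSensitiveNotifier:
    def __init__(self):
        self.pending = []
        self.at_task_boundary = True

    def task_started(self):
        self.at_task_boundary = False

    def task_boundary_reached(self):
        self.at_task_boundary = True
        for message in self.pending:   # flush everything deferred mid-task
            self._deliver(message)
        self.pending.clear()

    def notify(self, message):
        if self.at_task_boundary:
            self._deliver(message)      # safe moment: deliver immediately
        else:
            self.pending.append(message)  # mid-task: hold it back

    def _deliver(self, message):
        print(f"delivering: {message}")

n = AttentionSensitiveNotifier()
n.task_started()
n.notify("build finished")     # queued, not delivered
n.task_boundary_reached()      # delivered now
```

The interesting part of the real research problem is the part this sketch assumes away: inferring the task boundary itself, which Horvitz's work approached with probabilistic models of the user's state rather than an explicit signal.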
15. Slack's research partnership with Future Forum, published in the Future Forum Pulse reports (2020–2023), documented that knowledge workers reported increasing meeting load, increasing interruption frequency, and decreasing ability to do focused work as synchronous messaging adoption increased. The 2021 Future Forum Pulse report found that 56% of non-executive knowledge workers reported difficulty disconnecting from work, and that excessive meetings were cited as a primary driver of burnout and reduced productivity. The commercial dynamic is notable: Slack's revenue model depends on engagement — on messages being sent, notifications being received, and responses being made — while its published research documents that high engagement with synchronous messaging is associated with negative productivity outcomes for complex knowledge work. The product is not designed to be used in moderation. The moderation recommendation is included in the research documentation.
16. K. Anders Ericsson, Ralf Th. Krampe, and Clemens Tesch-Römer, "The Role of Deliberate Practice in the Acquisition of Expert Performance," Psychological Review, Volume 100, Issue 3, July 1993. This paper is the primary empirical source for the deliberate practice framework. The finding that expert performers — violinists, chess players, athletes — limited daily deliberate practice to sessions of roughly one to four hours, distributed across no more than two sessions per day, and required substantial recovery time between sessions, reflects both the cognitive intensity of deliberate practice and the diminishing returns of practice performed in a fatigued or distracted state. Ericsson's subsequent work, including Peak: Secrets from the New Science of Expertise (Eamon Dolan/Houghton Mifflin Harcourt, 2016), coauthored with Robert Pool, extends the framework and addresses the common misapplication of the ten-thousand-hour figure. The implication for software development is that the conditions required for skill development and expert performance are the same: extended, uninterrupted, effortful engagement with challenging problems, with immediate feedback on performance. The meeting-dense environment of the contemporary technology organisation is the structural negation of those conditions.
17. The sociological function of the status meeting as a performance of reassurance rather than a transmission of information has antecedents in Erving Goffman's work on interaction ritual and impression management, particularly The Presentation of Self in Everyday Life (Doubleday, 1959) and Interaction Ritual (Pantheon Books, 1967). Goffman's framework treats face-to-face interaction as a ritual exchange in which participants manage the presentation of self and the validation of others' social identities; in this framework, the touch-base meeting performs not information transfer but relationship maintenance and mutual recognition — the confirmation that the manager is managing and the developer is developing, that the social order of the project is intact. This is not a trivial function. It is simply an expensive one, paid for in the attentional currency of the person whose identity is being confirmed.
18. The fundamental problem of managing invisible knowledge work — work whose progress cannot be assessed from its intermediate artefacts by those without the specialist knowledge to read them — is analysed in Thomas Davenport and John Beck's The Attention Economy: Understanding the New Currency of Business (Harvard Business School Press, 2001). Davenport and Beck argue that attention has become the scarce resource in information-rich environments and that the competition for attention — within individuals, within organisations, across markets — is the defining management challenge of the knowledge economy. The meeting, in this framework, is a particularly aggressive competitor for developer attention: it is synchronous, mandatory, and scheduled by an authority whose legitimacy to claim the developer's attention is organisational rather than cognitive. The developer who declines the touch-base is not failing to do their job. They are doing it. The organisation that cannot make this distinction has confused the management of the work for the work itself.