The modern workplace is definitely hostile to our ability to focus or do deep work. This is a topic close to my heart, and I struggle daily to focus on accomplishing a few significant things despite the constant hum of distractions. I've been reflecting on this for a while, and here is what I have observed and learned (some of these may overlap with what you have said in your post):
a) The Multitasking Myth & "Performative Visibility."
Many people believe they can multitask successfully—such as attending meetings while clearing their inbox—a habit that skyrocketed when work shifted to MS Teams and Slack during the COVID pandemic. It is increasingly challenging to get people to focus on a single task. While face-to-face meetings are helpful, they aren't a cure-all; people still bring laptops and phones, often driven by a culture of "performative work"—the need to appear busy and responsive at all times. Banning devices is a temporary fix, but the real solution lies in building a culture where people feel safe declining meetings they don't need to attend, rather than attending and tuning out.
b) The "Think to Talk" vs. "Talk to Think" Divide
I used to get frustrated when people arrived at meetings without fully formed solutions, expecting to solve problems on the fly. However, I've learned to recognize that different brains work differently. While I prefer to come prepared with thoughts, many colleagues are "verbal processors" (often labeled extroverts) who actually think by talking. The friction occurs when we fail to define the meeting's purpose. If we need to brainstorm, let's call it a workshop. If we need to make a decision, let's do the necessary pre-work. Recognizing that these are just different valid working styles, rather than laziness, has changed how I view collaboration.
c) The Loss of "Peripheral Vision."
Remote work has replaced the natural "tap on the shoulder" with a digital barrage. In the office, you could see if I was heads-down in focus mode. Now, we rely on status lights. The problem is that a "Green" status is often interpreted as "Available for immediate interruption," leading to long chat threads or scheduled calls for things that used to be a quick 30-second exchange. We have lost the non-verbal context that used to regulate our interruptions.
Strategies that help me cope:
Defensive Calendaring: Blocking my calendar and setting it to 'Do Not Disturb' is essential. It signals that I am doing deep work, not just "free."
The Friday Reset: I try to keep Friday afternoons meeting-free. This is my time to clear the backlog from the week and, crucially, plan the next week so I can hit the ground running on Monday.
The Clarity Walk: A daily walk during lunch, mostly alone, helps me step away from the screen and actually think through complex problems without the distraction of digital noise.
Task Chunking: Finally, splitting work/projects into small, manageable chunks has been a lifesaver. It allows me to make progress even in short bursts between interruptions, matching my work to my current energy level.
We need to ask if AI will serve as a cognitive accelerator, not just a productivity tool. By providing instant access to information, AI allows us to skip the 'gathering' phase and move immediately to the 'deciding' phase. While this doesn't eliminate the need for deep thinking, it radically compresses the time required to solve complex problems by ensuring our focus is spent on analysis rather than search.
Thank YOU, MG. When I was writing this I looked back at some of our early conversations. I really like and appreciate how you moved the conversation beyond individual discipline and into the structural, cultural, and cognitive factors at play.
You are right: focus is an infrastructural resource, not a moral failing. Your three points brilliantly illustrate how the modern workplace increases our Interruption Load and exacerbates Context Drag.
The Multitasking Myth & "Performative Visibility": This is a powerful cultural amplifier of the Interruption Load. When we must appear green and responsive, we are incentivized to keep the mental doorway open for constant micro-interruptions (checking email during a meeting). This behavior is then reinforced by the organization, turning high responsiveness into a performance metric. As you note, banning devices is a temporary fix; the true fix is a cultural shift where declining meetings is celebrated as a productive choice that prioritizes depth over visibility.
The "Think to Talk" vs. "Talk to Think" Divide: This dynamic directly influences Context Drag. If a meeting’s purpose isn't defined, the verbal processors are essentially forced to perform their complex Task A (thinking/forming the solution) in a collaborative setting. This might mean the listeners, who prefer to Think to Talk, spend the meeting in a state of high Context Drag, waiting for the necessary pre-work to materialize verbally. Recognizing the Depth Threshold requirement of the task (e.g., this needs a 60-minute solo think, not a 30-minute group chat) is key here.
The Loss of "Peripheral Vision": You have captured a critical loss in regulating Interruption Load. The physical office provided a non-verbal barrier (seeing a person with headphones on or staring intently at a screen) that communicated a high Depth Threshold requirement. The "Green" status in digital tools, as you brilliantly put it, is interpreted as "Available for immediate interruption." This essentially removes the friction for an interrupter, allowing our Interruption Load to spike uncontrollably.
Your strategies (Defensive Calendaring, the Friday Reset, Task Chunking, and the Clarity Walk) are ideal individual-level levers for managing the three parameters:
Defensive Calendaring directly lowers the Interruption Load.
The Friday Reset and Task Chunking both help to lower Context Drag (by clearing the residue) and optimize progress within shorter Depth Thresholds.
The Clarity Walk is a form of deep work that side-steps the digital Interruption Load entirely. We should all do this!
Your reflection on AI serving as a cognitive accelerator is fascinating. By handling the "gathering" (search) and allowing us to jump to the "deciding" (analysis), AI could potentially compress the Depth Threshold required for complex tasks. It essentially turns a 90-minute research-and-analysis task into a 30-minute analysis-only task. This is a powerful way to re-engineer the system to better fit our fragmented modern day.
I don't discount the instant impact that social media, short-form content, and the urge to answer emails have on our focus. Add to that the instant gratification of AI-generated answers, and we see tools that often discourage us from challenging our cognitive skills to find profound truths.
Unfortunately, many organizations mirror this behavior. They inadvertently discourage critical thinking by rewarding shallow, quick fixes over the time-consuming work of deep analysis. We often fall into the 'Hero Culture' trap: rewarding the person who swoops in to fix the crisis at 2 AM, rather than the person who did the deep work to prevent the crisis entirely. As the saying goes, if we reward firefighting, we will get fires. I tell my team that true heroism isn't reacting to chaos—it's preventing it. You are my heroes when there is silence, stability, and no problems to respond to.
Well said. You have perfectly articulated the highest hurdle to deep work: the organizational culture that rewards reaction over prevention.
Your concept that "true heroism isn't reacting to chaos, it's preventing it" is the essential C-Suite message we need. If I am right in my understanding, the "Hero Culture" trap is a direct consequence of optimizing for high Interruption Load and low Depth Threshold work:
When the organization only celebrates the response (the 2 AM crisis fix), it devalues the quiet, sustained work required for the prevention. That quiet work requires a long, uninterrupted Depth Threshold, precisely the kind of time block the modern workplace is engineered to deny.
By rewarding the "firefighter," the organization signals that urgent, high-stress, short-burst tasks are the priority. This maintains a perpetually high Interruption Load because every problem is treated as an emergency, not an analytical challenge.
You are right to link the instant gratification of AI and short-form content to this cultural issue. Both feed the desire for shallow, quick-fix answers. If leaders themselves rely on AI for instant synthesis rather than thoughtful, time-consuming analysis, they model the very behavior that discourages the team from challenging its own cognitive limits to find profound truths.
Ultimately, preventing a crisis through deep work (high Depth Threshold focus) has a negative visibility: nothing happens. But your philosophy, that silence and stability are the signs of success, is the crucial reframing needed to move from a Hero Culture to a Preventative Culture.
Stewart Brand uses the 1968 Golden Globe Race to illustrate how three different maintenance philosophies lead to three very different fates. It's a powerful metaphor for leadership and project management.
Robin Knox-Johnston ("Whatever comes, deal with it"): He won the race, but only through sheer grit and ceaseless, exhausting repairs. He relied on heroic improvisation to survive.
Donald Crowhurst ("Hope for the best"): He focused on flashy innovation (electronics) while ignoring the basics (leaky hatches). He over-prepared for what he knew and under-prepared for reality. It destroyed him.
Bernard Moitessier ("Prepare for the worst"): He obsessed over simplicity and prevention. He famously said, "My rule is, a new boat every day," fixing minor wear before it became a failure. He lived by the idiom "a stitch in time saves nine."
I strive to be Moitessier in almost anything essential.
While Knox-Johnston's heroics get the glory, Moitessier's approach of aggressive simplification and proactive maintenance is what actually frees you. It creates a team that isn't constantly fighting fires, but one that has the "state of grace" to actually enjoy the work.
What a brilliant chapter. The analysis of Crowhurst is a powerful cautionary tale about the 'innovation trap.' In the context of maintenance, Crowhurst represents the danger of prioritizing novel features (the trimaran, the complex electronics) over established maintainability. It reminds me of the concept of 'technical debt' in software: Crowhurst started with so much debt that he went bankrupt before he could even really begin. Contrasting that with Moitessier's 'pre-maintenance' (simplification and over-engineering) provides a perfect spectrum of maintenance philosophies.
Knox-Johnston succeeded by mastering the faster layers (fixing things as they broke), while Moitessier succeeded by investing in the slower layers (structure, steel, simplicity) before he even left the dock. The distinction between 'Make do and mend' (resilience) and 'Prepare for the worst' (robustness) is a vital one for any maintainer.
Your “high responsiveness into a performance metric” made me think of attention as a Pavlovian response we are elicited to give yet not always eager to offer from our priceless asset—Time. The “Context Drag” in “Depth Threshold Assessments” reminds me of my respect for a woman who served as Board Chair years ago; she started our meetings promptly at 5pm, and we always adjourned at 6pm. The agenda was accomplished thoroughly by outsourcing further study and debate to a committee that would return their ideas at the next Board meeting. She did not waste time and accomplished a lot during her tenure. “Green status in digital tools” assumes unseen availability and creates uncontrollable interruptions. This expected immediacy of attention is frictioned focus that all of us online must manage. Your post truly resonates: we are all tied to the “hitching post” of AI, for good and ill, and human thinking in a pinging world is a gradient to climb under a universally felt load.
Nicely put Cathie. Your phrase, "attention as a Pavlovian response we are elicited to give," perfectly captures the forced, conditioned nature of our high Interruption Load. We’ve been trained to expect and deliver that instant response, turning our valuable time and focus into a reflex action rather than a conscious allocation of resources.
The example of your former Board Chair is a brilliant illustration of effectively managing the Context Drag and Depth Threshold for a team. And you are absolutely right: the "Green status" assumes availability and enables frictioned focus. The combined effect of this digital expectation and the powerful "hitching post" of AI creates a universally felt pressure to respond rather than reflect.
Ultimately, managing the "pinging world" comes down to learning to say no to that Pavlovian urge and designing systems (like your Board Chair's meetings) that value thoughtful progress over immediate, performative responsiveness.
I am optimistic that ultimately, most of humanity will adapt to these new technologies and the distractions they create. We are currently in the 'growing pains' phase of this adoption.
Consider the "information overload" of the 16th and 17th centuries, following the widespread adoption of the printing press. At the time, critics worried that the popular "Commonplace Books," where people copied disconnected quotes and soundbites, were making society intellectually lazy, mirroring our modern fears about "TL;DR" culture and doom-scrolling. Yet, society didn't collapse. We developed new methods to manage the abundance of books.
We saw similar anxieties with the arrival of the telephone in the late 19th century. Commentators feared it would destroy privacy, erode face-to-face intimacy, and interrupt the sanctity of the domestic sphere with constant ringing. Even further back, Socrates famously warned against the written word itself, fearing it would create forgetfulness in learners' souls because they would not use their memories. In every instance, the initial panic subsided as we established social etiquette and cognitive strategies to integrate the technology.
While some are ahead of the curve today, most of us will eventually find the techniques that allow us to reclaim our focus. It won't happen by copying someone else’s routine entirely, but through a personal trial-and-error process. Over time, we will master these tools rather than letting them master us.
However, we must ask the famous question: Is this time different? The counterargument is that the digital revolution is distinct from historical shifts due to its unprecedented speed and adversarial nature. Unlike the passive tools of the past, modern platforms actively exploit our neurochemistry, creating 'supernormal stimuli' that outpace our biological ability to adapt. This shift may not only physically reshape our brains by eroding focus but also fragment society into personalized reality tunnels, destroying the shared common ground necessary for collective adaptation.
This is an outstanding historical analysis. Thank you for grounding this entire discussion in the long arc of technological adoption.
You are absolutely right that history is replete with examples of "growing pains", from the printing press and its "information overload" (our 16th-century "TL;DR" culture) to Socrates's anxiety about the written word destroying memory. In every instance, humanity has eventually evolved new cognitive strategies and social etiquette to manage the new medium.
I am optimistic, as you are, that we will eventually master these tools through personal trial-and-error.
However, your final distinction ("Is this time different?") is precisely where the structural problem lies, and it’s why I believe the adaptation process is currently so violent and stressful.
Previous revolutions (printing press, telephone) introduced passive tools that primarily affected the Interruption Load or the Volume of Information. Adapting meant establishing social etiquette (e.g., it’s rude to interrupt dinner with a phone call) and information management systems (library indexes, academic referencing) to manage the flood.
The current digital revolution is different because it is adversarial and targets our neurochemistry, as other comments here have noted.
Modern platforms, built on attention extraction, don't just interrupt us; they are designed to leave the maximum possible Attention Residue (Context Drag) every time we check the feed. This ensures that even when we step away, a portion of our cognitive budget is still paying interest to the platform.
The algorithm-driven nature creates "supernormal stimuli" that our nervous system struggles to resist. This increases our Interruption Load not just externally (pings) but internally (reflexive self-checking); I deliberately keep my own notifications at zero.
So, while we will adapt, the difference is that we are not adapting to a tool; we are adapting to an opponent, the system designed to constantly exploit our limits.
This is why individual Boundary Management (your "personal trial-and-error process") has become so crucial. It’s the necessary defense. We have to fight for our focus in a way our ancestors didn't have to fight the telephone.
Amazing article, so grateful to read and worth sharing and discussing. “The research simply tells us how brutal nonlinearity is.” Thinking in depth has its demands and focused time is essential. The creatively worded “something yanks, your brain limps, and minimum block size” resonated as terminological humor, i.e., what “terminates” our thought process to take us off track? Your description of “Interruption Load and Context Drag” is well stated.
Solution: “set parameters”
Thank you for this advice on reclaiming priority time over punctuated time. It should help productivity purposefully produced!
Thank you so much Cathie. I really like your take on the terminology: "terminates" our thought process. That's a perfect encapsulation of how Interruption Load works. When your brain is yanked away by a notification, it doesn't just pause the deep thought; it effectively terminates the progress you've made toward the Depth Threshold, forcing the brain to limp through the long process of Context Drag just to get back to the starting line.
You are absolutely right, that setting parameters is the solution. Since the system (our workplace) often makes deep focus mathematically absurd, the only way forward is to build a protected Parameter Lab at the individual level, a small, non-negotiable window where we actively design for focus instead of just wishing for it.
Thanks so much for your post - once again, I think you are “on the money” here - and what I REALLY like are the strategies you are suggesting = which are simple & practical!
My question (as an extension maybe? Or?) is about the decline in working memory that my (now older memory) read about in research a while ago??
I had known / thought that our working memory capacity was 7 plus/minus 2 - that was the “now dated research” that I first read about… I thought there was an updated research paper that put this measure of working memory more recently at around 4??
I’d have to check this & find that more recent paper (once I get home to my computer!) - and I wondered how this might impact on your factors in this post? Definitely NOT trying to give you more “work & reading” - I will check this on the next day or so? Maybe I can come back to you soon with what I confirm… Or MAYBE you already know?? 👍👍
That is a fantastic question that drills right down to the cognitive architecture underpinning the whole problem! Thank you for the kind words about the strategies; practicality is key.
You are absolutely "on the money" with your reference to the potential decline in working memory capacity. The classic research is George Miller's 1956 paper, which coined the figure 7 +/- 2 "chunks." The more recent research you are likely thinking of (often associated with Nelson Cowan's work) points to evidence of a lower capacity, closer to 4 +/- 1 chunks, especially when measuring storage and processing simultaneously or when using more complex stimuli. Regardless of whether the true capacity is 7 or 4, a smaller working memory has a profound, negative impact on all three factors I raised in the essay:
Working memory is where we hold the "current state" of a problem (the equations, the argument structure, the variables). If our working memory has a smaller capacity, a sudden interruption (high Interruption Load) will cause that memory buffer to overflow or be overwritten much faster. This forces our brain to work harder and longer to retrieve the state from long-term memory (or the external world, documents, notes) after the interruption. This effort is Context Drag. A smaller working memory means a steeper, longer recovery curve.
If a complex problem requires, say, 6 chunks of information to be held and manipulated simultaneously, a person with a capacity of 4 +/- 1 simply cannot solve it efficiently without constant external scaffolding (writing things down, re-reading). This means the essential Depth Threshold, the minimum continuous block of attention required, for the work suddenly gets longer because the mind needs more time to manage and swap out the pieces it cannot hold simultaneously.
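As a back-of-envelope sketch of that second point, the scaffolding penalty can be modeled as extra swap cycles whenever a task needs more chunks than working memory can hold. All figures here (swap cost, swap frequency) are illustrative assumptions, not measurements:

```python
# Toy model (illustrative numbers only): if a problem needs more chunks
# than working memory can hold, each "missing" chunk forces swaps to
# external scaffolding (notes, documents), lengthening the session.

def effective_session_minutes(base_minutes, chunks_needed, capacity,
                              swap_cost_minutes=2.0, swaps_per_block=6):
    """Estimate total focused time once swap overhead is added.

    base_minutes: time the task would take with ample working memory
    chunks_needed: pieces of state the task requires simultaneously
    capacity: working-memory capacity in chunks (e.g. 7 or 4)
    swap_cost_minutes: cost of one write-down/re-read cycle (assumed)
    swaps_per_block: swaps needed per overflowing chunk (assumed)
    """
    overflow = max(0, chunks_needed - capacity)
    overhead = overflow * swaps_per_block * swap_cost_minutes
    return base_minutes + overhead

# A 6-chunk problem: comfortable at capacity 7, padded at capacity 4.
print(effective_session_minutes(60, chunks_needed=6, capacity=7))  # 60.0
print(effective_session_minutes(60, chunks_needed=6, capacity=4))  # 84.0
```

Under these assumed numbers, the same problem demands a Depth Threshold roughly 40% longer for the smaller capacity, which is the "longer because the mind needs more time to manage and swap out the pieces" effect in miniature.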
If a task begins to overload the working memory (reaching the limit of 4 chunks), the brain often seeks an immediate release of cognitive pressure. This release often manifests as a self-interruptive act (checking email, opening a new tab). In a world with already high external Interruption Load, a smaller working memory makes us more vulnerable to this learned behavior, as our brain quickly seeks the micro-reward of novelty when the primary task becomes momentarily too difficult to hold in the limited space.
So, the potential decline in working memory capacity isn't just an interesting footnote; it's a powerful physiological variable that makes the math of deep work even more brutal than we previously thought. Please do come back with the research you find!
Colin, this framework is precise. Interruption Load, Context Drag, Depth Threshold. You've given people variables they can actually see and adjust. And the shift from moral failing to structural problem is exactly right.
I'd add one layer underneath: the fragmentation isn't just cognitive. It's somatic. Each interruption doesn't only cost recovery time. It trains the nervous system into the same defensive narrowing that trauma produces. "Attention residue" isn't just working memory struggling to clear. It's the body learning that staying with anything is unsafe.
That's why your "Parameter Lab" matters more than productivity. You're not just protecting time. You're teaching the nervous system that sustained presence is survivable again.
The modern workplace isn't just engineered against concentration. It's engineering nervous systems toward chronic constriction. The 47-second attention span isn't a failure of discipline. It's an adaptation to an environment that punishes depth.
I wrote something recently on the mechanism underneath—what the attention economy extracts from the body and the one site it cannot reach. Thx
Thank you for this powerful, resonant addition to the framework. The idea of the fragmentation being somatic rather than purely cognitive is a crucial layer that I honestly hadn't considered with such precision.
Your point that the nervous system is being conditioned by constant interruption is deeply unsettling and exactly right. "Attention residue" as the body learning that staying with anything is unsafe shifts the entire discussion from a productivity hack to a fundamental issue of well-being. This perspective perfectly explains why the Context Drag feels so brutal, it’s not just a memory problem; it’s a nervous system reset.
This insight gives the "Parameter Lab" far greater significance. You are completely correct that by defending a protected block of time, we are doing more than securing output; we are teaching the nervous system that sustained presence is survivable again. I suppose we are intentionally de-conditioning the reflex to seek immediate novelty and interrupting the chronic constriction the workplace has engineered.
I will definitely be reading your piece on "The Attention Wound" (and the mechanism underneath) and appreciate you sharing the link. This somatic perspective is a vital extension of the conversation.
Thanks again for adding such a profound dimension.
Colin, glad this landed. You've named it exactly: de-conditioning the reflex, interrupting the chronic constriction. That's the work.
One thing I'd add: what you're describing at the individual level scales. A population conditioned into defensive narrowing can't sustain the kind of attention that collective action requires. The same mechanism that fragments a workday fragments a polity. The attention economy isn't just an individual wellness problem. It's a political one.
Looking forward to what you think of my essay. This kind of exchange is what makes Substack worth the effort. Thanks
This is a crucial framework. The distinction between 'AI Safety' (protecting the system) and 'Agentic Safety' (protecting the person's cognitive autonomy) is precisely what is needed. The concept of 'Interference Vectors' and the eight proposed rules (protecting Imagination, Reflection, and Intention) provide the concrete, falsifiable language we've been missing for a structural problem. It reframes the entire debate from 'Will the AI malfunction?' to 'Will the AI respect the cognitive boundaries of the user?' This is the rules-of-the-road approach we urgently need.
Will the C-Suite ever get the message? Cognitive overload. We're just not built for "multitasking". Time slicing works for CPUs, not for brains. We can't just spawn threads. We can't do asynchronous I/O.
On top of all this, we have all this new tracking, such as keyloggers, in the name of "efficiency". Employees (below C-Suite level) are expected to be clacking away non-stop. Patch a hack with another hack. Hustle, hustle, hustle! Make the deadlines!
And then there's sleep deprivation:
Manager : Where were you when I called you at 2 am Saturday morning?
Employee : Um, in bed, trying to catch up on my sleep?
Thank you for bringing the C-Suite perspective and the crucial issue of surveillance into the discussion. You have perfectly articulated the core problem using computer science analogies: Time slicing works for CPUs, not for brains. We can't just spawn threads. We can't do asynchronous I/O.
This is the most concise way to explain the damage caused by high Interruption Load and long Context Drag. Our brains are essentially single-threaded processors when it comes to deep, complex thought. Forcing us to operate like a multitasking CPU simply results in massive overhead (the Context Drag) and thread abandonment (attention residue).
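To make the CPU analogy concrete, here is a hedged toy simulation (the slice length and per-switch recovery cost are assumptions, not measured values) comparing two tasks run back-to-back versus round-robin "time slicing" with a Context Drag penalty charged on every attention switch:

```python
# Illustrative sketch: two 60-minute tasks done sequentially vs. in
# alternating 10-minute slices, where every switch between tasks costs
# recovery time (the Context Drag). All numbers are assumptions.

def sequential_finish(tasks):
    """Total wall-clock time when each task runs to completion."""
    return sum(tasks)

def sliced_finish(tasks, slice_len, switch_cost):
    """Total wall-clock time under round-robin slicing, paying a
    recovery penalty each time attention moves to a different task."""
    remaining = list(tasks)
    clock = 0.0
    current = None
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            if current is not None and current != i:
                clock += switch_cost        # pay Context Drag on each switch
            current = i
            work = min(slice_len, r)
            remaining[i] -= work
            clock += work
    return clock

tasks = [60, 60]
print(sequential_finish(tasks))                            # 120
print(sliced_finish(tasks, slice_len=10, switch_cost=15))  # 285.0
```

With these assumed numbers, "multitasking" more than doubles the wall-clock time for the same work; the extra 165 minutes is pure switching overhead, which is exactly the "massive overhead" a single-threaded processor pays when forced to time-slice.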
Your point about the tracking and surveillance culture (keyloggers, expecting non-stop "clacking") adds a dark layer to the "Performative Visibility" mentioned in the comment by MG. This goes beyond culture and becomes an enforced policy that demands high Interruption Load. When management tracks keyboard activity instead of deep output, they are incentivizing the appearance of work (many small switches) over the difficult, quiet work that requires a long Depth Threshold. They are actively optimizing for the very fragmentation that kills productivity.
And you are absolutely right about the resulting sleep deprivation and 24/7 availability expectation. When the system makes sustained, focused work impossible during daylight hours, employees are forced to engage in the "triple peak day," pushing real work into the late hours, weekends, or early mornings just to meet the deadlines that the fragmented workday prevents them from hitting.
Will the C-Suite ever get the message? I think the shift has to come from seeing the problem not as a morale issue, but as a severe inefficiency. When data clearly shows that a high Interruption Load and long Context Drag create mathematically absurd working conditions, the C-Suite may finally recognize that they are:
Paying for Context Drag: They are paying employees for 15-20 minutes of mental recovery time for every 2-minute interruption.
Driving up Burnout Costs: They are replacing high-quality, focused output with a stressful "hustle culture" that leads to employee turnover and errors.
Only when the parameters of focus are viewed as an operational cost rather than a character flaw will the system change.
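That operational cost is easy to estimate. A minimal sketch using the thread's assumed figures (roughly 2-minute interruptions, each followed by 15-20 minutes of recovery); the interruption count is a hypothetical input, not data:

```python
# Back-of-envelope cost of Context Drag, using the assumed figures from
# this thread: ~2-minute interruptions, ~15-20 minutes of recovery each
# (17.5 is taken as the midpoint of that recovery range).

def daily_drag_hours(interruptions, interrupt_min=2.0, recovery_min=17.5):
    """Hours per day lost to interruptions plus their recovery tails."""
    return interruptions * (interrupt_min + recovery_min) / 60

# An employee interrupted 10 times a day loses roughly:
print(round(daily_drag_hours(10), 2))  # 3.25
```

Framed this way, ten ordinary pings a day consume close to half a standard workday, which is the kind of line item a C-Suite can actually act on.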
A big part of the problem is that C-Suites comprise mostly - if not entirely - psychopaths/sociopaths. The result is that they have conflicting desires. On the one hand, of course they want to maximize their profits, requiring operational efficiency. On the other hand, they have a deep-rooted need to exert power - for its own sake.
And, because they are psychopaths/sociopaths, they can never accept responsibility for their mismanagement. Also, because they tend to promote psychopaths, it goes into infinite recursion.
This article hit a nerve. We are constantly being told of "Contextus Interruptus" and brain fog and recovery time. I do not doubt these things to be true and I, too, suffer the frustration of feeling like I have not completed a single thread for ages, BUT...
I think that to respect the premise, we also need to ask ourselves - what is the business purpose of the human that is being interrupted or distracted? Are they there to postulate and reason? Or are they there to make critical decisions that affect both the long term and the near term operation and direction of the company? Can one person do both effectively?
It seems that exceptional leaders may be the people who have the natural ability to avoid the "residue" of context switching and can compartmentalize thought threads and effortlessly come back to them as if returning from a "gosub".
And if my hypothesis is true, then can software and AI help out with this issue and begin to magnify the strength of leaders who are not so lucky as to be able to avoid the problems described? If so, then we are heading for an era where technology lifts another yoke off people's shoulders and helps them thrive like never before.
That is a brilliant articulation of the organizational dilemma, and your "BUT..." is a necessary provocation that moves the conversation forward. In my mind you have hit on the tension between the Depth Threshold tasks (postulate and reason) and the necessary Interruption Load tasks (critical, near-term decisions).
Your question, "Can one person do both effectively?" is the critical next step. I believe the leaders you describe, the ones who seem to effortlessly return from a cognitive gosub (an excellent analogy!), are not simply better at processing; they are better at Boundary Management and Delegation. They don't avoid interruptions; they successfully delegate or firewall the ones that don't meet their personal Depth Threshold or require their specific decision.
In other words, they don't have a magic ability to escape Attention Residue; they have an optimized system that minimizes the likelihood of getting caught in the cognitive tax of a low-value interruption. They design their day to allow for maximum Depth Threshold and maximum decision-making impact by pushing the Context Drag (the recovery time, research, and follow-up) onto optimized systems or support staff.
The real answer to the dilemma may be organizational specialization: separating the role that needs an extremely high Depth Threshold (the "Chief Thinker," focused on strategy and reasoning) from the role that needs an extremely low Context Drag (the "Chief Decider," focused on rapid operational response).
Can software and AI magnify the strength of leaders who are not naturally compartmentalized? I am highly optimistic about this. AI's greatest immediate value in the knowledge worker space may be reducing the Context Drag and shrinking the Depth Threshold required for tasks. For example: An AI system that immediately summarizes a long email thread or a complex document upon reopening it acts as an instant "ready-to-resume" note for your working memory. It quickly clears the Attention Residue and removes the "scavenger hunt" for materials, potentially shaving minutes off that 20-23 minute recovery curve.
AI allows us to move from the 'gathering' phase to the 'deciding' phase instantly. It doesn't eliminate the need for deep analysis, but it radically compresses the continuous time block required to achieve a major result.
If technology can effectively handle the "limping" and the "scavenger hunt," it does indeed "lift the yoke" off our cognitive load, allowing more people, not just the "naturally compartmentalized" few, to successfully pivot between complex reasoning and critical decision-making.
The modern workplace is definitely hostile to our ability to focus or do deep work. A topic close to my heart, and my struggle to be able to focus on accomplishing a few significant things daily, despite the constant hum of distractions. I've been reflecting on this for a while, and here is what I have observed and learned (some of these may overlap with what you have said in your post):
a) The Multitasking Myth & "Performative Visibility."
Many people believe they can multitask successfully—such as attending meetings while clearing their inbox—a habit that skyrocketed when work shifted to MS Teams and Slack during the COVID pandemic. It is increasingly challenging to get people to focus on a single task. While face-to-face meetings are helpful, they aren't a cure-all; people still bring laptops and phones, often driven by a culture of "performative work"—the need to appear busy and responsive at all times. Banning devices is a temporary fix, but the real solution lies in building a culture where people feel safe declining meetings they don't need to attend, rather than attending and tuning out.
b) The "Think to Talk" vs. "Talk to Think" Divide
I used to get frustrated when people arrived at meetings without fully formed solutions, expecting to solve problems on the fly. However, I've learned to recognize that different brains work differently. While I prefer to come prepared with thoughts, many colleagues are "verbal processors" (often labeled extroverts) who actually think by talking. The friction occurs when we fail to define the meeting's purpose. If we need to brainstorm, let's call it a workshop. If we need to make a decision, let's do the necessary pre-work. Recognizing that these are just different valid working styles, rather than laziness, has changed how I view collaboration.
c) The Loss of "Peripheral Vision."
Remote work has replaced the natural "tap on the shoulder" with a digital barrage. In the office, you could see if I was heads-down in focus mode. Now, we rely on status lights. The problem is that a "Green" status is often interpreted as "Available for immediate interruption," leading to long chat threads or scheduled calls for things that used to be a quick 30-second exchange. We have lost the non-verbal context that used to regulate our interruptions.
Strategies that help me cope:
Defensive Calendaring: Blocking my calendar and setting it to 'Do Not Disturb' is essential. It signals that I am doing deep work, not just "free."
The Friday Reset: I try to keep Friday afternoons meeting-free. This is my time to clear the backlog from the week and, crucially, plan the next week so I can hit the ground running on Monday.
The Clarity Walk: A daily walk during lunch, mostly alone, helps me step away from the screen and actually think through complex problems without the distraction of digital noise.
Task Chunking: Finally, splitting work/projects into small, manageable chunks has been a lifesaver. It allows me to make progress even in short bursts between interruptions, matching my work to my current energy level.
We need to ask if AI will serve as a cognitive accelerator, not just a productivity tool. By providing instant access to information, AI allows us to skip the 'gathering' phase and move immediately to the 'deciding' phase. While this doesn't eliminate the need for deep thinking, it radically compresses the time required to solve complex problems by ensuring our focus is spent on analysis rather than search.
Thank YOU, MG. When I was writing this, I looked back at some of our early conversations. I really like and appreciate how you moved the conversation beyond individual discipline and into the structural, cultural, and cognitive factors at play.
You are right: focus is an infrastructural resource, not a moral failing. Your three points brilliantly illustrate how the modern workplace increases our Interruption Load and exacerbates Context Drag.
The Multitasking Myth & "Performative Visibility": This is a powerful cultural amplifier of the Interruption Load. When we must appear green and responsive, we are incentivized to keep the mental doorway open for constant micro-interruptions (checking email during a meeting). This behavior is then reinforced by the organization, turning high responsiveness into a performance metric. As you note, banning devices is a temporary fix; the true fix is a cultural shift where declining meetings is celebrated as a productive choice that prioritizes depth over visibility.
The "Think to Talk" vs. "Talk to Think" Divide: This dynamic directly influences Context Drag. If a meeting’s purpose isn't defined, the verbal processors are essentially forced to perform their complex Task A (thinking/forming the solution) in a collaborative setting. This might mean the listeners, who prefer to Think to Talk, spend the meeting in a state of high Context Drag, waiting for the necessary pre-work to materialize verbally. Recognizing the Depth Threshold requirement of the task (e.g., this needs a 60-minute solo think, not a 30-minute group chat) is key here.
The Loss of "Peripheral Vision": You have captured a critical loss in regulating Interruption Load. The physical office provided a non-verbal barrier (a person wearing headphones, or staring intently at a screen) that communicated a high Depth Threshold requirement. The "Green" status in digital tools, as you brilliantly put it, is interpreted as "Available for immediate interruption." This essentially removes the friction for an interrupter, allowing our Interruption Load to spike uncontrollably.
Your strategies (Defensive Calendaring, the Friday Reset, the Clarity Walk, and Task Chunking) are the ideal individual-level levers for managing the three parameters:
Defensive Calendaring directly lowers the Interruption Load.
The Friday Reset and Task Chunking both help to lower Context Drag (by clearing the residue) and optimize progress within shorter Depth Thresholds.
The Clarity Walk is a form of deep work that side-steps the digital Interruption Load entirely. We should all do this!
Your reflection on AI serving as a cognitive accelerator is fascinating. By handling the "gathering" (search) and allowing us to jump to the "deciding" (analysis), AI could potentially compress the Depth Threshold required for complex tasks. It essentially turns a 90-minute research-and-analysis task into a 30-minute analysis-only task. This is a powerful way to re-engineer the system to better fit our fragmented modern day.
Thank you again for elevating the discussion.
I don't discount the impact that social media, short-form content, and the urge to answer emails have on our focus instantly. Add to that the instant gratification of AI-generated answers, and we see tools that often discourage us from challenging our cognitive skills to find profound truths.
Unfortunately, many organizations mirror this behavior. They inadvertently discourage critical thinking by rewarding shallow, quick fixes over the time-consuming work of deep analysis. We often fall into the 'Hero Culture' trap: rewarding the person who swoops in to fix the crisis at 2 AM, rather than the person who did the deep work to prevent the crisis entirely. As the saying goes, if we reward firefighting, we will get fires. I tell my team that true heroism isn't reacting to chaos—it's preventing it. You are my heroes when there is silence, stability, and no problems to respond to.
Well said. You have perfectly articulated the highest hurdle to deep work: the organizational culture that rewards reaction over prevention.
Your concept that "true heroism isn't reacting to chaos, it's preventing it" is the essential C-Suite message we need. If I am right in my understanding, the "Hero Culture" trap is a direct consequence of optimizing for high Interruption Load and low Depth Threshold work:
When the organization only celebrates the response (the 2 AM crisis fix), it devalues the quiet, sustained work required for the prevention. That quiet work requires a long, uninterrupted Depth Threshold, precisely the kind of time block the modern workplace is engineered to deny.
By rewarding the "firefighter," the organization signals that urgent, high-stress, short-burst tasks are the priority. This maintains a perpetually high Interruption Load because every problem is treated as an emergency, not an analytical challenge.
You are right to link the instant gratification of AI and short-form content to this cultural issue. Both feed the desire for shallow, quick-fix answers. If leaders themselves rely on AI for instant synthesis rather than thoughtful, time-consuming analysis, they model the very behavior that discourages the team from challenging its own cognitive limits to find profound truths.
Ultimately, preventing a crisis through deep work (high Depth Threshold focus) suffers from negative visibility: when it succeeds, nothing happens. But your philosophy, that silence and stability are the signs of success, is the crucial reframing needed to move from a Hero Culture to a Preventative Culture.
Have you read the incredible story about the 1968 Golden Globe Race?
The Maintenance Race - Works in Progress Magazine
https://worksinprogress.co/issue/the-maintenance-race/
Stewart Brand uses the race to illustrate how three different maintenance philosophies lead to three very different fates. It's a powerful metaphor for leadership and project management.
Robin Knox-Johnston ("Whatever comes, deal with it"): He won the race, but only through sheer grit and ceaseless, exhausting repairs. He relied on heroic improvisation to survive.
Donald Crowhurst ("Hope for the best"): He focused on flashy innovation (electronics) while ignoring the basics (leaky hatches). He over-prepared for what he knew and under-prepared for reality. It destroyed him.
Bernard Moitessier ("Prepare for the worst"): He obsessed over simplicity and prevention. He famously said, "My rule is, a new boat every day," fixing minor wear before it became a failure. He lived by the idiom "a stitch in time saves nine."
I strive to be Moitessier in almost anything essential.
While Knox-Johnston's heroics get the glory, Moitessier's approach (aggressive simplification and proactive maintenance) is what actually frees you. It creates a team that isn't constantly fighting fires, but one that has the "state of grace" to actually enjoy the work.
Stewart Brand has a longer book coming on the same topic, too: Maintenance of Everything: Part One
https://www.amazon.com/Maintenance-Everything-Part-One/dp/1953953492
Ah - I found the book, will order it -- https://www.amazon.com/Maintenance-Everything-Part-One/dp/1953953492
What a brilliant chapter. The analysis of Crowhurst is a powerful cautionary tale about the 'innovation trap.' In the context of maintenance, Crowhurst represents the danger of prioritizing novel features (the trimaran, the complex electronics) over established maintainability. It reminds me of the concept of 'technical debt' in software: Crowhurst started with so much debt that he went bankrupt before he could even really begin. Contrasting that with Moitessier's 'pre-maintenance' (simplification and over-engineering) provides a perfect spectrum of maintenance philosophies.
Knox-Johnston succeeded by mastering the faster layers (fixing things as they broke), while Moitessier succeeded by investing in the slower layers (structure, steel, simplicity) before he even left the dock. The distinction between 'Make do and mend' (resilience) and 'Prepare for the worst' (robustness) is a vital one for any maintainer.
Your “high responsiveness into a performance metric” made me think of attention as a Pavlovian response we are elicited to give yet not always eager to offer from our priceless asset—Time. The “Context Drag” in “Depth Threshold Assessments” reminds me of my respect for a woman who served as a Board Chair years ago, who started our meetings promptly at 5pm, and we always adjourned at 6pm. The agenda was accomplished thoroughly by outsourcing further study and debate to a committee that would return their ideas at the next Board meeting. She did not waste time and accomplished a lot during her tenure. “Green status in digital tools” assumes unseen availability and creates uncontrollable interruptions. This expected immediacy of attention is frictioned focus that all of us online must manage. Your post truly resonates as we are all tied to the “hitching post” of AI for the good and as a gradient to climb for human thinking in a pinging world as a universally felt load.
Nicely put Cathie. Your phrase, "attention as a Pavlovian response we are elicited to give," perfectly captures the forced, conditioned nature of our high Interruption Load. We’ve been trained to expect and deliver that instant response, turning our valuable time and focus into a reflex action rather than a conscious allocation of resources.
The example of your former Board Chair is a brilliant illustration of effectively managing the Context Drag and Depth Threshold for a team. And you are absolutely right: the "Green status" assumes availability and enables frictioned focus. The combined effect of this digital expectation and the powerful "hitching post" of AI creates a universally felt pressure to respond rather than reflect.
Ultimately, managing the "pinging world" comes down to learning to say no to that Pavlovian urge and designing systems (like your Board Chair's meetings) that value thoughtful progress over immediate, performative responsiveness.
I am optimistic that ultimately, most of humanity will adapt to these new technologies and the distractions they create. We are currently in the 'growing pains' phase of this adoption.
Consider the "information overload" of the 16th and 17th centuries, following the widespread adoption of the printing press. At the time, critics worried that the popular "Commonplace Books," where people copied disconnected quotes and soundbites, were making society intellectually lazy, mirroring our modern fears about "TL;DR" culture and doom-scrolling. Yet, society didn't collapse. We developed new methods to manage the abundance of books.
We saw similar anxieties with the arrival of the telephone in the late 19th century. Commentators feared it would destroy privacy, erode face-to-face intimacy, and interrupt the sanctity of the domestic sphere with constant ringing. Even further back, Socrates famously warned against the written word itself, fearing it would create forgetfulness in learners' souls because they would not use their memories. In every instance, the initial panic subsided as we established social etiquette and cognitive strategies to integrate the technology.
While some are ahead of the curve today, most of us will eventually find the techniques that allow us to reclaim our focus. It won't happen by copying someone else’s routine entirely, but through a personal trial-and-error process. Over time, we will master these tools rather than letting them master us.
However, we must ask the famous question: Is this time different? The counterargument is that the digital revolution is distinct from historical shifts due to its unprecedented speed and adversarial nature. Unlike the passive tools of the past, modern platforms actively exploit our neurochemistry, creating 'supernormal stimuli' that outpace our biological ability to adapt. This shift may not only physically reshape our brains by eroding focus but also fragment society into personalized reality tunnels, destroying the shared common ground necessary for collective adaptation.
This is an outstanding historical analysis. Thank you for grounding this entire discussion in the long arc of technological adoption.
You are absolutely right that history is replete with examples of "growing pains", from the printing press and its "information overload" (our 16th-century "TL;DR" culture) to Socrates's anxiety about the written word destroying memory. In every instance, humanity has eventually evolved new cognitive strategies and social etiquette to manage the new medium.
I am optimistic, as you are, that we will eventually master these tools through personal trial-and-error.
However, your final question, "Is this time different?", is precisely where the structural problem lies, and it’s why I believe the adaptation process is currently so violent and stressful.
Previous revolutions (printing press, telephone) introduced passive tools that primarily affected the Interruption Load or the Volume of Information. Adapting meant establishing social etiquette (e.g., it’s rude to interrupt dinner with a phone call) and information management systems (library indexes, academic referencing) to manage the flood.
The current digital revolution is different because it is adversarial and targets our neurochemistry directly, as other commenters here have noted.
Modern platforms, built on attention extraction, don't just interrupt us; they are designed to leave the maximum possible Attention Residue (Context Drag) every time we check the feed. This ensures that even when we step away, a portion of our cognitive budget is still paying interest to the platform.
The algorithm-driven nature creates "supernormal stimuli" that our nervous system struggles to resist. This increases our Interruption Load not just externally (pings) but internally (reflexive self-checking); I deliberately keep my own notifications at zero.
So, while we will adapt, the difference is that we are not adapting to a tool; we are adapting to an opponent: a system designed to constantly exploit our limits.
This is why individual Boundary Management (your "personal trial-and-error process") has become so crucial. It’s the necessary defense. We have to fight for our focus in a way our ancestors didn't have to fight the telephone.
Very well summarized. Thank you!
Amazing article, so grateful to read and worth sharing and discussing. “The research simply tells us how brutal nonlinearity is.” Thinking in depth has its demands, and focused time is essential. The creatively worded “something yanks, your brain limps, and minimum block size” resonated as a bit of terminological humor, i.e., what “terminates” our thought process and takes us off track? Your description of “Interruption Load and Context Drag” is well stated.
Solution: “set parameters”
Thank you for this advice on reclaiming priority time over punctuated time. It should help productivity purposefully produced!
Thank you so much Cathie. I really like your take on the terminology: "terminates" our thought process. That's a perfect encapsulation of how Interruption Load works. When your brain is yanked away by a notification, it doesn't just pause the deep thought; it effectively terminates the progress you've made toward the Depth Threshold, forcing the brain to limp through the long process of Context Drag just to get back to the starting line.
You are absolutely right, that setting parameters is the solution. Since the system (our workplace) often makes deep focus mathematically absurd, the only way forward is to build a protected Parameter Lab at the individual level, a small, non-negotiable window where we actively design for focus instead of just wishing for it.
Really needed this article’s advice and thought it a “10”!
Very enlightening, thank you!!
Thank you.
Thanks so much for your post - once again, I think you are “on the money” here - and what I REALLY like are the strategies you are suggesting - which are simple & practical!
My question (as an extension, maybe?) is about the decline in working memory that my (now older) memory read about in research a while ago??
I had known / thought that our working memory capacity was 7 plus/minus 2 - that was the “now dated” research that I first read about… I thought a more recent research paper put this measure of working memory at around 4??
I’d have to check this & find that more recent paper (once I get home to my computer!) - and I wondered how this might impact the factors in your post? Definitely NOT trying to give you more “work & reading” - I will check this in the next day or so? Maybe I can come back to you soon with what I confirm… Or MAYBE you already know?? 👍👍
That is a fantastic question that drills right down to the cognitive architecture underpinning the whole problem! Thank you for the kind words about the strategies; practicality is key.
You are absolutely "on the money" with your reference to the potential decline in working memory capacity. The classic research on working memory capacity is George Miller's 1956 paper, which coined the figure of 7 +/- 2 "chunks." The more recent research you are likely thinking of (Nelson Cowan's work, around 2001) points to a lower capacity, perhaps closer to 4 +/- 1 chunks, especially when storage and processing are measured simultaneously, or when more complex stimuli are used. Regardless of whether the true capacity is 7 or 4, a smaller working memory has a profound, negative impact on all three factors I raised in the essay:
Working memory is where we hold the "current state" of a problem (the equations, the argument structure, the variables). If our working memory has a smaller capacity, a sudden interruption (high Interruption Load) will cause that memory buffer to overflow or be overwritten much faster. This forces our brain to work harder and longer to retrieve the state from long-term memory (or the external world, documents, notes) after the interruption. This effort is Context Drag. A smaller working memory means a steeper, longer recovery curve.
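That buffer-overflow effect can be sketched with a toy model. The capacity of 4 and the chunk labels below are illustrative assumptions for the sketch, not a claim about actual cognitive architecture:

```python
from collections import deque

# Toy model: working memory as a fixed-capacity buffer (assumed capacity: 4).
working_memory = deque(maxlen=4)

# Load the state of a complex problem, chunk by chunk.
for chunk in ["goal", "constraint_A", "constraint_B", "partial_result"]:
    working_memory.append(chunk)

# An interruption forces unrelated chunks into the same limited buffer...
for distraction in ["slack_ping", "email_subject"]:
    working_memory.append(distraction)

# ...silently evicting the oldest task state. Rebuilding that evicted state
# from notes and long-term memory is the Context Drag.
print(list(working_memory))
# -> ['constraint_B', 'partial_result', 'slack_ping', 'email_subject']
```

The smaller the `maxlen`, the fewer interruptions it takes to push the task's "goal" out of the buffer entirely, which is exactly why a lower capacity steepens the recovery curve.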
If a complex problem requires, say, 6 chunks of information to be held and manipulated simultaneously, a person with a capacity of 4 +/- 1 simply cannot solve it efficiently without constant external scaffolding (writing things down, re-reading). This means the essential Depth Threshold, the minimum continuous block of attention required, for the work suddenly gets longer because the mind needs more time to manage and swap out the pieces it cannot hold simultaneously.
If a task begins to overload the working memory (reaching the limit of 4 chunks), the brain often seeks an immediate release of cognitive pressure. This release often manifests as a self-interruptive act (checking email, opening a new tab). In a world with already high external Interruption Load, a smaller working memory makes us more vulnerable to this learned behavior, as our brain quickly seeks the micro-reward of novelty when the primary task becomes momentarily too difficult to hold in the limited space.
So, the potential decline in working memory capacity isn't just an interesting footnote; it's a powerful physiological variable that makes the math of deep work even more brutal than we previously thought. Please do come back with the research you find!
Colin, this framework is precise. Interruption Load, Context Drag, Depth Threshold. You've given people variables they can actually see and adjust. And the shift from moral failing to structural problem is exactly right.
I'd add one layer underneath: the fragmentation isn't just cognitive. It's somatic. Each interruption doesn't only cost recovery time. It trains the nervous system into the same defensive narrowing that trauma produces. "Attention residue" isn't just working memory struggling to clear. It's the body learning that staying with anything is unsafe.
That's why your "Parameter Lab" matters more than productivity. You're not just protecting time. You're teaching the nervous system that sustained presence is survivable again.
The modern workplace isn't just engineered against concentration. It's engineering nervous systems toward chronic constriction. The 47-second attention span isn't a failure of discipline. It's an adaptation to an environment that punishes depth.
I wrote something recently on the mechanism underneath—what the attention economy extracts from the body and the one site it cannot reach. Thx
https://yauguru.substack.com/p/the-attention-wound?r=217mr3
Thank you for this powerful, resonant addition to the framework. The idea of the fragmentation being somatic rather than purely cognitive is a crucial layer that I honestly hadn't considered with such precision.
Your point that the nervous system is being conditioned by constant interruption is deeply unsettling and exactly right. "Attention residue" as the body learning that staying with anything is unsafe shifts the entire discussion from a productivity hack to a fundamental issue of well-being. This perspective perfectly explains why the Context Drag feels so brutal: it's not just a memory problem; it's a nervous system reset.
This insight gives the "Parameter Lab" far greater significance. You are completely correct that by defending a protected block of time, we are doing more than securing output; we are teaching the nervous system that sustained presence is survivable again. I suppose we are intentionally de-conditioning the reflex to seek immediate novelty and interrupting the chronic constriction the workplace has engineered.
I will definitely be reading your piece on "The Attention Wound" (and the mechanism underneath) and appreciate you sharing the link. This somatic perspective is a vital extension of the conversation.
Thanks again for adding such a profound dimension.
Colin, glad this landed. You've named it exactly: de-conditioning the reflex, interrupting the chronic constriction. That's the work.
One thing I'd add: what you're describing at the individual level scales. A population conditioned into defensive narrowing can't sustain the kind of attention that collective action requires. The same mechanism that fragments a workday fragments a polity. The attention economy isn't just an individual wellness problem. It's a political one.
Looking forward to what you think of my essay. This kind of exchange is what makes Substack worth the effort. Thanks
https://tinyurl.com/ivectors
The definitive guide explaining how digital systems infiltrate the mind, how AI distorts human agency, and what must be done to regulate it.
Please share, repost and amplify before it's too late 🙏
This is a crucial framework. The distinction between 'AI Safety' (protecting the system) and 'Agentic Safety' (protecting the person's cognitive autonomy) is precisely what is needed. The concept of 'Interference Vectors' and the eight proposed rules (protecting Imagination, Reflection, and Intention) provide the concrete, falsifiable language we've been missing for a structural problem. It reframes the entire debate from 'Will the AI malfunction?' to 'Will the AI respect the cognitive boundaries of the user?' This is the rules-of-the-road approach we urgently need.
Will the C-Suite ever get the message? Cognitive overload. We're just not built for "multitasking". Time slicing works for CPUs, not for brains. We can't just spawn threads. We can't do asynchronous I/O.
On top of all this, we have all this new tracking, such as keyloggers, in the name of "efficiency". Employees (below C-Suite level) are expected to be clacking away non-stop. Patch a hack with another hack. Hustle, hustle, hustle! Make the deadlines!
And then there's sleep deprivation:
Manager : Where were you when I called you at 2 am Saturday morning?
Employee : Um, in bed, trying to catch up on my sleep?
Manager : You're supposed to be available 24/7!
Thank you for bringing the C-Suite perspective and the crucial issue of surveillance into the discussion. You have perfectly articulated the core problem using computer science analogies: Time slicing works for CPUs, not for brains. We can't just spawn threads. We can't do asynchronous I/O.
This is the most concise way to explain the damage caused by high Interruption Load and long Context Drag. Our brains are essentially single-threaded processors when it comes to deep, complex thought. Forcing us to operate like a multitasking CPU simply results in massive overhead (the Context Drag) and thread abandonment (attention residue).
Your point about the tracking and surveillance culture (keyloggers, expecting non-stop "clacking") adds a dark layer to the "Performative Visibility" mentioned in the comment by MG. This goes beyond culture and becomes an enforced policy that demands high Interruption Load. When management tracks keyboard activity instead of deep output, they are incentivizing the appearance of work (many small switches) over the difficult, quiet work that requires a long Depth Threshold. They are actively optimizing for the very fragmentation that kills productivity.
And you are absolutely right about the resulting sleep deprivation and 24/7 availability expectation. When the system makes sustained, focused work impossible during daylight hours, employees are forced to engage in the "triple peak day," pushing real work into the late hours, weekends, or early mornings just to meet the deadlines that the fragmented workday prevents them from hitting.
Will the C-Suite ever get the message? I think the shift has to come from seeing the problem not as a moral issue, but as a severe inefficiency. When data clearly shows that a high Interruption Load and long Context Drag create mathematically absurd working conditions, the C-Suite may finally recognize that they are:
Paying for Context Drag: They are paying employees for 15-20 minutes of mental recovery time for every 2-minute interruption.
Driving up Burnout Costs: They are replacing high-quality, focused output with a stressful "hustle culture" that leads to employee turnover and errors.
Only when the parameters of focus are viewed as an operational cost rather than a character flaw will the system change.
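A back-of-envelope sketch of that operational cost, using purely illustrative numbers (the interruption count and recovery time are assumptions for the arithmetic, with the recovery figure taken as a midpoint of the 15-20 minute range above):

```python
# Back-of-envelope cost of fragmentation, per employee per day.
interruptions_per_day = 15   # assumed: pings, drive-bys, unplanned check-ins
interruption_minutes = 2     # the visible cost of each interruption
recovery_minutes = 18        # assumed midpoint of the ~15-20 min recovery range

visible = interruptions_per_day * interruption_minutes   # what shows up on a calendar
hidden = interruptions_per_day * recovery_minutes        # the Context Drag nobody bills

print(f"Visible interruption time: {visible} min/day")   # 30 min/day
print(f"Hidden recovery time:      {hidden} min/day")    # 270 min/day, i.e. 4.5 hours
```

Even with conservative inputs, the hidden recovery time dwarfs the visible interruption time, which is the "mathematically absurd" part: the organization is paying mostly for recovery, not for the interruptions themselves.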
A big part of the problem is that C-Suites comprise mostly - if not entirely - psychopaths/sociopaths. The result is that they have conflicting desires. On the one hand, of course, they want to maximize their profits, requiring operational efficiency. On the other hand, they have a deep-rooted need to exert power - for its own sake.
And, because they are psychopaths/sociopaths, they can never accept responsibility for their mismanagement. Also, because they tend to promote psychopaths, it goes into infinite recursion.
This article hit a nerve. We are constantly being told of "Contextus Interruptus" and brain fog and recovery time. I do not doubt these things to be true and I, too, suffer the frustration of feeling like I have not completed a single thread for ages, BUT...
I think that to respect the premise, we also need to ask ourselves - what is the business purpose of the human that is being interrupted or distracted? Are they there to postulate and reason? Or are they there to make critical decisions that affect both the long term and the near term operation and direction of the company? Can one person do both effectively?
It seems that exceptional leaders may be the people who have the natural ability to avoid the "residue" of context switching and can compartmentalize thought threads and effortlessly come back to them as if returning from a "gosub".
And if my hypothesis is true, then can software and AI help out with this issue and begin to magnify the strength of leaders who are not so lucky as to be able to avoid the problems described? If so, then we are heading for an era where technology lifts another yoke off people's shoulders and helps them thrive like never before.
That is a brilliant articulation of the organizational dilemma, and your "BUT..." is a necessary provocation that moves the conversation forward. In my mind you have hit on the tension between the Depth Threshold tasks (postulate and reason) and the necessary Interruption Load tasks (critical, near-term decisions).
Your question, "Can one person do both effectively?" is the critical next step. I believe the leaders you describe, the ones who seem to effortlessly return from a cognitive gosub (an excellent analogy!), are not simply better at processing; they are better at Boundary Management and Delegation. They don't avoid interruptions; they successfully delegate or firewall the ones that don't meet their personal Depth Threshold or require their specific decision.
In other words, they don't have a magic ability to escape Attention Residue; they have an optimized system that minimizes the likelihood of getting caught in the cognitive tax of a low-value interruption. They design their day to allow for maximum Depth Threshold and maximum decision-making impact by pushing the Context Drag (the recovery time, research, and follow-up) onto optimized systems or support staff.
The real answer to the dilemma may be organizational specialization: separating the role that needs an extremely high Depth Threshold (the "Chief Thinker," focused on strategy and reasoning) from the role that needs an extremely low Context Drag (the "Chief Decider," focused on rapid operational response).
Can software and AI magnify the strength of leaders who are not naturally compartmentalized? I am highly optimistic about this. AI's greatest immediate value in the knowledge worker space may be reducing the Context Drag and shrinking the Depth Threshold required for tasks. For example: An AI system that immediately summarizes a long email thread or a complex document upon reopening it acts as an instant "ready-to-resume" note for your working memory. It quickly clears the Attention Residue and removes the "scavenger hunt" for materials, potentially shaving minutes off that 20-23 minute recovery curve.
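To make the "ready-to-resume note" mechanism concrete, here is a toy sketch. In practice an LLM would generate the summary; a crude extractive heuristic stands in here so the example stays self-contained, and the message fields (`sender`, `body`) are assumptions for illustration only.

```python
# Toy "ready-to-resume" note for a reopened email thread: surface the
# latest message and any open questions so the reader can rejoin the
# thread without re-reading it end to end. The heuristic (messages
# ending in "?") is a stand-in for a real summarization model.
from typing import Dict, List


def ready_to_resume_note(thread: List[Dict[str, str]], max_points: int = 3) -> str:
    if not thread:
        return "Empty thread."
    latest = thread[-1]
    open_questions = [
        m["body"] for m in thread if m["body"].rstrip().endswith("?")
    ][-max_points:]
    lines = [f"Last message from {latest['sender']}: {latest['body']}"]
    if open_questions:
        lines.append("Open questions:")
        lines.extend(f"- {q}" for q in open_questions)
    return "\n".join(lines)


thread = [
    {"sender": "Ann", "body": "Can we ship Friday?"},
    {"sender": "Raj", "body": "QA needs one more day."},
    {"sender": "Ann", "body": "OK, who owns the release notes?"},
]
print(ready_to_resume_note(thread))
```

Even this crude version shows where the value lies: the note replaces the "scavenger hunt" phase of re-entry, which is exactly the part of the recovery curve that technology can absorb.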
AI allows us to move from the 'gathering' phase to the 'deciding' phase instantly. It doesn't eliminate the need for deep analysis, but it radically compresses the continuous time block required to achieve a major result.
If technology can effectively handle the "limping" and the "scavenger hunt," it does indeed "lift the yoke" off our cognitive load, allowing more people, not just the "naturally compartmentalized" few, to successfully pivot between complex reasoning and critical decision-making.