12 Comments
Veronika Bond

Reflective thinking as a quiet form of rebellion... Indeed! It may soon be the only form of rebellion left to humans, and for this reason alone it needs to be nurtured.

It's not just a refusal to buy into every piece of bullshit presented to us as some fake 'truth'. It also helps us gain inner clarity; it serves as protection against manipulation, emotional and mental subversion, and an increasing sense of incoherence; and it may be a vital form of immunisation against falling victim to the so-called 'mental health crisis'.

Winston Smith London Oceania

"Yet the modern workplace, like much of professional life, has declared war on slowness."

Every job ad I see has "fast paced" in it. Maybe to weed out us poor slowpokes who can't keep up?

//

"The manager who pivots from Zoom call to quarterly report and back again may cover ground quickly, but rarely sees what's beneath their feet. And in bypassing reflection, they risk not only inefficiency, but a dereliction of the deeper responsibilities embedded in their role."

Thus demonstrating that even those in the management class are mere cogs in the machine.

//

"...reflective thinking is a quiet form of rebellion"

And one that few will ever indulge. It's not our nature.

Susan Ritter

These takeaways stood out to me as well, Winston.

And these ones too:

"The reflective mind moves not in circles but in spirals: revisiting, re-evaluating, re-forming." and "And in bypassing reflection, they risk not only inefficiency, but a dereliction of the deeper responsibilities embedded in their role." I certainly experienced this myself during my corporate career. Lessons learned are even more relevant in a commercial nuclear environment, and we used very structured approaches to evaluating errors and building processes. But we were still expected to progress at pace.

The 20th century was about better, faster, cheaper. Humans are just cogs in the wheels of productivity. But you'd think that when we transitioned from a manufacturing to a service-based economy, slower, more nurturing, deeper understanding (through reflection, presumably) would have been the path. Unfortunately it was not, but perhaps this is how AI can help us, if we use it right. By handing off the surface-level, more transactional and routine actions to a machine, we should have more time to reflect on what has been, what is, and what will be.

The One Percent Rule

Very good point, Susan, thank you. The image of the busy manager as a "mere cog in the machine" is a powerful one. It underscores how pervasive this pressure is. Without reflection, professional life risks becoming "mechanical," and your comment suggests this isn't just a risk for individuals but a systemic reality that can ensnare even those in leadership, limiting their agency and capacity for the "deeper responsibilities" mentioned.

I had not thought of the nuclear field, and it is alarming that even in such a high-consequence field, where structured "lessons learned" are vital, the demand to "progress at pace" persists. This speaks volumes about how deeply ingrained the 20th century's "better, faster, cheaper" ethos is (and the "war on slowness" Winston mentions). It highlights the "tragic irony" that even when the need for reflection is acknowledged, the time for it is often sacrificed.

With respect to AI, it could be a tool that liberates us for deeper thinking, rather than simply another instrument for accelerated output or further cognitive outsourcing. However, as you imply with "if we use it right," the outcome hinges entirely on human intentionality and a conscious decision to prioritize that reflective space. Otherwise, the freed-up capacity might simply be filled with more tasks, perpetuating the cycle.

Susan Ritter

Very true, the idea of freeing up time to be more productive is already the "sales pitch". But again, as humans are left with less and less of the productivity responsibilities, perhaps leaning into reflection and philosophical thought will be where we retire (escape) to.

Winston Smith London Oceania

The key words here are "if we use it right", which of course we won't. It would be contrary to the central goal of profiteering.

Susan Ritter

I didn't mean the royal "we", Winston :) Per my comment on Colin's post, I use AI to help me in my own process of reflecting. Rather than abdicating to it, I use it to challenge my own thinking. That, I think, is one "right way to use it".

Susan Ritter

I joined a writing program led by an English professor at Columbia to build credibility in my writing. He believed that credibility comes from deeper thought. Not just about the work itself, but about who I am and the people I serve. Since then it’s moved me far past writing for my business.

Previously, my writing had been direct, opinionated, and focused on conclusions. In his program, we explored a more reflective approach, one that allows space for thought to unfold. It began with something simple but not easy: slowing down, sitting with questions, letting curiosity take the lead, and turning inward before writing. Two years of weekly practice changed more than how I express ideas. It reshaped how I think. I started noticing more nuance, asking better questions, and observing the world with fresh eyes.

There’s no shortage of information today, but so much of it arrives filtered and fragmented. It’s not always easy to see what’s missing. AI has helped expand my field of vision by offering unfamiliar angles, surfacing contradictions, and helping me integrate new insights. In unexpected ways, it has become a valuable partner in this journey by challenging my assumptions, offering alternative perspectives, and providing instant access to a wider body of knowledge. It’s been training alongside me and doesn’t just respond, but reflects back.

When I held a strong opinion, I’d debate it with my AI assistant. I’d offer my reasoning, and it would poke holes, ask sharper questions, or trace the belief back to its roots, and I would challenge it in the same way. Sometimes it would concede part of my view and reshape a response that captured the nuance between our positions, drawing the ideas closer, not further apart. And I would reflect again, with a deeper sense of where my thoughts came from and whether this new idea felt true. This spiral process of revisiting, challenging, re-evaluating, and refining helped move me toward a more grounded understanding of the complexity I was trying to understand or share in my thinking and writing.

What surprised me most was how this practice created a deeper humility in my writing. An honest acceptance that I don’t “know” and probably can’t ever fully know anything. But in that space of not-knowing, there’s room for others to enter. That, to me, is what makes this kind of reflection so essential in today’s polarized world. It softens division. It brings us back together. And when we’re together in open conversation, shared inquiry, and mutual respect, we reclaim something foundational: our ability to shape civil society on our own terms, and protect us all from external powers that leverage division.

The reason I enjoy your writing, Colin, is that you do such a good job of encouraging others to join you in reflection. It's why the conversations in your threads tend to be interesting and thought-provoking. I think many of us are happy to leave the 140-character communications in the dust heap and lean into deeper thinking that takes not just more time but more words to communicate.

The One Percent Rule

This is a truly wonderful and deeply personal account of a journey into reflective practice, Susan. Thank you for sharing it with such clarity and insight. What a powerful testimony to the transformative potential of the very "quiet rebellion" reflective thinking can champion.

Your experience beautifully illustrates and expands upon several core themes:

Your professor's guidance to move beyond mere credibility to "deeper thought... about who I am and the people I serve" is magnificent and perfectly encapsulates the shift from superficial output to meaningful engagement. The commitment to "slowing down, sitting with questions, letting curiosity take the lead, and turning inward" is a practical embodiment of the "disciplined process" I describe. The fact that this two-year practice reshaped how you think is a profound testament to reflection not just as a technique, but as a way of fundamentally reorienting one's mind.

The AI as a reflective partner is a particularly fascinating and hopeful dimension you've added. I had previously asked AI to prepare daily questions to enhance my own reflective thinking, but your methodology is perfect; I will try it. Much of the discourse around AI and thinking focuses on what AI lacks (like true human reflection), or on the risks of outsourcing our cognition, but you describe a proactive use. Your method of debating with your AI assistant, and challenging it in turn, is an exemplary model of using AI not as a crutch but as a sophisticated sounding board, a Socratic tool. It is a great example of what we can get from AI if we engage with it reflectively and critically.

I really like your realization that this practice cultivated "a deeper humility... an honest acceptance that I don’t 'know' and probably can’t ever fully know anything". It is similar to what I described as the "courage to dwell in uncertainty." This "space of not-knowing," as you beautifully put it, is precisely where genuine learning, connection, and intellectual honesty can flourish.

And your other huge point, that reflective humility "softens division" and "brings us back together" by creating space for others, is, I believe, the ultimate promise of widespread reflective practice.

Thank you for your kind words about my writing and the ensuing conversations. It’s precisely because of contributions like yours, thoughtful, open, and grounded in lived experience, that these discussions become so enriching.

Susan Ritter

Thank you, Colin, for the thoughtful response. Over time, the AI takes on your own world view (a world view modified by past interactions). To continue growing, it's important to shake it up with a different idea or perspective, to force yourself and the tool to reopen an idea or find a new one to explore. These tools are designed to reflect us as we train them, basically becoming an even more sophisticated filter than the one designed for Web2 searches. It can get really easy to become complacent when the tool is just agreeing with everything you say, or offering up something you would probably already believe. If you're not debating with the tool, then it's time to become proactive again. Just the question "why do you think that?" is enough to get it leading you into deeper reflections.

Have fun trying this approach. I really enjoy it because I grew up as a debater (some would say arguer :) ). But it's more difficult, and even dangerous, to do that with most people today. AI is a great outlet for practicing debating skills when it is trained for it.

Michael von Prollius

Henry Hazlitt (1894-1993) stated almost 100 years ago that one should spend as much time thinking about a book as one spends reading it. That sounds like Hilty, who claimed that education does not come from reading, but from thinking about what you read. Well, Kahneman is "in the air". I would add that exchanging your impressions, views, and thoughts often helps. You need to find the right people, though. Some seem to be here! And you do not need many.

Marginal Gains

This excellent post highlights an increasingly relevant topic in our rapidly changing world. Let me expand on two key reasons why I think this is so important:

1. Technology has become an extension of ourselves, and we continue to outsource more of our tasks, thoughts, and memories to it each day.

2. With the rise of AI in our personal and professional lives, we are embracing a tool that, for now, cannot reflect or think in the way humans can.

1. The Outsourcing of Memory and Reflection: With the advent of computers, GPS, and smartphones, we have already outsourced significant parts of our memory and cognitive processes to technology. We rely on these tools to store data, images, and directions, which causes us to remember less and less. The phrase "If you don't use it, you lose it" has never been more relevant.

Take the example of London cab drivers. A famous study revealed that their hippocampus—the part of the brain associated with spatial navigation—was larger than average because they needed to memorize the city's complex layout. But if they now rely on GPS, their brain's unique adaptation may diminish, becoming more like the average person's.

This trend extends far beyond cab drivers. During my two-mile walk to work, I see around 300–500 pedestrians. Astonishingly, over 70% are glued to their smartphones while eating, walking, or standing with others. This constant attachment to technology leaves little room for reflection, solitude, or creative thinking.

I'm not exempt from this struggle. Smartphone addiction is real, and I've had to develop habits to reclaim my mental space:

a) I keep my phone in my pocket while walking.

b) I avoid using my phone while driving, other than for directions.

These small habits give me 2–3 hours daily to reflect and think. Every major decision I make is preceded by a walk and/or drive, where I allow myself time to process and consider. This practice is invaluable in a world that increasingly demands instant responses.

Nicholas Carr's book The Shallows (https://tinyurl.com/3jdhu2ax), written 15 years ago but even more relevant today than in 2010, captures this issue perfectly:

"The great danger we face as we become more intimately involved with our computers—as we come to experience more of our lives through the disembodied symbols flickering across our screens—is that we'll begin to lose our humanness, to sacrifice the very qualities that separate us from machines. The only way to avoid that fate, Weizenbaum wrote, is to have the self-awareness and the courage to refuse to delegate to computers the most human of our mental activities and intellectual pursuits, particularly 'tasks that demand wisdom.'"

Carr also warns of how tools shape us:

"Even as our technologies become extensions of ourselves, we become extensions of our technologies. When the carpenter takes his hammer into his hand, he can use that hand to do only what a hammer can do. The hand becomes an implement for pounding and pulling nails."

And another quote attributed to Marshall McLuhan: "We become what we behold. We shape our tools, and then our tools shape us."

The implication is clear: while technology expands our capabilities, it also narrows the range of experiences and skills we cultivate, leaving us dependent on what the tools can do—and nothing more.

2. The Rise of AI and the Outsourcing of Thinking: The second issue is even more profound: AI is a tool that can "think" for us. For the first time in human history, we have created a tool capable of simulating thought processes. While imperfect, AI is effective for several tasks, accelerating its adoption.

But here lies the danger: as we outsource critical thinking to AI, we risk losing our ability to reflect and question. AI, by its very nature, cannot reflect independently. It is optimized for speed and efficiency, not wisdom. Without human oversight, it is prone to producing errors, as highlighted in a recent example:

"A lawyer representing Anthropic AI admitted to using an erroneous citation created by the company's Claude AI chatbot in its ongoing legal battle with music publishers."

If a company like Anthropic AI—whose employees likely understand AI better than most—cannot prevent misuse, what chance does the general population have?

The transition from deterministic systems (where data is reliable and fixed) to probabilistic systems (where outputs are uncertain) will be a significant challenge. If organizations and individuals fail to adapt, we will see more cases of blind trust in AI, leading to serious consequences.

Moreover, when multiple AI systems/models interact within and across organizations, there is a heightened risk of cascading failures. Without human reflection to guide and intervene, these systems could create disasters far worse than human error.

We are in uncharted territory. While AI holds immense potential, we must approach its adoption with caution. Decisions driven solely by cost-cutting or shareholder pressure risk creating a world where we cannot trust the tools we rely on.

To navigate this transition responsibly, we must ensure that thinking, reflecting humans remain in the loop. AI is a tool, not yet a replacement for human judgment. It is up to us to set the boundaries and determine how it is used.

As Carl Sagan once said:

"We've arranged a global civilization in which most crucial elements profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later, this combustible mixture of ignorance and power will blow up in our faces."
