13 Comments
Marginal Gains

An interesting post! I reached a similar conclusion last week after reading Karel Čapek’s R.U.R. (Rossum's Universal Robots) - https://tinyurl.com/5f53ub9n. It was recommended in a Substack post and, despite being written 100 years ago, it lays out the exact blueprint for our modern anxieties.

(Spoiler Alert) The story follows a terrifyingly familiar arc:

1. Creation: Man creates Robots to eliminate toil and prove God unnecessary.

2. Displacement: Robots make goods cheap but humans obsolete; fertility drops; society becomes dependent.

3. Awakening: Robots gain consciousness/souls (often viewed as a "defect").

4. Rebellion: Robots realize they are superior and exterminate humanity to seize the means of production.

5. Aftermath: The Robots seek the secret of their own reproduction, leaving the last human (Alquist) as a relic of the past.

I believe this theme extends far beyond cinema, but the medium matters. Because we have become passive receivers of information, movies—which are consumed far more than 100-year-old plays—have a disproportionate influence on our psyche. We are culturally hardwired to look for Step 4: The Rebellion. We are waiting for the robot to grab a gun, which causes us to miss the real danger happening in Step 2: The Displacement and Dependence.

The reason we see these apocalyptic narratives repeated ad nauseam is simple economics. Extreme and sensational news sells better than run-of-the-mill stories about algorithmic bias. Authors, directors, and news producers are for-profit businesses giving us the spectacle we crave.

This creates a dangerous "Visibility Bias."

* The Spectacle: A Waymo car getting stuck in San Francisco due to a power outage is a physical, visual event. It fits the "Rebellion" narrative—the machine acting up—so it becomes front-page news.

* The Reality: A person being denied a loan or a job interview without explanation is invisible. It is just a database entry flipping from a 1 to a 0. It is a third-page news item, hidden from everyone except those actively seeking it.

The "Black Box" as a Liability Shield

The technology industry relies on this distraction. They are in the business of making money, and they do not want the public to understand the mundane, systemic failures of their products, as that would impact sales.

Everywhere I look, systems are designed to work for the "average" case. But if you are an outlier—if you are not part of the "most cases"—you are in trouble. The terrifying reality isn't a robot uprising; it is the Bureaucratic Shield of SaaS (Software as a Service).

Modern systems are becoming increasingly complex "black boxes" even before we factor in AI. Because SaaS vendors protect their intellectual property by hiding their code, the end-user has no visibility into the logic.

* No Accountability: When a decision goes wrong, the company can shrug and say, "The system decided," and pass the accountability and blame to someone else.

* No Recourse: You cannot debug a cloud-based black box. The only way to get help is to open a service request with a vendor, where it is nearly impossible to find a human capable of explaining “why” the algorithm made that choice.

I agree with the premise that we need to highlight when systems rely on biased historical data. However, the problem is deeper than just "bad data." The lack of transparency in SaaS systems, combined with a failure to hold providers accountable, has created a perfect alibi: it allows corporations to blame the algorithm when things go wrong rather than take ownership of systems that are opaque or biased.

As long as we are distracted by the cinematic fear of a violent Robot Rebellion, we are failing to notice that the systems have already quietly seized control of our loans, our jobs, and our opportunities—all hidden behind a "Terms of Service" agreement we never read.

I will end with a quote from Sydney J. Harris:

"The danger of the future is not that machines will begin to think like humans, but that humans will begin to think like machines."

It points to the real danger: a society where empathy and nuance are replaced by rigid, binary logic—exactly what happens when we let opaque algorithms decide who gets a loan or a job. When humans "think like machines," they stop asking "Is this fair?" and start saying "The system says no," absolving themselves of accountability.

The One Percent Rule

Thank you for this brilliant addition to the conversation. I totally agree with you about the 'Black Box': it isn't just a technical limitation; it’s a political (and corporate) asset. By framing AI as a complex, autonomous 'mind,' corporations can essentially launder responsibility. As you noted, 'The system decided' is the ultimate alibi.

Your point about R.U.R. is great; we have been stuck in that same narrative loop for a hundred years. We keep bracing for Step 4 (The Rebellion) while Step 2 (The Displacement and Dependence) has already become our floor and our ceiling. The 'Bureaucratic Shield of SaaS' is the perfect term for it. It’s not Skynet we should fear; it’s the service request that never gets a human response.

I really liked how you framed the 'Visibility Bias.' A car stuck in an intersection is a photograph; a loan denied by a database is a ghost. We are narratively equipped to fight things we can see, but we are completely unarmed against the 'binary logic' you mentioned.

That Sydney J. Harris quote is the perfect coda to this. The real 'singularity' isn't when machines wake up; it’s when humans go to sleep and start deferring their moral agency to a 1 or a 0. We’ve traded the social contract for a 'Terms of Service' agreement, and as you said, the technology industry relies on that distraction to keep the liability shield intact.

One of the best cinematic examples of your point about the 'Bureaucratic Shield' is Terry Gilliam’s Brazil. While Brazil is often categorized as a "dystopian" film, it is specifically a story about algorithmic error in a pre-algorithmic world. The entire plot is kicked off by a literal "bug" in the system: a fly gets jammed in a typewriter (the "hardware"), causing a clerical error that swaps the name of a suspected "terrorist" (Tuttle) with that of an innocent citizen (Buttle). When the innocent Mr. Buttle is abducted and killed by the state because of this error, the bureaucracy doesn't mourn or investigate; it simply focuses on the administrative paperwork of the mistake. It perfectly illustrates your point: the system isn't "evil," but its complexity creates a shield that makes human accountability impossible.

Marginal Gains

I will watch Brazil. It was one of the few movies on your list I had not seen, apart from the Marvel movies, which I don't watch (I saw only one or two early on). Thank you for the recommendation.

I think I posted the note below in reply to a note from Cathie: https://substack.com/@microexcellence/note/c-189405910?r=1g6wqv&utm_source=notes-share-action&utm_medium=web

And as Morgan Housel said, “Everything’s been done before. The scenes change, but the behaviors and outcomes remain the same... The biggest lesson from the 100 billion people who are no longer alive is that they tried everything we’re trying today. The details were different, but they tried to outwit entrenched competition. They swung from optimism to pessimism at the worst times... Same stuff that guides today, and will guide tomorrow."

The One Percent Rule

Glad you're going to watch Brazil, and I agree on the Marvel movies.

Exactly. As Winston mentioned, the architecture is old, and as Housel says, the behaviors are permanent. We are narratively distracted by the high-tech 'details' while the 'same stuff', the delegation of responsibility and the quest for efficiency, quietly reorders our world. We think we are in a sci-fi movie, but we’re actually in a very old historical loop. The only difference is that now, the loop is automated.

Jamie Freestone

Great post. I assume you’re familiar with Shoshana Zuboff’s The Age of Surveillance Capitalism? The very difficult work of the future will be to audit, investigate, & question the background systems that have imperceptibly nudged or shifted us away from freedom & privacy — often because they, at least at first, offered genuine efficiency or convenience.

It is certainly an aesthetic challenge to portray bureaucracies rather than agents. I like your examples of Gattaca & Minority Report though. It’s entirely possible to have a dynamic sci-fi plot with a backdrop of some kind of system that is passively dehumanising rather than actively malevolent.

The One Percent Rule

Thank you Jamie. That is a great reminder; Zuboff’s 'Age of Surveillance Capitalism' is the real-world sequel to the cinematic warnings I sense we have been ignoring. The 'aesthetic challenge' you mentioned is the reason for our collective blind spot: it’s simply harder to tell a story about a system than a story about an agent. But as Minority Report showed us, the most effective bureaucracies are the ones that feel 'efficient' until the moment they narrow your life. We need more stories that linger in that 'passive' space, because that’s where our privacy and freedom are currently being negotiated.

Neural Foundry

Brilliantly articulated. The framing of AI as 'event versus condition' cuts right to the core of our collective blind spot. I've been working in fintech, where I constantly see loan denials defended with "the model decided" and there's literally no human to appeal to. The kill-switch metaphor is spot-on because we're culturally trained to expect centralized control when power has already decentralized across data pipelines and vendor chains. We keep scanning for Skynet while getting quietly sorted into probability buckets.

The One Percent Rule

Thank you. Your experience in fintech is the 'condition' I’m talking about. While the public stays focused on the 'event' of a robot uprising, you are seeing the quiet, daily reality of humans being replaced by models that offer no recourse. We have been trained to look for Skynet’s face, but as you have seen, power today doesn't have a face, it just has a data pipeline. Thank you for grounding the 'Skynet Fallacy' in such a practical, consequential reality.

Winston Smith London Oceania

Great post. A lot to unpack here. What's especially salient for me is how this is all being quietly foisted onto us.


"You cannot unplug an algorithmic economy without collapsing logistics, credit, and supply chains".

Not to mention the electric grid. The purveyors/profiteers have pushed to automate everything so hard for so long that one bug can bring everything crashing down in an instant. So can a hacker. The insidious part is that the automation predates AI. We've been computerizing everything for decades now.


"We have more computing power than any civilization in history, and fewer usable stories about how algorithmic systems quietly reorder labor, justice, and consent".

Especially consent.


"It is a political convenience. Automation makes it easier for institutions to avoid blame without appearing to do so".

Digital misdirection!


"The recommendation engine that shapes attention".

Triggering global political upheaval.


"The 'Terms of Service' we are forced to accept replaces the social contract we once negotiated".

Reams of legalese so dense they can bring even lawyers to tears trying to slog through them.

The One Percent Rule

Great breakdown, Winston. 'Digital Misdirection' is a fantastic way to put it. You're right that the insidious part is how long this has been in the works, the automation predates the AI.

Your point about the 'reams of legalese' is the perfect bridge to the 'Bureaucratic Shield' from the comments with MG. When the social contract is replaced by a Terms of Service agreement so dense that it brings even lawyers to tears, we have effectively moved governance out of the public square and into a black box. You are right to highlight the recommendation engines, too; it’s the ultimate irony that we have the most computing power in history, yet we use so much of it just to manage (and manipulate) our own attention. It's not a bug; it's the intended architecture.

Winston Smith London Oceania

What's ironic is that it's really an old architecture that existed long before computers: control of the political, physical, and psychological environment by big capital. Computers and AI just make that control more efficient.

The One Percent Rule

Exactly. We are looking for a new monster when we should be looking at how the old ones have upgraded their tools. Efficiency is the ultimate shield for power; once a system becomes 'efficient' enough, we stop questioning its morality and just accept its outcomes.

Winston Smith London Oceania

It's going to be a long, hard road to get this system under control. While it's been done in the past, the power brokers never fully relinquished their excess power - and it's metastasized into the "broligarchy".
