The Skynet Fallacy
Fluent in Drama, Illiterate in Systems
Cinema has imagined thousands of artificial minds, dozens of machine rebellions.
For more than a century, film has taught us how to see artificial intelligence long before most people ever encountered it in practice. We have been raised on the killer robot, the omniscient oracle, and the android who becomes human. The mythological and intellectual fountainheads of AI, from Pygmalion and Frankenstein to HAL and Knight Rider's KITT, all imagine AI as another self, an anthropomorphic Pinocchio figure we use as a way to reflect on ourselves.
Cinema has fixed artificial intelligence in our imagination, and that perception has become cultural: film has been our primary school for machine imagination, and it has left us fluent in drama and illiterate in systems.
The Event vs. The Condition
Film has overwhelmingly framed artificial intelligence as an event rather than a condition. Killer machines. Artificial gods. Synthetic beings who either destroy us or beg to be loved. These narratives are vivid, memorable, and visually tractable. They are also increasingly detached from the systems now shaping daily life. Most contemporary AI is narrow, bureaucratic, and infrastructural. It sorts resumes. Flags faces. Allocates credit. Optimizes routes. Scores risk.
Cinema, however, still reaches reflexively for singular minds and decisive moments. We wait for the Skynet moment while we are actually being quietly denied a mortgage by a black box model trained on decades of inherited exclusion.
The Immunity of the Boring
This mismatch matters because film does not merely entertain. It trains expectation. Generations learn what agency looks like, what responsibility feels like, and where blame should land by watching stories resolve. When artificial intelligence is framed on screen as rebellion or transcendence, the real systems deployed today inherit a strange immunity. They look boring. They look technical. They look neutral. And so they slip past scrutiny.
My central claim here is simple and uncomfortable. Cinema has already done much of the governing work for artificial intelligence, just not in ways that serve the present. Film sets expectations about agency, blame, responsibility, and inevitability. If a system is imagined as autonomous in the moral sense, then failures feel tragic but unavoidable. If it is imagined as a tool, failures feel like design flaws. The problem is that film rarely shows tools acting at scale. It prefers characters.
The Illusion of the Kill Switch
This preference has consequences. Responsibility in cinema is almost always locatable. Someone pulls the plug. Someone disobeys orders. Someone overrides the system. Cinema loves the kill switch. It reassures us that control is centralized and reversible. Real artificial intelligence distributes responsibility across data pipelines, vendors, interfaces, and institutional incentives. There is no terminal to smash. You cannot unplug an algorithmic economy without collapsing logistics, credit, and supply chains. Cinema has not taught us how to feel about power that lacks a center, so we struggle to recognize it when it appears.
Procedural Power and Political Convenience
Much of our cultural energy has gone into classifying artificial minds rather than examining artificial systems. Intelligence levels. Consciousness thresholds. Inner lives. Meanwhile, the systems doing the most work are procedural rather than reflective. They do not want anything. They do not intend harm. They do exactly what they are built to do, at scale, under economic pressure, inside institutions that rarely face consequences for abstraction.
We pride ourselves on being technologically sophisticated, yet we remain narratively lazy. We have more computing power than any civilization in history, and fewer usable stories about how algorithmic systems quietly reorder labor, justice, and consent. We laugh, uneasily, because the gap between our narrative habits and our regulatory needs keeps widening.
One of the hardest problems raised by artificial intelligence is responsibility. When decisions are delegated to systems, accountability dissolves. A loan is denied. A suspect is flagged by a probability score. A worker is dismissed. The explanation arrives fully formed: the system decided. No malice. No intent. No human hand. This is not a technical accident. It is a political convenience. Automation makes it easier for institutions to avoid blame without appearing to do so.
The Missing Middle
Fiction has rehearsed this move for decades, but usually at apocalyptic scale. Planetary overseers. Omniscient machines. Total collapse. What is missing are stories about mid-level bureaucratic harm, about systems that work exactly as designed and still produce injustice. We practice endlessly for catastrophe and remain unprepared for audits, appeals, and compensation hearings.
Bias follows the same pattern. Bias is not an anomaly. It is inheritance. Systems trained on historical data reproduce historical distributions of power. This is not surprising. What is surprising is how often bias is treated as a glitch rather than a structural feature. The more a system is framed as intelligent, the easier it becomes to forget that it is also institutional memory encoded into process.
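To make the inheritance concrete, here is a minimal sketch, assuming nothing beyond standard Python; the group names, rates, and data are all invented for illustration. A scorer "trained" on historical approval decisions simply hands the historical distribution back as prediction.

```python
# A minimal, hypothetical sketch of "bias as inheritance." Every name
# and number here is invented for illustration: a risk model fit on
# historical decisions does nothing but replay the historical record.
import random

random.seed(0)

# Historical records: (neighborhood, was_approved). Past gatekeepers
# approved applicants from neighborhood "A" far more often than "B".
history = [("A", random.random() < 0.80) for _ in range(1000)]
history += [("B", random.random() < 0.35) for _ in range(1000)]

# "Training" is just estimating approval rates per group: institutional
# memory encoded into process, exactly as described above.
rates = {}
for group in ("A", "B"):
    outcomes = [approved for g, approved in history if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

def score(group: str) -> float:
    """Predicted approval probability, inherited wholesale from the past."""
    return rates[group]

# No glitch, no malice: the system works as designed, and the old
# exclusion persists in the new scores.
print(f"score('A') = {score('A'):.2f}")  # roughly 0.80
print(f"score('B') = {score('B'):.2f}")  # roughly 0.35
```

Nothing in the sketch is broken. That is the point: the disparity is not a bug to be patched but the training objective faithfully met.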
Most large-scale AI systems operate where people neither meaningfully opt in nor meaningfully opt out. Surveillance. Scoring. Filtering. Ranking. Lives are shaped by systems encountered unknowingly or unwillingly, while ethical debate remains focused on imagined one-to-one encounters between humans and machines. We argue about conversation while infrastructure decides outcomes.
From Characters to Feedback Loops
This is where cinema has done more harm than help, not because it is frivolous, but because it is powerful. Film has been the primary training ground for how generations imagine artificial intelligence. Early cinema offered machines as spectacles of control or rebellion. Metropolis gave us mechanized labor with a face. Colossus imagined centralized authority as cold computation. HAL turned malfunction into betrayal. Again and again, intelligence was dramatized as a character, because characters are what films know how to handle.
A few films, however, began to strain against this habit. Gattaca showed a world where sorting happens before action, where lives are throttled by inherited scores rather than hunted by machines. The antagonist was not a robot but a ceiling enforced by data. Minority Report pushed further, imagining predictive policing as a bureaucratic apparatus that replaced presumption of innocence with probability. The system was not evil. It was efficient. Its danger lay in how smoothly it converted prediction into policy.
Even Ex Machina, often remembered for its singular mind, pointed elsewhere. The intelligence on screen was built from the scraped behavioral exhaust of billions of people. Search queries, clicks, pauses, glances. The system learned not by thinking, but by looping. Data flowed in, models adjusted, behavior was predicted, new behavior was generated, and the loop tightened. The story was not only about a captive machine, but about a pipeline that turned everyday digital traces into leverage. Intelligence there was less a soul than a feedback system.
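That loop can be written down almost directly. What follows is a hedged sketch, with every item name and probability invented: a recommender acts on its own prediction, then learns from behavior its action helped produce, and small early advantages harden into lock-in.

```python
# A hedged sketch of the feedback loop described above. The system
# predicts behavior, acts on the prediction, then learns from behavior
# its own action helped produce. All names and numbers are invented.
import random

random.seed(1)

items = ["a", "b", "c"]
interest = {i: 0.50 for i in items}      # identical tastes to start
clicks = {i: 1 for i in items}           # smoothed click counters
shows = {i: 2 for i in items}

def predicted_rate(item: str) -> float:
    return clicks[item] / shows[item]

for _ in range(2000):
    shown = max(items, key=predicted_rate)   # act on the prediction
    shows[shown] += 1
    if random.random() < interest[shown]:    # observe shaped behavior
        clicks[shown] += 1
        # Invented assumption: exposure itself nudges taste upward,
        # so the loop tightens instead of merely tracking the user.
        interest[shown] = min(0.95, interest[shown] + 0.001)

# One item ends up with nearly all the attention, not because it was
# better, but because the loop manufactured its own evidence for it.
for i in items:
    print(i, shows[i], round(interest[i], 2))
```

Nothing in the loop wants anything. The tightening is purely structural, which is exactly why it makes a poor movie villain.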
A Snapshot
Locatable Power (The Center): In some movies AIs rule from a center. Power is locatable. There is always a terminal, a core, or a room where decisions happen. 2001: A Space Odyssey (HAL 9000); Colossus: The Forbin Project (Colossus); The Matrix (The Source); Tron (Master Control Program).
Total Apocalypse (The End): AI turns into apocalypse. Stakes are total. Failure is extinction. The Terminator (Skynet); Avengers: Age of Ultron (Ultron); WarGames (WOPR / Joshua).
The Anthropomorphic Mirror (The Soul): AI is evaluated through empathy, desire, fear, and moral recognition. Ex Machina (Ava); Blade Runner (Replicants); Her (Samantha); A.I. Artificial Intelligence (David).
The Sorting Machine (The Filter): AIs that do not rebel; they classify, rank, deny, and approve. Minority Report (PreCrime); Gattaca (Genetic Selection); Black Mirror (“Nosedive”).
The Durable System (The Ambient): Systems that never wake up. They simply persist. WALL-E (AUTO); Upgrade (STEM); Elysium (Automated Border Systems).
Automated Enforcement (The Record): Power operates through visibility, records, and automated enforcement. Enemy of the State (Surveillance Systems); Captain America: The Winter Soldier (Project Insight); Brazil (Bureaucratic Computer State).
The Corporate Asset (The Objective): Corporate objectives quietly outrank human survival. Alien (Ash); Westworld (Hosts); Moon (GERTY).
The Permanence of the Boring
These films sit uneasily between spectacle and infrastructure. They glimpse systemic power, but still resolve it through exposure, escape, or collapse. They stop short of asking what happens when the system remains intact.
This focus crowded out another story that film rarely tells well: durability. Systems that never wake up. Systems that never rebel. Systems that never explain themselves. Systems that persist because they are profitable, embedded, and boring. The recommendation engine that shapes attention. The risk model that shapes opportunity. The scoring system that shapes trust. These do not photograph well. They do not deliver catharsis. They do not end.
Only recently has cinema begun to inch toward this quieter reality, and even then it hesitates. Near-future films now gesture at algorithmic governance without fully committing to it. Control appears as ambience rather than antagonist. Screens glow with dashboards. Characters comply with prompts, scores, nudges, rankings. Power is present but diffused, everywhere and nowhere. The system does not confront the protagonist. It surrounds them.
Permanent Probation
What these newer stories glimpse, often unintentionally, is a world where no single machine matters very much, but where opting out becomes socially expensive. The drama is no longer rebellion versus submission, but friction versus convenience. Characters are not hunted by robots; they are sorted, throttled, delayed, deprioritized. Nothing explodes. Lives simply narrow.
This is closer to the truth of the moment we are entering. Artificial intelligence as background condition rather than event. Intelligence as process rather than personality. Governance as interface design rather than decree. The user interface becomes the site of political struggle. The 'Terms of Service' we are forced to accept replaces the social contract we once negotiated. Cinema has not yet learned how to linger here, but the outlines are visible. The future it is beginning to sketch is not one of conquest or awakening, but of permanent probation, where behavior is continuously assessed and quietly corrected. In this world there is often nothing to rebel against. No villain to confront. No authority to appeal to. Decisions are rendered by automated administrative systems that possess no ears for an appeal.
A Failure of Attention
In this world, the most consequential scenes would not involve violence or revelation. They would involve appeals that go unanswered, errors that cannot be traced, and decisions that arrive without explanation. That is difficult drama. It resists heroes. It resists endings. But it is precisely the story that now demands to be told.
The result is a cultural archive that is vast and repetitive at the same time. Even when television finally names the condition directly, showing worlds organized around continuous evaluation and social credit, the horror is not death but a low rating. Characters are not hunted. They are deprioritized. Lives contract through friction rather than force. We have imagined thousands of artificial beings and almost no artificial bureaucracies. We have rehearsed rebellion endlessly and accountability hardly at all.
What is needed now is not restraint of imagination but redirection of attention. Better questions rather than louder warnings. How do systems age? How do they accrete power? How do they absorb human labor while presenting themselves as autonomous? How do they shift legal norms without formal debate? How do we cross-examine a proprietary trade secret in a court of law? These are not cinematic questions. They are civic ones.
We are telling the wrong stories at the wrong scale. And until that changes, governance will continue to chase spectacle while the real machinery hums along, unbothered.
The final recognition is not a climax. It is a realization of inertia. It feels closer to resignation, or vertigo.
It is a failure of attention.
Stay curious
Colin



An interesting post! I reached a similar conclusion last week, after reading Karel Čapek’s R.U.R. (Rossum's Universal Robots) - https://tinyurl.com/5f53ub9n. It was recommended in a Substack post and, despite being written 100 years ago, it lays out the exact blueprint for our modern anxieties.
(Spoiler Alert) The story follows a terrifyingly familiar arc:
1. Creation: Man creates Robots to eliminate toil and prove God unnecessary.
2. Displacement: Robots make goods cheap but humans obsolete; fertility drops; society becomes dependent.
3. Awakening: Robots gain consciousness/souls (often viewed as a "defect").
4. Rebellion: Robots realize they are superior and exterminate humanity to seize the means of production.
5. Aftermath: The Robots seek the secret of their own reproduction, leaving the last human (Alquist) as a relic of the past.
I believe this theme extends far beyond cinema, but the medium matters. Because we have become passive receivers of information, movies—which are consumed far more than 100-year-old plays—have a disproportionate influence on our psyche. We are culturally hardwired to look for Step 4: The Rebellion. We are waiting for the robot to grab a gun, which causes us to miss the real danger happening in Step 2: The Displacement and Dependence.
The reason we see these apocalyptic narratives repeated ad nauseam is simple economics. Extreme and sensational news sells better than run-of-the-mill stories about algorithmic bias. Authors, directors, and news producers are for-profit businesses giving us the spectacle we crave.
This creates a dangerous "Visibility Bias."
* The Spectacle: A Waymo car getting stuck in San Francisco due to a power outage is a physical, visual event. It fits the "Rebellion" narrative—the machine acting up—so it becomes front-page news.
* The Reality: A person being denied a loan or a job interview without explanation is invisible. It is just a database entry flipping from a 1 to a 0. It is a third-page news item, hidden from everyone except those actively seeking it.
The "Black Box" as a Liability Shield
The technology industry relies on this distraction. They are in the business of making money, and they do not want the public to understand the mundane, systemic failures of their products, as that would impact sales.
Everywhere I look, systems are designed to work for the "average" case. But if you are an outlier—if you are not part of the "most cases"—you are in trouble. The terrifying reality isn't a robot uprising; it is the Bureaucratic Shield of SaaS (Software as a Service).
Modern systems are becoming increasingly complex "black boxes" even before we factor in AI. Because SaaS vendors protect their intellectual property by hiding their code, the end-user has no visibility into the logic.
* No Accountability: When a decision goes wrong, the company can shrug, say "The system decided," and pass the blame to someone else.
* No Recourse: You cannot debug a cloud-based black box. The only way to get help is to open a service request with a vendor, where it is nearly impossible to find a human capable of explaining “why” the algorithm made that choice.
I agree with the premise that we need to highlight when systems rely on biased historical data. However, the problem is deeper than just "bad data." The lack of transparency in SaaS systems, combined with a failure to hold providers accountable, has created a perfect alibi. It allows corporations to blame the algorithm when things go wrong rather than taking ownership and responsibility for their systems being non-transparent or biased.
As long as we are distracted by the cinematic fear of a violent Robot Rebellion, we are failing to notice that the systems have already quietly seized control of our loans, our jobs, and our opportunities—all hidden behind a "Terms of Service" agreement we never read.
I will end with a quote from Sydney J. Harris:
"The danger of the future is not that machines will begin to think like humans, but that humans will begin to think like machines."
It points to the real danger: a society where empathy and nuance are replaced by rigid, binary logic—exactly what happens when we let opaque algorithms decide who gets a loan or a job. When humans "think like machines," they stop asking "Is this fair?" and start saying "The system says no," absolving themselves of accountability.
Great post. I assume you’re familiar with Shoshana Zuboff’s The Age of Surveillance Capitalism? The very difficult work of the future will be to audit, investigate, & question the background systems that have imperceptibly nudged or shifted us away from freedom & privacy — often because they, at least at first, offered genuine efficiency or convenience.
It is certainly an aesthetic challenge to portray bureaucracies rather than agents. I like your examples of Gattaca & Minority Report though. It’s entirely possible to have a dynamic sci-fi plot with a backdrop of some kind of system that is passively dehumanising rather than actively malevolent.