Part 1 of a series on cognitive warfare. Part 2 is here.
Whilst drones get the headlines in today’s terrible wars, cognitive warfare is overlooked yet more pervasive. Cognitive warfare, as defined by NATO’s Allied Command Transformation, is the coordinated use of technologies, narratives, and psychological techniques to influence, disrupt, or impair human cognition at the individual, group, and societal levels.
Unlike traditional information warfare, which seeks to control what information is seen, cognitive warfare targets the mechanisms by which individuals interpret and internalize that information. The aim is not to persuade but to destabilize, disabling the brain’s ability to process, judge, and act autonomously. This is not merely a contest over content, but control over thinking.
The recognition of this contest has been dawning for some time. In the early years of this century, NATO planners, military futurists, and cyber-intelligence scholars began to issue quiet alarms: the nature of war was changing. But their forecasts, dense with acronyms and scenarios, sounded abstract, even antiseptic. They talked of “hybrid threats,” “non-kinetic operations,” “gray zones.” What they failed to convey was that war would become ambient. That it would seep not through borders, but through screens. That the next great invasion would arrive not as a blitz, but as a blur. And that the battleground would be us.
NATO now contends with six domains of warfare: land, sea, air, space, cyber, and, most recently, the cognitive.
Cognitive warfare doesn’t advance like a military column or issue commands like a general. It spreads. It adapts. Sometimes it arrives as comedy, sometimes as commentary, always calibrated. Its victory condition isn’t occupation, it’s the quiet dismantling of autonomy. The first casualty is not truth, but our confidence that truth can still be distinguished from distortion, especially in an environment engineered to blur the line between credible information and persuasive fabrication.
Cognitive warfare is not an outgrowth of propaganda; it is propaganda reengineered for the neurological age. The term ‘propaganda’ has an ecclesiastical origin, coined in 1622 when Pope Gregory XV established the Congregatio de Propaganda Fide, the Congregation for the Propagation of the Faith, to counteract Protestant expansion and consolidate Catholic doctrine. At its inception, propaganda was simply organized communication in service of a higher truth, not manipulation. It was only in the twentieth century, especially under totalitarian regimes, that the word took on its modern connotation: a mechanism for psychological conditioning and political control. Cognitive warfare is what propaganda becomes when upgraded with biometric precision, neurocognitive mapping, and generative AI. If twentieth-century psychological operations dropped leaflets, twenty-first-century cognitive ops drop ambient influence: individuated, immersive, and behaviorally optimized.
As NATO’s Innovation Hub has acknowledged, this is a campaign “to degrade the capacity to know.” This isn’t about what is true or false. It’s about disabling the very machinery by which we adjudicate truth. Our thinking.
The shift is conceptual. Information warfare targets what we see. Cognitive warfare targets how we see. The goal is not to implant a lie. It’s to corrode the category of the believable, so that even truth limps, uncertain of its footing.
The instruments?
Generative language models. Synthetic media. Neurotechnologies. Targeted psychographics. And unlike Cold War propaganda, today’s tools don’t broadcast. They infiltrate. They tailor. They learn.
China calls this “mind superiority,” a project that weds surveillance capitalism with engineered narrative saturation. Russia’s “reflexive control” doctrine, meanwhile, perfects the loop of preemptively shaping enemy expectations, then letting them act out their own confusion. This is not Cold War redux. This is something colder, more intimate.
Artificial intelligence is not merely a tool in cognitive warfare, it is a system of amplification, precision, and concealment. Large language models can generate plausible, emotionally resonant narratives at scale, flooding the information ecosystem with a density of content that overwhelms human discernment. These systems do not just repeat talking points; they iterate, adapt, and optimize. They test variations in tone and framing to maximize engagement, creating bespoke realities for every ideological niche.
More insidiously, AI can function as an emotional cartographer. By analyzing biometric data, sentiment signals, and online behavior, adversaries can construct high-resolution psychological profiles. These profiles are not just marketing datasets, they are targeting coordinates. They allow for the delivery of hyper-personalized influence operations, tailored not to persuade en masse but to destabilize by microfracture: alienating the daughter from the father, the friend from the group, the citizen from the polis.
Even silence becomes weaponizable. AI systems trained on engagement metrics can detect patterns of social withdrawal, hesitation, or ambiguity and target them with stimuli designed to provoke anxiety or tribal affirmation. It is no longer a matter of seeding lies, it is a matter of curating emotional states, nudging populations into moods of cynicism, helplessness, and performative outrage. In the hands of malign actors, artificial intelligence is not an oracle. It is an accelerant.
Hackable Brains
Cognitive warfare works because the mind is hackable. It is a system of evolutionary shortcuts, built not for truth but for survival. Confirmation bias, emotional reasoning, cognitive dissonance: these aren’t glitches. They are the exploit surface.
The triad chanted now in military strategy rooms is: assess, access, affect. First, Assess: map the vulnerabilities in cognition, emotion, culture. Next, Access: embed in the attention spaces, newsfeeds, comment sections, everyday rituals. Finally, Affect: shift beliefs, polarize communities, modulate behaviors, not through coercion, but through engineered plausibility.
The same neural insights that help treat trauma can now be reverse-engineered to produce it. The same machine learning models that help us find what we want can help adversaries ensure we never want what matters.
And all this unfolds in a fog, a new fog, denser and stranger than Clausewitz imagined. Not the fog of uncertainty, but of surplus. Too much signal, too little meaning. The aim is not to convince. It is to exhaust. To smother the public sphere under a debris field of competing realities.
The explosion at a Gaza hospital in 2023 is a case in point. Within minutes, social media was aflame with footage, denials, alternate footage, alternate denials. The result was not outrage, but saturation. Truth became not contested but submerged.
This is the new escalation cycle: not bullets, but viral clips. Not boots on ground, but minds in loops.
Cognition belongs to us
There is no Geneva Convention for cognition. The laws of armed conflict were written for tanks, not timelines. But if belief itself can be manipulated, if memory, perception, and inference can be shaped by external actors with strategic intent, then is that not a form of occupation?
What happens when sovereignty no longer means the control of territory, but the integrity of thought? If the brain is now a battlefield, then freedom is not just a political condition. It is a neurological one.
Defense must become cultural. The firewalls of democracy must be cognitive.
This means cultivating epistemic resilience: institutions that prioritize clarity over virality, media systems that reward verification over velocity, and citizens trained not just to think critically, but to recognize when their attention is being monetized and their cognition is being manipulated. That might mean embedding digital literacy as a core pillar of national curricula, or investing in independent civic infrastructure capable of countering adversarial influence in real time. It may even mean public awareness campaigns that treat information hygiene with the same urgency as public health.
This means understanding that attention is now a geostrategic asset. As Churchill foresaw,
“The empires of the future are the empires of the mind.”
He understood long before the digital age that domination would not come solely from force, but from influence, through shaping the way populations think, feel, and respond. Control over physical terrain would one day be eclipsed by control over mental terrain.
Whoever owns that mental terrain owns the present tense. And possibly the future.
This is not a polemic. Polemics presume the battlefield of ideas still exists intact. As Churchill once said during the information fog of World War II,
“In wartime, truth is so precious that she should always be attended by a bodyguard of lies.”
In today’s cognitive battlefield, those lies no longer protect truth; they often outlive it. Cognitive warfare undermines the rules of coherence and rebuttal before any debate begins, saturating discourse, pre-biasing audiences, and dissolving cause-effect logic until argument feels ornamental. This is not debate. It is engineered entropy. And confronting it means understanding that the threat isn’t persuasion, it’s preemption.
Because the true triumph of cognitive warfare would not be the implantation of false beliefs. It would be the erosion of belief. A world where conviction becomes kitsch, doubt becomes ambient, and truth becomes a nostalgic artefact.
The war for the mind is not on the horizon. It is here. But so is the fight to reclaim it.
Stay curious
Colin
In my opinion, the war for the mind has been raging since time immemorial. It is older than humanity itself. And because the collective mind is 'shared,' it can either be infected or enlightened as one.
Excellent essay, thank you Colin! There are so many points you made that I highlighted (for myself) --- I think I'll have to read this again a few times to let it all sink in. So I'm only responding here to the bits I think I understand...
It's true what you say, Gavin, the war on the human mind is an ancient one. We are part of collective Consciousness, so our 'individual minds' are not as individual as they may appear (to us)...
At the same time, this 'soft software war' is new (to us in this era and civilisation at least), and on one hand we are dealing with the same old issues, while on the other we are confronted with new and unknown (to us individually) hacks of the mind/brain/cognition, etc.
If we are each a living 'part' of collective Consciousness, then we also have a potential influence on that living organism to which we inherently belong. Therefore, I assume, if we (individually) can figure out ways to understand ~ and in the same vein neutralise ~ the potential dangers and threats that 'cognitive technology' poses to the human mind, can we not protect ourselves (individually and collectively) against those threats?
This 'cognitive warfare' is a product of human minds, developed with a relatively limited understanding of human Consciousness. You say this 'tool' is used (and has been planned to be used) as an aggressor to overpower and control the human mind, and this seems to be effective in relation to humans who are unaware of the powers of their own minds...
It doesn't mean this 'warfare weapon' has the same effect on everyone.