AI, Influence, and the Disassembly of Truth
NATO Virtual Manipulation Brief
AI agent swarms almost instantly tailor, schedule, and amplify content. Russian-aligned narratives are increasingly focused on undermining NATO’s credibility, painting the alliance as an aggressive, untrustworthy force that is responsible for escalating tensions and endangering global peace.
We have all seen them: a grainy, off-balance video emerges on X, Facebook, or Telegram. It shows the French president in a compromising position. The file metadata is stripped. The lighting is wrong, but plausibly enough to create belief. It has the warmth of authenticity without the burden of truth. By the end of the day it is not just on Telegram: it has been reposted to Russia’s VK, embedded in YouTube commentary, clipped into TikTok reaction videos, and translated into English on X. It is a fiction, assembled by machine, but it carries the weight of truth. By the time it is debunked, it has already done its work; the doubt lingers.
The highly detailed Virtual Manipulation Brief 2025, produced by NATO’s Strategic Communications Centre of Excellence, dissects the machinery that makes this inevitable. Between June 2024 and May 2025, analysts “collected over 11 million posts and comments across 10 key topics… roughly five times more data than in the previous time period.” Hostile narratives, the report shows, are “designed, timed, and deployed” with military-like discipline: Kremlin-aligned messaging bursts were roughly twice as frequent as their pro-Western counterparts, and about three times as frequent for posts that appeared on more than one platform.
Opportunistic
The Kremlin’s campaigns unfold like a grim calendar. June’s “Traditional-Values Defender” paints Russia as “a defender of traditional values and sovereignty” while portraying the US as “corrupt, degenerate, and controlled by globalist interests.” July’s “Globalist Plot Alarm” warns of “engineered pandemics” and “future F-16 strikes on Crimea.” September’s “NATO Aggression Spin” frames the Alliance as “aggressive and Russophobic” while mocking Ukraine’s leadership. Each peak coincides with a geopolitical beat (the Vilnius summit, Baltic cable sabotage, the Trump-Zelenskyy Oval Office meeting), ensuring propaganda rides the slipstream of genuine news.
China’s work is measured but no less deliberate. Its Indo-Pacific messaging leans on a fixed vocabulary: “cold_war_mentality,” “zero_sum_approach,” “destabilise.” The Brief notes these phrases “advance a coherent message: NATO is an exclusive Western club locked in Cold-War thinking, expanding eastward, meddling in others’ affairs and sowing instability.”
While Russia’s style is emotional and defamatory, China’s is calm and strategic, yet both employ coordinated cross-platform cascades to create the illusion of broad consensus.
A Swarm of Misinformation
Platform dynamics matter. While X had the highest number of posts, “YouTube and Telegram showed greater engagement and reach,” demonstrating that influence is not just about volume but about cultivating loyal, interactive audiences. The “broad amplification swarm” on X is built on simplicity: the report’s Profile Taxonomy shows that 98–99% of hostile accounts there are mere reposters. Original content, just 18.7% of items, is created largely on VK, OK, and Telegram, where it is seeded for later amplification.
Artificial intelligence is now the accelerant.
“Approximately 15% of our current computer code for data collection and analysis was generated by AI,”
NATO analysts note. Hostile actors use the same tools offensively: deepfakes, AI-generated influencers, and synthetic news clips. Beyond current tactics, the report warns of “generative AI agent swarms” that could “almost instantly tailor, schedule, and amplify content,” pushing manipulation faster than defenders can respond.
The Baltic case study offers a glimpse of the stakes. Of 273,000 posts on “NATO & Baltics,” more than two-thirds on Telegram, VK, and OK were anti-NATO. Hostility spiked in mid-October 2024 after Estonian officials mentioned a “pre-emptive strike” on Russia, triggering “the highest yearly proportion of explicit violence threats (around 18%).” Comments urged executions and assassinations: “Когда начнём отстреливать эту мразь???” (“When will we start shooting this scum?”)
These campaigns are ruthlessly opportunistic, twisting narratives to exploit breaking news. The 2024 U.S. election serves as a prime case study. Before the vote, pro-Kremlin messages were just 1.2% of the conversation. After, they tripled, while “negotiation-keyed mentions” exploded by a factor of 39. The rhetoric shifted from portraying NATO as an aggressor to accusing it of rigging the election and plotting nuclear war.
“The themes overall evolve from glorifying Russia as a moral stronghold and casting Ukraine as a failed Western puppet, to portraying NATO and the EU as globalist tools orchestrating war, societal collapse, and digital oppression.”
Exploited
AI agents embedded within platforms are changing discourse mechanics. On X, users can summon Grok, Musk’s large language model, into a thread. The Brief notes:
“interactions with these bots are not limited by fact-checking—users also use them to settle debates, generate images, and so on,” despite risks of “hallucinations, inconsistent reasoning, and occasional biased output.”
The solution?
The recommendations are pragmatic: tailor countermeasures to platform-specific dynamics, monitor and disrupt cross-platform coordination, integrate “narrative-environment mapping” with rapid-response capabilities, and expand media literacy. But the underlying message is blunt: a falsehood that races through five platforms in twelve hours cannot be countered in twenty-four.
This is not just a contest of content; it is a contest of rhythm and reach. As the Brief concludes:
“Pro-Russian and pro-Chinese narratives are increasingly aligned, spreading rapidly across platforms to target audiences… The rise of deepfake technologies and AI-driven content creation further complicates the battle.”
The curtain in this theatre of cognitive war never falls. The players, both human and machine, do not rest. The plot is rewritten in real time. In such a war, the winner is not the side with the better facts, but the side that can make its version of reality arrive first, repeat most, and linger longest.
Stay curious
Colin
PS - I strongly recommend reading the brief; its data and images can help you get better at identifying misinformation.



