There are many cases in history of something meant to serve everyone drifting into the hands of a few. Electricity, elements of the internet, vaccines: each underwent such a metamorphosis as the logic of public benefit lost ground to the rhetoric of proprietary gain.
Today, artificial intelligence stands at that same inflection point. And unlike its predecessors, it is arriving not as a tool but as an infrastructure, a latticework of compute, data, and models upon which the future will be built. The risk, if left unaddressed, is that we build this lattice with only a handful of actors holding the blueprints.
The core of the dilemma lies not in the technology, but in the architecture of power. It is not simply that a few companies build large AI models; it is that they increasingly control the underlying terrain: compute capacity, proprietary datasets, foundational architectures. The field of AI is no longer defined by invention alone but by who controls the preconditions for scientific advancement. And those preconditions are consolidating into the hands of a small number of companies, with the consequence that the AI systems we rely on begin to reflect a narrowing of values, incentives, and visibility.
The only meaningful antidote to this creeping enclosure is what we might call “Civic AI”: not merely AI developed by or for the state, but systems engineered with the public in mind at every layer. Compute must not be a scarcity auctioned off to the highest bidder. Datasets must not be the secretive spoils of web crawlers harvesting our online footprints and low-paid labor. Models must not be obscured from audit and adaptation by legal obfuscation. What is required is not just access, but infrastructure: publicly provisioned, openly governed, and constructed with permanence in mind.
Some will balk at the idea of state intervention. And yes, many of us have legitimate grievances with slow, bureaucratic, or poorly maintained public systems. But without public investment, there would be no roads, no railways, no electrical grids. Why are we outsourcing the infrastructure of cognition, models that will become capable of reasoning, generating, and deciding, to a cluster of firms driven by market logic rather than public mandate? It is a choice masquerading as inevitability. And it is one that must be reversed.
Taking Back Control
History offers guidance. In the 1930s, as rural America remained dark while cities lit up, the U.S. created the Rural Electrification Administration, bringing power to forgotten places not because it was profitable, but because it was necessary. Likewise, ARPANET, the precursor to the modern internet, began as a state-backed initiative to create resilient communication infrastructure. What began as public provisioning became the foundation for private innovation. Today’s equivalent opportunity is Civic AI.
To seize it demands more than regulation; it demands construction. Something like a “Commons AI Strategy” must be built from the ground up. At the base is compute: high-performance infrastructure capable of supporting training and inference workloads at scale. Just as we do not expect every town to build its own power plant, we should not expect every institution to train its own LLM. Shared national and international compute infrastructure, owned, governed, and distributed in the public interest, is foundational.
Would this look like a state-run utility? Or perhaps something more akin to the BBC, a publicly chartered institution with operational independence, funded to serve long-term societal needs rather than quarterly returns? It could also resemble a public utility commission, with democratic oversight over pricing, access, and equitable distribution. However the model evolves, it must avoid capture, and it must be built to endure. Governance structures must be transparent and participatory, ensuring that global civil society, not just states and corporations, has a voice in shaping access, standards, and priorities.
Ethical Data
With respect to data, we need to shift from utility to commons. High-quality, representative, ethically sourced and, crucially, compensated datasets must be treated as public goods. Yet what counts as “ethical” is far from settled. Questions of consent, provenance, and global data equity must be addressed not with slogans but with institutions capable of enforcing evolving standards. That means investing in dataset stewardship through dedicated public bodies that curate, audit, and maintain open data resources with the same care we afford to libraries or environmental reserves. These bodies could be nested within a broader Civic AI Institutes framework.
Importantly, this stewardship must also reconcile the tension between openness and privacy. A Civic AI ecosystem must be architected from the outset with privacy-preserving technologies as foundational components, not afterthoughts. Federated learning, differential privacy, homomorphic encryption, and synthetic data generation are not luxuries but necessities. Investing in and deploying these technologies would allow large-scale data aggregation without undermining individual or group rights, setting a global benchmark for responsible innovation.
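To make one of these techniques concrete, here is a minimal sketch of differential privacy using the classic Laplace mechanism. This is illustrative only, not a reference to any system named in this essay; the function names (`laplace_noise`, `dp_count`) are mine. The idea is that a counting query over personal records has sensitivity 1 (adding or removing one person changes the answer by at most 1), so adding Laplace noise with scale 1/ε yields an ε-differentially private answer:

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon, rng):
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1, so Laplace noise with scale 1/epsilon
    suffices to mask any single individual's contribution.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Example: how many people in this (hypothetical) dataset are over 30?
rng = random.Random(0)
ages = [23, 35, 41, 29, 52, 61, 38]
noisy_answer = dp_count(ages, lambda a: a > 30, epsilon=1.0, rng=rng)
```

The design point is the trade-off the essay gestures at: smaller ε means stronger privacy but noisier answers, which is exactly the kind of parameter a public stewardship body would need to set and audit rather than leave to each data holder.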
Likewise, private actors must be more than grudging participants. Their incentive lies in what public infrastructure unlocks: just as businesses don’t own the TCP/IP protocols but build empires atop them, AI companies can compete on applications while coexisting with publicly provisioned foundations. A tiered ecosystem could emerge: public models and datasets would form a shared substrate, and companies could innovate on higher levels of the stack, building proprietary tools, services, and domain-specific systems without monopolizing the foundations. This arrangement preserves entrepreneurial dynamism while safeguarding democratic control of essential resources.
It is not enough to open the weights of yesterday's frontier model after the commercial advantage has been secured. A truly public model is one whose code, training process, data provenance, and deployment pipeline are open, inspectable, and reusable. Such models must be maintained as evolving systems, not museum artifacts, adaptable and improvable by a broad ecosystem of researchers, developers, and civil society actors.
Values
To coordinate this ambition, a new class of institutions will be required. Call them Commons AI Institutes, or Civic AI Institutes: not merely regulators or research hubs, but stewards of infrastructure, conveners of stakeholders, and custodians of trust. Their charter must be grounded in enforceable public values. Their design must reflect democratic accountability, including independent oversight, multi-stakeholder participation, and transparency of deliberation. Their mandate is not innovation for its own sake, but innovation in service of the public.
Some will say this is utopian. They are wrong. What is utopian is to imagine that the current trajectory, defined by opacity, centralization, and extraction, can be endlessly scaled without consequence. What is naive is to pretend that the market alone will deliver systems that serve the many rather than the few. The historical record shows the opposite: only through deliberate public investment and strategic statecraft have the most transformative infrastructures been made equitable.
Critics may argue that state-led initiatives are slow, unwieldy, or risk-averse. But that misses the point. Civic AI is not a bureaucratic imitation of commercial innovation; it is a platform that lowers the cost of entry, amplifies the diversity of applications, and redistributes the power to invent. Speed is not sacrificed; it is redistributed.
These are general-purpose technologies and must be developed and owned for the common good. The future of AI is not yet written, but it is already in motion. Whether it scripts another chapter in the long story of enclosure, or a departure into something genuinely civic, will depend on what we build now, and who we build it for.
The question, then, is not whether AI will be transformative, but who will get to decide what it transforms, and for whom. Let’s not hand over the pen. Let’s write the next chapter ourselves.
Stay curious,
Colin
I was thinking about writing something similar, but I can't say it as well or with as much breadth as this. The assumption from the huge tech behemoths is that AI is something that, even if we don't all want it, we will be grateful to have (think along the lines of the Steve Jobs quote, apocryphal or not: "People don't know what they want until you show it to them"). As amazing as the tools are, someone needs to put on the brakes, but we can't possibly expect the companies to police themselves with so much money at stake.
Amen. Pandora is out of the box and never going back in. That’s why guardrails are so important. Here’s some work I’m involved in at the intersection of Catholicism and AI: https://www.baif.ai