AI is the “number one coder in the world now.” ~ Sarah Friar
There’s often a lag between how we talk about change and the weight of what’s actually happening. Listening to Sarah Friar, OpenAI’s Chief Financial Officer (CFO), speak at Goldman Sachs’ tech conference was one of those moments. Not because she announced anything we didn’t already suspect, but because she did so with the composure of someone describing quarterly earnings: logistics, infrastructure, hiring, power loads, talent pipelines.
But make no mistake, what she was describing was a transformation that, if realized, will rearrange the economic and epistemological furniture of modern civilization.
Friar, a senior executive and veteran of McKinsey, Square, and Goldman, is no firebrand futurist. She’s disciplined, cerebral, corporate. But that’s what made her words land harder. When she calmly described AGI as:
“the point where AI systems can take on a majority of the real, value-added human work in the world,” and “we're getting pretty close to that being the case”,
…it wasn’t a thought experiment. It was a roadmap. She wasn’t asking if labor could be automated at scale, she was declaring that OpenAI is nearly there. And then, perhaps inadvertently, she invited the world to buckle up.
We should not use the phrase “value-added human work” lightly. It is the kind of antiseptic phrasing that disguises a dismantling of meaning and worth. It reduces the dignity of work, of care, invention, maintenance, and judgment, to an actuarial category.
And Friar is precise in her delivery: she reminds us AGI isn’t superintelligence. It doesn’t need to be. It just needs to replace the cognitive output of the global workforce, at scale, and with margins Silicon Valley has only previously fantasized about.
The unsettling part is how ordinary it all sounds. Take her team’s onboarding with OpenAI’s own tools: a hackathon to automate finance workflows, procurement, tax, and investor relations. It wasn’t revolutionary. It was spreadsheet optimization. But the cumulative effect? A CFO watching her own team disappear into lines of code. According to Friar, the team “were dancing in that conference room” when ChatGPT automated the tedium of answering diligence questions, an anecdote offered with both sincerity and, perhaps, a glint of self-aware irony. It was charming, and, of course, also terrifying.
And so unfolds the five-act drama OpenAI is staging: Chatbots. Reasoning. Agents. Innovation. Agentic Organizations. Each act sounds like a product line; in reality, each could be read as a progressive sidelining of human-centered work. ChatGPT has gone from curiosity to infrastructure. Reasoning models now simulate the kind of strategic thinking once reserved for analysts. And Agents, like Deep Research or Operator, are already performing specialized tasks with greater efficiency than junior bankers or product managers.
Then there's Auite, the “agentic software engineer,” a name so profoundly bland it masks its implication. This is not Copilot on steroids. This is a machine that writes, tests, documents, and quality-checks its own code, duties engineers routinely defer, outsource, or skip altogether. As Friar said:
What we call Auite is an agentic software engineer. This is not just augmenting the current software engineers in your workforce, which is kind of what we can do today through Copilot; instead, it's literally an agentic software engineer that can build an app for you. It can take a product request that you would give to any other engineer and go build it. But not only does it build it, it does all the things that software engineers hate to do. It does its own QA, its own quality assurance, its own bug testing and bug bashing, and it does documentation, things that you can never get software engineers to do.
It doesn’t call in sick. It doesn’t need coffee. It doesn’t negotiate.
And this is just the third act. The fourth, Innovation, describes AI extending into realms of knowledge we haven’t yet mapped. Friar cites examples of professors finding novel ideas in GPT-generated research that outpaced graduate seminars. No consensus yet on whether these ideas are real or replicable. But the model had the audacity to invent them.
And the fifth act? Agentic organizations, companies run by AI, staffed by agents, untethered from human managers, let alone shareholders. If this isn’t a glimpse of corporate surrealism, it edges unsettlingly close to what might be called post-human realism.
None of this can happen without power. Literal, electrical power. This is Project Stargate. A $500 billion initiative to build the compute infrastructure to run this AI empire. 10 gigawatts of capacity, more than what powers the entire Republic of Ireland. This isn’t cloud computing. It’s nation-scale reconfiguration. Think AWS crossed with Manhattan Project ambition.
Friar is candid about the stakes: compute bottlenecks already shape OpenAI’s deployment decisions. Features like Sora and Deep Research were held back not because of technical limitations, but because there simply wasn’t enough compute. Imagine being unable to ship the future because you ran out of electricity. Now imagine that becoming routine.
What’s disturbing isn’t that OpenAI is building a massive infrastructure. It’s that the scale of this bet now warps public policy around it. Governments are interested, not just in the tools, but in the secondary effects: job creation for HVAC technicians and electricians; retraining workers; zoning for hyperscale data centers; national security; education pipelines. This isn’t just Silicon Valley pitching software to finance. It’s a reconfiguration of statecraft.
And yet, Friar's prescriptions for business remain weirdly quaint. Start small. Run pilot programs. Let a thousand GPTs bloom. Customize. Iterate. Deploy.
It’s pragmatic. But it disguises the fact that each innocuous chatbot use case, each API call that replaces a team, is a rung on the ladder to institutional replacement. The agents arrive not as overlords, but as coworkers. They take the tasks we loathe. Until there’s nothing left but the tasks we once loved, and they take those too. At least this is the map!
If this sounds alarmist, go back to the metaphors used by OpenAI’s own leadership. Sam Altman, according to Friar, is already convinced AGI is imminent. Deep Research, their flagship analytical tool, is positioned, by Friar and echoed by corporate anecdotes, as outperforming teams of managing directors and analysts. o3 Mini is described as the best coder in the world, according to internal benchmarks shared by OpenAI. PhD-level reasoning in biology, chemistry, physics. And the new 4.5 model? It has, to borrow tech’s latest unfortunate phrasing, “vibes”: emotional intelligence tuned for persuasion, writing, and design.
The message was clear: we are not merely looking at the replacement of manual labor or analytical grunt work. We are watching creativity, empathy, and persuasion be fed into models and resold to us at scale. Every domain that once relied on the inefficiencies of human thought is now being streamlined. Every inefficiency is an opportunity. Every hesitation, a cost center.
This is not only a tech story, this is a political project. A metaphysical provocation. What happens when intelligence is commoditized? When capital owns cognition? Who gets to train the models? Who decides what counts as knowledge? What happens when OpenAI, Google DeepMind, or Anthropic becomes not just a vendor of productivity tools, but the central nervous system of economic life?
We should be asking these questions not after deployment, but now, while the sales pitch is still warm. Because this future doesn’t arrive with fanfare. It seeps in through product demos. Through clever insurance comparisons. Through wildfire risk assessments and vacation planning. Through a thousand useful things that feel like nothing.
I know from my work on the EU AI Act committee, and from discussions with the main labs, that governments are not considering these questions deeply enough. And I mean anywhere, except maybe China.
Are we building gods? Or are we, in a bout of historically rare short-sightedness, outsourcing the very thing that made us human in the first place: the need to fumble, to err, to think in sentences that don’t end in punchlines?
The real question is not whether AGI is coming. It's what kind of people we become by welcoming it.
If the future is being written in Python, we would do well to ask who is writing the script, and whether we are anything more than a line of deprecated code. Then maybe we will all become active in shaping AI policy, before we get trampled.
Stay curious
Colin
Note - the video of Sarah Friar’s Goldman talk is now available online.
The next immediate stage is that VC firms will cease to invest in people and will focus entirely on AI investment.
Excellent article Colin! Here are the takeaways that have me thinking...
"The agents arrive not as overlords, but as coworkers. They take the tasks we loathe. Until there’s nothing left but the tasks we once loved, and they take those too. At least this is the map!"
"We are watching creativity, empathy, and persuasion be fed into models and resold to us at scale. Every domain that once relied on the inefficiencies of human thought is now being streamlined. Every inefficiency is an opportunity. Every hesitation, a cost center."
As entrepreneurs and business people, our task is to solve problems for others. If technology can anticipate and then solve all our problems, then there truly is no purpose to the human economy.
I think the "spark of creativity" concept is considered the last bastion of humanness that we can hold onto in an AI world. But then we need to consider what creates that spark. If it's just the most likely next step in evolution, as implied when multiple people around the world arrive at the same breakthrough at the same time without any communication, then even this can be anticipated by AI.
Scary thoughts, and it always raises the question: why would we do this to ourselves?
Perhaps, like in the past, when we get ahead of ourselves, other constraints slow us down. Compute appears to be the constraint that may be our gift that gives us a breather before we sail off the cliff.
Putting this into the larger economic prism, it is also coming to a head just as we reach the end of another 18.6-year property cycle, which typically ends in an extreme crushing of the economic environment, washing away vast amounts of excess: companies, ideas, even money.
I trust, as difficult as another financial reset will be, it can't come too soon, if we're to slow down the speeding AI train.