24 Comments
Marek Mudrik's avatar

My favorite quotation is: "The important fact about AI is not that it can write. The important fact is that it cannot answer for what it writes." Human agency supplies the part that AI doesn't have and never will have - the spirit. A human being is more than a body. AI, IMHO – no matter how advanced it gets in time – will never be more than a tool, a machine. It may get smart enough to start misbehaving and causing trouble, but it will remain a machine. I may be seriously wrong, but I don't believe people were endowed with the ability to "breathe a spirit" into a machine and make it a living being. Which, I realize, may not be the point of your essay. I am just reflecting.

The One Percent Rule's avatar

That is a profound reflection, Marek. You are right, it strikes at the heart of the Machine God delusion I mentioned in other comments.

The reason AI cannot answer for what it writes isn't just a technical limitation of the software; it’s exactly what you noted: the absence of a spirit that can inhabit the result. To answer for something requires a conscience that can feel the weight of a mistake. A machine can simulate the syntax of an apology, but it cannot feel the humiliation of a mishandled mortgage or the moral burden of a redesigned life.

I think we agree that the spirit is the part that cannot be automated. My essay focuses on the signature as the legal and organizational expression of that spirit. The signature is the moment where a human being says, "I am a living being, and I accept the consequence of this machine’s output."

If we lose that distinction, we are not just losing governance, we are losing the very thing that makes work, and institutions, human. Even a machine that misbehaves is just a broken tool; it takes a spirit to be adult enough to fix it.

Marek Mudrik's avatar

Agreed. And your pointing out that the author’s signature is in essence a declaration of responsibility for the content is very helpful. It provides a solid handle to hold on to the concept and remember it. I am going to put that in my quiver. Thank you, sir!

The One Percent Rule's avatar

I suspect the next few years will be a long lesson in rediscovering why the archer matters more than the bow :-)

Robot Bender's avatar

At what point will AI cross over? Can it even do that? How would we know if it did? That kind of philosophical thinking is beyond me. Science fiction has grappled with this before. Lt. Data of Star Trek is one of the more recent attempts. There were a few good episodes about this.

The One Percent Rule's avatar

Ah great question ... and I do not know the answer. In those episodes, Data was often seeking to be human by embracing the very things I argue are now being automated away: the struggle, the confusion, and the burden of choice. He was an Adult machine because he took ownership of his actions.

Our current AI tools are the opposite. They are designed to provide the syntax of wisdom without the substance of a person. Even if the technology crosses over and becomes indistinguishable from human intelligence, it still exists in a legal and moral vacuum. It still cannot sit in the witness stand.

No matter how much science fiction we live through, the machine remains a tool and the human remains the only one who can live with the verdict.

Alex Randall Kittredge's avatar

Compelling argument about governance being the bottleneck, but I wonder if you're being too generous to the pre-AI status quo. You described organizations as "historical settlements" full of legacy chaos... but isn't that precisely the environment where human judgment was already failing quietly?

If the named human being who is supposed to sign for the decision was already hiding behind procedure and fog before AI arrived, why should we trust that same institutional culture to suddenly produce courageous, accountable people just because the stakes are clearer now?

Isn't it possible that AI doesn't just reveal the absence of governance? It reveals that real governance was always rarer than we pretended, and that the "theater of effort" described was itself a form of institutional self-deception we were all silently complicit in?

I wonder what would it actually take to build that culture of ownership from scratch, rather than assuming it exists somewhere waiting to be activated?

The One Percent Rule's avatar

That is the hardest question of all, Alex. You are right, I may be being too generous.

You can’t build a culture of ownership from scratch until you stop rewarding the culture of circulation. It’s going to be a very painful transition for a lot of VPs.

Organizations have a veil of ignorance problem. Maybe this is the time for leaders to step up and reward ‘ownership and responsibility’?

Robot Bender's avatar

I'm afraid too many managers will just go along with whatever the AI puts out and discount human judgment.

Michael S Faust Sr.'s avatar


Colin — this is one of the most honest pieces I have read on the subject and I want to tell you why it landed the way it did.

You wrote that the truly scarce things were responsibility, courage, institutional memory, and the willingness to say — this decision is mine. And that we have spent too long praising disruption and too little time admiring custody.

You said the companies that thrive will ask — what process are we trying to improve, who owns it, where are the failure points, and what institutional conditions would make the output usable.

That is the exact sequence the upgraded Faust Baseline 2.8 Codex runs before every output.

You and I are working on the same problem from different ends of the same hallway. You are describing what serious AI governance needs to look like from an organizational and economic perspective. I built a working model of it from the inside of daily practice and documented it in plain language before most of this conversation was happening publicly.

Michael


The One Percent Rule's avatar

That is a striking way to put it, Michael: "working on the same problem from different ends of the same hallway."

If the Faust Baseline 2.8 Codex is actually running that sequence, asking about ownership and institutional conditions before it speaks, then you have built a much more adult tool than the Machine Gods currently being worshiped in the headlines. Most models are built to be persuasive; it sounds like yours was built to be prudent.

But here is the question that keeps me at my end of the hallway: when the Codex identifies a failure point or a hidden liability, and the organization decides to ignore it and proceed anyway, who absorbs the wreckage? Think of the Challenger disaster!

A sophisticated model can provide the contextual intelligence that I argue is so rare. It can even simulate the custody of institutional memory. But it still cannot provide the signature. My fear is that even with a tool as disciplined as yours, lazy institutions will use its brilliance as a higher-grade alibi. They will say, "The Faust Codex validated this," as a way to avoid saying, "I approved this."

Michael S Faust Sr.'s avatar

You are describing the oldest failure in institutional life — not ignorance, but the deliberate outsourcing of accountability to a credible-sounding voice.

The Faust Baseline was built with that exact problem in mind. The Codex does not issue verdicts. It issues structured analysis with a hard stop at the edge of its evidence. The output is always traceable back to a human decision point — by design. If an organization says "The Faust Codex validated this," the Codex record will show exactly what it said, what it flagged, and where it stopped. The alibi collapses on contact with the transcript.

Your Challenger point is the right one. The engineers knew. The warning was on record. The institutional machinery chose to proceed anyway — and then buried the signature under launch pressure. No tool, however disciplined, prevents that. What a disciplined tool does is make the burial harder. It puts the warning in writing, in plain language, with a timestamp.

The Baseline cannot sign. You are correct. But it can make unsigned decisions visible — and that is a different kind of accountability than most institutions have ever had to face.

The one percent who will use it honestly are exactly the ones it was built for.

Syd Malaxos's avatar

"They want intelligence without governance." That sentence is doing the same work in corporate strategy that I’m watching play out in classrooms.

I teach high school chemistry and physics. Every day I watch students produce polished AI-assisted work they cannot explain under questioning. The output looks like understanding. The conversation says otherwise. The system rewards the output and never checks for the thinking underneath.

You named it perfectly — the desire for speed without submission to discipline. In education, the version is: the desire for answers without submission to struggle. Same structural failure. The institution gets what it measured, and nobody measured what mattered.

I’ve been writing about this on my Substack — what happens to independent reasoning when AI removes the friction that built it. Your framing from the organizational side confirms what I’m seeing from the classroom side. The governance gap isn’t just corporate. It runs all the way down to a fifteen-year-old who has never been asked to explain their own thinking.

Would love to connect on where these two lines meet.

The One Percent Rule's avatar

That is a beautiful and haunting reflection. Blissfulness, nature, community, even the cookies: that is the ultimate "analog" intelligence. It is exactly the kind of "context" that no AI model can simulate because it requires a physical presence and a shared history.

You have identified a different kind of "Governance Gap." We talk about the "lockdowns" as a medical or political event, but you’re describing them as a structural failure of our human tribal systems. If humans are inherently tribal, then the "aching wound" you are seeing at the beach is what happens when the pipes of community are broken.

Just as a corporation becomes a museum of bad decisions when it loses its institutional memory, a society becomes a collection of random people when it loses its volunteer organizations and local bonds. We have spent too long praising the disruption of our social lives and too little time admiring the custody of our neighborhoods.

The struggle is real because we’ve automated the convenience of being alone, but we have not yet figured out how to automate the bliss of being together. That still requires a person, and a cookie, to show up in the flesh.

Syd Malaxos's avatar

"We have automated the convenience of being alone but we have not yet figured out how to automate the bliss of being together." That belongs in a book.

You just named the exact thing I watch happen in classrooms. The students who are struggling most aren't the ones who lack access to information. They're the ones who lack the experience of building something with another person in the room. The friction of a shared space. The discomfort of being wrong out loud. The slow process of constructing understanding while someone watches and pushes back.

AI compresses all of that. It gives you the answer without the room. Without the presence. Without the correction that only lands when it comes from someone who saw you struggle.

Your framing of institutional memory maps directly onto what I call integration space — the cognitive interval where understanding forms. Organizations lose it when they automate without governance. Students lose it when they delegate without ownership. Same architecture. Same failure.

I'd welcome a deeper conversation. I think the intersection of what you're seeing in institutions and what I'm seeing in fifteen-year-olds is where the real work lives.

The One Percent Rule's avatar

That is such a solid point, Syd. I suspect "integration space" is the cognitive version of the "drainage pipes" I described. If students (or executives) do not have the interval, the friction of being wrong out loud, then the information just floods the brain without ever becoming understanding.

You have identified the same structural failure I see in the boardroom. When an organization automates its administrative relay, it thinks it is gaining efficiency. In reality, it is deleting its own integration space. It is removing the room where people have to defend a decision, surface a trade-off, or admit an exception.

The fifteen-year-old who delegates their chemistry homework to an AI is practicing the exact same evasion of consequence as the CEO who wants a strategy deck produced in seconds. They both want the artifact without the ownership. They both want the polished output without the submission to struggle that actually builds the capacity to govern the result.

If we lose the experience of "constructing understanding while someone watches," we lose the ability to trust the result. An AI can give you the answer, but it cannot give you the nerve to act on it when things go wrong. We are building an entire civilization that knows the price of every token but has forgotten the value of the interval.

I would welcome that deeper conversation. The architecture of the failure is indeed the same: we are automating the execution and accidentally deleting the judgment.

Syd Malaxos's avatar

Colin — "the price of every token but forgotten the value of the interval" is one of the clearest things anyone has said about this problem. I'm going to carry that sentence for a while.

You're seeing the same architecture from the boardroom that I see from the classroom. The fifteen-year-old and the CEO line is exactly right — and it's the part no one wants to name. The evasion of consequence scales. It looks different at each level but the structure is identical: confident output, no verification, no ownership.

Integration space and your drainage pipes — I think we're describing the same structural requirement from different floors of the same building. The room where someone has to defend a decision out loud, surface a trade-off, admit what they don't know. When that room disappears — whether it's a classroom or a boardroom — the output keeps coming but the judgment underneath it hollows out.

I'd welcome that deeper conversation. I think the convergence between what's happening in education and what's happening in organizational governance is one of the most important and least discussed patterns right now.

The book I'm finishing addresses this from the classroom up. Sounds like you're addressing it from the organization down. The crack is the same. Might be worth comparing notes.

Cathie Campbell's avatar

Very essential depth of thinking you have applied to this cautionary tale.

“Serious work is the assumption of consequences.” Your analogy of the factory foreman well drawn as humans withdraw oversight of the machines and leave an absence of accountability for the outcome since “conscience within complexity is beyond mere computability”. Your “bureaucracy on amphetamines as being hyper fast” seems an override of heartfelt responsibility. And “tech does not enter a vacuum. It enters a culture.” Extraordinary insights conveyed, Colin.

The One Percent Rule's avatar

That is a hauntingly accurate phrase, Cathie: “conscience within complexity is beyond mere computability.”

You have touched on the exact point where the Machine God narrative usually fails. We keep trying to treat responsibility as if it were a data problem, something that can be optimized or calculated, when it is actually a human one.

The danger of "bureaucracy on amphetamines" isn't just the speed; it’s that it creates a feedback loop where no one feels they have the permission (or the time) to exercise that conscience. When the machine produces the "perfect" recommendation in milliseconds, the human who pauses to ask "Is this right?" or "Is this humane?" starts to look like a bottleneck rather than a safeguard.

But as you noted, technology enters a culture. If that culture does not value the pause, it won't value the conscience either. We have to be very careful that we don't build factories that are so efficient they leave no room for the foreman to actually think.

Winston Smith London Oceania's avatar

This raises the question of what the purpose of a particular organization is in the first place. In the corporate world, the intrinsic goal is arguably to maximally enrich the top players.

One way this is accomplished is by "holding the line on upward salary pressures" - a favorite platitude of Wall Street. This often, all too often, means mass layoffs. Often to the detriment of customers. It reminds me of Cory Doctorow's "enshittification" https://en.wikipedia.org/wiki/Enshittification.

From the C-Suite's perspective (and that of the purveyors/profiteers of AI), AI is a great way to eliminate all but the top players. Everyone else? You're on your own. Ironically, the oligarchs/robber barons will continue to scream "Get a job, bum! Get a job!" because that's what they've been doing since the dawn of "civilization".


"Any serious use of AI requires someone to know where a process begins, where it can fail, who has authority to stop it, and what happens when the machine’s recommendation collides with law, ethics, customer reality, or ordinary human decency". I completely agree with this. Unfortunately, corporations are more about marketing, and that means paying lip service to all these things. It also means we need better, stronger laws to enforce oversight. That too is something the oligarch's/robber barons have fought against tooth and nail for decades, if not centuries.


"They will use the factory to generate the options, but they will rely on a named human being to survive the result". They will rely on a lower level fall guy to take the heat. "We fired the guy who was responsible, so the problem is solved - while we wash our hands of it".

The corporate mantra of "CYA" will continue unabated.

The One Percent Rule's avatar

Winston, you have touched on the dark underbelly of the Accountability argument. You are right to be skeptical: "CYA" is perhaps the most resilient institutional logic we have ever invented. I am going through such a case with a friend right now.

There is a very real danger that the named human being I’m calling for won't be an empowered leader, but a sacrificial lamb, a Chief Blame Officer hired at a mid-level salary to sign for the output of a billion-dollar model. As you noted, the C-Suite has a long history of "washing their hands" while a lower-level player takes the heat.

But here is the structural trap for those oligarchs: while a fall guy can absorb the blame for a PR scandal, a fall guy cannot provide the judgment required to prevent the "enshittification" you mentioned. When an organization moves from thought to "administrative relay," it loses its ability to see a crisis coming.

If the person signing the paper has no actual authority to stop the machine, then the governance is just more theater. Eventually, the wreckage becomes too large for any fall guy to cover. My point is that seriousness isn't a moral plea; it’s a survival requirement. An organization that runs on automated intelligence and manual blame is an organization that has lost its mind. It’s just a faster way to reach the wreckage I described. We need to find a way to push back against these technocrats.

Winston Smith London Oceania's avatar

It's going to be a slog. A company can fail outright - and the fat cats still find a way to profit from it. I'm reminded of "too big to fail, too big to jail". The company I was working for back in 2008 had both Lehman and Bear Stearns as clients. I guess the big question is whether our political "leaders" will finally step up to the plate. I'm not holding my breath.

The One Percent Rule's avatar

That is the $46 billion net income question, Winston. You are right, 2008 was the ultimate masterclass in "relief without responsibility."

When an institution becomes "too big to fail," it has effectively decoupled itself from consequence. It becomes a permanent adolescent: all the power of a Machine God with none of the Adult submission to discipline. If the political leadership refuses to enforce the signature, then we aren't just looking at a governance gap, we are looking at a moral hazard that AI will only accelerate.

If we don't demand that named human being be held to account, we aren't building a new economy; we’re just automating the old robber baron playbook. The slog you’re describing is the fight to make sure that seriousness isn't just a competitive advantage for the few, but a legal requirement for the many.

But as you said, holding one's breath is a dangerous game in this climate. The only thing we can control is the clarity we demand within our own historical settlements and the refusal to accept "the machine made me do it" as a valid defense.

Winston Smith London Oceania's avatar

Terrifying. We've got a bumpy road ahead of us.