Shaping AI’s Impact on Billions of Lives
Demystifying the Potential Impact of AI
I recently spent several days reading a paper titled “Shaping AI’s Impact on Billions of Lives”. The authors include a former California Supreme Court justice, the president of a major university, and several prominent computer scientists, individuals with considerable influence in the technology sector. They state that the development of artificial intelligence has reached a point where its effects on society are unavoidable.
I find it significant that they avoid the usual debate between people who want to stop all progress and those who want no rules at all. Instead, they propose eighteen specific goals to ensure that these systems help the public. They argue that we are currently in the early stages of this technology, and they believe we can still shape the outcome if we act now.
They suggest that the most effective way to use these tools is to increase how much a person can produce, rather than trying to take away their job. This choice is based on the idea that some fields grow when the cost of their services goes down. I think this perspective is important because it moves away from simple fear toward a plan for research and policy.
This report is less of a technical manual and more of a confession of the “Chiefs” of our digital age. It is a document written by people who have already “struck a Gold-mine” and are now looking back to see if they can prevent the mine from collapsing on the workers.
There is a profound irony in reading about “Sisyphus’s drudgery” while realizing that the authors propose to solve his ancient, boulder-pushing problem by giving him a smartphone. I find a certain intellectual comedy in the idea that the answer to human labor is “elastic demand”, a term from labor economics that suggests we will never run out of work as long as we make things cheap enough.
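The “elastic demand” claim can be made concrete with a little arithmetic. As a minimal sketch (my own illustration, not code or numbers from the report), assume a constant-elasticity demand curve, Q = A · P^(−ε): when ε > 1, a price cut raises total spending, which is why cheaper services can mean more work rather than less.

```python
# Illustrative only: constant-elasticity demand, Q = scale * P**(-eps).
# If eps > 1 (elastic), cutting the price raises total spending,
# and with it the demand for the labor that supplies the service.

def demand(price, elasticity, scale=100.0):
    """Quantity demanded under constant-elasticity demand."""
    return scale * price ** (-elasticity)

def revenue(price, elasticity, scale=100.0):
    """Total spending = price * quantity demanded."""
    return price * demand(price, elasticity, scale)

# Elastic case (eps = 1.5): halving the price grows spending ~1.41x.
print(revenue(0.5, 1.5) / revenue(1.0, 1.5))

# Inelastic case (eps = 0.5): the same price cut shrinks spending ~0.71x.
print(revenue(0.5, 0.5) / revenue(1.0, 0.5))
```

Under these assumed numbers, automation that halves the cost of an elastic service increases the total amount of it society buys, which is the mechanism behind the report’s programmer and pilot statistics.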
The authors note that “the United States in 2020 had 11 times more programmers and 8 times more commercial airline pilots than in 1970”. This is the heart of their argument: technology does not just delete tasks; it spawns new, more complex ones. We killed the typist and the telephone operator, and in their place, we birthed the programmer and the lawyer. I think there is a quiet, almost beautiful absurdity in the fact that we have spent fifty years perfecting machines to do our chores only to find that “Lawyers now use their computers by themselves, type their own emails and texts, and use their own smartphones”. We did not free the lawyer from work; we simply forced the lawyer to become their own typist.
The stakes of this work are quantified in a way that feels like a Richard Rhodes history of the atomic bomb. The authors compare the current private investment in AI to the Manhattan Project and the Space Race. They point out that “the $26B to put a person on the moon would be $318B today”. Yet, while the government funded the moon landing, private industry backs this new venture. I believe this is the most terrifying and poignant point in the entire report. We are witnessing a revolution of the same scale as the conquest of space, but it is being conducted not by nations for glory, but by corporations for efficiency and profit.
Human Contact
There is a reflective, almost elegiac tone when the authors discuss “removing the drudgery of current tasks”. They write that “Doctors and nurses choose their careers because they want to help patients, not to do endless insurance documentation”. I consider this the “Sisyphus” of the modern era. We have trained the most brilliant minds in our species to spend their afternoons filling out digital forms. The report’s proposed “Healthcare Aide” is not a robotic surgeon; it is a tool to automate the “paperwork and drudgery” so that a human can look another human in the eye again.
The authors believe some of these projects could be in place within two to five years. However, I think we must grapple with the “unblemished record of incorrectly forecasting the long-run consequences of technological innovations”. The authors quote Bill Gates, who observed that “innovation takes longer than many people expect, but it also tends to be more revolutionary than they imagine”. This is the central tension of the report: we are trying to plan for an impact we cannot yet see. We are like the stage actors of 1900, as Neal Stephenson suggests, trying to understand movies by imagining we just need to shout louder in a warehouse.
Giving Back
The “thousand moonshots” proposed here, from “Worldwide Tutors” to “Disinformation Detective Agencies”, suggest a future where technology is a partner, not a master. But I believe the most radical idea in this document is not the AI itself, but how it should be funded. The authors argue that “money for these efforts should come from the philanthropy of the technologists who have prospered in the computer industry”. They propose a “Laude Institute” where those who have “benefited financially from computer science research” pay for the safeguards. It is a technological tithe, a way for the architects of our new world to buy a bit of insurance for the rest of us.
In the end, I consider this a poignant attempt to keep the “human in the decision path”. We are not being replaced by a cold logic; we are being invited to outsource our mechanical drudgery so we can return to the creative, messy, and deeply empathetic work that no algorithm can ever truly replicate. The thousand moonshots are not just about reaching new frontiers in science or medicine; they are about reclaiming the time and the focus we lost to the paperwork of our own making.
Stay curious
Colin
The eighteen milestones proposed in the report represent a deliberate attempt to steer research toward the “common good” rather than leaving it solely to commercial interests. I have categorized these into two primary groups: those aimed at immediate social relief and those focused on fundamental long-term economic and structural shifts.
I. Immediate Social Impact
These milestones focus on reducing human suffering, enhancing safety, and improving the day-to-day quality of life for professionals and the public.
Healthcare Aide: Aims to reduce the paperwork and drudgery of medical professionals to combat burnout and allow more direct patient interaction.
Narrow Medical AI: Deployment of systems for specific, high-stakes tasks, such as predicting patient deterioration in the ICU.
Teacher’s Aide: Designed to automate unattractive aspects of a teacher’s workload, such as grading and lesson planning, to improve their quality of life.
Worldwide Tutor: Leveraging smartphones to provide a tutor for every child in their own language and culture.
Disinformation Detective Agency: Development of tools to identify deepfakes and machine-generated content with high accuracy.
Journalist’s Aide: A tool that checks for mistakes in news drafts and highlights conflicting sources to support investigative journalism.
AI-mediated Platform for Civic Discourse: A platform that mediates conversations to enhance public understanding and reduce polarization.
Controllable AI for Information Consumption: A personalized agent that balances personal preferences with exposure to new perspectives.
Equity-improving AI: Decision-support systems in governance designed to improve measurable equity outcomes.
Recent Government/Industry Collaborative Successes: Highlighting partnerships that successfully enhance AI upsides while dampening downsides.
II. Long-Term Economic & Structural Change
These milestones seek to redefine the nature of work, scientific discovery, and the legal frameworks of our global economy.
Rapid Upskilling: An AI system that allows unemployed or low-income workers to gain an in-demand skill and enter the middle class within three to six months.
Job Forecaster: A real-time tool to help workers displaced by technology retrain for well-compensated jobs with growth potential.
AI Scientific Breakthroughs for the UN SDGs: Using AI for major breakthroughs in hunger, poverty, and environmental sustainability.
Scientist’s AI Aide/Collaborator: Accelerating the pace of discovery by improving scientist productivity in tasks like grant writing and literature review.
Broad Medical AI: A generalist system that learns from multiple data modalities (images, genomics, records) to explain its recommendations.
Empirical Education Platform: A system to turn education into an empirical science using randomized controlled trials in heterogeneous environments.
Copyright Detector/Revenue Sharer: A tool to detect unlicensed use of original work and automatically distribute funds to owners.
Implementable AI Audits: A public-private partnership establishing clear, technical criteria for how to actually perform an AI audit.

Thanks for sharing, Colin. I look forward to a deeper dive. "common good" is such a complicated, multifaceted concept that I believe we need always to be in the process of discussing it. I'm not sure what role AI has to play in that discussion.
Re: teaching "Teacher’s Aide: Designed to automate unattractive aspects of a teacher’s workload, such as grading and lesson planning, to improve their quality of life." I don't know this for sure, but I suspect many teachers could/would find grading and lesson planning quite fulfilling and necessary if they didn't have to teach so many students. For my part I can't imagine an AI planning my lessons for me. Everything I do as a professor is a dialogue with my students, and I don't need/want a script for that coming from somewhere else.
"We have trained the most brilliant minds in our species to spend their afternoons filling out digital forms". This problem can be easily solved with a single payer system - but I realize that's another topic altogether.
"AI-mediated Platform for Civic Discourse: A platform that mediates conversations to enhance public understanding and reduce polarization". The power brokers won't like this one.
"Controllable AI for Information Consumption: A personalized agent that balances personal preferences with exposure to new perspectives". As above, the power brokers/propagandists won't like this. They want to be the ones to control information. Faux Newspeak comes to mind.
"Rapid Upskilling: An AI system that allows unemployed or low-income workers to gain an in-demand skill and enter the middle class within three to six months". If the jobs exist. Right now, the most in-demand jobs demand at least a four-year degree. A big part of the problem is that the psychopathic billionaire oligarch overlords don't want to pay a living wage to the "undeserving".
"AI Scientific Breakthroughs for the UN SDGs: Using AI for major breakthroughs in hunger, poverty, and environmental sustainability". Maybe I'm just cynical, but those in power don't want to solve these problems. It's how they convince themselves they're superior to the rest of us. There's a lot of psychopathic psychology involved. And the environment? The big oil industry really wants to sweep it under the rug, as they've done the past twenty plus years.