Is the frantic “race” to adopt and adapt to A(G)I wholly or largely created by the fear/mistrust that someone malevolent or heartless will adopt it first?
Good question Norman. Harold Wilson said the choice was between the blind imposition of technological advance and the purposive use of scientific progress. I think this report is very clear-sighted: it is not a race against others; there is an opportunity to get ahead, be 'aware' and create value, rather than be taken by surprise and fail to make the best use of the tools.
I think there is a fear that AI will become so embedded that whoever gets there first will dominate. They will control the data and therefore how the model adapts. It always comes back to power and money.
One other point. Countries now have the ability to build their own LLM systems, e.g. in Poland our government is training its own models on its own data and buying the cloud systems, so they are not reliant on 'techbros'. It is not rocket science - the algorithm (the transformer) is built and open-sourced; it is about data and compute. So Nvidia benefits by selling compute to governments... and then governments need data-cleaning teams. A sensible, rational government will avoid paying big tech big dollars.
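To make that concrete, here is a minimal sketch of what "the algorithm is built and open-sourced" means in practice: loading an open-weight model and fine-tuning it on a government's own corpus with the open-source Hugging Face libraries. The model name and data file below are hypothetical placeholders, not a real deployment; the point is simply that the code is the commodity part, while the data and the compute are the real costs.

```python
# Minimal sketch: fine-tune an open-weight transformer on locally held data.
# "open-model-name" and "national_corpus.txt" are illustrative placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "open-model-name"  # any permissively licensed open-weight LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
if tokenizer.pad_token is None:          # some causal LMs ship without one
    tokenizer.pad_token = tokenizer.eos_token

# The nation's own corpus, held on its own infrastructure.
dataset = load_dataset("text", data_files={"train": "national_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modelling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # this is where the 'compute' bill arrives
```

The algorithm really is off-the-shelf; what distinguishes one national effort from another is the quality of the cleaned data and the scale of compute behind that final training call.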
Absolutely, that is one element in the 'race'. But let's face it, US companies are leading that race, with UK talent and Google UK development, and those US companies will think nothing of closing down their foreign subsidiaries... so yes, a race is important. Profit is point 3 of my comment above... we do need to get a steady hand and some additional formal control over AI for the common good. Someone will nationalize it one day (as China is doing).
When we begin to talk about the ‘race’ to peace, then I’ll be all ears!
Those who get there first will dominate - until the AGI itself decides to take over.
From where would the surprise come? How might delay squander the best use of the tools? I’m not trying to come across as a Luddite; I just find the messaging on AI to be strange. It feels like, “repent of your old ways, for paradise is nigh.”
Multiple ways. 1. There is certainly an excess of inefficiency in governments, and high debts; AI does help streamline processes and can help reduce costs. 2. It identifies opportunities in data that we are not seeing, which can be highly beneficial in health care. 3. Superintelligence will lead to breakthroughs in science, and countries want to be at the forefront of that to build competence and capabilities and capitalize on market share.
I do think that those who get ahead early will make hay while the sun shines.
What is the sunset in this analogy if not “someone else will have all the hay and leave us to starve”? My point in pursuing this question about racing is that there seems to be a logical if implausible alternative, namely, collaborating at a pace that helps everyone. But no one seems to be entertaining that. And if we are not going to entertain that, it feels like we are not racing for a tool so much as racing for a weapon, or at least some future where there is untold abundance (hay) for those faithful to the machines of loving grace and scarcity for everyone else.
Good points Norman. There is a lot written and discussed about this. There is a call for a CERN of AGI, a collaborative development by nations. The video I link to of Demis Hassabis mentions this.
There is a 'fear' that we are racing for a weapon; as Putin said, he who controls AI will control the world.
Likewise, a recent post I did on Tyler Cowen shows that countries will be disadvantaged if they do not have their own AI systems.
The post by Sam Altman this week and Dario Amodei's post point to abundance for all. But as the report I mention states, governments are not taking action to achieve this.
So fundamentally, there is a risk that bad-actor nations will control AI and weapons of mass destruction, but like all double-edged technologies, we can also harness good for all of society if we manage this right and get government action. The UN meetings on AI governance are woeful. A CERN for AI with oversight by a nuclear-style committee seems the most sensible solution.
But governments need to wake up and take action!
Thanks for your patient and thoughtful engagement, Colin. I find myself increasingly challenged to properly characterize AI. Tool? Weapon? Companion? It is as if we are creating a deity and then asking it to work for us (and not *them*), or as Belloq says in *Raiders of the Lost Ark*, "It's a transmitter. A radio for talking to God." Only there's no guarantee the god we create will melt the faces of our hubristic adversaries.
Nice use of Belloq. You know Norman, when a system has access to the world's knowledge, as an AGI superintelligence would, then there is no knowing what it can help bad actors create! There is no guarantee and we will need global regulations like the nuclear treaty.
As to creating a God, this is exactly the short 4-minute "we are growing a brain" clip by Anthropic developers that I often quote! Yes, the developers believe they are building a God. This is dangerous, very dangerous. Not the current version, but before we get to the singularity we MUST have guardrails and regulations in place.
There is that plus the urgent need to get ahead of the pure profit(eering) motive of the purveyors. Otherwise, they'll foist it onto us indiscriminately without compunction.
Thought-provoking, as always, but I need more time to ponder both the report and your implied and stated questions.
When I listened to that video a few days ago, it disturbed me, because throughout it Hassabis was bouncing, like most of us, from "the future with AI has the potential to be human-life-enhancing" to, on the other hand, these existential concerns. His advice for young people today is to immerse themselves in AI, to get the most out of the tools in order to use them. In other words, to embrace it somewhat without reservations, because it is unstoppable. To be honest, I'd hoped for a more in-depth answer, for these are the young people I am attempting to answer that question for every day. Yes, this immersion in AI is a necessary action today; but, with the acceleration, we know that is fleeting advice. I am not faulting the answer. I was seeking firmer-grounded and expanded advice on actions, more insight into where this acceleration will be in just a few years, maybe even just next year. We are increasingly stuck in this unknowable, and in an inability to offer advice on handling this change. Yes, as you point out, we don't have the answers, yet here we are dealing with a complexity none of us comprehends even as that complexity accelerates. I am stating the obvious here, I know.
While this was not the core theme of this article, Colin, what struck me was this line:
We may hope to embed human values in superintelligent systems. But we have yet to reach consensus on what those values are.
As you stated, "De Minckwitz has written something rare: a document that straddles the line between statecraft and philosophical provocation. It does not pretend to have all the answers. But it is unflinching in its claim that the old questions no longer suffice". With this acceleration, this change of thinking, one that incorporates AI + humans, is slightly incomprehensible.
So, I'd like to ask, in line with the goal to foster these ongoing dialogues: have you read "The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking" by Dr. Shannon Vallor, and if so, will you be posting a review of it? I'm about halfway through it.
Appreciate this post and, even more so, your invocation to remain curious as we immerse ourselves in the dissonance and uncertainty in order to create and engage in this dialogue.
Excellent points Wendy. Such thoughtful perceptivity, which gets to the heart of the dissonance many people feel. You've pinpointed the central paradox perfectly: how can we offer meaningful, long-term guidance, especially to young people, when the technological ground is shifting beneath our feet?
Hassabis did a talk at Cambridge recently which was targeted at young people, but his advice was very similar to what you got from the other video in my post (this is the Cambridge interview: https://www.youtube.com/watch?v=48MmAGjZ7cA).
That feeling of seeking 'firmer grounded' advice is exactly what prompted my post. The report's argument is that individual advice will always be 'fleeting,' which is why we need a radical overhaul of our collective institutions to manage this accelerating complexity.
Your frustration with the standard advice to 'just immerse' ourselves in AI is completely understandable, but it's the advice I give students too (although I add other depths). It feels inadequate because it's tactical advice for what is a profound, strategic shift. That is precisely the gap that the de Minckwitz report, my post, and the great comments seek to address. The old answers and career paths are becoming obsolete in real time. The report's radical proposal is that the only durable 'advice' isn't for individuals to simply keep up, but for our society to build new systems, new ways of thinking and governing, that can handle this 'incomprehensible' change.
You're right to highlight the point about human values. It’s the philosophical bedrock of the entire challenge. We are trying to build the ship's rudder while sailing into a storm, without an agreed-upon map. Hassabis does call for a new wave of philosophers to help us draw up that map!
I have met Shannon a few times at conferences and have commented on her book on other platforms; a review is a good idea. I like her work, and could add in some of her recent thoughts. Nice nudge, thank you.
"The premise of this Policy Exchange report is that superintelligence will demand a fundamental reimagining of governance structures"
Indeed, it will demand a fundamental reimagining of human nature itself.
Sadly, it will indeed.
One of the arguments in favour of AI is that large organisations (the NHS, Government itself, the Pensions Dept, etc.) have got too complex for humans to manage. Hence AGI to the rescue.
The difficulties of running large-scale organisations have been well mapped (e.g. The Awakening Giant: Continuity and Change in ICI (1985) by Andrew Pettigrew).
So why not approach the problem by decentralising power and administration into smaller and more manageable units?
In theory, Joshua, what you are saying makes perfect sense. But government departments are not very good at communicating with each other, so if you break them up further you are likely to run into even more roadblocks. But I don’t think AI is the answer either. And here’s why: McKinsey is the most overvalued consulting firm in the world. They charge vast sums of money and invariably their recommendations are impractical and cannot be implemented by their clients. They then charge more money to send in an army of consultants to do the implementation, and it still doesn’t work. I suspect the same dynamic will happen with AI.
Yep, a consultant is someone who borrows your watch to tell you the time. :)
I take your point though. As an aside, I wonder why organisations rate consultants so highly. Perhaps it's simply because businesses lack confidence in themselves, or perhaps it's because they can blame the consultants if it all goes wrong from following their advice. So there's some deep psychology at work here too. Yet still, there's nothing so practical as a good theory.
I think that the large consulting firms are in for a rude awakening - the Big 4 accounting firms too. McKinsey has already downsized. Accenture are currently gaining AI implementation contracts, but once those processes are automated I sense there will be fewer roles in major corporations for 'consultants'.
The companies I work with bring in major consulting firms because they often believe that the consultants have a deep-rooted understanding of the current market dynamics and competitive landscape across sectors. It amazes me. Every time there is a board change in major financial institutions in Eastern Europe (surprisingly regularly) the big consulting companies are brought in to refresh the strategy! Millions of dollars spent on slides and conversation - it is a strange business model.
But as I say I do believe that AI is already impacting the sector and will upend the consulting industry.
Incidentally, McKinsey rolled out AI trained on proprietary data to all staff, and has subsequently reduced headcount from 45,000 to 40,000 with increasing revenues! Capitalizing on profit and AI productivity gains - but it will not last!
Whoever thought the most intelligent person in the room was a policymaker got that wrong! In a match of intelligence I would put my money on the likes of John Lydon, aka Johnny Rotten of The Sex Pistols, any day of the week. There is no doubt government is desperately inefficient and needs a kick up the backside. As an example, the NHS spends close to $1 billion on consultants like McKinsey every year and yet the system gets more and more inefficient. Inevitably the money-go-round gets redistributed according to who is friends with whom.

But efficiency is not the be-all and end-all. Without compassion, without empathy, without wisdom, algorithms will bypass human need in favour of efficiency, and we already know that AI can be modeled for bias. Is more efficient bias better, and if so, for whom? There is a TV series called 24 Hours in A&E. What sets this series apart from other reality TV medical shows are the backstories of the patients, doctors and nurses. Healing is not just an efficient medical procedure. Government is not just the redistribution of money and services.

My concern with AI is the privatization of government by Big Tech. Data mining for power and control. It’s hard enough to argue with ‘authority’ when you have no power, and it will be even harder when that ‘authority’ is a machine. Before we let AI run amok we need to pause and redefine what a civilization is. A civilization that serves everyone. But we’ve always needed to do this throughout history, and yet we’ve never done so. AI will not do this. Technology has only ever replaced one toil with another, allowing for the redistribution of human power and wealth depending on who the architects of that technology are at any one moment in time. AI will be no different.
I agree with much of what you say, Gavin. But the report in this essay effectively shows a roadmap (with limitations, as I point out) for how best to get government-controlled AI, as illustrated in my last comment. Big tech provides the tool; in corporates and institutions they should have no control whatsoever over the data. This is shown vividly by Palantir: a great tech system that the government licenses, and then government employees, in government-controlled data centers, with no access by the big tech company, have improved oversight of the patterns in the data. I think people miss the point about who owns the data in corporates and institutions - the tech co's do not get access to this! Meta is different because we give their platforms our data... so they 'molest' and hook us as users. The privatization of government would be very severe mismanagement by parliament... as I say, AI from a big tech company should be isolated on government-owned and -operated servers so big tech have no access to it. It just seems insane it would be any other way.
History shows that foundational innovations invariably usher in new eras and reshape entire civilizations. In his book, The Coming Wave, Mustafa Suleyman argues that containing such technologies is nigh on impossible, even as he makes a compelling case for attempting to do so. To illustrate how innovations can lead to unforeseen consequences, he points out that "the creators of the internal combustion and jet engines... had no thought of melting the ice caps."
What the report I write about advocates is "a civilization that serves everyone". As Mustafa says, we will not pause AI, nor stop it, so we need to 'contain' and manage it. To do that we need government action; we need to understand the consequences, have agency, and wake up society through very detailed awareness and education campaigns.
In Poland, I have just finished training the trainers who will educate 11,000 teachers of 11-to-16-year-olds, backed by the government, on the responsible use of AI. We need conscious awareness of these programs and who owns them, and to start building an AI for the commons. For the benefit of society. Some countries are ahead of the curve.
"...the tech co's do not get access to this".
Or at least they didn't until Trumpkopf and MuskRat unleashed DOGE on the entirety of government - and the courts effectively gave them unlimited access to all the data, including what was traditionally protected data at the Social Security Administration.
The way you describe AI needing to be contained sounds more like a virus, Colin! As regards government data being separated from Big Tech we’ve already seen Musk running roughshod over the US government and we’ve no idea what access his own private companies have had to that data. That remains to be seen.
True Gavin. We MUST have guardrails around AGI; we do not fully know the consequences of technology, e.g. Nobel with dynamite, or any dual-use technology. So we need to align it with human 'values' as best we can, and control who can own and access the data... and what advanced AGI can be trained on. This is a powerful tool, like nuclear weapons, and bad actors will always find a way to abuse it - so we need to contain it.
It would be highly unusual if Musk were able to get access to citizens' data and migrate it to Grok. This just flies in the face of governance controls over citizens' data. But the US is certainly all over the place... as are China and communist regimes.
I agree, Colin. But, let’s strip this back even further and admit we still have a long way to go before we are able to properly define what ‘human values’ are. We still have so much bias in that term. It goes back to my point that we’ve yet to create a civilization that works for everyone. In short, we are getting ahead of ourselves. In my article AI - Mind If I Don’t? I said, “Throughout history, including the invention of the wheel, has ANY piece of technology EVER freed man (read human) from his bondage?” The answer is no!
Thank you Gavin. I wrote a while back about needing something like the Bill of Human Rights, one which aligns global 'values': a Bill of AI-Human Alignment... not easy, but we have managed before (although we fail with leaders and UN oversight). But it is a first step.
Humans must free themselves from the bondage; technology may have improved lifespans and disease outcomes... how people live their lives is the choice they make. I do believe we can free ourselves from bondage, live in accordance with higher powers and use technology to our advantage; unfortunately, for many reasons, people adopt other mind states. But that is a whole different discussion.
Overall, I am hopeful. Maybe, just maybe, human consciousness will rise and we will live our best humanity; but then my dystopian side sees that many will not choose that path and will live in the Orwellian world. We still have a choice, but seriously, not enough action and awareness is discussed by the general public to take advantage of directing AI for good... to help us thrive.
There are many problems in the world. AI is not the problem; human laziness, human addictions and human greed are part of the problem... and many more... this is the issue we should be solving. Banning addictive substances and so on... as in the post I wrote on Japan's reversal of obesity, similar guidelines may help us get AI for good and overcome our addictions to wars, gluttony, consumption, etc. - https://onepercentrule.substack.com/p/how-do-we-build-a-smarter-society?utm_source=publication-search
Would you include technology, social media, smartphones, AI and ChatGPT as addictive substances? We already know about fast food, processed food and cigarettes. What about money and power? Those two combined are more addictive than heroin and yet we encourage them and from those addictions we build and form our societies.