13 Comments

WinstonSmithLondonOceania:

There's no question about the enormous potential for technology to improve our lives. At the same time, however, it appears that the purveyors don't consider humanistic ideas like empathy and ethics to be sufficiently profitable.

Witness how the web was subverted by crass commercialism, collecting as much personal information about us as possible and creating psychological profiles of us all for "micro-targeted" marketing. The same information and profiles can be, and already have been, used for more nefarious purposes, such as identity theft and blackmail. Ah, progress.

AI can be immensely beneficial, but it can also put the abuses we've already experienced on steroids. By all means, let's embrace the reality that only change is constant - with cautious optimism. Very cautious.

The One Percent Rule:

You are right - cautious optimism. I agree the tension between technological potential and the pursuit of profit is not evenly balanced. Great point: the subversion of the web by crass commercialism, as you describe it, serves as a cautionary tale.

AI, with its immense power, can amplify existing abuses a gazillion-fold over what we have now. That's precisely why I believe we need to be extremely vigilant and proactive in shaping its development and deployment. We can't afford to be naive about the potential for misuse, and we have to raise that awareness - the psychological misuse alone can be devastating. Above all, we must maintain personal agency.

WinstonSmithLondonOceania:

The big question is: how do we get politicians to listen to us and not their biggest donors? They almost treat it like the Church of K Street.

Joshua Bond:

Very cautious indeed. History shows that new technologies (e.g. the steam engine) are used by the owners of capital for business profit first, and only very slowly is the fallout (e.g. dangerous working practices, long hours, factory-paced jobs, short-cycle soul-destroying jobs, hire and fire, etc.) gradually ameliorated by a catch-up of social concern - perhaps a general strike, the rise of unions, some new laws - to balance the one-sided use of the power of the new technology.

With AI the potential for 'one-sided use' of its power is massive (like your example of the 'new technology' of the internet). Remember how it was supposed to democratise the world through everyone having equal access to all knowledge? Clearly there's more to it than 'mere knowledge'. Many knowledgeable people end up in gulags.

There currently seems to be a huge push and momentum towards centralising power. AI will enable that on steroids. I can see the good AI can and could do (and what the internet can and does do - I learn all sorts from it for 'free', everything from woodwork to weaving). Though I've become something of a pessimist, having witnessed the trashing of the post-war social contract in my lifetime, which now seems to me deliberate rather than an act of governments being stupidly misinformed. He who pays the piper calls the tune.

The One Percent Rule:

Very true, Joshua. You are right to point out that new technologies are often initially used to consolidate power and maximize profit, with social concerns lagging far behind.

Technology alone is not a panacea. We need to address the underlying power structures that shape its development and deployment. I share your concern about the potential for AI to further centralize power. The 'one-sided use' of its power is a real and present danger. We can't rely on a gradual 'catch-up' of social concern; we need proactive measures to ensure that AI is used for the common good.

This requires systemic change, including robust regulations, strong labor protections, and a public that is both informed and engaged. As you rightly point out, 'he who pays the piper calls the tune,' and we need to ensure that the tune reflects the interests of all, not just a select few. Wishful thinking, I know, but we have to try.

WinstonSmithLondonOceania:

Yes, exactly!

Veronika Bond:

"Let us not strive to be merely “future-proof,” but “future-fluent,” embracing the reality that the only constant is change."

Beautiful!

I believe (based on my limited perception, of course) that the greatest challenge will be for humans to keep up with the wonders of technology. Can we grow at the speed of our technology? Or are we using the technology to compensate for our inadequacies?

I totally believe that, with parallel internal maturity, the potential you suggest for a human future with new technological opportunities is entirely possible.

If the expectation is that technology will let us 'off the hook' of doing the work we each have to do ourselves, I'm not so sure...

Gavin J. Chalcraft:

I wish I shared your optimism, Colin. While I've said many times that there is no doubt AI can and will do some good, I do not believe it will be used that way. We only need look at those creating these products for the mass market, those known as the Tech Bros, to know it will not. Their collective history tells us as much. Motivation and intention are everything, and I don't believe their motivations and intentions are being directed toward the common good with this technology.

And while the EU seems to be putting in place more effective guardrails, most governments do not share those intentions either. Indeed, the Biden Administration's EO on AI guardrails offered little more than giving non-regulatory agencies the power to give advice, with no power to regulate anything. We are entering a Cold War with China on AI development. It is a case of 'who dares wins', and guardrails will be a fly in the ointment in that race. Again, we only need look at the last Cold War we entered into with Russia to see the consequences of the nuclear arms race: warheads that will litter the planet for thousands of years to come, and we are not out of the woods yet as regards a nuclear war.

The problem we face is not the technology per se, but the fact that human consciousness, not human ingenuity, particularly amongst those who seek 'power', is not yet evolved enough to use it for good. They cannot and should not be trusted to use it for anything other than power, money and manipulation, and that includes both the corporations and the governments with whom they are aligned. Those who do good with this technology will remain in the minority - the equivalent of the organic farmer against the giant mono-agricultural corporations.

The One Percent Rule:

I know, Gavin, this is my conflict. I understand and share your concern that the race for AI dominance might overshadow ethical considerations.

While I agree that the geopolitical landscape presents significant challenges, I believe that even amidst competition, we can prioritize the development of AI that benefits humanity. The EU's efforts to establish guardrails are a positive step, and I hope that other nations will follow suit. I believe that fostering international dialogue and collaboration on AI ethics is essential to prevent a destructive arms race. That said, I was equally concerned to learn just now of Witkoff's point that the US and Russia could potentially work together to build AI - this is why I think we must have a treaty similar to the nuclear treaty, and I know this is being discussed.

I also believe that focusing on the potential for AI to address global challenges, such as climate change or healthcare, can create a shared sense of purpose that transcends geopolitical rivalries.

Amid all of the mess, maybe we can find a way to empower more 'organic farmers' to counterbalance the 'mono-agricultural corporations' - although I hear you, this is unlikely. But we have to find a way to wrest back control.

Gavin J. Chalcraft:

A global treaty is essential, but I doubt you will find much willingness from either Russia or the current US administration to join in, which will fragment any sense of shared purpose. I do think AI could benefit healthcare, but the question in my mind is: can AI be developed to diagnose and offer treatment plans which are NOT biased toward profit? The danger is that we give our power away to AI thinking it is unbiased, when it might well be biased and does not have our best interests 'in mind.'

The One Percent Rule:

Got to try - but it is the US and Chinese companies that control the technology now, and somehow we need to wake those buggers up in Congress!

Gavin J. Chalcraft:

You are absolutely correct. 100%. But Congress has hobbled its own power, handing it over to the executive branch, so try as one might, I think you will find it difficult to gain any purchase there.

Gavin J. Chalcraft:

An article on US health insurance companies using AI to determine who is covered or not.

https://www.newsweek.com/health-insurance-pay-out-ai-2046555
