9 Comments

If an activity can be cost-effectively automated, it will be. Such is the unstable nature of economic equilibrium, and everyone is voting for it to happen with their money.

We will no longer be able to rely on economic coercion to force us into intellectual engagement. The 'augment your intelligence/abilities' angle is pushed by many AI companies, but it's not gonna hold true for everyone, nor forever. Instead, what is commonly understood is: click the button and make the problem go away.

Maybe the world we're gonna be living in, barring any major catastrophe or other blocker along the way, will be one of pure hedonic in-the-moment experience. All problems requiring intellectual or physical effort are solved, but you still exist and experience. May as well go all-in on that then.

Unless consciousness and emotions themselves can also be "disrupted" by AI. Then it's gonna get... weird?

Sadly, you are absolutely right. Automation will be disruptive. I have a dystopian view that many (the majority) will be glad to attend gladiator-style fights, and maybe 10% will gather for intellectual conversations. I agree with Sean that consciousness and emotions are already disrupted.

Consciousness and emotions are already disrupted by AI, the most obvious being social media and filters. More complex AI may make this worse.

https://futurism.com/sophisticated-ai-likely-lie

The "disrupt consciousness" bit I meant as a post-singularity fantasy.

As in: perhaps there are even higher levels of consciousness that are accessible only via human augmentation, like natively thinking in >3 dimensions, implanted understanding, or some entirely other type of consciousness. And maybe there are new emotions we cannot feel, just as we cannot see infrared or ultraviolet.

Psychonauts commonly describe experiences for which there exist no words. Might hint at something more.

Might hint that you can destroy the sense of self, yes, and wildly mind-control someone, yes.

Hello. Right now, AI is probably going to replace and lower human intelligence; we are already seeing it. One reason I became very focused on safety was that I realized human replacement is going to lead to "industrialized dehumanization" and likely extinction.

I'll link you to the appropriate discussion on this, which I think would be of interest to you:

https://forum.effectivealtruism.org/posts/XuoNBrxH4AGoyQEDL/my-theory-of-change-for-working-in-ai-healthtech

If I were to sum up his very good article in one sentence:

"For lack of a better term, I'll call the attitude underlying this process successionism, referring to the acceptance of machines as a successor species replacing humanity."

I'd like to connect with you, since I've been working in this space a lot with AI governance people (and leaders, including someone connected to Sam Altman) to see how we can make this go better.

My email is seancpan@gmail.com.

Thank you, Sean. I wholeheartedly agree with you about "industrialized dehumanization". I will read the Effective Altruism post carefully. I just sent you an email and look forward to discussing further. This is a crucial topic. Colin

Thank you so very much. I am deeply honored to communicate with you and exhibit our agency to build a better future.

Intellectual effort and development is already a choice, to a certain extent, and the rise of AI will make that choice an ever-shrinking niche. The human effort required to learn and think will demand a religious devotion to those capabilities.

This sounds quite a bit like the monastic preservation of knowledge in the Dark Ages, but Western Europe eventually benefited from the preservation of learning in Islamic culture. With AI’s global deployment, where will learning continue to flourish?
