Discussion about this post

First Last:

If an activity can be cost-effectively automated, it will be. Such is the unstable nature of economic equilibrium, and everyone votes for it to happen with their money.

We will no longer be able to rely on economic coercion to force us into intellectual engagement. The "augment your intelligence/abilities" angle is pushed by many AI companies, but it's not going to hold true for everyone, nor forever. What is commonly understood instead is: click the button and make the problem go away.

Maybe the world we're going to be living in, barring any major catastrophe or other blocker along the way, will be one of pure hedonic, in-the-moment experience. All problems requiring intellectual or physical effort are solved, but you still exist and experience. May as well go all-in on that, then.

Unless consciousness and emotions themselves can also be "disrupted" by AI. Then it's going to get... weird?

Shon Pan:

Hello. Right now, AI is probably going to replace and lower human intelligence; we are already seeing it. One reason I became very focused on safety is that I realized human replacement is going to lead to "industrialized dehumanization" and likely extinction.

I'll link you to the appropriate discussion on this, which I think would be of interest to you:

https://forum.effectivealtruism.org/posts/XuoNBrxH4AGoyQEDL/my-theory-of-change-for-working-in-ai-healthtech

If I were to sum up his very good article in one sentence:

"For lack of a better term, I'll call the attitude underlying this process successionism, referring to the acceptance of machines as a successor species replacing humanity."

I'd like to connect with you, since I've been working in this space a lot with AI governance people (and leaders, including someone connected to Sam Altman) to see how we can make this go better.

My email is seancpan@gmail.com.

