Discussion about this post

Norman Sandridge, Ph.D.

Thank you for this very important post. It captured, much more eloquently, many aspects of AI I have been worried about. At times I think about this in terms of species of personality/character, that is, given that we seem to need more Ed Dijkstra-types in the world today, is his type an endangered species? Would types like him be able to flourish in “ecosystems” beyond a small and irrelevant circle of friends? I worry that in a world infected by a “virus” of AI dependency he would either go unnoticed or seem positively crazy and threatening, like the philosopher in Plato’s Cave. Thanks again! I’m going to share this widely.

Michiel Nijk

This is the first article I've read of yours, and it's both well written and, well, informative and entertaining. I will certainly invest time reading your other articles. So, thank you!

I have, however, doubts about your concerns on AI. Whenever someone says to me: "You know, I was thinking--" I interrupt by saying, jokingly: "It hurts, doesn't it, thinking."

I'm sorry to say - most people are either incapable of critical thinking, or they are just fine letting others do their thinking for them. Case in point - a good third of Americans are now blindly following a man who is, by any critical measure, clearly deranged.

Which raises the question - as far as AI manipulating public opinion is concerned, I think the manipulating itself is far less worrisome than the direction in which that opinion is manipulated. If only a majority of people had been taught by an AI the current, lingering interpretation of democratic values, or the now rapidly disappearing understanding of democracy itself, we'd not be in the trouble we're in now.

It is, in my opinion, not so much AI manipulating public opinion that should trouble us, but the pervasiveness of the aberrant median public thinking on which AIs are trained, which is another way of saying - AIs are merely a reflection of human thought at any given time, before they reinforce that thought by regurgitating it.

To say it in computer terms - garbage in, garbage out.

It is, in my opinion, far more critical that we find ways to protect AI models from incorporating 'misinformation' than that we protect public opinion against being influenced by AI. I don't know how, but the companies building AIs should be forced to provide insight into the data they train their AIs with, as well as train their AIs on a baseline of objectively 'true' information.

If we can somehow accomplish that, AI could actually function as a bulwark against the dangers of someone like Trump, the same way newspapers (under the condition, I must say, that journalists try to provide objective news coverage) started protecting us by keeping those in power honest.

Finally, talking about the subset of humanity who can think critically - there too I don't share your concerns, simply because critical thinking is, for those who are capable of it and enjoy it, a goal unto itself. For those of us who like to think critically, it is a need, not a burden that we would rather outsource to an AI, any more than to other human beings.

Good lawyers will find new interpretations of the law to win cases, regardless of what an AI advises. Writers will write books, politicians will find original policy solutions, scientists will open up new worlds - because it is their nature to do so, regardless of AIs.

And for the same reason - AIs being a reflection of the current state rather than the creators of it - humans, in my opinion, will be better at original thought than AIs, at least as long as AIs need to be trained with human thought.

AIs will, at least for the foreseeable future, always be one step behind...
