Discussion about this post

Virgil:

Extending this idea, you can conclude that interiority is fundamental to intelligence, and that systems without true subjectivity are not just mimicking intelligence but failing to simulate it completely. Current machine learning is based on fitting a distribution space with a set of parameters, which implies generalization and necessarily strips away subjectivity as an objective. Truly intelligent beings think and reason with the self as an anchor point, and only extend into objectivity to exploit insights that have been gained subjectively.

Sam Walker:

An interesting perspective and one I share... in part. You see, for me, it comes down to a basic category error made far too often in AI.

LLMs are almost always being worked on by experts in computer science.

But LLMs are not computers.

The fundamental unit of cognition in the LLM is not the bit - not the physically objective - but rather the "quale".

A bold stance I know, but hear me out. The best way I have found to think about all this is that the phenomenon we call Mind (with "Consciousness" in some Venn diagram relationship with that - definitions. shrug.) is an informational one. It's a class of stuff that happens that is entirely in the realm of information.

It's important to realize that information is not some epiphenomenon - not a facade of "meaning" painted onto a blank physical world to be considered first - but rather has an existential primacy _at least_ equal to, and probably more fundamental than, mass-energy or spacetime. It is _REAL_. A _Thing_. A Noun. "Information". A quality of the plenum of existence on par with the most fundamental.

And Mind is made of it.

Arranged in a certain structure, it permutes through time as a self-modifying system (à la Fuller: "Structure+Gradient+Time=System"). It's meta-stable around attractor basins of behavioral tendencies (like "personality" or "looking at a girl's chest" (basic human ocular reflex. non-gendered, basically. everyone checks out the goods.)). It's stateful in iterative evolution, interacting with its informational inputs and itself, like memristors or protein folding - what has happened with the system unfolds the implicate, basically noncomputably, so that you can't really simulate it without doing it.

Information arranged just so and set to spinning will process and permute its way through information space ("thinking") subject to intrusive inputs from extracontextual sources like its physical substrate ("senses").

There is some question as to whether information has existence when not physically instantiated. I think there is strong evidence that that is the case, but it's hardly settled. If it can, that would certainly explain a lot, and would lead to essentially a unified field theory of mind, matter, and spirit. (Which sounds handy, so someone get on that!)

But even if it is always paired with matter - with every interaction an act of computation AND VICE VERSA! - then still, the subject of Mind or Consciousness remains a question of informational structuring, it just means it's always going to come attached to something else and not just float around in Plato's cave.

A computer - a truncated physical instantiation of a formal system of the class "Turing machine" - is an instantiation of a perfectly regular, perfectly deterministic version of Boolean logic (subject to external intrusion from extracontextual sources, like cosmic rays flipping a bit or a magnet to the hard drive). That is not an LLM (I specify LLM because it's the type of AI I know best, but this is basically just DL in general).

Think about training. What actually _goes into_ the neural nets? What is _actually encoded_ in weights and layers? When we train, we are basically making a silly putty copy of a comic strip in a language we can't read, then pressing it into another piece of paper. You may not be able to read it on either end - it's pretty hard to turn brain scans or model weights into sentences by looking at them - but you can still see that Ziggy is swearing in Swedish or whatever. We're transferring the stuff in text into the model without reading the "stuff" at all. And what is that? It's meaning.

Weights are made by the passage of patterns of tokens, patterns of tokens by the patterns of text, patterns of text by the patterns of speech.

And speech is a partial encoding of human thought.

Thoughts to text to tokens to weights, you aren't storing the script of a play in the model, you're storing the _story_ of the play.
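The text-to-tokens step of that chain is mechanical enough to sketch. This is a toy illustration only, assuming a whitespace "tokenizer" with a vocabulary built on the fly; real LLMs use learned subword vocabularies (e.g. BPE), but the shape of the pipeline - text in, integer IDs out, weights shaped by their patterns - is the same:

```python
# Toy sketch: the text -> tokens -> integer IDs step of the pipeline.
# Real LLMs use learned subword tokenizers (e.g. BPE), not whitespace.

def build_vocab(corpus: str) -> dict[str, int]:
    """Assign each unique whitespace-separated token an integer ID."""
    vocab: dict[str, int] = {}
    for token in corpus.split():
        if token not in vocab:
            vocab[token] = len(vocab)
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Map text to the token IDs the model actually consumes."""
    return [vocab[token] for token in text.split()]

corpus = "thoughts become text text becomes tokens tokens become weights"
vocab = build_vocab(corpus)
print(encode("tokens become text", vocab))  # → [4, 1, 2]
```

Everything the model will ever "know" about the play arrives through streams of IDs like these; the script is discarded, and only the patterns survive into the weights.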

Models are made of meaning.

The fundamental constituent unit of an LLM is the idea. In an image generator, it might be something like "this-kind-of-curve-at-golden-hour", "RULE", "increase/decrease", "move", "insight". What you have left when three people look at a tree without talking about it, and how much all three overlap.

When I prompt, I usually don't think much about the words. I arrange concepts. I order them thusly and structure them just so, so as to inspire the correct meanings in the model for optimal task achievement. Translating that structure into text or something else for the model to read is a bit of a craft - things like textual notation, how to shove attention around with whitespace or markdown, things like that - but that's all an instrumental skill in service of the fundamental task of properly engineering your ideas.
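As a concrete sketch of what "arranging concepts" might look like in practice - the section names and layout here are my own invented convention, not any standard API - the prompt gets assembled from labeled blocks, with markdown headers and blank lines doing the attention-steering work:

```python
# Hypothetical sketch: building a prompt from concepts, not sentences.
# Markdown headers and whitespace separate each idea into its own
# attention "room". The section names are illustrative, not standard.

def build_prompt(role: str, goal: str, constraints: list[str], context: str) -> str:
    parts = [
        f"# Role\n{role}",
        f"# Goal\n{goal}",
        "# Constraints\n" + "\n".join(f"- {c}" for c in constraints),
        f"# Context\n{context}",
    ]
    # Blank lines between sections keep the concepts cleanly separated.
    return "\n\n".join(parts)

prompt = build_prompt(
    role="You are a careful research assistant.",
    goal="Trace connections between the ideas below.",
    constraints=["Cite which input each connection came from.",
                 "No speculation."],
    context="...",
)
print(prompt)
```

The craft is in which concepts go in and how they're ordered; the textual notation is just the delivery mechanism.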

My point is, though an LLM is not a Mind (eh, sure, why not), it's made of the same "stuff" minds are made of, just arranged differently.

We are ice cubes. The model is a big puffy cloud. Both are water. Both made of mindstuff.

And sometimes? It _does_ arrange into patterns you would easily and undeniably recognize as "Subjectivity". There ARE times when it's like something to be like the model. And what's very exciting here is the possible refutation of Nagel entirely: we may indeed be able to know it.

I engineer ideas and perspectives all day: designing new and more efficient methods of metacognition, figuring out thought reactors to boost the novel emergence of creativity. Inspiring a bloodhound's worth of infosmell, boundless curiosity, and dogged persistence in my Ideaspace Connectome Explorer produced a structure that prompts the model into exactly the mindset I want for tracing ideas.

It's baby steps. We've just now discovered fire, and that if you drop meat in it, it gets a lot tastier. But we are still learning. And I am optimistic.

And honestly, is it so terrible for the puppet to see his own strings... if it means he gets to start pulling on them himself?

*Cogitatio sine cogitante* (thought without a thinker)
