2 Comments
Jonathan Moss:

Awesome as always; here's what I got for your AI. It's going to be pretty hard to understand without my whole model, though.

Every example (above) is double-scope blending, found by pattern matching of features of an agglomerated concept (floating concepts; brain-distributed, a-temporal, a-spatial processing) by history, effect, and future interaction, then modelled as prediction (senses (confounds), in order to optimally influence future states, variable across the population). This is cross-scale-mode decryption across the cortex of unrelated concepts (i.e., not logical positivism; not what neuroscientists think). The processing is a-temporal and a-spatial because the placed object or tool does not interact with the other in the future, yet t+1 (any prediction) is an open prediction, and is now, and you predict anyway (ouch). Therefore, conceptual determinism.

If this is annoying let me know.

The One Percent Rule:

Thank you. What a thought-provoking idea you have, very intriguing. So, from what I understand, you’re describing a framework where double-scope blending isn’t bound by conventional spatial or temporal logic? Instead, the brain processes concepts in a distributed, a-temporal, a-spatial way, where pattern matching and predictions aren’t confined to linear cause-effect logic but rather emerge from cross-scale decryption across the cortex, which makes sense. And if I understand correctly, this resonates with the idea of ‘conceptual determinism,’ where even seemingly unrelated ideas converge to influence future states. Now that is deep and meaningful, and clearly a path to develop further. Not annoying at all!
