Chess may be a good analogy for explaining the difference between a descriptive model of reality and a generative model of reality.

A descriptive model of chess describes the rules (e.g. the initial setup, the allowed moves, the point value of each piece, the objective). This tells you how a chessboard might evolve and also narrows down the combinations of realistic chess positions.
In contrast, a generative model is the actual gameplay between competing players who follow the rules. It generates new styles and patterns. It establishes new strategies and tactics. It generates new openings, middlegame plans, and endgame tactics.
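To make the contrast concrete, here is a minimal code sketch (not part of the original thread) that treats the rule set as the descriptive model and rule-constrained play as the generative model. It assumes the third-party python-chess package and uses random movers as stand-ins for real players.

```python
# Minimal sketch, assuming the third-party `python-chess` package is installed.
# Descriptive model: the rule set, which only says what is allowed.
# Generative model: rule-following agents playing games, from which
# openings, styles, and tactics emerge.

import random
import chess


def descriptive_model(board: chess.Board, move: chess.Move) -> bool:
    """Descriptive model: encodes the rules, i.e. whether a move is legal here."""
    return move in board.legal_moves


def generative_model(max_moves: int = 200) -> list[chess.Move]:
    """Generative model: play out one game within the rules.
    (Random movers stand in for real competitors in this sketch.)"""
    board = chess.Board()
    game: list[chess.Move] = []
    while not board.is_game_over() and len(game) < max_moves:
        move = random.choice(list(board.legal_moves))  # rule-constrained choice
        assert descriptive_model(board, move)          # the rules only constrain...
        board.push(move)                               # ...the play generates
        game.append(move)
    return game


if __name__ == "__main__":
    moves = generative_model()
    print(f"Generated a game of {len(moves)} moves that the rules never spell out.")
```

Nothing in descriptive_model "contains" the games that generative_model produces; the patterns we recognize as openings or tactics only appear once the play actually happens.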
A generative model is an evolutionary model that leads to emergent behavior. This is why a chess master can recall the layout of a position from a real game in an instant, yet struggles to remember pieces placed in random positions.
That is because causality is present in the movements of pieces on a chessboard. An intuitive system like the human mind is able to capture that causality as a means of understanding and recall.
Experience leads us to learn the underlying causality of a subject, more specifically, the relationships between its many moving pieces. It is through these relationships that we understand the world.
Expert musicians, choreographers, or painters are able to recognize much more in music, dance, or a scene because they've learned to see the relevant causality in their subject. Our minds are constructed to wire these rich networks of causality together.
The relationship between generative models and causality is a nuanced one. It is critical not to confuse causation with causality. Causation is a consequence of the deterministic computation that unfolds reality.
Causality is the set of connections we make to understand the world. Generative models are driven by computation and are thus the source of causation; causality, in contrast, is how we make sense of what that computation produces.
Going back to the chess analogy, we can think of the causation behind what makes a game of chess happen: the existence of the pieces and of the players. The pieces move because the atoms of the players' hands exert electromagnetic forces that push them forward.
We can think of causation from a physical perspective as well as a computational one. A game of chess can be played with physical pieces or in a virtual setting where information is exchanged. Describing the causation tells us nothing about the game of chess.
However, the recurring patterns and relationships that emerge in the game of chess do tell us about causality. Like the wetness of water, causality is an emergent phenomenon. It requires an observer to recognize an invariant, or, in the language of physicists, a symmetry.
A generative model is driven by causation; causality is the emergent behavior. But here's the rub: it is all self-referential in nature.
Chess pieces are moved by agents whose minds are contemplating causality. This is what others call top-down causation. How is it that the mindless computation of physics leads to the emergent behavior we see in a game of chess?
Generative models create minds that create descriptive models that constrain behavior. A game of chess follows rules because players have agreed to follow a descriptive model of the rules.
Let me know if this analogy clarifies the ideas of generative vs descriptive models as well as causation and causality. I'm now open to questions.
