What Is Computation?
We often describe computation as symbol manipulation, state transitions, or logic gates. These descriptions are technically correct but conceptually narrow. They explain the mechanics but not the deeper structure of what computation actually is.
If we look across programming languages, machine learning, human cognition, and distributed systems, a deeper pattern emerges:
Computation is the transformation of structure.
And there are three fundamental ways this transformation happens:
- Expansion: generating richer structures from simpler ones
- Compression: reducing complex structures into simpler invariants
- Morphism Construction: building bridges between structures
All computation is some combination of these three operations. This framework explains interpretation, semantics, learning, understanding, intelligence, and even Datom.world's architecture.
1. Expansion: Creating Higher-Dimensional Semantics
All computation begins with a low-dimensional substrate, such as:
- bytecode
- datoms [e a v t m]
- text
- DNA sequences
- audio waveforms
These are streams: flat sequences with no built-in semantics.
An interpreter transforms a stream into a multi-dimensional space of meaning.
Example: Bytecode → Execution Semantics
A linear instruction stream becomes:
- stack depth
- memory objects
- control-flow graph
- environments
- closures
- causal order
- side effects
None of these dimensions are in the stream itself. The interpreter creates them.
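This dimensional uplift can be sketched concretely. The toy stack machine below is purely illustrative (the opcodes push/add/store are hypothetical, not Yin.vm's actual instruction set): the input is a flat tuple stream, and the interpreter is what creates the stack, the environment, and the causal trace.

```python
# A toy stack machine: the flat instruction stream has no semantics of its
# own; the interpreter creates stack depth, bindings, and causal order.
def interpret(stream):
    stack, env, trace = [], {}, []          # dimensions the stream lacks
    for op, *args in stream:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "store":
            env[args[0]] = stack.pop()      # a memory/environment dimension
        trace.append((op, list(stack)))     # a causal-order dimension
    return env, trace

# A flat sequence of tuples...
program = [("push", 2), ("push", 3), ("add",), ("store", "x")]
env, trace = interpret(program)
# ...becomes structured state: env == {"x": 5}, plus a four-step trace.
```

Nothing in `program` mentions a stack or an environment; those dimensions exist only in the interpreter's semantic space.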
This dimensional uplift occurs everywhere:
- DNA folds into proteins
- Datalog rules expand into logical graphs
- Text expands into thought
- Datom streams expand into entity graphs
- π-calculus channels expand into topologies of processes
Expansion is computation as semantics creation. It turns inert symbols into structured meaning.
2. Compression: Learning Through Dimensional Reduction
If expansion adds structure, compression removes structure, but not arbitrarily.
Compression extracts invariants from high-dimensional data:
- models
- embeddings
- categories
- latent spaces
- patterns
- summaries
- rules
Machine learning is the most explicit form of this:
High-dimensional input space
→ compress →
Low-dimensional latent space

The system collapses unnecessary degrees of freedom while preserving the information that matters.
Compression is not the opposite of semantics; it is another kind of semantics. It reveals what the data is really about by eliminating what it does not need.
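As a minimal sketch of this idea (the arithmetic-sequence example is my own, not from the essay): a long, regular stream collapses to a handful of invariants, and nothing essential is lost because the stream can be regenerated from them.

```python
# A regular stream compresses to its invariants: the surface form has many
# degrees of freedom, but only (start, step, length) actually matter.
def compress(seq):
    start, step = seq[0], seq[1] - seq[0]
    # Compression is only valid if the regularity really holds.
    assert all(b - a == step for a, b in zip(seq, seq[1:]))
    return (start, step, len(seq))

def expand(start, step, n):
    return [start + step * i for i in range(n)]

data = list(range(10, 1000, 7))       # 142 numbers: the surface form
invariants = compress(data)           # (10, 7, 142): what the data is "about"
assert expand(*invariants) == data    # nothing essential was lost
```

The individual numbers were representational detail; the rule that generates them is the invariant structure.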
Compression as Symmetry Discovery
Compression is often explained as removing noise or reducing dimensionality, but that view is incomplete.
Interpretation expands a low-dimensional stream into a higher-dimensional semantic space, but much of that dimensionality is basis-dependent: specific to the interpreter's representational choices rather than intrinsic to the semantics.
Compression removes this basis-dependence. It takes the high-dimensional structure and strips away the representation-specific dimensions until only the basis-free invariants remain.
A system can compress only because the expanded space contains regularities that survive changes in representation. These symmetries are patterns that repeat, align, fold, or map onto themselves when viewed from a higher-order perspective.
What remains after compression is not a reduced version of the data, but the underlying invariants that made the data meaningful in the first place.
Learning, therefore, is not "throwing information away"; it is identifying the underlying symmetries that make the data intelligible. A dataset compresses well only when the higher-dimensional space contains such invariants: axes along which many points behave identically or predictably. The better the symmetry is understood, the more powerful the compression, and the deeper the generalization.
True Symmetry vs. Apparent Symmetry
When we analyze something through coordinates or basis vectors, it is easy to mistake an artifact of the coordinate system for a symmetry of the object itself. But not all symmetries are equal:
Apparent symmetry can be an artifact of the chosen basis and can disappear when the basis changes. Only true symmetry survives all basis transformations.
Apparent Symmetry (Basis-Dependent)
This is symmetry that only seems to exist because of the way you chose to slice the space.
Example:
A matrix representing a linear transformation might appear diagonal in one basis (revealing apparent symmetries in its structure), but become dense and asymmetric when you change to a different basis. The diagonal appearance was an artifact of the coordinate choice, not an intrinsic property of the operator.
The symmetry was not in the object. It was in the coordinate description.
Many patterns in machine learning, statistics, and neural embeddings fall into this category: the "symmetry" is really a coordinate alignment. This is why bad embeddings look symmetric until you rotate the latent space.
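The matrix example can be made concrete. This sketch (my own illustration, using a 2×2 case) conjugates a diagonal matrix by a change of basis: the diagonal form disappears, while the trace and determinant, which are basis-free invariants, do not move.

```python
# Apparent vs. true symmetry in a 2x2 linear operator: the diagonal form
# is a coordinate artifact; trace and determinant are basis-free.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 0], [0, 5]]                    # diagonal in this basis
P = [[1, 1], [0, 1]]                    # a change of basis
P_inv = [[1, -1], [0, 1]]
B = matmul(P_inv, matmul(A, P))         # same operator, new coordinates

assert B != A and B[0][1] != 0          # no longer diagonal
# ...but the basis-free invariants survive the basis change:
assert A[0][0] + A[1][1] == B[0][0] + B[1][1]            # trace
assert (A[0][0]*A[1][1] - A[0][1]*A[1][0]
        == B[0][0]*B[1][1] - B[0][1]*B[1][0])            # determinant
```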
True Symmetry (Basis-Free)
This is symmetry that survives every possible coordinate transformation: rotations, affine changes, nonlinear reparameterizations.
Example:
A sphere is symmetric under all rotations. No matter what basis you use, its underlying symmetry group (SO(3)) is unchanged. Basis-free symmetry is an intrinsic property of the object itself.
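The same contrast in one small sketch (a 2D rotation rather than SO(3), for brevity): every coordinate of a vector changes under rotation, but its length, an intrinsic basis-free property, is invariant.

```python
# Rotations change every coordinate of a vector, but its length is an
# intrinsic property that survives all of them.
import math

def rotate(v, theta):
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

v = (3.0, 4.0)
for theta in (0.3, 1.0, 2.5):
    w = rotate(v, theta)
    assert w != v                                        # coordinates change
    assert abs(math.hypot(*w) - math.hypot(*v)) < 1e-9   # length does not
```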
This is the symmetry that matters for:
- physics
- information theory
- category theory
- compression
- Datom.world semantics
- learning invariants
Because it reflects the structure itself, not the representation.
Compression Relies Only on Basis-Free Symmetry
Interpretation creates a high-dimensional space where some dimensions are essential to the semantics and others are specific to the representation choice. Compression identifies which is which.
How does compression know what to remove?
It removes anything that fails to survive basis changes.
In other words:
- dimensions tied to specific representational choices can be eliminated
- basis-free symmetries remain
- what survives is the invariant structure
This is why compression is a form of discovering invariants. It answers the question: "What remains true no matter how I choose to represent this thing?"
Only true symmetry survives. Compression reveals what is basis-free: the invariants, the equivalences, the true symmetries.
Thermodynamic Connection
In thermodynamic terms:
- entropy = representational cost
- learning = finding lower-entropy structure inside higher-entropy data
The more compressible something is, the more learnable it is.
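A hedged illustration of "entropy = representational cost" (the two strings below are my own examples): Shannon entropy measures the cost in bits per symbol, and a stream with more internal symmetry carries a lower cost, i.e. is more compressible.

```python
# Shannon entropy as representational cost per symbol: a regular,
# symmetric stream is cheaper to represent than a varied one.
import math
from collections import Counter

def entropy_bits(s):
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

regular = "abababababababab"    # two symbols, strong symmetry: 1 bit/symbol
mixed   = "abcdefghabcdefgh"    # eight symbols: 3 bits/symbol
assert entropy_bits(regular) < entropy_bits(mixed)
```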
3. Morphism Construction: The Heart of Understanding
Expansion and compression describe how systems generate and simplify structure.
But understanding is something else entirely. Understanding requires seeing how different structures relate, which is precisely what morphisms capture.
Understanding is the ability to build morphisms (structure-preserving mappings) between different spaces.
A morphism is:
A → B
A consistent way one structure relates to another.
This is where true semantics arises.
Examples of Morphisms
- A metaphor is a morphism between conceptual spaces.
- A scientific theory maps observations → models.
- A compiler maps syntax trees → machine semantics → bytecode.
- A machine learning model maps inputs → latent representations.
- A mathematical proof maps assumptions → conclusions.
- A human insight connects two structures that were previously separate.
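A minimal sketch of a structure-preserving map (this example is mine, not the essay's): `len` is a morphism from the monoid of strings under concatenation to the monoid of integers under addition, because it preserves how elements compose.

```python
# len is a monoid morphism: len(a + b) == len(a) + len(b).
# It forgets which characters occur (representation detail) while
# preserving how structures compose (the invariant).
def check_morphism(f, combine_a, combine_b, xs):
    return all(f(combine_a(x, y)) == combine_b(f(x), f(y))
               for x in xs for y in xs)

words = ["", "dao", "datom", "stream"]
assert check_morphism(len, lambda a, b: a + b, lambda a, b: a + b, words)
```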
Expansion and compression are themselves morphisms: expansion is a morphism from low-dimensional stream-space to high-dimensional semantic-space, and compression is a morphism from high-dimensional structure to low-dimensional invariants. But morphism construction is more general:
- It can connect different dimensionalities.
- It can unify different representations.
- It can explain relationships without changing data.
- It can reveal equivalences and symmetries.
Understanding is not dimensional motion; it is structural alignment.
Understanding is discovering the bridges that make multiple structures mutually intelligible.
Morphisms Require Basis-Free Symmetry
A morphism only exists between two structures if their basis-free invariants align.
Two structures may look similar in one basis (superficial symmetry) but become totally different when the basis shifts. This means there is no true morphism, only a coordinate coincidence.
Understanding, in this sense, requires identifying structural symmetry that is independent of representation.
Morphisms Define Equivalence Classes
Morphisms are not just bridges between structures. They can be used to define equivalence classes, which further enable compression.
When isomorphisms (bidirectional structure-preserving maps) exist between two structures A and B, we can map A → B and B → A while preserving structure. This means A and B are equivalent: they represent the same underlying pattern in different forms.
This creates equivalence classes of structures that:
- Share the same basis-free invariants
- Can be transformed into each other via structure-preserving maps
- Represent the same essential information in different forms
Compression exploits these equivalence classes: instead of storing every member of a class separately, we store one canonical representative and the morphisms needed to recover the others.
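The canonical-representative idea can be sketched with the α-equivalence example from the list below (the term encoding here is my own): converting lambda terms to De Bruijn indices erases variable names, so every member of an α-equivalence class maps to the same canonical form.

```python
# De Bruijn conversion: variable names (representation detail) become
# binding depths, so alpha-equivalent terms share one canonical form.
def canonical(term, env=()):
    kind = term[0]
    if kind == "var":
        return ("var", env.index(term[1]))      # name -> binding depth
    if kind == "lam":
        return ("lam", canonical(term[2], (term[1],) + env))
    if kind == "app":
        return ("app", canonical(term[1], env), canonical(term[2], env))

# \x. \y. x  and  \a. \b. a  differ only in variable names...
t1 = ("lam", "x", ("lam", "y", ("var", "x")))
t2 = ("lam", "a", ("lam", "b", ("var", "a")))
assert canonical(t1) == canonical(t2)           # ...same canonical member
```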
Examples:
- In category theory, isomorphic objects form equivalence classes; understanding one is understanding all.
- In machine learning, different embeddings of the same data form an equivalence class under rotation/scaling.
- In programming, α-equivalent expressions (differing only in variable names) represent the same computation.
- In physics, gauge symmetries define equivalence classes of field configurations. Different gauge choices in electromagnetism or quantum field theory are different mathematical descriptions of the same physical state. Gauge transformations are morphisms that reveal which configurations are equivalent: they represent re-labeling due to limitations of the observer, not true physical transformations. Physical observables are gauge-invariant because they must be the same for all members of an equivalence class.
This is why morphism construction is fundamental to both understanding and compression: morphisms reveal which structures are truly different and which are merely different representations of the same underlying form.
That's why interpretation can create representation-specific dimensions, compression removes them, and morphisms identify what's actually there: the equivalence classes of basis-free structure that survive all transformations.
The Unified View: Computation as Structural Transformation
We now have a general framework:
Expansion (create richer structure)
Low-D ───────────────────────> High-D
Compression (extract invariants, reduce dimension)
High-D ───────────────────────> Low-D
Morphism Construction
(structure-preserving bridges between spaces)
Space A ←──────────────────────> Space B

These three operations are the fundamental moves of computation.
Everything from CPUs to brains to distributed systems performs these moves.
How This Framework Illuminates Datom.world
Datom.world is built on the idea that:
- streams are the minimal substrate
- semantics live in interpreters
- structure emerges from interpretation
- learning and compression happen on top
- agents build morphic bridges across systems
This aligns perfectly with the three-part computation model.
Expansion in Datom.world
- DaoDB expands datoms into entity graphs
- DaoFlow expands datoms into UI trees
- Yin.vm expands datoms into execution semantics
- Entangled nodes expand streams into distributed timelines
Compression in Datom.world
- Datalog queries reduce complex entity graphs
- Agent learning compresses observations
- Embeddings compress multimodal datom streams
- Optimization reduces AST or datom graphs
Morphism Construction in Datom.world
- Interpreters create morphisms from datoms → meaning
- Agents create morphisms across streams
- Continuations create morphisms between computation states
- Entanglement creates morphisms across nodes
- UI renderers map semantic graphs → visual surfaces
- Compiler passes map code → bytecode → datoms
The entire system becomes a fabric of morphisms over a universal stream substrate.
Datom.world is not just a database, VM, or operating system. It is a space where expansion, compression, and morphic alignment coexist.
Conclusion
Computation is not just:
- symbol manipulation
- logic circuits
- bytecode execution
- neural networks
Those are implementations, not definitions.
A deeper view is:
Computation is the transformation of structure through expansion, compression, and morphic alignment.
- Expansion creates new semantic dimensions.
- Compression identifies invariant structure.
- Morphisms connect and unify structures.
This captures semantics, learning, understanding, interpretation, intelligence, distributed coordination, and the design of Datom.world itself.
Everything else (languages, OSes, databases, VMs) is built on top of these three primitive operations.
Learn more:
- Dimensional Gradients, Recursive Interpreters, and the Emergence of Work
- Large Cardinals, Reflection Principles, and the π-Calculus Bridge
- Semantics, Structure, and Interpretation
- Datom.World and the Collapse of the Wave Function
- DaoDB - The dimensional substrate
- DaoStream - Stream-based architecture
- Yin - The interpreter engine