Structure vs Interpretation: Why Schemas/Ontologies Are Secret Interpreters

I met Pete Chapman, CEO of FirstCognition, at Thoughtworks XConf Vietnam 2025. What struck me was how aligned our visions are. Both FirstCognition and Datom.world are trying to solve the same fundamental problem: make data programmable at a semantic level, not trapped in fragmented application silos.

But we diverge on a philosophical question: where does meaning live? Does it reside in the structure of data itself, or does it emerge only through interpretation?

FirstCognition believes that with the right schemas and ontologies, data can carry its own meaning. Datom.world believes data is syntax (always) and meaning emerges only when an interpreter observes it.

Consider what happens when you use a type system or ontology. The type checker evaluates logical constraints. The reasoning engine performs inference over RDF triples. Even when a human reads a schema and understands what the fields mean, that understanding is interpretation. The interpreter might be software (type checker, reasoner) or wetware (human mind), but it's always there, always required, always performing the work that transforms syntax into semantics.

This isn't academic philosophy. It determines whether your system can handle semantic evolution, multiple interpretations, distributed agents, and AI collaboration. Get it wrong, and you've built a beautiful prison. Get it right, and you've built a living system.

Consider a simple example: a user profile with a schema {name: String, age: Int}. The schema tells you the shape, but what does age actually mean? Years since birth? Months? Age at signup, or current age calculated dynamically? Age in human years, or dog years if this is a pet app? The schema cannot tell you. Only an interpreter—your application code, a validator, or even your brain as you read the schema—imposes that meaning. The structure is just bytes arranged in a pattern. Semantics require interpretation.
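
To make the ambiguity concrete, here is a minimal sketch of two interpreters imposing different meanings on the same record. The names and functions are hypothetical, written only to illustrate the point:

```typescript
// The "structure": the same shape every consumer sees.
type UserProfile = { name: string; age: number };

const record: UserProfile = { name: "Mai", age: 7 };

// Interpreter A: a billing service reads `age` as years since birth.
const isAdult = (p: UserProfile): boolean => p.age >= 18;

// Interpreter B: a pet-care app reads the same field as dog years.
const inHumanYears = (p: UserProfile): number => p.age * 7;

// Same structure, two incompatible meanings. Nothing in the schema
// adjudicates between them; only the interpreting code does.
console.log(isAdult(record));      // false, under interpretation A
console.log(inHumanYears(record)); // 49, under interpretation B
```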

Structure as Meaning (The Schema Approach)

The first approach assumes meaning lives in the structure of data itself. If you structure data correctly, the system can compute on it without special semantics. This is inspired by type theory, structuralism, and knowledge graphs.

The promise is elegant: a universal structural layer gives predictability, low cognitive load, and shared ontology. Users can model their domain, and the structure captures the semantics.

But here's the problem: structure is just syntax. A structure (JSON, EDN, a typed schema, a graph) is just shape. Shape doesn't equal meaning.

A structure can encode a user profile, a molecule, political alliances, or a Clojure AST. But the structure itself doesn't know what it's encoding. Without something that interprets the shape, the shape is meaningless.

Interpretation as Meaning (The Agent Approach)

Datom.world takes a different approach: semantics are external. Meaning emerges from interpreters: explicit, dynamic, migratable agents. Everything is a stream; everything is a continuation. Data is just syntax until an interpreter observes a local slice and imposes meaning.

This handles ambiguity, evolution, and multiplicity of meaning naturally. Multiple interpreters can observe the same data stream and extract different semantics, which is crucial in multi-agent systems and AI environments where no single interpretation suffices.
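
As an illustration only (a sketch using made-up field and attribute names, not the actual Datom.world API), here is a datom-like tuple and two interpreters extracting different semantics from the same stream:

```typescript
// A datom-like tuple: entity, attribute, value, transaction, metadata,
// mirroring the [e a v t m] shape discussed later. Names are illustrative.
type Datom = { e: string; a: string; v: unknown; t: number; m?: Record<string, unknown> };

const stream: Datom[] = [
  { e: "cust-1", a: "order/total", v: 120, t: 1 },
  { e: "cust-1", a: "ticket/opened", v: true, t: 2 },
  { e: "cust-1", a: "order/total", v: 80, t: 3 },
];

// Interpreter 1: a finance agent reads the stream as revenue.
const revenue = stream
  .filter(d => d.a === "order/total")
  .reduce((sum, d) => sum + (d.v as number), 0); // 200

// Interpreter 2: a support agent reads the same stream as open tickets.
const openTickets = stream.filter(d => d.a === "ticket/opened").length; // 1

// The stream itself carries no opinion about which reading is "the" meaning.
```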

Type Systems Are Logical Systems That Require Interpreters

The schema approach often points to type systems as proof that structure can carry semantics. But type systems are formal logical systems, and logical systems require interpreters.

Through the Curry–Howard correspondence, we know that types correspond to logical propositions, and type checking corresponds to proof verification. A type system expresses constraints as logical predicates:

  • isInteger(x) (a logical constraint about x)
  • hasProperty(obj, "email") (a constraint about object structure)
  • satisfiesContract(fn, A → B) (a constraint about function behavior)

These constraints don't evaluate themselves. They require an interpreter (the type checker). The type checker interprets both the logical constraints and the program terms to produce semantic judgments:

  1. Reads the type annotations (logical propositions)
  2. Reads the program terms (expressions, values, structures)
  3. Evaluates whether the terms satisfy the constraints
  4. Returns a judgment (well-typed or type error)

This is interpretation. The type checker interprets a formal logical system to determine whether programs are valid.
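
A toy checker makes those four steps visible. This is a deliberately tiny sketch of the idea, not any production type checker:

```typescript
// Steps 1-2: the annotations (propositions) and the terms are both just data.
type Ty = "Int" | "String";
type Term = { value: unknown; annotation: Ty };

// Step 3: the interpreter evaluates whether each term satisfies its constraint.
const satisfies = (t: Term): boolean =>
  t.annotation === "Int" ? Number.isInteger(t.value) : typeof t.value === "string";

// Step 4: the judgment.
const check = (terms: Term[]): "well-typed" | "type error" =>
  terms.every(satisfies) ? "well-typed" : "type error";

check([{ value: 42, annotation: "Int" }, { value: "hi", annotation: "String" }]); // "well-typed"
check([{ value: "42", annotation: "Int" }]);                                      // "type error"
```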

Different type systems use different logical foundations:

  • ML-family → Hindley–Milner → second-order logic
  • Haskell → System Fω → higher-order logic with constraints
  • Agda/Coq → dependent types → constructive type theory
  • Rust → borrow checker → affine types and region calculus

The more expressive your type system, the more complex your interpreter becomes. Dependent types require a full theorem prover. Linear types require tracking resource usage. You haven't eliminated interpretation. You've moved it into an increasingly sophisticated logical evaluator.

Ontologies follow the same pattern. OWL predicates like Person subClassOf Animal or hasParent domain Person require an inference engine (reasoner) to evaluate them. The reasoner interprets these logical constraints to derive semantic conclusions. Same mechanism, different syntax.
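
The same shape shows up in a reasoner. Below is a minimal forward-chaining sketch (plain triples and an invented rule loop, not OWL syntax or a real reasoner) that derives new facts from subClassOf assertions; the facts stay inert until this interpreter runs:

```typescript
// Facts are just triples; nothing in them "knows" what subClassOf means.
type Triple = [string, string, string];

const facts: Triple[] = [
  ["Person", "subClassOf", "Animal"],
  ["Employee", "subClassOf", "Person"],
  ["alice", "isA", "Employee"],
];

// The reasoner is the interpreter: it enforces the rule
//   x isA C, C subClassOf D  =>  x isA D
// by applying it repeatedly until no new facts appear.
const reason = (kb: Triple[]): Triple[] => {
  const out = [...kb];
  const seen = new Set(out.map(t => t.join("|")));
  let grew = true;
  while (grew) {
    grew = false;
    for (const [x, p, c] of out) {
      if (p !== "isA") continue;
      for (const [c2, p2, d] of out) {
        if (p2 !== "subClassOf" || c2 !== c) continue;
        const key = [x, "isA", d].join("|");
        if (!seen.has(key)) { seen.add(key); out.push([x, "isA", d]); grew = true; }
      }
    }
  }
  return out;
};

reason(facts); // now also contains ["alice", "isA", "Person"] and ["alice", "isA", "Animal"]
```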

There is no escape. Any time you say "we just need a shared schema/ontology," you've secretly signed up to write a logical interpreter (even if you don't call it that). The interpreter might run in a type checker, a reasoner, a validator, or in the mind of a human reader. But it's always there, performing the act of interpretation that creates meaning.

The DNA-Ribosome Bootstrap

If interpreters are always required, how do they originate? This seems circular: you need an interpreter to create meaning, but interpreters themselves are structures that need interpretation. How does the first interpreter come to exist? Biology shows us the answer.

Consider a biological analogy. DNA is structure (syntax). Ribosomes are interpreters (semantics). Yet ribosomes are built from DNA, and DNA is useless without ribosomes.

This creates a paradox if you assume structure and interpreter must originate together. But they don't. They evolved independently (just chemicals with shape, just catalytic surfaces) until one accidentally amplified the other.

Structure and interpreter evolved independently; by coincidence, an interpreter turned out to be able to use an existing structure to encode itself.

Once that feedback loop formed, semantics became real. Codons "mean" amino acids. Sequences "mean" proteins. But that meaning is not intrinsic to the molecules. It arises from the mapping enforced by the interpreter.

Nothing chemical about AUG means "start codon." It means "start" only because ribosomes interpret it that way. Meaning = stable correlation that persists through time.
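
Restating the analogy in code terms (purely illustrative, not a biological model): the codon table is a property of the interpreter, not of the sequence, so a different interpreter would assign different meanings to the same symbols.

```typescript
// The "DNA" side: a sequence is just symbols, with no intrinsic meaning.
const sequence = ["AUG", "GCU", "UAA"];

// The "ribosome" side: the mapping lives entirely in the interpreter.
const codonTable: Record<string, string> = {
  AUG: "start/Met",
  GCU: "Ala",
  UAA: "stop",
};

const translate = (codons: string[]): string[] =>
  codons.map(c => codonTable[c] ?? "unknown");

translate(sequence); // ["start/Met", "Ala", "stop"]
```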

Correlation, Causation, and Exploitation

This connects to a deeper principle: persistent correlation becomes a causal signal when a system can exploit it.

Before the genetic code existed, patterns in RNA correlated with certain folds. Some peptides correlated with stabilizing strands. None of this was causal, just regularity.

Then a molecule arose capable of amplifying these regularities. The correlation loop closed. The system began using sequences as instructions. Suddenly, meaningless correlation became:

  • "This codon causes this amino acid to appear"
  • "This sequence causes this protein to be built"
  • "This mutation causes this phenotype difference"

Causation emerged from correlation. The same pattern appears everywhere:

  • Economies: Prices correlate with supply/demand → markets exploit them → they become causal signals
  • Neural networks: Patterns correlate in data → networks learn to exploit them → correlations become operative
  • Distributed systems: Nodes co-observe events → correlation is exploitable → becomes causal ordering
  • Datom.world agents: Interpreters respond to stream patterns → patterns become causal triggers

When Schemas Fail

Before examining what Datom.world does differently, let's look at concrete scenarios where the structure-as-semantics model breaks down:

Semantic Evolution

Schemas ossify. Once you define a structure, changing it requires migration. Every consumer must update. In large distributed systems, this is intractable.

With external interpreters, semantics can evolve independently. Old agents keep running old interpretations. New agents adopt new ones. The same stream supports both during transition.
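
A sketch of what that transition looks like (hypothetical attribute names, not a real Datom.world schema): two interpretations of the same concept coexist on one stream, and nothing has to be migrated.

```typescript
type Datom = { e: string; a: string; v: unknown; t: number };

const stream: Datom[] = [
  { e: "u1", a: "user/age", v: 30, t: 1 },          // older datoms: "age" meant age at signup
  { e: "u1", a: "user/birth-year", v: 1995, t: 2 }, // newer datoms carry the evolved semantics
];

// Old agent: keeps its original interpretation; no migration required.
const ageAtSignup = (s: Datom[]) =>
  s.filter(d => d.a === "user/age").map(d => d.v as number);

// New agent: adopts the new interpretation, deriving current age dynamically.
const currentAge = (s: Datom[], nowYear: number) =>
  s.filter(d => d.a === "user/birth-year").map(d => nowYear - (d.v as number));

ageAtSignup(stream);      // [30]
currentAge(stream, 2025); // [30]
```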

Multiple Perspectives

A sales team views customer data as leads and conversions. A support team views the same data as tickets and satisfaction scores. A finance team views it as revenue and churn.

One schema cannot capture all perspectives. Forcing a unified ontology means either:

  • Bloating the schema with every perspective (becomes unmaintainable)
  • Privileging one perspective and marginalizing others (political problem)

External interpreters solve this. Each team runs its own agent that observes the stream and extracts its needed semantics. The stream remains minimal; interpretation is pluralistic.

AI Agent Collaboration

Future systems will have dozens of AI agents collaborating. Each agent has different goals, different training, different context. They cannot share a single schema.

They need a shared substrate (the stream) with heterogeneous semantics (each agent's interpreter). This is what Datom.world enables.

What Datom.world Does Better

Semantic Benefits

Dynamic Semantics

Instead of freezing semantics into schemas, Datom.world makes semantics a runtime phenomenon. Interpreters are mobile continuations that can evolve, migrate, and observe streams differently. This supports semantic drift, multiple interpretations, and agent autonomy.

Schemas cannot capture everything. Real systems evolve. Semantics drift. AI agents need dynamic interpretation. Runtime semantics isn't a bug. It's reality.

Multi-Agent Ecosystem

Because semantics is external (Axiom 5), multiple agents can observe the same stream and extract different meanings. Essential when AI agents, humans, and systems collaborate. No single interpretation needs to dominate.

Architectural Benefits

Distributed Computation

Built on π-calculus + continuations + streams. Dynamic topology. Programs are agents that move, observe, emit datoms. Distributed computation is native. Every process can append without coordination (Axiom 3), enabling entangled nodes, migratable agents, and interpretation-as-flow.

This is a true concurrent model where semantics can run at multiple boundaries: kernel, network, WASM, yin.vm, or app layer. No static system can match this dynamism.
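
One way to picture "programs are agents that move, observe, emit datoms" (a conceptual sketch under my own naming, not the kernel, WASM, or yin.vm API): an agent is a step function whose state is plain data, so pausing it, shipping the state to another node, and resuming there is just data transfer.

```typescript
type Datom = { e: string; a: string; v: unknown; t: number };

// An agent is a pure step: (state, observed datom) -> (new state, datoms to emit).
// Because the state is plain data, it can be serialized and resumed elsewhere.
type Agent<S> = { state: S; step: (state: S, d: Datom) => { state: S; emit: Datom[] } };

const counter: Agent<number> = {
  state: 0,
  step: (n, d) =>
    d.a === "sensor/reading"
      ? { state: n + 1, emit: [{ e: d.e, a: "agent/count", v: n + 1, t: d.t }] }
      : { state: n, emit: [] },
};

// Run the agent over a local slice of the stream; emitted datoms are appended back.
const observe = <S>(agent: Agent<S>, slice: Datom[]) => {
  let state = agent.state;
  const emitted: Datom[] = [];
  for (const d of slice) {
    const r = agent.step(state, d);
    state = r.state;
    emitted.push(...r.emit);
  }
  return { agent: { ...agent, state }, emitted };
};

observe(counter, [
  { e: "room-1", a: "sensor/reading", v: 21.5, t: 1 },
  { e: "room-1", a: "sensor/reading", v: 22.0, t: 2 },
]); // state becomes 2, two agent/count datoms emitted
```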

Local-First Architecture

Works offline. Survives partitioning. Operates across heterogeneous networks (BLE, ad-hoc mesh). No global schema, no global clock, no global truth. Local-first with eventual coherence via entangled transactors.

This aligns with the Web's future: local-first + AI agent autonomy + P2P.

Theoretical Foundation

The model goes deeper than a schema layer; it unifies:

  • AI agents
  • Distributed operating systems
  • Data interoperability
  • App runtimes
  • Semantic layers
  • WASM portability
  • Process mobility

It's closer to how biological, economic, and quantum systems actually work. The wavefunction has no intrinsic meaning. Meaning arises when an observer interacts with it. A datom [e a v t m] has no semantics until an agent observes it and imposes meaning.

Convergence: What Both Approaches Can Learn

The schema approach isn't wrong—it's incomplete. Rather than viewing these as competing philosophies, we can see them as complementary strategies that converge on similar goals from different starting points. Here's where Datom.world can borrow the best aspects of structural approaches without compromising its foundation:

Clear Onboarding

Structure helps. Humans like structure. FirstCognition focuses on making data editing comfortable, with a beautiful, rigid UI for collaborative data modeling. Easy mental model: "it's like Notion, but typed and structured."

Datom.world's "app as interpreter of stream" model is more powerful, but less immediately obvious. Investment in polished templates, clean onboarding, and examples for common semantic patterns (CRM, inventory, logs, tasks) would lower the entry barrier.

Visual Metaphors

Dynamic streams + continuations can feel abstract. Visual representations that help non-engineers create structured streams or template datoms would make the system more approachable.

Optional Light Ontologies

Structure helps federation. Datom.world can offer optional schema suggestions, optional "light ontologies," optional typed helpers. Not required, just helpful. They provide consistency without forcing semantics.

This doesn't compromise the model. Schemas and types are still interpreters under the hood. But packaging them as "helpful constraints" rather than "required structure" maintains the flexibility while reducing foot-guns.

Packaging Deep Theory

FirstCognition packages ideas as a collaborative tool rather than as a grand unifying theory. Datom.world can create accessible entry points, small demos, simple "recipes," quick wins.

The depth is a strength, but it needs clear packaging. Users don't need to understand π-calculus to benefit from mobile continuations. They don't need to grasp quantum mechanics to appreciate that meaning emerges from observation.

Scaling and Interoperability

Schema approaches typically assume cloud-based centralization, where a shared structural ontology is enforced. This works in controlled environments but breaks in heterogeneous, distributed, or adversarial contexts.

Datom.world is distributed and entangled by design. Works offline. Survives partitioning. Works across heterogeneous networks. No reliance on centralized coordination.

Optional structure can help. Not as enforcement, but as convention. Datom.world can offer optional schema suggestions that improve federation without requiring global agreement.

The Philosophical Depth

The schema approach says: data as structure → structure as meaning. Strong on type theory. Weak on dynamic semantics.

Datom.world says: meaning emerges from interaction. Continuations as ontology. Streams as reality substrate. Borrowing from π-calculus, quantum mechanics, stigmergy, and Taoist metaphors.

This isn't metaphor. It's structural correspondence. The way meaning emerges in quantum mechanics (observer collapses wavefunction) is the same way meaning emerges in Datom.world (interpreter observes stream). The way biological systems bootstrap (DNA-ribosome loop) is the same way semantic systems bootstrap (structure-interpreter loop).

The model is universal. It applies to:

  • Software (data + interpreter)
  • Biology (DNA + ribosome)
  • Language (word + speaker)
  • Physics (state + observer)
  • Economics (currency + participants)

Everywhere, causation is stable correlation that persists through time, and meaning is interpretation of that persistent correlation.

Practical Implications

This isn't just theory. The choice between structure-as-meaning and interpretation-as-meaning has concrete consequences:

For Data Migration

Schema approach: migration is painful. Change the schema, update all consumers, coordinate deployment.

Datom.world: no migration. Deploy new interpreters. Old ones keep running. Transition is gradual.

For API Versioning

Schema approach: versioned APIs (v1, v2, v3). Each version is a maintenance burden. Deprecation is fraught.

Datom.world: append-only streams. Interpreters decide which datoms to observe. Old agents ignore new attributes. New agents process everything. No API versioning needed.
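
A sketch of that difference (illustrative attribute names): the old agent's interpreter simply never matches the new attribute, so nothing breaks when it appears.

```typescript
type Datom = { e: string; a: string; v: unknown; t: number };

const stream: Datom[] = [
  { e: "inv-1", a: "invoice/total", v: 100, t: 1 },
  { e: "inv-1", a: "invoice/currency", v: "EUR", t: 2 }, // attribute introduced later
];

// Old agent: written before invoice/currency existed; it silently ignores it.
const totalsV1 = stream.filter(d => d.a === "invoice/total").map(d => d.v as number);

// New agent: interprets both attributes; no v1/v2 endpoint split, no deprecation cycle.
const totalsV2 = stream
  .filter(d => d.a === "invoice/total")
  .map(d => {
    const cur = stream.find(c => c.e === d.e && c.a === "invoice/currency");
    return { amount: d.v as number, currency: (cur?.v as string | undefined) ?? "USD" };
  });
```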

For Collaboration

Schema approach: teams negotiate shared ontology. Political. Slow. Compromises everyone.

Datom.world: teams run independent interpreters. Shared substrate, heterogeneous semantics. Each team gets what it needs without blocking others.

For AI Integration

Schema approach: AI must conform to the schema. Brittle. Limits what AI can learn.

Datom.world: AI agents are just interpreters. They observe streams, form hypotheses, test predictions. No schema to constrain them. They evolve their own semantics through interaction.

Why Schemas Feel Safe (But Aren't)

Schemas feel safe because they're explicit. You can see the structure. You can validate it. You can generate documentation from it.

But that safety is an illusion in complex systems. Schemas work in controlled environments with:

  • Single authority (one team, one company)
  • Slow evolution (requirements change yearly, not daily)
  • Homogeneous consumers (everyone runs the same version)

Once any of those conditions breaks (distributed authority, rapid evolution, heterogeneous consumers), schemas become anchors, not safety nets.

The real safety comes from isolation and contracts. In Datom.world:

  • Streams are append-only (isolation of writes)
  • Agents are sandboxed (isolation of execution)
  • Interpretation is external (isolation of semantics)
  • Tests guard contracts, not schemas

This provides safety through boundaries, not through central control.

The Middle Path

The argument isn't "schemas are bad." It's: schemas are interpreters, and pretending they're not leads to brittleness.

Datom.world can embrace schemas as one kind of interpreter:

  • Optional schema validators (agents that observe streams and check conformance)
  • Type assistants (agents that suggest types based on observed patterns)
  • Migration helpers (agents that rewrite old datom patterns into new ones)
  • Ontology explorers (agents that visualize semantic relationships)

These are tools, not foundations. They help where structure helps, without forcing structure where it hinders.
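
For example, an optional schema validator is itself just another interpreter over the stream. A sketch with made-up attribute names, not a shipped Datom.world tool:

```typescript
type Datom = { e: string; a: string; v: unknown; t: number };

// A "light schema" is just data describing expectations; the validator below is the
// interpreter that gives it force. Other agents are free to ignore both.
const expectations: Record<string, (v: unknown) => boolean> = {
  "user/age": v => Number.isInteger(v) && (v as number) >= 0,
  "user/email": v => typeof v === "string" && v.includes("@"),
};

// Violations are emitted back onto the stream as datoms: advice, not enforcement.
const validate = (stream: Datom[]): Datom[] =>
  stream
    .filter(d => expectations[d.a] && !expectations[d.a](d.v))
    .map(d => ({ e: d.e, a: "validator/violation", v: `bad value for ${d.a}`, t: d.t }));

validate([{ e: "u1", a: "user/age", v: -3, t: 1 }]);
// => [{ e: "u1", a: "validator/violation", v: "bad value for user/age", t: 1 }]
```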

Conclusion

The question "where does meaning live?" isn't philosophical indulgence. It determines whether your system can adapt, evolve, and support multiple perspectives.

Structure cannot carry meaning alone. Only interpretation creates meaning. And interpretation requires an interpreter (whether you call it that or not).

Datom.world is complete because it explicitly treats semantics as a runtime phenomenon carried by mobile interpreters, not frozen into the shape of bytes.

This is the architecture for:

  • Distributed systems that survive partitioning
  • Multi-agent environments with heterogeneous goals
  • AI-driven computation that evolves semantics over time
  • Local-first systems that work offline
  • Semantic interoperability without centralized ontologies

The schema approach offers simplicity and familiarity. Datom.world offers truth and adaptability. In controlled environments, simplicity wins. In complex, evolving, distributed systems, truth is the only foundation that holds.

Because ultimately: causation is stable correlation that persists through time. Meaning is interpretation of that persistent correlation by an agent that can exploit it. Life and computation are what happen when these correlations stabilize and interpreters emerge.


The DNA-ribosome analogy is a conceptual metaphor to illustrate the structure-interpreter relationship, not a claim about precise biological mechanisms. The actual origin of the genetic code and ribosomal machinery involves complex biochemistry that remains an active area of research. The key insight (that structure and interpreter can co-evolve through feedback loops) applies across biological, computational, and semantic systems.