Computation Moves, Data Stays: The Yin.vm Continuation Model

In most programming languages, a continuation is a snapshot of everything: the control state, the environment, the stack, and often the entire graph of data those things reference.

This is why continuations are rarely mobile. They are simply too heavy.

A continuation that carries megabytes of heap or deeply nested closures is not something you can cheaply send across threads, CPU cores, processes, network boundaries, or language runtimes.

Smalltalk's image-based persistence is the canonical example. When you save a Smalltalk image, you snapshot the entire VM state: every object, every class definition, every method, the full heap. The result is a self-contained world, but one that weighs hundreds of megabytes and is fundamentally immobile. You cannot send a Smalltalk continuation to another process or machine without sending the entire universe it inhabits.

Yet Smalltalk's vision was correct. The ability to save and restore the entire computational state is powerful. The problem is not the goal but the implementation. Smalltalk was designed in the era of local computing, when the internet was still a research project. Yin.vm is designed for the era of distributed computing, when machines are connected by global, high-bandwidth networks.

This architectural difference changes everything.

Because Yin.vm unifies continuations, functions, and closures, and because eval is the VM itself, saving computational state becomes straightforward. A continuation in Yin captures control flow. The environment lives in streams. The heap lives in streams. To save the world, you save the streams and the continuation references into them.

When you restore, the continuation resumes with the same stream positions. The data was never copied into the continuation. It was always external. This means you can save state, migrate it, fork it, or replay it without the monolithic weight of a Smalltalk image.
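
To make the save/restore story concrete, here is a minimal sketch in Python. The Stream, Continuation, save, and restore names are invented for illustration and are not Yin.vm's actual API; the point is only that the saved state is a handful of fields and offsets, never the data they refer to.

    # A hypothetical sketch, not Yin.vm's API: the continuation records only
    # control state (code ID, program counter) plus offsets into streams.
    # Saving the world serializes those few fields; the streams stay put.
    import json
    from dataclasses import dataclass, field

    class Stream:
        """Append-only sequence of values, addressed by offset."""
        def __init__(self):
            self.values = []
        def append(self, v):
            self.values.append(v)
            return len(self.values) - 1        # cursor for the new entry
        def read(self, offset):
            return self.values[offset]

    @dataclass
    class Continuation:
        code_id: str                           # which compiled unit to resume
        pc: int                                # where in that unit to resume
        cursors: dict = field(default_factory=dict)  # stream name -> offset

    def save(k: Continuation) -> str:
        return json.dumps({"code_id": k.code_id, "pc": k.pc, "cursors": k.cursors})

    def restore(blob: str) -> Continuation:
        d = json.loads(blob)
        return Continuation(d["code_id"], d["pc"], d["cursors"])

    env = Stream()                             # the environment lives in a stream
    x_at = env.append(("x", 41))               # the binding is stored externally
    k = Continuation("main", pc=7, cursors={"env": x_at})
    k2 = restore(save(k))                      # resumes at the same stream position
    assert env.read(k2.cursors["env"]) == ("x", 41)

Migrating or forking that continuation means copying the small serialized record; the streams it points into never move.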

More importantly, because streams can be distributed across the network, continuations can migrate to where the data lives. This inverts the traditional model. In most systems, data moves to where the code is. But data can be many orders of magnitude larger than code. It makes far more sense to move the computation to where the data is than to move the data to where the computation is.

Smalltalk's image must stay monolithic. Yin's continuations can be decomposed, distributed, and reassembled wherever computation needs to happen.

The Naive Objection: Doesn't This Make Every Access Expensive?

This raises a natural question. If a continuation becomes thin by externalizing its state into streams, doesn't that make every access expensive?

If you write something as simple as (+ x 1) and the symbol x is backed by a stream somewhere on the network, does that mean every addition incurs latency?

Only if your VM is naïve.

Yin.vm isn't.

Symbols Resolve Intelligently

The key idea is that symbols resolve intelligently.

In the AST that the Yang compiler produces, and in the IR that Yin interprets or compiles, a symbol is not just a name. It is annotated with storage semantics.

The interpreter knows whether x is a value or a reference to a stream, and it uses the right strategy accordingly.

The expression (+ x 1) therefore does not blindly fetch from a stream. Instead, the IR contains a specialized operation:

  • Either: "load a local value from slot 3"
  • Or: "load a stream-backed value from slot 3"

The choice is not made at runtime.

It is made during lowering of the AST. The resolution of a symbol becomes a static fact embedded in the program representation.
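
As an illustration of that lowering step, the sketch below (Python, with invented opcode names and data shapes, not Yin's actual IR) shows the same source expression lowering to different load instructions depending on the storage class recorded for x:

    # Hypothetical lowering pass: the storage class of each symbol is decided
    # at compile time, so the emitted IR already encodes the load strategy.
    from enum import Enum, auto

    class Storage(Enum):
        LOCAL = auto()       # value lives in a local slot/register
        STREAM = auto()      # value lives behind a stream cursor

    def lower_symbol(name, layout):
        """Turn an AST symbol into a specialized IR instruction."""
        slot, storage = layout[name]               # resolved during lowering
        if storage is Storage.LOCAL:
            return ("LOAD_LOCAL", slot)            # cheap, in-memory access
        return ("LOAD_STREAM", slot)               # goes through a stream cursor

    def lower_add_const(sym, const, layout):
        """Lower an expression like (+ x 1) into straight-line IR."""
        return [lower_symbol(sym, layout), ("PUSH_CONST", const), ("ADD",)]

    print(lower_add_const("x", 1, {"x": (3, Storage.LOCAL)}))
    # [('LOAD_LOCAL', 3), ('PUSH_CONST', 1), ('ADD',)]
    print(lower_add_const("x", 1, {"x": (3, Storage.STREAM)}))
    # [('LOAD_STREAM', 3), ('PUSH_CONST', 1), ('ADD',)]

Because the decision is baked into the emitted instructions, the interpreter's inner loop never has to ask at runtime where a value lives.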

Separation of Concerns: AST vs. IR

This separation is crucial:

  • Symbols in the high-level AST remain semantic objects
  • Their storage class is tracked in the IR

This lets Yin.vm do something that stack-based VMs or conventional CESK machines cannot:

Yin can treat the continuation as pure control state while treating the environment and heap as distributed streams.

A continuation migrates cheaply because it carries only what cannot be reconstructed elsewhere:

  • The code ID
  • The program counter
  • A small set of hot locals

Everything else stays behind as cursor references into streams.

Avoiding the Worst Case

This avoids the worst case where every variable access is a network lookup.

  • Hot locals stay local
  • Cold or large data lives in streams

The VM and compiler decide which category a binding belongs to:

  • Sometimes the programmer leaves a hint
  • Sometimes the optimizer decides after observing usage patterns (see the sketch after this list)
  • Sometimes the JIT specializes multiple versions of a function: one for local values, one for stream-backed values
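
As one purely illustrative example of the "optimizer observes usage" case (the thresholds, statistics, and names below are invented and say nothing about Yin's actual policy), a classification heuristic might look like this:

    # Invented heuristic: a binding that is read often and is small stays a
    # hot local; rarely touched or large data is left behind a stream cursor.
    def classify(binding_stats, read_threshold=100, size_threshold=4096):
        hot, cold = [], []
        for name, (reads, size_bytes) in binding_stats.items():
            if reads >= read_threshold and size_bytes <= size_threshold:
                hot.append(name)        # carried with the continuation
            else:
                cold.append(name)       # stays behind as a stream reference
        return hot, cold

    stats = {
        "i":      (100_000, 8),             # loop counter: tiny, read constantly
        "row":    (3, 64),                  # touched a few times
        "corpus": (50, 2_000_000_000),      # huge: never worth copying
    }
    print(classify(stats))                  # (['i'], ['row', 'corpus'])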

A New Relationship Between Computation and Storage

What emerges is a fundamental inversion:

  • Traditional systems: storage chooses the cost model, and computation inherits it.
  • Yin.vm: computation chooses the cost model, and storage adapts to it.

The AST you write remains pure and mathematical. The bytecode that Yin executes understands the physical structure of the system, deciding when to touch a stream and when to operate on a register.

Structural Laziness

This is not laziness bolted on as an afterthought. Yin's lazy state model is structural.

The execution state itself can be viewed as streams:

  • The environment as a stream of bindings
  • The heap as a stream of objects
  • The stack as a stream of frames

A continuation is a collection of offsets into these streams.

When migrated:

  • The continuation moves
  • The streams do not

When resumed, Yin:

  • Hydrates only what must be touched (see the sketch below)
  • Memoizes when appropriate
  • Leaves the majority of state undisturbed and uncopied
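
A minimal sketch of that lazy, memoizing hydration, again with invented names rather than Yin's real stream interface:

    # Illustrative only: a resumed continuation fetches a binding from its
    # stream the first time it is touched, memoizes it, and never copies the
    # rest of the stream.
    class LazyEnv:
        def __init__(self, stream, cursors):
            self.stream = stream          # external storage, possibly remote
            self.cursors = cursors        # name -> offset into the stream
            self.cache = {}               # memoized values, filled on demand

        def lookup(self, name):
            if name not in self.cache:                    # first touch only
                self.cache[name] = self.stream.read(self.cursors[name])
            return self.cache[name]

    class CountingStream:
        """Stand-in for a remote stream that counts how often it is read."""
        def __init__(self, values):
            self.values, self.reads = values, 0
        def read(self, offset):
            self.reads += 1
            return self.values[offset]

    stream = CountingStream(["a", "b", "c", "d"])
    env = LazyEnv(stream, {"x": 1, "y": 3})
    env.lookup("x"); env.lookup("x")      # second lookup hits the memo, not the stream
    assert stream.reads == 1              # "y" and everything else were never fetched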

Values as Presence

Yin.vm treats:

  • Values as immediate presence
  • Streams as deferred presence

They share the same interface, so the evaluation algorithm remains simple, but they diverge in cost.
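
One way to picture immediate and deferred presence behind a single interface (a sketch with invented names, not Yin's value representation):

    # Both forms answer get(); only the deferred one pays a fetch cost, once.
    class Immediate:
        def __init__(self, value):
            self.value = value
        def get(self):
            return self.value                 # already here: zero extra cost

    class Deferred:
        def __init__(self, fetch):
            self.fetch = fetch                # e.g. a stream/cursor read
            self.present, self.value = False, None
        def get(self):
            if not self.present:              # pay the cost on first touch
                self.value, self.present = self.fetch(), True
            return self.value

    def add(a, b):
        # The evaluation algorithm never asks where a value lives.
        return a.get() + b.get()

    print(add(Immediate(41), Immediate(1)))         # all local
    print(add(Deferred(lambda: 41), Immediate(1)))  # one deferred fetch

The add function never branches on where its operands live; the cost difference is confined to get.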

This is how Yin makes distributed computing feel local without pretending that everything is local.

By annotating the boundary, the VM preserves the illusion of immediacy while keeping the architecture honest.

Everything Is a Continuation

The deeper consequence is that Yin.vm no longer thinks of functions, closures, or continuations as opaque runtime entities.

They are all just continuations.

And continuations are just structured references into streams.

The architecture becomes:

  • The VM acts as the scheduler of these control flows (a sketch follows this list)
  • DaoDB and DaoStream act as the long-term memory
  • The Yin-Yang stack becomes a fabric where programs are not objects running in memory, but processes weaving across streams, attaching themselves to data rather than dragging data with them
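
As a toy illustration of the VM-as-scheduler idea (the node layout, stream placement, and every name below are invented; this is not the DaoDB or DaoStream interface), a scheduler might resume continuations where their streams already live and ship only the small control record elsewhere:

    # Toy sketch: run a continuation on the node that holds its streams;
    # otherwise forward the lightweight control record to that node.
    STREAM_HOME = {"orders": "node-a", "logs": "node-b"}   # where data lives

    class Cont:
        def __init__(self, code_id, pc, streams):
            self.code_id, self.pc, self.streams = code_id, pc, streams

    def schedule(runnable, here):
        local, remote = [], []
        for k in runnable:
            homes = {STREAM_HOME[s] for s in k.streams}
            if homes <= {here}:
                local.append(k)                 # data is already local: run here
            else:
                remote.append((k, homes))       # ship the control record instead
        return local, remote

    ks = [Cont("sum-orders", 0, ["orders"]), Cont("scan-logs", 0, ["logs"])]
    local, remote = schedule(ks, here="node-a")
    print([k.code_id for k in local])              # ['sum-orders']
    print([(k.code_id, h) for k, h in remote])     # [('scan-logs', {'node-b'})]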

The Cost Model

This is how a continuation becomes lightweight without compromise:

  • The cost of mobility becomes the cost of moving a handful of registers and a program counter
  • The cost of recomputing state becomes proportional to how much of the state you actually use

And the entire system begins to look less like a virtual machine and more like an organism whose parts live at different temperatures, adapting to how information flows through it.

Intelligence at Every Layer

Yin.vm turns:

  • Symbol resolution into intelligence
  • The environment into a distributed stream
  • Continuations into portable centers of gravity that migrate toward data instead of dragging data toward themselves

And it does all this while preserving the mathematical purity of the AST.

The Core Principle

This is the heart of Yin's design:

Computation moves. Data stays. Resolution adapts.

Continuations are not heavy because they don't need to be. They carry only control, not context. Context lives in streams. Streams are addressed, not copied. Resolution is static, not dynamic. Hot paths stay fast. Cold paths stay lazy. The boundary between local and distributed is annotated in the IR, not discovered at runtime.

This is what makes Yin.vm different from every other continuation-based VM. It doesn't just support continuations. It builds the entire execution model around the idea that continuations should be lightweight, mobile, and stream-aware from the ground up.

Traditional VMs optimize for locality by pulling data toward computation. Yin optimizes for mobility by pushing computation toward data.

The difference is fundamental.
