Propagators, Brains in Vats, and the Future of Secure Computing

In a recent MetaFox Talks session, MetaMask welcomed Christine Lemmer-Webber. Christine is one of the designers of the ActivityPub protocol, founder and CTO of the Spritely Institute, and a long-time explorer of object capability security and Lisp systems.

Her talk was ostensibly about one thing – propagators. In practice, it became a fast tour through constraint-based computing, distributed object capabilities, explainable AI, and how all of this might reshape how we build secure, responsive systems for the web.

This post recaps the key ideas, why they matter for MetaMask and web3, and where this research might go next.

What are propagators, really?

At the core of the talk is a simple idea: treat computation as a network of constraints that gradually share information until nothing more can be deduced.

Instead of running a function once, top to bottom, you set up a graph of:

  • Cells: little boxes that hold information, sometimes partial and sometimes exact.
  • Propagators: stateless processes that watch cells, recompute when inputs change, and write new information into other cells.

A propagator network keeps running until it reaches quiescence – the point where no propagator can add any new information.
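
The cell-and-propagator loop is easy to sketch. Here is a toy Python version, an illustration rather than Brainy's actual API: cells hold partial information, a stateless propagator fires whenever its inputs change, and a scheduler runs until quiescence.

```python
class Cell:
    def __init__(self, name):
        self.name = name
        self.value = None          # None means "no information yet"
        self.watchers = []         # propagators to re-run when this cell changes

    def add_content(self, value):
        if value is None or value == self.value:
            return                 # nothing new, so nothing to propagate
        if self.value is not None:
            raise ValueError(f"contradiction in cell {self.name}")
        self.value = value
        pending.extend(self.watchers)

pending = []                       # propagators waiting to run

def propagator(inputs, output, fn):
    """Wire a stateless propagator: once every input is known, write fn(*inputs) into output."""
    def run():
        vals = [c.value for c in inputs]
        if all(v is not None for v in vals):
            output.add_content(fn(*vals))
    for c in inputs:
        c.watchers.append(run)
    pending.append(run)

def run_to_quiescence():
    while pending:                 # stop only when no propagator can add information
        pending.pop()()

a, b, total = Cell("a"), Cell("b"), Cell("total")
propagator([a, b], total, lambda x, y: x + y)
a.add_content(2)
b.add_content(3)
run_to_quiescence()
print(total.value)                 # 5
```

Real propagator cells merge partial information rather than holding a single exact value; this sketch only distinguishes "unknown" from "known" to keep the scheduling loop visible.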

A few important properties make this powerful.

1. Partial information is first class

Most systems want all or nothing: either you have the final answer or you have nothing useful. Propagators are happy with “I know a range” or “I know one aspect but not another.”

Christine illustrated this with a classic physics puzzle.

  • You want to know the height of a building.
  • You have a barometer whose height you only measured roughly: “between 0.3 and 0.32 units.”
  • You measure the shadows of both barometer and building, each with some uncertainty.
  • You add a second line of evidence: drop the barometer from the roof, time the fall, and use gravity and t² to infer height.

Each measurement is an interval, not an exact number. The propagator network combines these constraints.

  • First pass: it gives a rough interval for the building height.
  • After adding the fall time and gravity, the range for the building becomes narrower.
  • Even more surprising: the system can refine the barometer height itself, because the combined equations and evidence make some parts of the original range inconsistent.

In other words, new information does not just refine answers, it can improve your inputs.
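
A rough Python sketch of the interval arithmetic involved (with made-up numbers, not the talk's actual figures) shows both effects: combining two lines of evidence narrows the building estimate, and the narrowed estimate flows backwards to tighten the original barometer measurement.

```python
# Interval arithmetic: every quantity is a (lo, hi) range.
def mul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def div(a, b):
    return mul(a, (1 / b[1], 1 / b[0]))

def intersect(a, b):
    # Two independent estimates of the same quantity: take the overlap.
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("contradiction: intervals do not overlap")
    return (lo, hi)

barometer_h = (0.3, 0.32)              # roughly measured barometer height
shadow_ratio = (45.0, 50.0)            # building shadow / barometer shadow, with uncertainty
first_pass = mul(barometer_h, shadow_ratio)       # similar-triangles estimate
fall_evidence = (15.2, 15.8)           # second line of evidence: ½·g·t² from the timed drop

building_h = intersect(first_pass, fall_evidence)  # combined evidence narrows the range
# ...and the evidence runs backwards: the building estimate rules out part of
# the barometer's original range, tightening the *input* measurement.
barometer_h = intersect(barometer_h, div(building_h, shadow_ratio))
print(building_h)                      # (15.2, 15.8)
print(barometer_h)
```

With these numbers the barometer's lower bound rises from 0.3 to about 0.304: the combined equations made part of the original range inconsistent, exactly the effect described above.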

2. Computation can run “both ways”

In a normal function, you do something like:

fahrenheit_to_celsius(113) -> 45

With propagators, the same functional definition can be turned into a constraint network. Now you can:

  • Give F, solve for C.
  • Give C, solve for F.
  • Give F and C, solve for the constant 32 that appears in the equation.

The program becomes a relationship, not a one-way pipeline. That matters a lot for debugging, inference, and “what if” exploration.
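
The two-way behavior can be illustrated with a small hypothetical helper (not Brainy's API) that treats the Fahrenheit/Celsius equation as a constraint and solves for whichever side is missing:

```python
def thermometer(f=None, c=None):
    """Enforce f = c * 9/5 + 32, filling in whichever side is unknown."""
    if f is None and c is not None:
        f = c * 9 / 5 + 32                     # solve for F
    elif c is None and f is not None:
        c = (f - 32) * 5 / 9                   # solve for C
    elif f is not None and c is not None and abs(f - (c * 9 / 5 + 32)) > 1e-9:
        raise ValueError("contradiction: the two readings disagree")
    return f, c

print(thermometer(f=113))          # (113, 45.0)
print(thermometer(c=45))           # (113.0, 45)
```

In a real propagator network this dispatch is not written by hand: wiring the same arithmetic cells in a constraint graph gives you every direction for free, including solving for the constant 32 when both temperatures are known.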

3. They are inherently parallel and distributed

Propagators communicate via asynchronous messages. That maps neatly to actor models and distributed object systems. If you already have a capability-secure distributed actor platform, you already have most of the substrate needed to implement propagators across machines.

For MetaMask and other web3 systems, that is key. You can imagine constraint networks that span nodes, services, and user devices, while still respecting object capability security boundaries.

Brainy: brains in vats on Spritely Goblins

Christine’s current propagator implementation is called Brainy. Brainy is built on top of Spritely Goblins, a distributed object system designed around object capability security and transactional actors.

Why this stack matters:

  • Goblins gives you actors with transactions and time travel debugging. You can roll a system back to a previous state and reenter it inside a debugger.
  • When a propagator network hits a contradiction, Goblins can simply abort the transaction. The bad update never becomes visible. You do not have to manually “repair the graph.”
  • A “brain” in this model is essentially an event loop plus objects and propagators: locally consistent, but able to collaborate with other brains over CapTP without sharing global state.

This is where the “brain in a vat” joke comes from. The vat is the event loop, the brain is the object graph plus propagators living inside it.

The upshot: it becomes practical to run complex propagator networks inside secure, distributed ocap systems, with safe failure modes and very powerful debugging tools.

Truth Maintenance and Exploring Multiple World Views

Christine also touched on truth maintenance systems, even though there was not time to dig in deeply. The idea is simple and very human.

  • People hold contradictory ideas all the time.
  • When someone points out a contradiction, we do not crash. We laugh, reconsider, and adjust.

Truth maintenance systems extend propagators with the ability to:

  • Track the origin of beliefs: what facts or witnesses led to a conclusion.
  • Explore multiple possible worlds: “What if we trust this witness but not that one?”
  • Remember “no good” combinations of assumptions, so the system does not waste time re-exploring obviously inconsistent paths.

Combined with propagators, you get dependency-directed backtracking: a smart way to search problem spaces like Sudoku, Wordle, or more serious constraint satisfaction problems, pruning huge chunks of the search space as soon as a contradiction appears.
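
One ingredient of that idea, “nogood” recording, fits in a few lines. This toy search (a stand-in constraint problem, not a real solver) remembers exactly which assumptions clashed, so any future branch containing that combination is pruned without re-checking:

```python
from itertools import combinations

# Toy CSP: assign each variable a distinct value (a stand-in for Sudoku or Wordle).
domains = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
nogoods = set()                    # assumption sets already proven contradictory

def find_conflict(assignment):
    """Return the minimal clashing assumption set, not the whole assignment."""
    for (v1, a), (v2, b) in combinations(assignment.items(), 2):
        if a == b:
            return frozenset([(v1, a), (v2, b)])
    return None

def search(variables, assignment):
    items = frozenset(assignment.items())
    if any(ng <= items for ng in nogoods):
        return None                # pruned: contains a known-bad combination
    conflict = find_conflict(assignment)
    if conflict:
        nogoods.add(conflict)      # remember exactly *which* assumptions clashed
        return None
    if not variables:
        return dict(assignment)    # every variable assigned, no conflict
    var, rest = variables[0], variables[1:]
    for value in domains[var]:
        result = search(rest, {**assignment, var: value})
        if result:
            return result
    return None

solution = search(["x", "y", "z"], {})
print(solution)                    # {'x': 1, 'y': 2, 'z': 3}
```

Full dependency-directed backtracking goes further, using the tracked origins of each belief to jump straight back to the responsible assumption, but the pruning payoff is already visible here.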

Now imagine combining that with object capabilities.

  • Witnesses can only submit testimony through capabilities they hold.
  • Different “brains” can maintain different sets of trusted sources.
  • A user can toggle beliefs, for example “trust this maintainer list, distrust that one,” and immediately see which sites are still flagged as phishing.

That is a very different model for misinformation handling and trust management than a single global block list.

Why MetaMask and web3 Should Care

Toward the end of the session, Dan Finlay grounded the discussion in MetaMask use cases.

1. Phishing and reputation as constraint networks

Today MetaMask maintains a phishing block list in a fairly traditional way: a GitHub repo of entries curated by maintainers.

In propagator terms, that is just one big cell of “sites to distrust.”

With propagators and truth maintenance systems, you can imagine something richer.

  • Multiple lists from multiple curators flowing into different cells.
  • Users choosing which sets of curators to believe.
  • The system surfacing contradictions, for example “this domain cannot both be trusted and untrusted given these assumptions.”
  • Being able to toggle belief in a source and see the consequences instantly.

This fits very naturally with web3’s ethos of user choice and pluralistic governance.
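
The curator-toggling idea can be sketched in a few lines of Python. The list names and sites below are hypothetical, and MetaMask's real blocklist is a single curated repo, not this multi-curator scheme:

```python
# Hypothetical curator feeds: some flag sites, one vouches *for* sites.
blocklists = {
    "eth-phishing-detect": {"evil.example", "drainer.example"},
    "community-list":      {"drainer.example", "casino.example"},
}
allowlists = {
    "allowlist-curator":   {"casino.example"},
}

def evaluate(trusted):
    """Combine only the lists the user believes; surface any contradiction
    where a site is simultaneously flagged and vouched for."""
    flagged = set().union(*(blocklists[c] for c in trusted if c in blocklists), set())
    vouched = set().union(*(allowlists[c] for c in trusted if c in allowlists), set())
    return flagged - vouched, flagged & vouched

# Toggle belief in a source and see the consequences instantly.
print(evaluate({"eth-phishing-detect"}))
print(evaluate({"eth-phishing-detect", "community-list", "allowlist-curator"}))
```

A full truth maintenance system would also record *why* each site ended up flagged, so the contradiction set could be explained in terms of which curators disagree.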

2. Declarative smart contracts

Christine suggested another possibility: model parts of smart contracts declaratively, as constraints that complete when enough information arrives.

Instead of one big imperative script that runs at a moment in time, you could express contractual relationships like:

  • “When these three conditions have all become true, this transfer becomes valid.”
  • “If any of these safety constraints are violated, this position must be closed.”

Propagators can monitor relevant cells and propagate consequences as new facts arrive, which maps nicely onto on-chain events, off-chain data feeds, and user actions.
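
A minimal sketch of the first clause, assuming made-up condition names rather than any actual smart-contract API, shows the shape: the clause fires its consequence exactly once, the moment every watched condition has become true.

```python
class Clause:
    """Fires its consequence exactly once, when every watched condition holds."""
    def __init__(self, conditions, on_complete):
        self.pending = set(conditions)
        self.on_complete = on_complete
        self.fired = False

    def observe(self, fact):
        """Feed in an on-chain event, oracle update, or user action."""
        self.pending.discard(fact)
        if not self.pending and not self.fired:
            self.fired = True
            self.on_complete()

events = []
clause = Clause(
    {"escrow_funded", "kyc_passed", "deadline_not_expired"},
    on_complete=lambda: events.append("transfer_valid"),
)
clause.observe("escrow_funded")
clause.observe("kyc_passed")
print(events)                      # [] -- only two of three conditions met
clause.observe("deadline_not_expired")
print(events)                      # ['transfer_valid']
```

In propagator terms, each condition is a cell and the clause is a propagator watching them; facts can arrive in any order and from any source.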

3. Secure user interfaces

Christine emphasized that what draws her to MetaMask is not the blockchain part, but the focus on secure user interfaces.

  • Capability-aware front ends.
  • Clear boundaries around what each dapp can do.
  • Smarter warning and explanation when something looks wrong.

Propagators and truth maintenance systems can help here too.

  • They can maintain partial hypotheses about what a dapp is trying to do.
  • They can combine signals from multiple sources: phishing lists, behavioral heuristics, user preferences.
  • They can support “explainable” warnings. not just “this looks bad,” but “given your trusted sources and past choices, this transaction conflicts with assumptions X and Y.”

Explainable AI and taking the car to court

One of the intellectual seeds for Christine’s propagator obsession comes from Gerald Sussman.

He once said, roughly:

“If a self-driving car crashes, I do not just want to take the manufacturer to court. I want to take the car to court.”

Meaning: we need systems that can explain themselves.

Today’s neural nets are powerful but opaque. They give answers, not reasons.

Christine highlighted the work of Leilani Gilpin, one of Sussman’s students. In her dissertation, she uses propagators on top of neural networks that drive cars.

  • The neural net drives.
  • When a crash happens, a propagator system reconstructs a human-readable explanation after the fact.
  • The result is narrative: “The right wheel force increased beyond threshold, friction behaved like this, therefore the car did X.”

Propagators are a key part of that pipeline because they are good at composing partial information and tracing how a conclusion flows from assumptions.

This kind of explainability is extremely relevant for financial systems, security tools, and automated agents in web3. If your wallet declines a transaction or flags a contract, you want to know why, not just “no.”

Looking ahead: compilers, proofs, and beyond

Christine closed by waving at a larger vision: propagators not just as a niche solver, but as a substrate for:

  • Functional reactive programming (FRP) across machines: time itself becomes partial information, and updates stream through the network over CapTP.
  • Compiler and type checker infrastructure: using propagators to infer types, detect impossible code paths, and even drive JIT decisions as more information becomes known.
  • Proof assistants that give partial results. Instead of “proof complete or proof failed,” you get “here is what we can prove so far, and here are the unknowns.” You can even spend more resources to refine particular parts of the proof space.

All of this plays nicely with Lisp’s strength in building composable domain-specific languages. Brainy is written in Scheme, but you do not have to expose raw parentheses to end users. You can hide powerful propagator-based engines behind friendly syntax and user interfaces.

Closing thoughts

This MetaFox session was not “how to use a new MetaMask feature.” It was a glimpse into a research stack that blends:

  • Constraint based computing with partial information.
  • Object capability security and distributed actors.
  • Time travel debugging and transactional state.
  • Explainable AI and truth maintenance.

It is early-stage work. Brainy is a weekend research project layered on top of Spritely Goblins, which in turn is part of a broader push to build safer, more user-respecting networked systems.

But the connections to MetaMask are real.

  • Richer, user-driven trust and phishing models.
  • More declarative and analyzable smart contracts.
  • Smarter, explainable security UX.

If you care about how we can make the web less phishable, more accountable, and more programmable in the right ways, this is a space to watch.
