DCF x Endo Receive Foresight Institute Grant to Advance Safe AI Code Execution

The Endo team is pleased to share that Foresight Institute has awarded a grant to DCF to support our work on the safe execution of AI-generated code. This funding will accelerate two upcoming milestones that extend Endo’s object capability model into the rapidly growing world of AI-assisted software development.

Endo is an open-source object-capability framework developed at Agoric and championed by the Decentralized Cooperation Foundation. It brings the Principle of Least Authority (POLA) into JavaScript in a way that is practical for real systems. Developers can compose mutually suspicious components while keeping authority minimal and explicit, which protects users from supply chain attacks, helps isolate untrusted modules, and provides a framework for secure cooperation across local and distributed systems.

What the grant makes possible

The funded work focuses on two technical milestones that demonstrate how Endo can contain AI-generated code while preserving developer workflow and functionality.

1. AI integration

We will create a local MCP server, or similar technology, that exposes a confined JavaScript execution environment as a tool the AI can call. The model can propose code, and Endo will run that code inside a strict authority boundary. This shows that it is possible to give an AI the ability to write and run code without granting it the full authority of the user, while still enabling it to obtain narrow capabilities from the user upon request.

2. Capability-based closed loop for file access

We will extend Endo’s daemon and MCP integration so that the AI can request specific capabilities, such as reading or writing a file. Endo will then prompt the user to grant a real capability, a mock capability, or nothing at all. The AI learns to request only the authority it needs, and the user remains in full control. This demonstrates AI-to-system interaction with practical POLA constraints in place.

Why Endo is positioned for this moment

HardenedJS already provides the secure code execution container. OcapN will supply a distributed object-capability protocol to connect users across a vast, possibly peer-to-peer, capability substrate. Agoric and MetaMask have shown that these ideas scale in production across blockchains, browser wallets, and embedded systems. Extending these foundations to AI workflows is the natural next step: AI code generation speeds up development, but it also increases the need for mechanical safety boundaries that do not depend on perfect human review.

Endo provides those boundaries in a concrete and usable form. POLA reduces the blast radius. Capabilities create precise authority edges. Petnames help developers and users understand precisely what is being granted. These are not abstract principles; they are engineering tools shaped by decades of real-world experience and steady uptake in products like web browsers and embedded systems.

The value becomes clear in everyday development. AI-generated scripts often attempt to read project files, modify directories, or interact with local configurations. Endo intercepts those actions and requires explicit user approval before any authority is exercised. The user controls the scope and duration of that authority. If the AI makes a mistake or produces unsafe code, the impact is contained to the small set of permissions that were granted. Developers gain speed and convenience without exposing their machines, data, or projects to unnecessary risk. This is the practical foundation needed for safe human-AI cooperation.
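Control over duration is worth a concrete illustration. One standard object-capability pattern (sketched here in plain JavaScript, independent of Endo's actual implementation) is the revocable forwarder: the user hands the AI a proxy to a capability and can cut it off at any time.

```javascript
// Illustrative sketch: a revocable capability, showing how the duration
// of granted authority can remain under the user's control.
function makeRevocable(target) {
  let revoked = false;
  const proxy = (...args) => {
    if (revoked) throw new Error('capability revoked');
    return target(...args);
  };
  return { proxy, revoke: () => { revoked = true; } };
}

// Grant a capability to read a config object, then withdraw it.
const { proxy: readConfig, revoke } = makeRevocable(() => ({ debug: true }));
console.log(readConfig().debug); // → true
revoke();
try {
  readConfig();
} catch (err) {
  console.log(err.message); // → 'capability revoked'
}
```

Because the AI only ever holds the proxy, revocation is unilateral: no cooperation from the confined code is required to take the authority back.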

DCF’s role and support

DCF is supporting this work as part of its mission to advance secure decentralized cooperation. The grant is administered by DCF and includes support for publication, documentation, and educational materials to accompany the milestones.

What comes next

The team will deliver both milestones early next year. Demos, documentation, and all artifacts will be released as open source. We will publish updates as each part becomes available. As the work evolves, we expect to show not only how AI-generated code can be executed with safety but also how humans, AI systems, and programs can collaborate inside a coherent capability model.

This is an important step toward a world where AI assistance is common and where safety is not an afterthought. We appreciate the support of Foresight Institute and look forward to sharing what we build.

And, we look forward to building on this common foundation for the benefit of a growing ecosystem of networked object capabilities and AI tools.

For more information about Foresight Institute, visit here
For more information about Decentralized Cooperation Foundation, visit here

Check out our blog post on Object Capabilities
