Sandbox for AI Agents

Hardware-isolated microVMs for AI agent code execution, sub-200ms cold start.

$100 free credits · No credit card required

Model‑agnostic
Works with the tools you already use. OpenAI, Claude, Gemini, LangChain — and your own stack.
View integrations

Why teams choose InstaVM

185ms cold start

Sub-200ms Boot

MicroVMs boot in under 200 milliseconds — 10x faster than containers. No pre-warming, no cold-start penalties.

100% isolated

Hardware-Level Isolation

Every execution runs in a dedicated microVM with its own kernel. Full network, filesystem, and process isolation. Nothing persists between runs.

3 lines to integrate

Drop-in SDKs

Python and JavaScript SDKs that work with OpenAI, Claude, LangChain, LlamaIndex, and DSPy. Add sandboxed execution to existing agents in minutes.

New

Secret Injection

Inject third-party credentials at runtime without exposing them to sandboxed code. Eliminates credential leaks from prompt injection.
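A minimal sketch of the pattern behind runtime secret injection, assuming a host-side proxy model: the credential lives only in the host process, which attaches it to outbound requests on the sandbox's behalf, so sandboxed (possibly prompt-injected) code never sees the raw token. All names here are illustrative, not the InstaVM API.

```python
# Host-side secret store -- never shipped into the sandbox.
SECRETS = {"github": "ghp_real_token"}

def host_proxy(service: str, request: dict) -> dict:
    """Sandboxed code names a secret; the host injects its value."""
    headers = dict(request.get("headers", {}))
    headers["Authorization"] = f"Bearer {SECRETS[service]}"
    return {"url": request["url"], "headers": headers}

# What the sandboxed code emits -- note there is no token anywhere in it.
sandbox_request = {"url": "https://api.github.com/user", "headers": {}}
outbound = host_proxy("github", sandbox_request)

assert "ghp_real_token" not in str(sandbox_request)
print(outbound["headers"]["Authorization"])
```

Because the substitution happens outside the VM, a prompt-injected `print(os.environ)` or exfiltration attempt inside the sandbox has nothing to leak.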

Local or Cloud

Run CodeRunner locally for development, deploy to our cloud for production. Same API, same code.

Soon

Pause & Snapshot

Pause running VMs, snapshot state, resume or clone instantly. Ideal for long-running agent workflows.

Soon

Egress Control

Fine-grained outbound network rules. Allowlist domains, block egress, or scope access per-VM.
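The real rules will be enforced at the network layer, but the allowlist logic can be sketched in a few lines of plain Python (domain names below are illustrative):

```python
from urllib.parse import urlparse

# Hypothetical per-VM allowlist: exact hosts plus their subdomains.
ALLOWLIST = {"pypi.org", "files.pythonhosted.org"}

def egress_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in ALLOWLIST or any(host.endswith("." + d) for d in ALLOWLIST)

print(egress_allowed("https://pypi.org/simple/requests/"))  # True
print(egress_allowed("https://attacker.example/exfil"))     # False
```

Matching on the parsed hostname (rather than substring-searching the URL) matters: `https://pypi.org.attacker.example/` must not slip through.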

Quickstart

Add secure execution to your agents in minutes. Same API for cloud microVMs and local development.

Python SDK
python3 -m pip install instavm

from instavm import InstaVM

client = InstaVM(api_key="your_api_key")

# Execute untrusted code
result = client.execute("print(100**100)")
print(result.stdout)

# Browser + network access
browser = client.browser.create_session(1920, 1080)
browser.navigate("https://example.com")
screenshot = browser.screenshot()
Read the docs or browse integrations.

Run AI agents securely on your Mac

Open-source MCP server for running AI-generated code locally using Apple containers. Process local files without cloud upload, with complete VM-level isolation.

100% Local Execution

Code runs on your Mac using Apple containers; no cloud upload required

Process Local Files

Work with local documents, databases, and files without uploading anything to the cloud

AI Agent Ready

Integrates with Claude Desktop, OpenAI agents, Gemini CLI, and more
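MCP servers plug into Claude Desktop via its `claude_desktop_config.json`. A hypothetical entry for CodeRunner might look like the fragment below; the actual command and arguments are in the CodeRunner README.

```json
{
  "mcpServers": {
    "coderunner": {
      "command": "uvx",
      "args": ["coderunner-mcp"]
    }
  }
}
```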

Frequently asked questions

Everything you need to know about InstaVM

How fast is “sub‑200ms startup”?

MicroVMs cold-boot in roughly 185ms, keeping startup under the 200ms mark without any pre-warming, so agent tool calls stay responsive.

Is my code really isolated?

Yes. Every execution runs in a fresh Firecracker microVM with network, filesystem, and process isolation. VMs are destroyed after execution with zero data persistence between runs.

What languages and packages are supported?

Python, JavaScript/Node.js, Go, Bash, and more. Install pip/npm packages on demand, and use prebuilt runtimes for common stacks.

Can I use InstaVM in production?

Yes. We’re built for production workloads with stable APIs, observability hooks, and reliability features for high‑volume agent executions.

How does pricing work?

Pay only for what you use, billed per hour based on vCPU and RAM. Start with $100 in free credits and scale up when you're ready.

Get Started in Minutes

Start free. No credit card required.

$100 free credits • Pay per hour for vCPU & RAM
