Sandbox for AI Agents
Hardware-isolated microVMs for AI agent code execution, with sub-200ms cold starts.
$100 free credits · No credit card required
Why teams choose InstaVM
Sub-200ms Boot
MicroVMs boot in under 200 milliseconds — 10x faster than containers. No pre-warming, no cold-start penalties.
Hardware-Level Isolation
Every execution runs in a dedicated microVM with its own kernel. Full network, filesystem, and process isolation. Nothing persists between runs.
Drop-in SDKs
Python and JavaScript SDKs that work with OpenAI, Claude, LangChain, LlamaIndex, and DSPy. Add sandboxed execution to existing agents in minutes.
Secret Injection
Inject third-party credentials at runtime without exposing them to sandboxed code. Eliminates credential leaks from prompt injection.
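The idea can be sketched in plain Python: the host process holds the credential and hands untrusted code only a narrow capability, so even a prompt-injected script can never read the raw key. This is an illustrative pattern, not InstaVM's actual API; `make_fetch_capability` and the fake key are hypothetical.

```python
# Illustrative sketch of runtime secret injection (hypothetical names,
# not the InstaVM SDK): the credential lives only in the host's closure.

API_KEY = "sk-live-example"  # held by the host, never passed into the sandbox


def make_fetch_capability(key):
    """Return a function untrusted code may call; the key stays in this closure."""
    def fetch(path):
        # In a real setup the host would attach the Authorization header
        # and perform the request *outside* the sandbox, returning only
        # the response to the untrusted code.
        return f"GET {path} -> 200 OK"
    return fetch


untrusted_tool = make_fetch_capability(API_KEY)
print(untrusted_tool("/v1/orders"))  # untrusted code sees results, not the key
```

The sandboxed code receives `fetch` as a tool; nothing it can print or exfiltrate contains the credential.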
Local or Cloud
Run CodeRunner locally for development, deploy to our cloud for production. Same API, same code.
Pause & Snapshot
Pause running VMs, snapshot state, resume or clone instantly. Ideal for long-running agent workflows.
Egress Control
Fine-grained outbound network rules. Allowlist domains, block egress, or scope access per-VM.
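A domain allowlist of this kind boils down to a hostname check on every outbound request. The sketch below is a minimal illustration of the rule semantics (exact domain or subdomain match), assuming hypothetical allowlisted domains; it is not InstaVM's rule engine.

```python
# Minimal sketch of allowlist-based egress filtering: a request is permitted
# only if its hostname is an allowlisted domain or a subdomain of one.
from urllib.parse import urlparse

ALLOWED = {"api.github.com", "pypi.org"}  # hypothetical per-VM allowlist


def egress_allowed(url):
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ALLOWED)


print(egress_allowed("https://pypi.org/simple/requests/"))  # True
print(egress_allowed("https://evil.example.com/exfil"))     # False
```

Checking `host.endswith("." + d)` rather than a bare substring match prevents lookalike domains such as `notpypi.org` from slipping through.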
Quickstart
Add secure execution to your agents in minutes. Same API for cloud microVMs and local development.
python3 -m pip install instavm

from instavm import InstaVM

client = InstaVM(api_key="your_api_key")

# Execute untrusted code
result = client.execute("print(100**100)")
print(result.stdout)

# Browser + network access
browser = client.browser.create_session(1920, 1080)
browser.navigate("https://example.com")
screenshot = browser.screenshot()
Session ID: local-abc123
Startup time: 45ms
Status: Ready
Run AI agents securely on your Mac
Open-source MCP server for running AI-generated code locally using Apple containers. Process local files without cloud upload, with complete VM-level isolation.
100% Local Execution
Code runs on your Mac using Apple containers - no cloud upload required
Process Local Files
Work with local documents, databases, and files without uploading to cloud
AI Agent Ready
Integrates with Claude Desktop, OpenAI agents, Gemini CLI, and more
Frequently asked questions
Everything you need to know about InstaVM
How fast is “sub‑200ms startup”?
MicroVMs cold-boot in under 200ms (P95 around 185ms), with no pre‑warming, so agent tool calls stay responsive.
Is my code really isolated?
Yes. Every execution runs in a fresh Firecracker microVM with network, filesystem, and process isolation. VMs are destroyed after execution with zero data persistence between runs.
What languages and packages are supported?
Python, JavaScript/Node.js, Go, Bash, and more. Install pip/npm packages on demand, and use prebuilt runtimes for common stacks.
Can I use InstaVM in production?
Yes. We’re built for production workloads with stable APIs, observability hooks, and reliability features for high‑volume agent executions.
How does pricing work?
Pay only for what you use, billed per hour based on vCPU and RAM. Start with $100 in free credits and scale up when you're ready.
