Sandbox for AI Agents

Run untrusted AI code and agent actions in hardware-isolated environments. Each request gets an instant, dedicated computer.

$100 of free credits, no credit card required

Get Started


# python3 -m pip install instavm
from instavm import InstaVM

client = InstaVM(api_key="your_api_key")
result = client.execute('print("Hello!")')
print(result.stdout)
Need help? Join Discord, check docs, or email us.

Run AI agents securely on your Mac

Open-source MCP server for running AI-generated code locally using Apple containers. Process local files without cloud upload, with complete VM-level isolation.

100% Local Execution

Code runs on your Mac using Apple containers - no cloud upload required

Process Local Files

Work with local documents, databases, and files without uploading to cloud

AI Agent Ready

Integrates with Claude Desktop, OpenAI agents, Gemini CLI, and more (see the client sketch below)
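Since the server speaks the standard Model Context Protocol, any MCP client can drive it. Here is a minimal sketch using the official mcp Python SDK; the coderunner launch command and the execute_code tool name are assumptions for illustration, so check the project README for the real values.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command; consult the README for the actual one.
server = StdioServerParameters(command="coderunner", args=[])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tools the server exposes.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical tool name: run code inside the local VM.
            result = await session.call_tool(
                "execute_code", arguments={"code": "print('hello')"}
            )
            print(result.content)

asyncio.run(main())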

Everything you need to run untrusted code safely.

Production-ready infrastructure for AI-generated code execution at scale.

185ms cold start

Blazing Fast

MicroVMs boot in under 200 milliseconds. 10x faster than traditional containers. No pre-warming needed.

100% isolated

Complete Isolation

Every execution runs in a fresh Firecracker microVM. Full network, filesystem, and process isolation. Zero data persistence.

3 lines to integrate

Drop-in Integration

Works with OpenAI, Claude, LangChain, LlamaIndex, and DSPy. Add secure execution to your existing agents instantly.

0 code changes

Local or Cloud

Run CodeRunner locally for development or deploy to our cloud for production. Same API, same code, your choice.
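Switching between the two could look like the sketch below. Note that the base_url keyword and port are assumptions for illustration, not a documented parameter; check the docs for the actual configuration option.

from instavm import InstaVM

# Cloud: the default hosted endpoint.
cloud = InstaVM(api_key="your-key")

# Local: point the same client at a local CodeRunner instance.
# NOTE: `base_url` and the port are hypothetical; see the docs.
local = InstaVM(api_key="your-key", base_url="http://localhost:8000")

# Identical calls against either backend.
print(cloud.execute("print('cloud')").stdout)
print(local.execute("print('local')").stdout)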

Works with popular AI frameworks

Need help? Join Discord, check docs, or email us.

Python SDK
# python3 -m pip install instavm
from instavm import InstaVM

client = InstaVM(api_key="your-key")

# Execute code
result = client.execute("print(100**100)")
print(result.stdout)

# Browser automation (viewport width x height)
browser = client.browser.create_session(1920, 1080)
browser.navigate("https://example.com")
screenshot = browser.screenshot()

Model agnostic

Works with any AI model or framework. Use OpenAI, Claude, Gemini, or your own.
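As one illustration, here is how you might hand an OpenAI model an InstaVM-backed execution tool through standard function calling. The tool schema and glue code are a sketch, not official integration helpers, and assume the model decides to call the tool.

import json
from openai import OpenAI
from instavm import InstaVM

llm = OpenAI()
vm = InstaVM(api_key="your-key")

# Advertise a code-execution tool the model may call.
tools = [{
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Run Python in an isolated microVM; returns stdout.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

response = llm.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Compute 2**64 exactly."}],
    tools=tools,
)

# Assumes the model chose the tool; production code should check.
call = response.choices[0].message.tool_calls[0]
code = json.loads(call.function.arguments)["code"]
print(vm.execute(code).stdout)  # runs in a fresh, isolated VM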

OpenAI · Anthropic · LangChain · LlamaIndex · Google AI · Azure · DSPy

Frequently asked questions

Everything you need to know about InstaVM

How fast is 'sub-200ms startup'?

Our VMs boot in under 200 milliseconds (185ms at the 95th percentile), including cold starts with no pre-warming. We use Firecracker microVMs for instant isolation.

Is my code really isolated?

Yes. Every execution runs in a dedicated Firecracker microVM with complete network, filesystem, and process isolation. Each VM is destroyed after execution with zero data persistence between runs.

What languages and packages are supported?

Python, JavaScript/Node.js, Go, Bash, and more. We auto-install pip/npm packages on-demand. Check our docs for the full list of pre-installed packages and custom runtime options.
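For example, executing code that imports a package missing from the image should still work, with the install happening inside the VM before the code runs (requests here is just an arbitrary example):

from instavm import InstaVM

client = InstaVM(api_key="your-key")

# If `requests` isn't pre-installed, it is pip-installed
# on-demand inside the VM before this snippet executes.
result = client.execute(
    "import requests\n"
    "print(requests.get('https://example.com').status_code)"
)
print(result.stdout)  # expected: 200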

Can I use InstaVM in production?

Absolutely. We're production-ready with 99.9% uptime SLA and serve millions of executions monthly. Enterprise plans include dedicated infrastructure and 24/7 support.

How does pricing work?

Pay only for what you use, billed per-second based on vCPU, RAM, and storage. Start with $100 in free credits. No hidden fees, no credit card required to get started.
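As a back-of-the-envelope sketch with made-up rates (the real per-unit prices live on the pricing page):

# Hypothetical rates for illustration only -- not actual pricing.
VCPU_PER_SEC = 0.00003    # $ per vCPU-second
RAM_GB_PER_SEC = 0.00001  # $ per GB-second

# A 2-vCPU, 4 GB sandbox that runs for 30 seconds:
cost = 30 * (2 * VCPU_PER_SEC + 4 * RAM_GB_PER_SEC)
print(f"${cost:.5f}")  # -> $0.00300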

Can I integrate with OpenAI/Anthropic/etc?

Yes. We provide native integrations for OpenAI, Anthropic Claude, LangChain, LlamaIndex, and DSPy. Your AI agents get secure execution tools with a few lines of code.

Get Started in Minutes

Free to start. No credit card required.

$100 free credits • Pay per second for vCPU, RAM & storage
