Run untrusted AI code and agent actions in hardware-isolated environments. Each request gets an instant, dedicated computer.
python3 -m pip install instavm

from instavm import InstaVM

client = InstaVM('your_api_key')
result = client.execute('print("Hello!")')
print(result.stdout)
Open-source MCP server for running AI-generated code locally using Apple containers. Process local files without cloud upload, with complete VM-level isolation.
Code runs on your Mac using Apple containers - no cloud upload required
Work with local documents, databases, and files without uploading to cloud
Integrates with Claude Desktop, OpenAI agents, Gemini CLI, and more
Production-ready infrastructure for AI-generated code execution at scale.
MicroVMs boot in under 200 milliseconds. 10x faster than traditional containers. No pre-warming needed.
Every execution runs in a fresh Firecracker microVM. Full network, filesystem, and process isolation. Zero data persistence.
Works with OpenAI, Claude, LangChain, LlamaIndex, and DSPy. Add secure execution to your existing agents instantly.
Run CodeRunner locally for development or deploy to our cloud for production. Same API, same code, your choice.
Works with any AI model or framework. Use OpenAI, Claude, Gemini, or your own.
Everything you need to know about InstaVM
Our VMs boot in under 200 milliseconds on average (P95 at 185ms). This includes cold starts with no pre-warming needed. We use Firecracker microVMs for instant isolation.
Yes. Every execution runs in a dedicated Firecracker microVM with complete network, filesystem, and process isolation. Each VM is destroyed after execution with zero data persistence between runs.
Python, JavaScript/Node.js, Go, Bash, and more. We auto-install pip/npm packages on-demand. Check our docs for the full list of pre-installed packages and custom runtime options.
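The on-demand install behavior described above can be approximated locally with a small helper. The `ensure_package` function below is a hypothetical illustration of the idea, not InstaVM's actual mechanism:

```python
import importlib
import subprocess
import sys

def ensure_package(module_name, pip_name=None):
    """Import a module, pip-installing its package on demand if missing.

    Conceptual sketch only; InstaVM's real on-demand installer is
    internal to the platform.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError:
        # Fall back to installing the package, then retry the import.
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", pip_name or module_name]
        )
        return importlib.import_module(module_name)

# Already-available modules import immediately, with no install step.
json_mod = ensure_package("json")
print(json_mod.dumps({"ok": True}))
```

Passing `pip_name` covers packages whose import name differs from their PyPI name (for example, importing `bs4` from the `beautifulsoup4` package).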
Absolutely. We're production-ready with 99.9% uptime SLA and serve millions of executions monthly. Enterprise plans include dedicated infrastructure and 24/7 support.
Pay only for what you use, billed per-second based on vCPU, RAM, and storage. Start with $100 in free credits. No hidden fees, no credit card required to get started.
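Per-second billing like this can be estimated with simple arithmetic. The rates in the sketch below are hypothetical placeholders, not InstaVM's published pricing:

```python
def estimate_cost(seconds, vcpus, ram_gb, storage_gb,
                  vcpu_rate=0.000010, ram_rate=0.000002,
                  storage_rate=0.0000005):
    """Estimate a usage-based bill for one execution.

    All rates (USD per unit per second) are made-up placeholders
    for illustration; check the pricing page for real numbers.
    """
    per_second = (vcpus * vcpu_rate
                  + ram_gb * ram_rate
                  + storage_gb * storage_rate)
    return round(seconds * per_second, 6)

# e.g. a 30-second run on 2 vCPUs, 4 GB RAM, 10 GB storage
print(estimate_cost(30, 2, 4.0, 10.0))
```

Billing per second of actual use, rather than per provisioned hour, is what makes short-lived sandboxes cheap: a VM that lives for 200 ms accrues 200 ms of charges.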
Yes. We provide native integrations for OpenAI, Anthropic Claude, LangChain, LlamaIndex, and DSPy. Your AI agents get secure execution tools with 2 lines of code.
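Wiring an execution backend into an agent typically means exposing it as a tool. The sketch below uses OpenAI's function-calling schema with a hypothetical `run_python` tool name; it is an assumed shape, not InstaVM's actual integration code:

```python
# Tool definition in OpenAI's function-calling schema. When the model
# emits a run_python call, the handler forwards the code to the sandbox.
run_python_tool = {
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute Python code in an isolated sandbox "
                       "and return its stdout.",
        "parameters": {
            "type": "object",
            "properties": {
                "code": {
                    "type": "string",
                    "description": "Python source to run",
                },
            },
            "required": ["code"],
        },
    },
}

def handle_tool_call(client, arguments):
    """Hypothetical glue: pass the model's code to the sandbox client
    (an object with an execute() method returning a .stdout attribute,
    as in the quickstart snippet) and hand stdout back to the agent."""
    result = client.execute(arguments["code"])
    return result.stdout
```

The same pattern generalizes to LangChain, LlamaIndex, or DSPy: the sandbox's `execute` call becomes the body of whatever tool abstraction the framework uses.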