Infrastructure for AI Agents
Hardware-isolated Linux computers for AI agents. Start and stop thousands of instances every second.
$50 free credits · No credit card required
Use Cases
What developers are shipping
Code Interpreter
Execute AI agent code in isolated microVMs with RESTful API access and persistent state. Each agent gets its own Linux environment with full filesystem, networking, and package management.
from instavm import InstaVM

client = InstaVM()
vm = client.vms.create(vcpu=4, memory_mb=8192)

# Upload agent code
client.upload(vm.id, "agent.py", agent_code)

# Execute with full network access
result = client.execute(vm.id, "python3 agent.py")
print(result.stdout)

# Snapshot state for resumption
snap = client.snapshot(vm.id)
ssh instavm.dev
Manage everything from your terminal
No dashboards, no clicking through UIs. One SSH endpoint to list, create, clone, share, and destroy VMs.
ssh instavm.dev ls · List all your VMs
ssh <vm_id>@instavm.dev · SSH directly into a VM
ssh instavm.dev clone <vm_id> · Clone via snapshot
ssh instavm.dev share create <vm_id> 8080 --public · Share a port publicly
ssh instavm.dev rm <vm_id> · Terminate a VM
ssh instavm.dev whoami · Show account identity
Full SSH CLI reference → docs/ssh
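Because the control plane is plain SSH, these commands are easy to script. A minimal Python sketch that shells out to the endpoint (the helper names here are my own; the underlying commands come from the list above):

```python
import subprocess

INSTAVM_HOST = "instavm.dev"

def ssh_cmd(*args: str) -> list[str]:
    # Build the argv for a control command, e.g. ssh_cmd("ls")
    return ["ssh", INSTAVM_HOST, *args]

def list_vms() -> str:
    # Runs `ssh instavm.dev ls` and returns its stdout
    return subprocess.run(
        ssh_cmd("ls"), capture_output=True, text=True, check=True
    ).stdout

def share_port(vm_id: str, port: int, public: bool = False) -> list[str]:
    # Mirrors `ssh instavm.dev share create <vm_id> <port> [--public]`
    cmd = ssh_cmd("share", "create", vm_id, str(port))
    if public:
        cmd.append("--public")
    return cmd
```

Anything that speaks SSH (CI jobs, cron, other agents) can drive the same interface with no extra client library.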
Why teams choose InstaVM
Sub-200ms Boot
MicroVMs boot in under 200 milliseconds — 10x faster than containers. No pre-warming, no cold-start penalties.
Hardware-Level Isolation
Every execution runs in a dedicated microVM with its own kernel. Full network, filesystem, and process isolation. Nothing persists between runs.
Drop-in SDKs
Python and JavaScript SDKs that work with OpenAI, Claude, LangChain, LlamaIndex, and DSPy. Add sandboxed execution to existing agents in minutes.
Secret Injection
Inject third-party credentials at runtime without exposing them to sandboxed code. Eliminates credential leaks from prompt injection.
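One way to picture this (a conceptual sketch, not the InstaVM implementation): sandboxed code only ever handles a placeholder, and the host substitutes the real credential into outbound requests after they leave the VM:

```python
# Conceptual sketch of runtime secret injection. The real credential lives on
# the host side; code inside the sandbox only sees a placeholder, so a
# prompt-injected script cannot read or exfiltrate the secret itself.
HOST_SECRETS = {"STRIPE_KEY": "sk_live_real_value"}  # never mounted into the VM

def inject_secrets(headers: dict) -> dict:
    # Replace {{NAME}} placeholders in outbound request headers
    out = {}
    for key, value in headers.items():
        for name, real in HOST_SECRETS.items():
            value = value.replace("{{" + name + "}}", real)
        out[key] = value
    return out
```

For example, `inject_secrets({"Authorization": "Bearer {{STRIPE_KEY}}"})` yields a header carrying the real key, while the sandboxed code that built the request never saw it.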
Local or Cloud
Run CodeRunner locally for development, deploy to our cloud for production. Same API, same code.
Pause & Snapshot
Pause running VMs, snapshot state, resume or clone instantly. Ideal for long-running agent workflows.
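A checkpointed agent workflow might look like the sketch below. The `pause` and `clone` method names are assumptions extrapolated from the snippets on this page, with a stub standing in for the SDK client so the flow is runnable as-is:

```python
class StubClient:
    # Stand-in for the InstaVM SDK client, just to make the workflow runnable here
    def __init__(self):
        self.log = []

    def pause(self, vm_id):
        self.log.append(("pause", vm_id))

    def snapshot(self, vm_id):
        self.log.append(("snapshot", vm_id))
        return f"snap-of-{vm_id}"

    def clone(self, snap_id):
        self.log.append(("clone", snap_id))
        return f"vm-from-{snap_id}"

def checkpoint_and_fork(client, vm_id):
    # Pause a long-running agent VM, snapshot it, and resume work in a clone,
    # leaving the original snapshot available for later replays
    client.pause(vm_id)
    snap = client.snapshot(vm_id)
    return client.clone(snap)

clone_id = checkpoint_and_fork(StubClient(), "vm-42")
```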
Egress Control
Fine-grained outbound network rules. Allowlist domains, block egress, or scope access per-VM.
Model-agnostic
Works with the tools you already use. OpenAI, Claude, Gemini, LangChain — and your own stack.
$ git clone https://github.com/instavm/coderunner.git
$ cd coderunner && ./install.sh
# Run AI-generated code locally on Mac
$ python my_script.py
✓ Running in isolated Apple container
✓ No cloud upload — processes local files
✓ Complete VM-level isolation
Session ID: local-abc123
Startup time: 45ms
Status: Ready
Run AI agents securely on your Mac
Open-source MCP server for running AI-generated code locally using Apple containers. Process local files without cloud upload, with complete VM-level isolation.
Code runs on your Mac using Apple containers — no cloud upload required
Work with local documents, databases, and files without uploading to cloud
Integrates with Claude Desktop, OpenAI agents, Gemini CLI, and more
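For Claude Desktop, registering CodeRunner would follow the standard MCP server config shape; the command path below is a placeholder, so check the CodeRunner README for the actual launch command:

```json
{
  "mcpServers": {
    "coderunner": {
      "command": "/path/to/coderunner/server",
      "args": []
    }
  }
}
```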
SDK
Three lines to sandbox
Python and JavaScript SDKs. Create a VM, configure resources and egress, execute code, get results. Same API for cloud and local development.
from instavm import InstaVM

client = InstaVM(api_key="your_api_key")
vm = client.vms.create(
    vcpu_count=4,
    memory_mb=8192,
    egress_policy={"allowed": ["pypi.org", "github.com"]},
)
result = client.execute(vm.id, "python3 agent.py")
print(result.stdout)
REST API
Developer API
https://api.instavm.io/v1 — Full REST API for VMs, snapshots, egress, shares, execution, and file transfer.
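A minimal sketch of calling the API from Python's standard library. The `/vms` resource path and JSON field names mirror the SDK parameters above but are assumptions, so verify them against the API reference before relying on them:

```python
import json
import urllib.request

API_BASE = "https://api.instavm.io/v1"

def vm_create_request(api_key, vcpu_count, memory_mb, allowed=None):
    # Build (but do not send) a POST request to create a VM.
    # Endpoint path and body fields are assumed, not confirmed by this page.
    body = {"vcpu_count": vcpu_count, "memory_mb": memory_mb}
    if allowed:
        body["egress_policy"] = {"allowed": allowed}
    return urllib.request.Request(
        f"{API_BASE}/vms",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Send with: urllib.request.urlopen(vm_create_request("your_api_key", 4, 8192))
```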
FAQ
How fast do VMs start?
MicroVMs boot in ~147ms on average (P95 ≈ 185ms). That's cold start without pre-warming, so agent tool calls stay responsive.
Is code fully isolated?
Yes. Every execution runs in a fresh Firecracker microVM with its own kernel, network namespace, and filesystem. VMs are destroyed after execution.
What can I run?
Anything. Python, Node.js, Go, Rust, Bash — full Linux environment. Install pip/npm packages, run browsers, spawn processes, use the GPU.
How do I control network access?
Set per-VM or per-session egress policies. Allowlist specific domains (e.g. pypi.org, github.com), allow all, or block all outbound traffic.
How does pricing work?
Pay per hour based on vCPU and RAM. Start with $50 in free credits. No minimum commitment, no idle charges when VMs are terminated.