#1 on Hacker News · 1,143 votes

Infrastructure for AI Agents

Hardware-isolated Linux computers for AI agents. Start and stop thousands of instances every second.

$50 free credits · No credit card required

Use Cases

What developers are shipping

Code Interpreter

Execute AI agent code in isolated microVMs with RESTful API access and persistent state. Each agent gets its own Linux environment with full filesystem, networking, and package management.

coding-agents.py
from instavm import InstaVM
client = InstaVM()
vm = client.vms.create(vcpu_count=4, memory_mb=8192)
# Upload agent code
client.upload(vm.id, "agent.py", agent_code)
# Execute with full network access
result = client.execute(vm.id, "python3 agent.py")
print(result.stdout)
# Snapshot state for resumption
snap = client.snapshot(vm.id)

ssh instavm.dev

Manage everything from your terminal

No dashboards, no clicking through UIs. One SSH endpoint to list, create, clone, share, and destroy VMs.

ssh instavm.dev ls

List all your VMs

ssh <vm_id>@instavm.dev

SSH directly into a VM

ssh instavm.dev clone <vm_id>

Clone via snapshot

ssh instavm.dev share create <vm_id> 8080 --public

Share a port publicly

ssh instavm.dev rm <vm_id>

Terminate a VM

ssh instavm.dev whoami

Show account identity

Full SSH CLI reference → docs/ssh

Why teams choose InstaVM

185ms
cold start

Sub-200ms Boot

MicroVMs boot in under 200 milliseconds — 10x faster than containers. No pre-warming, no cold-start penalties.

100%
isolated

Hardware-Level Isolation

Every execution runs in a dedicated microVM with its own kernel. Full network, filesystem, and process isolation. Nothing persists between runs.

3 lines
to integrate

Drop-in SDKs

Python and JavaScript SDKs that work with OpenAI, Claude, LangChain, LlamaIndex, and DSPy. Add sandboxed execution to existing agents in minutes.

NEW

Secret Injection

Inject third-party credentials at runtime without exposing them to sandboxed code. Eliminates credential leaks from prompt injection.
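The principle can be illustrated locally. The sketch below is plain Python subprocess code, not the InstaVM API: the credential is placed in the child process's environment at launch, so it never appears in the sandboxed code itself and cannot be exfiltrated from that code's source by a prompt-injected model.

```python
import os
import subprocess
import sys

# The sandboxed "agent" code contains no credential -- it can only
# observe that the environment variable exists at runtime.
agent_code = 'import os; print("token set:", "API_TOKEN" in os.environ)'

# Inject the secret at launch time, outside the sandboxed code itself.
env = {**os.environ, "API_TOKEN": "s3cr3t-value"}
result = subprocess.run(
    [sys.executable, "-c", agent_code],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # token set: True
```

The same separation holds in a microVM: the secret is injected into the VM's runtime environment, while the code the model wrote stays credential-free.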

Local or Cloud

Run CodeRunner locally for development, deploy to our cloud for production. Same API, same code.

SOON

Pause & Snapshot

Pause running VMs, snapshot state, resume or clone instantly. Ideal for long-running agent workflows.

SOON

Egress Control

Fine-grained outbound network rules. Allowlist domains, block egress, or scope access per-VM.

Model-agnostic

Works with the tools you already use. OpenAI, Claude, Gemini, LangChain — and your own stack.

View integrations
Terminal

$ git clone https://github.com/instavm/coderunner.git

$ cd coderunner && ./install.sh

# Run AI-generated code locally on Mac

$ python my_script.py

✓ Running in isolated Apple container

✓ No cloud upload — processes local files

✓ Complete VM-level isolation

Session ID: local-abc123

Startup time: 45ms

Status: Ready

Claude Desktop · OpenAI Agents · Gemini CLI · Kiro

Sponsored by
Microsoft · GitHub

Run AI agents securely on your Mac

Open-source MCP server for running AI-generated code locally using Apple containers. Process local files without cloud upload, with complete VM-level isolation.

100% Local Execution

Code runs on your Mac using Apple containers — no cloud upload required

Process Local Files

Work with local documents, databases, and files without uploading to cloud

AI Agent Ready

Integrates with Claude Desktop, OpenAI agents, Gemini CLI, and more

SDK

Three lines to sandbox

Python and JavaScript SDKs. Create a VM, configure resources and egress, execute code, get results. Same API for cloud and local development.

Configure vCPU, RAM, and egress per VM
Works with OpenAI, Anthropic, LangChain, DSPy
Upload files, download results, stream output
from instavm import InstaVM

client = InstaVM(api_key="your_api_key")
vm = client.vms.create(
    vcpu_count=4,
    memory_mb=8192,
    egress_policy={"allowed": ["pypi.org", "github.com"]},
)
result = client.execute(vm.id, "python3 agent.py")
print(result.stdout)

REST API

Developer API

https://api.instavm.io/v1 — Full REST API for VMs, snapshots, egress, shares, execution, and file transfer.

METHOD  ENDPOINT                    DESCRIPTION
POST    /v1/vms                     Create a VM with vCPU, memory, and egress policy
GET     /v1/vms                     List all running VMs
POST    /v1/vms/{vm_id}/snapshot    Snapshot a running VM
POST    /v1/vms/{vm_id}/clone       Clone a VM from snapshot
GET     /v1/executions              Fetch code executions
POST    /v1/egress/vm/{vm_id}       Set network egress policy
POST    /v1/shares                  Expose a VM port as a public URL
POST    /v1/ssh-keys                Register an SSH public key
GET     /v1/snapshots               List all snapshots
POST    /upload                     Upload a file to a VM
POST    /download                   Download a file from a VM
POST    /v1/custom-domains          Map a custom domain to a VM share
Full reference → docs/api
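As a sketch, a create-VM request against the documented POST /v1/vms endpoint can be built with the standard library. The payload field names mirror the SDK example on this page and may differ from the live API; the API key is a placeholder, and nothing is sent over the network here (the request is only constructed, not dispatched with urlopen):

```python
import json
import urllib.request

# Payload fields assumed from the SDK example; verify against docs/api.
payload = {
    "vcpu_count": 4,
    "memory_mb": 8192,
    "egress_policy": {"allowed": ["pypi.org", "github.com"]},
}

req = urllib.request.Request(
    "https://api.instavm.io/v1/vms",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.method, req.full_url)  # POST https://api.instavm.io/v1/vms
```

Dispatching it is a single `urllib.request.urlopen(req)` call once a real API key is in place.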

FAQ

How fast do VMs start?

MicroVMs boot in ~147ms on average (P95 ≈ 185ms). That's a cold start without pre-warming, so agent tool calls stay responsive.

Is every execution isolated?

Yes. Every execution runs in a fresh Firecracker microVM with its own kernel, network namespace, and filesystem. VMs are destroyed after execution.

What can I run?

Anything. Python, Node.js, Go, Rust, Bash — a full Linux environment. Install pip/npm packages, run browsers, spawn processes, use the GPU.

Can I control network access?

Set per-VM or per-session egress policies. Allowlist specific domains (e.g. pypi.org, github.com), allow all, or block all outbound traffic.

How does pricing work?

Pay per hour based on vCPU and RAM. Start with $50 in free credits. No minimum commitment, no idle charges when VMs are terminated.
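The per-hour model is simple enough to sketch. The per-vCPU and per-GB rates below are invented placeholders for illustration, not InstaVM's actual prices:

```python
# HYPOTHETICAL rates -- InstaVM's real per-vCPU / per-GB prices are not
# stated on this page.
VCPU_PER_HOUR = 0.02    # assumed $/vCPU-hour
RAM_GB_PER_HOUR = 0.01  # assumed $/GB-hour

def hourly_cost(vcpus: int, memory_mb: int) -> float:
    """Cost per hour for a running VM; terminated VMs cost nothing."""
    return vcpus * VCPU_PER_HOUR + (memory_mb / 1024) * RAM_GB_PER_HOUR

# The 4 vCPU / 8 GB VM from the SDK example:
cost = hourly_cost(4, 8192)
print(f"${cost:.2f}/hour")                       # $0.16/hour
print(f"{50 / cost:.0f} hours on $50 credits")   # 312 hours on $50 credits
```

Because billing stops when a VM is terminated, short-lived per-task VMs cost only for the seconds or minutes they actually run.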

Start building

$50 in free credits. No credit card.
