Abhishek · April 17, 2026 · 5 min read


InstaVM is a sandbox provider for the OpenAI Agents SDK. Agents built on SandboxAgent can run inside Firecracker microVMs with PTY support, egress controls, and port exposure.

OpenAI recently added native sandbox execution to the Agents SDK. That makes it possible to plug external compute into the SDK's agent loop without rebuilding the harness yourself. That's the seam we built into. If you already use InstaVM, you can keep your VM workflows and add the SDK's tool routing, tracing, and session lifecycle on top. Every account gets $50 in free compute to try it.

Why use VMs for agent sandboxes

Most sandbox products start with containers. That works well for fast startup and high density. But containers still share a kernel with the host. If an agent can install packages, run arbitrary code, make network requests, and modify files, that shared kernel matters.

InstaVM runs each sandbox inside a Firecracker microVM. Every agent gets its own kernel, filesystem, and network stack. The isolation is enforced by KVM rather than Linux namespaces and cgroups. If a process inside the sandbox is compromised, it still has to escape the VM itself.

For workloads like code review and data analysis, that stronger boundary is the point. We covered the architecture in How Claude Code and Codex approach sandboxing.

What you get

The current InstaVM sandbox provider supports:

  • SandboxAgent and Manifest for workspace files, environment variables, and output directories
  • Shell with both exec and write_stdin
  • Filesystem operations including apply_patch and view_image
  • Session create, resume, and destroy
  • Egress controls
  • Port exposure as a TLS URL for previews and services

Manifest entries currently supported are File, Dir, LocalFile, and LocalDir. S3Mount, GCSMount, AzureBlobMount, and Skills materialization from Git repos are not supported yet.

How it plugs into the SDK

The Agents SDK separates the agent harness from the sandbox compute. The harness keeps model calls, tool routing, tracing, and your application logic in your own stack. The sandbox provider is responsible for sessions, command execution, files, and ports.

That provider boundary is small, which makes the integration straightforward. If you want the full model, read the Sandboxes guide.
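To make that boundary concrete, here is a hypothetical sketch of the kind of surface a provider covers: sessions, commands, files, and ports. The names and signatures below are illustrative, not the actual Agents SDK contract.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SandboxProvider(Protocol):
    """Illustrative provider surface; the real SDK interface differs."""

    async def create_session(self, manifest: dict) -> str: ...
    async def destroy_session(self, session_id: str) -> None: ...
    async def exec(self, session_id: str, command: str) -> tuple[int, bytes, bytes]: ...
    async def read_file(self, session_id: str, path: str) -> bytes: ...
    async def write_file(self, session_id: str, path: str, data: bytes) -> None: ...
    async def expose_port(self, session_id: str, port: int) -> str: ...
    def supports_pty(self) -> bool: ...
```

The point is how little there is: everything else, including model calls, tool routing, and tracing, lives in the harness on your side of the boundary.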

A minimal example

import asyncio

from agents import Runner
from agents.run import RunConfig
from agents.sandbox import Manifest, SandboxAgent, SandboxRunConfig
from agents.sandbox.entries import File

from instavm.integrations.openai_agents import InstaVMSandboxClient


async def main():
    agent = SandboxAgent(
        name="Data analyst",
        model="gpt-4.1",
        instructions="Inspect sales.csv and summarize the outliers.",
        default_manifest=Manifest(
            entries={
                "sales.csv": File(
                    content=b"region,revenue\nus,120\nin,980\nuk,140\n"
                )
            }
        ),
    )

    result = await Runner.run(
        agent,
        "Analyze the dataset and explain what stands out.",
        run_config=RunConfig(
            sandbox=SandboxRunConfig(
                client=InstaVMSandboxClient(api_key="your-instavm-api-key")
            ),
        ),
    )
    print(result.final_output)


asyncio.run(main())

The SDK materializes the manifest into a fresh InstaVM, the agent can inspect files and run commands, and the session is cleaned up when the run ends. The same pattern works for code review agents that need a writable workspace and for data analysis agents that need a real filesystem.

Interactive shells via PTY

The Agents SDK's Shell capability has two modes. Basic command execution (exec) sends a command and gets stdout/stderr back. Interactive mode (write_stdin) opens a persistent shell session where the model can send input, read output, and interact with long-running processes.

Interactive mode requires PTY support from the sandbox provider. Without it, the SDK raises NotImplementedError and the agent can't use write_stdin at all.

We implemented full PTY support end-to-end:

  • In-VM: A PTY session manager inside each VM handles pty.openpty(), process lifecycle, resize (TIOCSWINSZ), and exit code tracking.
  • Transport: A WebSocket relay carries stdin, stdout, and control messages between the SDK and the VM.
  • SDK integration: supports_pty() returns True, and pty_exec_start / pty_write_stdin work as the SDK contract specifies.

This lets an agent keep a shell open across multiple tool calls and interact with programs that expect a terminal.
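The in-VM side can be sketched with nothing but the Python standard library. This is a deliberately simplified illustration of the idea, not InstaVM's actual session manager, which also handles streaming, resize messages over the relay, and multiple concurrent sessions:

```python
import fcntl
import os
import pty
import struct
import termios


def run_in_pty(argv, rows=24, cols=80):
    """Spawn argv attached to a fresh PTY; return (output, exit_code)."""
    pid, master_fd = pty.fork()
    if pid == 0:
        # Child: stdin/stdout/stderr are already wired to the PTY slave.
        os.execvp(argv[0], argv)
    # Parent: set the terminal size the child sees (TIOCSWINSZ),
    # the same ioctl a resize request would trigger mid-session.
    winsize = struct.pack("HHHH", rows, cols, 0, 0)
    fcntl.ioctl(master_fd, termios.TIOCSWINSZ, winsize)
    chunks = []
    while True:
        try:
            data = os.read(master_fd, 4096)
        except OSError:
            break  # Linux raises EIO on the master once the child exits
        if not data:
            break
        chunks.append(data)
    os.close(master_fd)
    _, status = os.waitpid(pid, 0)
    return b"".join(chunks), os.waitstatus_to_exitcode(status)


# The child sees a real terminal, so isatty() is True.
out, code = run_in_pty(["python3", "-c", "import sys; print(sys.stdout.isatty())"])
print(out.decode(), code)
```

Programs that branch on `isatty()`, draw progress bars, or prompt for input behave here the way they would in a real terminal, which is exactly what plain `exec` with captured pipes cannot give you.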

Getting started

Sign up at instavm.io and grab an API key. Every account gets $50 in free compute.

pip install "instavm[agents]"

Start with the InstaVM quickstart, install the instavm package from PyPI, and plug InstaVMSandboxClient into RunConfig(sandbox=SandboxRunConfig(client=...)).
