Build a Code Interpreter
Build a ChatGPT-style code interpreter where users send messages, code runs in a persistent sandbox, and results stream back.
How it works
- User sends a natural language request
- An LLM generates Python code
- Code executes in an InstaVM session (state persists between turns)
- Output is returned to the user and fed back to the LLM
Minimal example
```python
from instavm import InstaVM
from openai import OpenAI

vm = InstaVM("your_instavm_key")
openai_client = OpenAI()

messages = [
    {"role": "system", "content": "You are a code interpreter. Write and execute Python code to answer questions. Always use the execute_code tool."}
]

tools = [{
    "type": "function",
    "function": {
        "name": "execute_code",
        "description": "Execute Python code. State persists between calls.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"]
        }
    }
}]
```
Multi-turn conversation
The key difference from a single-turn agent is that the session persists: variables, files, and installed packages from previous turns are still available.
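Persistence is easy to picture with a local stand-in: a single dict reused as the namespace for successive `exec` calls behaves like the sandbox's long-lived interpreter. This sketch only mimics the semantics; real execution happens inside InstaVM.

```python
# Local stand-in for a persistent session: one shared namespace,
# reused across "turns" -- definitions from turn 1 survive into turn 2.
namespace = {}

exec("x = 21", namespace)      # turn 1: define a variable
exec("y = x * 2", namespace)   # turn 2: reuse it
print(namespace["y"])          # → 42
```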
```python
import json

def chat(user_message):
    messages.append({"role": "user", "content": user_message})

    while True:
        response = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools
        )
        msg = response.choices[0].message
        messages.append(msg)

        if not msg.tool_calls:
            print(f"Assistant: {msg.content}")
            return msg.content

        for call in msg.tool_calls:
            code = json.loads(call.function.arguments)["code"]
            print(f"[Running]\n{code}\n")

            result = vm.execute(code)
            output = result.get("output", "No output")
            print(f"[Output]\n{output}\n")

            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": output
            })
```
```python
# Multi-turn session
chat("Load the iris dataset from sklearn and show the first 5 rows")
chat("Train a random forest classifier and show the accuracy")
chat("Plot the confusion matrix and save it as confusion.png")
```
Each turn reuses the same InstaVM session. The model can reference `df`, trained models, and files like `confusion.png` from earlier turns.
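Tool output is fed straight back into the context window, and a stray `print(df)` on a large frame can be enormous. One mitigation is head-and-tail truncation before appending the tool message; this is a sketch, and the 4,000-character limit is an arbitrary choice:

```python
def truncate_output(text: str, limit: int = 4000) -> str:
    """Keep the head and tail of long output; mark the elided middle."""
    if len(text) <= limit:
        return text
    half = limit // 2
    omitted = len(text) - 2 * half
    return text[:half] + f"\n... [{omitted} chars truncated] ...\n" + text[-half:]
```

In the chat loop, pass `truncate_output(output)` as the `content` of the tool message instead of the raw output.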
Handling file outputs
When code generates files (plots, CSVs), download them:
```python
import ast
import os

def chat_with_files(user_message, download_dir="./outputs"):
    result = chat(user_message)

    # List files the sandbox created in /app (skipping dotfiles)
    file_check = vm.execute(
        "import os; print([f for f in os.listdir('/app') if not f.startswith('.')])"
    )
    # Parse the printed list safely instead of using eval()
    files = ast.literal_eval(file_check.get("output", "[]"))

    os.makedirs(download_dir, exist_ok=True)
    for filename in files:
        local = os.path.join(download_dir, filename)
        vm.download_file(f"/app/{filename}", local_path=local)
        print(f"Downloaded: {local}")

    return result
```
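The listing above re-downloads everything in `/app` on every turn. Snapshotting filenames before the turn and diffing afterward catches only new files; the sketch below works on plain sets, so the same idea applies whether the listing comes from the sandbox or anywhere else:

```python
def new_files(before: set[str], after: set[str]) -> list[str]:
    """Filenames present after a turn but not before, dotfiles excluded."""
    return sorted(f for f in after - before if not f.startswith("."))

# before = names from the /app listing taken prior to the turn
# after  = the same listing taken after the turn
print(new_files({"data.csv"}, {"data.csv", "confusion.png", ".cache"}))
# → ['confusion.png']
```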
Session lifecycle
```python
# Create a long-lived session for the interpreter
vm = InstaVM("your_key", cpu_count=4, memory_mb=4096)

# Pre-install common packages
vm.execute("""
import subprocess, sys
for pkg in ['pandas', 'numpy', 'matplotlib', 'scikit-learn', 'seaborn']:
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-q', pkg])
print("Packages ready")
""")

# ... run the chat loop ...

# Clean up when the user's session ends
vm.kill()
```
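Package installs in a fresh sandbox can fail transiently (network hiccups, mirror timeouts). A generic retry helper, sketched in plain Python with no InstaVM dependency, wraps any callable:

```python
import time

def with_retries(fn, attempts: int = 3, delay_s: float = 2.0):
    """Call fn(), retrying on any exception up to `attempts` times."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the last error
            time.sleep(delay_s)

# e.g. with_retries(lambda: vm.execute("... pip install ..."), attempts=3)
```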
Next steps
- AI Agent Tool -- single-turn agent pattern
- File Operations -- handling file uploads and downloads
- Sessions -- how session persistence works