Your AI Agent Can Run Python for Free. Here's How.

Published: February 3, 2026 at 08:41 AM EST
3 min read
Source: Dev.to

What sandpy solves

  • Zero‑cost execution – runs entirely client‑side.
  • Safety – timeouts prevent infinite loops from hanging the tab.
  • Streaming output – see results as they are printed.
  • State persistence – snapshots let you save and restore a Python session.
  • Visualization support – auto‑captures Matplotlib plots as base64 images.
  • Vision hook – feed generated images to a vision model (e.g., GPT‑4V).
  • Ready‑made integrations – LangChain and Vercel AI SDK adapters are included.

Installation

npm install sandpy

Basic usage

import { Sandpy } from 'sandpy';

const sandbox = await Sandpy.create();
const result = await sandbox.run('print(2 + 2)');
console.log(result.stdout); // "4"

That’s it—Python runs in the browser.

LangChain integration

import { createLangChainTool } from 'sandpy';
import { DynamicTool } from 'langchain/tools';

const pythonTool = new DynamicTool(
  createLangChainTool({ timeout: 30_000 })
);

// Add the tool to your agent (Agent here stands in for your
// framework's agent constructor)
const agent = new Agent({
  tools: [pythonTool],
});

If the generated code misbehaves, the sandbox times out instead of hanging forever.
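A timeout guard like this can be sketched with Promise.race. This is a generic illustration, not sandpy's actual implementation; `withTimeout` and `runCode` are hypothetical names.

```javascript
// Hypothetical sketch: race the sandbox call against a timer so a runaway
// script rejects instead of hanging forever.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
  });
  // Clear the timer either way so Node doesn't keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage with any run call:
// const result = await withTimeout(sandbox.run(code), 30_000);
```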

Vercel AI SDK integration

import { createVercelAITool } from 'sandpy';
import { generateText } from 'ai';

const pythonTool = createVercelAITool({ timeout: 30_000 });

const result = await generateText({
  model: yourModel,
  tools: { python: pythonTool },
  prompt: 'Calculate the first 20 Fibonacci numbers',
});
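For reference, the computation the prompt asks for can be sketched directly in JavaScript; in practice the model would generate the equivalent Python and the tool would run it in the sandbox.

```javascript
// Iteratively build the first n Fibonacci numbers.
function fibonacci(n) {
  const seq = [];
  let [a, b] = [0, 1];
  for (let i = 0; i < n; i++) {
    seq.push(a);
    [a, b] = [b, a + b];
  }
  return seq;
}

// fibonacci(20) → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...]
```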

Capturing visualizations & using a vision model

When a Matplotlib plot is produced, sandpy returns a base64‑encoded image that can be sent to a vision model:

const result = await sandbox.run(plotCode, {
  describeArtifact: async (artifact) => {
    const response = await openai.chat.completions.create({
      model: 'gpt-4-vision-preview',
      messages: [
        {
          role: 'user',
          content: [
            { type: 'text', text: 'Describe this chart.' },
            {
              type: 'image_url',
              image_url: {
                url: `data:${artifact.type};base64,${artifact.content}`,
              },
            },
          ],
        },
      ],
    });
    return response.choices[0].message.content;
  },
});

artifact.alt might contain something like: “A line chart showing exponential growth from 0 to 100.”
Your agent can then self‑correct based on the description.
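The data URL in the example above is just the artifact's MIME type and base64 payload glued together. A small sketch, with a hypothetical sanity check that the payload really is a PNG before spending vision-model tokens on it (Node's Buffer is used for brevity; in the browser you would decode with atob):

```javascript
// Build the data URL sent to the vision model from an artifact shaped
// like the one above: { type, content }, where content is base64.
function toDataUrl(artifact) {
  return `data:${artifact.type};base64,${artifact.content}`;
}

// PNG files start with the 8-byte signature 89 50 4E 47 0D 0A 1A 0A.
function isPng(base64) {
  const bytes = Buffer.from(base64, 'base64');
  const sig = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];
  return sig.every((b, i) => bytes[i] === b);
}
```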

File system & persistence

// Preload pandas (optional)
const sandbox = await Sandpy.create({ preload: ['pandas'] });

// Write a CSV file
await sandbox.writeFile('/sandbox/data.csv', csvContent);

// Read it from Python
const result = await sandbox.run(`
import pandas as pd
df = pd.read_csv('/sandbox/data.csv')
print(df.describe())
`);

Files under /sandbox/ survive page reloads via the Origin‑Private File System (OPFS) with an IndexedDB fallback.
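The OPFS-with-IndexedDB-fallback decision can be sketched as a feature check. This is an illustrative function, not sandpy's code; it takes the global object as a parameter so it can be tested outside a browser.

```javascript
// Prefer OPFS when the browser exposes navigator.storage.getDirectory,
// fall back to IndexedDB, and as a last resort keep files in memory
// (losing them on reload).
function pickStorageBackend(globals) {
  if (globals.navigator?.storage?.getDirectory) return 'opfs';
  if (globals.indexedDB) return 'indexeddb';
  return 'memory';
}

// In a real page you would call pickStorageBackend(window).
```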

Snapshots & session restore

// Build some state
await sandbox.run('x = 42');
await sandbox.run('data = [1, 2, 3]');

// Save snapshot
const snapshot = await sandbox.snapshot();
localStorage.setItem('session', JSON.stringify(snapshot));

// Later (even after a refresh)
const saved = JSON.parse(localStorage.getItem('session'));
await sandbox.restore(saved);

await sandbox.run('print(x)'); // "42"
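The save/restore flow above can be wrapped in two small helpers. A sketch assuming the interfaces shown in this article (`sandbox.snapshot()`, `sandbox.restore()`, and a localStorage-like store); the helper names are hypothetical.

```javascript
// Persist the current sandbox state under a key.
async function saveSession(sandbox, store, key = 'session') {
  const snapshot = await sandbox.snapshot();
  store.setItem(key, JSON.stringify(snapshot));
}

// Restore a saved session if one exists; report whether anything was restored.
async function restoreSession(sandbox, store, key = 'session') {
  const raw = store.getItem(key);
  if (raw === null) return false; // nothing saved yet
  await sandbox.restore(JSON.parse(raw));
  return true;
}
```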

Browser vs. Server sandbox comparison

Feature    Server Sandbox        Browser Sandbox
Cost       ~$0.10 per session    Free
Latency    Network round-trip    Instant
Privacy    Code sent to server   Stays in browser
Offline    No                    Yes
Scale      Pay-per-use           Unlimited

For consumer‑facing AI agents, the browser‑native approach is the sensible default.

Demo & source

  • Live demo:
  • GitHub repository:
  • npm package: npm install sandpy

If you need code execution for your AI agents, give sandpy a try. Feel free to ask questions in the comments.
