AI Sandboxes for Deep Research Agents, Computer Use Agents, Automation Agents, Background Agents, Reinforcement Learning, and Secure MCPs
Open-source, secure environment with real-world tools for enterprise-grade agents.
88% of Fortune 100 companies
AI AGENTS
Built for AI Agents, LLM Training, and MCPs
/EXPLORE
Deep research agents
Enable your agent to conduct time-consuming research on large datasets.
AI data analysis & visualization
Connect your data to an isolated sandbox to securely explore data and generate charts.
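For example, a minimal sketch of that flow with the Python SDK, assuming the files.write helper and the base64 png field on execution results; the dataset name and columns are placeholders:
# pip install e2b-code-interpreter
import base64
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Upload a local dataset into the sandbox (placeholder file and columns)
    with open("sales.csv", "rb") as f:
        sandbox.files.write("/home/user/sales.csv", f.read())

    # Explore the data and render a chart inside the isolated sandbox
    execution = sandbox.run_code("""
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('/home/user/sales.csv')
df.groupby('region')['revenue'].sum().plot(kind='bar')
plt.show()
""")

    # Chart images come back as base64-encoded PNGs on the execution results
    for result in execution.results:
        if result.png:
            with open("chart.png", "wb") as out:
                out.write(base64.b64decode(result.png))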
Coding agents
Securely execute code, use I/O, access the internet, or start terminal commands.
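For instance, a rough sketch assuming the commands.run helper and its stdout field in the Python SDK:
# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Start a terminal command inside the sandbox and capture its output
    result = sandbox.commands.run("echo 'hello from the sandbox' && uname -a")
    print(result.stdout)

    # The sandbox has internet access, so agents can fetch dependencies or call APIs
    sandbox.commands.run("pip install requests")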
Vibe coding
Use a sandbox as a code runtime for AI-generated apps. Supports any language and framework.
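As an illustration, a small sketch assuming run_code accepts an optional language parameter for non-Python kernels:
# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Run an AI-generated JavaScript snippet in the same sandbox
    execution = sandbox.run_code("[1, 2, 3].map(x => x * 2)", language="js")
    print(execution.text)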
Reinforcement learning
Use tens of thousands of concurrent sandboxes to run and evaluate reward functions.
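A rough sketch of that pattern with a toy reward function, assuming the AsyncSandbox class from the Python SDK:
# pip install e2b-code-interpreter
import asyncio
from e2b_code_interpreter import AsyncSandbox

async def score_candidate(program: str) -> float:
    # Each rollout gets its own isolated sandbox
    sandbox = await AsyncSandbox.create()
    try:
        execution = await sandbox.run_code(program)
        # Toy reward: 1.0 if the candidate ran without errors, else 0.0
        return 0.0 if execution.error else 1.0
    finally:
        await sandbox.kill()

async def main():
    candidates = ["print(sum(range(10)))", "1 / 0"]
    # Evaluate all candidates concurrently; scale the list up for real training runs
    rewards = await asyncio.gather(*(score_candidate(c) for c in candidates))
    print(rewards)

asyncio.run(main())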
Computer use
Use Desktop Sandbox to provide secure virtual computers in the cloud for your LLM.
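A minimal sketch, assuming the e2b-desktop Python package and its screenshot, move_mouse, left_click, and write helpers:
# pip install e2b-desktop
from e2b_desktop import Sandbox

# Spin up a virtual desktop the LLM can observe and control
desktop = Sandbox()

# Capture the screen for the model to look at
image = desktop.screenshot()
with open("screen.png", "wb") as f:
    f.write(image)

# Act on the model's decision: move the cursor, click, and type
desktop.move_mouse(100, 200)
desktop.left_click()
desktop.write("hello from the agent")

desktop.kill()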
WITH A FEW LINES IN YOUR CODE
// npm install @e2b/code-interpreter
import { Sandbox } from '@e2b/code-interpreter'

// Create an E2B Code Interpreter sandbox
const sandbox = await Sandbox.create()

// Execute code cells
await sandbox.runCode('x = 1')
const execution = await sandbox.runCode('x+=1; x')

// Outputs 2
console.log(execution.text)
# pip install e2b-code-interpreter
from e2b_code_interpreter import Sandbox

# Create an E2B Sandbox
with Sandbox() as sandbox:
    # Run code
    sandbox.run_code("x = 1")
    execution = sandbox.run_code("x+=1; x")
    print(execution.text)  # outputs 2
// npm install ai @ai-sdk/openai zod @e2b/code-interpreter
import { openai } from '@ai-sdk/openai'
import { generateText } from 'ai'
import z from 'zod'
import { Sandbox } from '@e2b/code-interpreter'

// Create OpenAI client
const model = openai('gpt-4o')
const prompt = "Calculate how many r's are in the word 'strawberry'"

// Generate text with OpenAI
const { text } = await generateText({
  model,
  prompt,
  tools: {
    // Define a tool that runs code in a sandbox
    codeInterpreter: {
      description: 'Execute python code in a Jupyter notebook cell and return result',
      parameters: z.object({
        code: z.string().describe('The python code to execute in a single cell'),
      }),
      execute: async ({ code }) => {
        // Create a sandbox, execute LLM-generated code, and return the result
        const sandbox = await Sandbox.create()
        const { text, results, logs, error } = await sandbox.runCode(code)
        return results
      },
    },
  },
  // This is required to feed the tool call result back to the LLM
  maxSteps: 2,
})

console.log(text)
# pip install openai e2b-code-interpreter
from openai import OpenAI
from e2b_code_interpreter import Sandbox

# Create OpenAI client
client = OpenAI()

system = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Send messages to OpenAI API
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": prompt}
    ]
)

# Extract the code from the response
code = response.choices[0].message.content

# Execute code in E2B Sandbox
if code:
    with Sandbox() as sandbox:
        execution = sandbox.run_code(code)
        result = execution.text
        print(result)
# pip install anthropic e2b-code-interpreter
from anthropic import Anthropic
from e2b_code_interpreter import Sandbox

# Create Anthropic client
anthropic = Anthropic()

system_prompt = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Send messages to Anthropic API
response = anthropic.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {"role": "user", "content": prompt}
    ]
)

# Extract code from response
code = response.content[0].text

# Execute code in E2B Sandbox
with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    result = execution.logs.stdout

print(result)
# pip install mistralai e2b-code-interpreter
import os
from mistralai import Mistral
from e2b_code_interpreter import Sandbox

api_key = os.environ["MISTRAL_API_KEY"]

# Create Mistral client
client = Mistral(api_key=api_key)

system_prompt = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Send the prompt to the model
response = client.chat.complete(
    model="codestral-latest",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": prompt}
    ]
)

# Extract the code from the response
code = response.choices[0].message.content

# Execute code in E2B Sandbox
with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    result = execution.text

print(result)
# pip install ollama e2b-code-interpreter
import ollama
from e2b_code_interpreter import Sandbox

# Send the prompt to the model
response = ollama.chat(model="llama3.2", messages=[
    {
        "role": "system",
        "content": "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
    },
    {
        "role": "user",
        "content": "Calculate how many r's are in the word 'strawberry'"
    }
])

# Extract the code from the response
code = response['message']['content']

# Execute code in E2B Sandbox
with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    result = execution.logs.stdout

print(result)
# pip install langchain langchain-openai e2b-code-interpreter
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from e2b_code_interpreter import Sandbox

system_prompt = "You are a helpful assistant that can execute python code in a Jupyter notebook. Only respond with the code to be executed and nothing else. Strip backticks in code blocks."
prompt = "Calculate how many r's are in the word 'strawberry'"

# Create LangChain components
llm = ChatOpenAI(model="gpt-4o")
prompt_template = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("human", "{input}")
])
output_parser = StrOutputParser()

# Create the chain
chain = prompt_template | llm | output_parser

# Run the chain
code = chain.invoke({"input": prompt})

# Execute code in E2B Sandbox
with Sandbox() as sandbox:
    execution = sandbox.run_code(code)
    result = execution.text

print(result)
# pip install llama-index e2b-code-interpreter
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.core.agent import ReActAgent
from e2b_code_interpreter import Sandbox

# Define the tool
def execute_python(code: str):
    with Sandbox() as sandbox:
        execution = sandbox.run_code(code)
        return execution.text

e2b_interpreter_tool = FunctionTool.from_defaults(
    name="execute_python",
    description="Execute python code in a Jupyter notebook cell and return result",
    fn=execute_python
)

# Initialize LLM
llm = OpenAI(model="gpt-4o")

# Initialize ReAct agent
agent = ReActAgent.from_tools([e2b_interpreter_tool], llm=llm, verbose=True)
agent.chat("Calculate how many r's are in the word 'strawberry'")
FEATURES
Features your agents will love
AI agents need real-world tools to complete superhuman-level tasks.
Works with any LLM
Use OpenAI, Llama, Anthropic, Mistral, or your own custom models. E2B is LLM-agnostic and compatible with any model.
Quick start
E2B Sandboxes in the same region as the client start in less than 200 ms.
NO COLD STARTS
Run any AI-generated code
AI-generated Python, JavaScript, Ruby, or C++? Popular framework or custom library? If you can run it on a Linux box, you can run it in the E2B Sandbox.
Features made for agents
The full stack of secure tools for any agentic workflow.
COMPUTERS FOR AGENTS
Secure & isolated
Each sandbox is powered by Firecracker, a microVM technology built to run untrusted workloads.
FULL ISOLATION
Up to 24-hour sessions
Run for a few seconds or several hours; each E2B Sandbox can stay alive for up to 24 hours.
AVAILABLE IN PRO
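A small sketch, assuming the timeout argument (in seconds) and the set_timeout helper from the Python SDK:
from e2b_code_interpreter import Sandbox

# Keep the sandbox alive for one hour instead of the default few minutes
sandbox = Sandbox(timeout=3600)

# ...long-running agent session...

# Extend the session if the agent still needs the machine, then clean up
sandbox.set_timeout(7200)
sandbox.kill()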
Install any package or system library
Completely customize the sandbox for your use case by creating a custom sandbox template or by installing packages while the sandbox is running.
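For example, a minimal sketch of the runtime-install path using the Jupyter-style shell escape; pyarrow is just a placeholder package:
from e2b_code_interpreter import Sandbox

with Sandbox() as sandbox:
    # Install an extra library into the running sandbox
    sandbox.run_code("!pip install pyarrow")

    # The freshly installed package is immediately importable
    execution = sandbox.run_code("import pyarrow; pyarrow.__version__")
    print(execution.text)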
BYOC, on-prem, or self-hosted
E2B works anywhere you do: in your AWS, GCP, or Azure account, or in your VPC.
COOKBOOK
GET INSPIRED BY OUR COOKBOOK
Production use cases & full-fledged apps.
“E2B runtime unlocked rich answers and interactive charts for our users. We love working with both the product and team behind it.”
— Denis Yarats, CTO
“E2B allows us to scale-out training runs by launching hundreds of sandboxes in our experiments, which was essential in Open R1.”
— Lewis Tunstall, Research Engineer
“It took just one hour to integrate E2B end-to-end. The performance is excellent, and the support is on another level. Issues are resolved in minutes.”
— Maciej Donajski, CTO
“E2B has revolutionized our agents' capabilities. This advanced alternative to OpenAI's Code Interpreter helps us focus on our unique product.”
— Kevin J. Scott, CTO/CIO
“Manus doesn’t just run some pieces of code. It uses 27 different tools, and it needs E2B to have a full virtual computer to work as a real human.”
— Tao Zhang, Co-founder
“Executing Athena’s code inside the sandbox makes it easy to check and automatically fix errors. E2B helps us gain enterprises’ trust.”
— Brendon Geils, CEO
“We needed a fast, secure, and scalable way for code execution. E2B’s API interface made their infrastructure almost effortless to integrate.”
— Benjamin Klieger, Compound AI Lead
“We implemented E2B in a week, needing just one engineer working in spare cycles. Building it in-house would've taken weeks and multiple people.”
— Luiz Scheidegger, Head of Engineering
“LLM-generated API integrations to external services make Gumloop incredibly useful. E2B is essential to make that happen at scale and to deliver reliable performance to our users.”
— Max Brodeur-Urbas, CEO
GET STARTED TODAY
Open-source, secure environment with real-world tools for enterprise-grade agents.
/RUN CODE