Tensorlake SDK for Document Ingestion API and Serverless Applications

Project description

Build agents with sandboxes and a serverless orchestration runtime

Tensorlake is a compute infrastructure platform for building agentic applications with sandboxes.

The Sandbox API creates MicroVM sandboxes that you can use to run agents, or as isolated environments for running tools or LLM-generated code.

In addition to stateful VMs, you can add long-running orchestration capabilities to agents using a serverless function runtime with fan-out support.

Sandboxes

Tensorlake Sandboxes are stateful Firecracker MicroVMs built to provide instant execution environments for AI agents — spin up millions of VMs with near-SSD filesystem performance.

Key capabilities

  • Fastest Filesystem I/O — Block-based storage achieving near-SSD speeds inside virtual machines. In SQLite benchmarks (2 vCPUs, 4 GB RAM), Tensorlake completes in 2.45s vs Vercel 3.00s (1.2×), E2B 3.92s (1.6×), Modal 4.66s (1.9×), and Daytona 5.51s (2.2×).
  • Fast startup — Sandboxes created in under a second via Lattice, a dynamic cluster scheduler.
  • Snapshots & cloning — Snapshot at any point to create durable memory and filesystem checkpoints; clone running sandboxes instantaneously across machines.
  • Auto suspend/resume — Sandboxes suspend when idle and resume in under a second without losing any memory or filesystem state.
  • Live migration — Sandboxes automatically move between machines during updates with only a brief pause of a few seconds.
  • Scale — Supports up to 5 million sandboxes in a single project.

Installation

pip install tensorlake

Setup

Sign up at cloud.tensorlake.ai and get your API key.

export TENSORLAKE_API_KEY="your-api-key"
tensorlake login
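
If you keep the key in the environment, you can pass it straight through to the Python client used in the examples below. A minimal sketch; it only reuses the SandboxClient.for_cloud() call shown later:

import os

from tensorlake.sandbox import SandboxClient

# Read the key exported above and create a client (mirrors the
# SandboxClient.for_cloud() usage in the programmatic example below)
client = SandboxClient.for_cloud(api_key=os.environ["TENSORLAKE_API_KEY"])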

Create Your First Sandbox (CLI)

Create a sandbox, run a command, and clean up:

# Create a sandbox
tensorlake sbx create --image python:3.11-slim

# Run a command inside it
tensorlake sbx exec <sandbox-id> -- python -c "print('Hello from the sandbox!')"

# Copy a file into the sandbox
tensorlake sbx cp ./my_script.py <sandbox-id>:/tmp/my_script.py

# Open an interactive terminal
tensorlake sbx ssh <sandbox-id>

# Terminate when done
tensorlake sbx terminate <sandbox-id>

Create a Sandbox Programmatically

from tensorlake.sandbox import SandboxClient

client = SandboxClient.for_cloud(api_key="your-api-key")

# Create a sandbox and connect to it
with client.create_and_connect(image="python:3.11-slim") as sandbox:
    # Run a command
    result = sandbox.run("python", ["-c", "print('Hello from the sandbox!')"])
    print(result.stdout)  # "Hello from the sandbox!"

    # Write and read files
    sandbox.write_file("/tmp/data.txt", b"some data")
    content = sandbox.read_file("/tmp/data.txt")

    # Start a long-running process
    proc = sandbox.start_process("python", ["-m", "http.server", "8080"])
    print(proc.pid)

# Sandbox is automatically terminated when the context manager exits

Snapshots

Save the state of a sandbox and restore it later:

# Snapshot a running sandbox
snapshot = client.snapshot_and_wait(sandbox_id)

# Later, create a new sandbox from the snapshot
with client.create_and_connect(snapshot_id=snapshot.snapshot_id) as sandbox:
    # Picks up right where you left off
    result = sandbox.run("ls", ["/tmp"])
    print(result.stdout)

Sandbox Pools

Pre-warm containers for fast startup:

# Create a pool with warm containers
pool = client.create_pool(
    image="python:3.11-slim",
    warm_containers=3,
)

# Claim a sandbox instantly from the pool
resp = client.claim(pool.pool_id)
sandbox = client.connect(resp.sandbox_id)
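
Once claimed, the connected sandbox can be used like any other. A short sketch, assuming the object returned by client.connect() exposes the same run() interface as the create_and_connect() example above:

# Run a command in the claimed sandbox (assumption: connect() returns the
# same sandbox interface as create_and_connect())
result = sandbox.run("python", ["-c", "print('claimed from a warm pool')"])
print(result.stdout)

# Clean up when finished, e.g. with the CLI shown earlier:
#   tensorlake sbx terminate <sandbox-id>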

Orchestrate

Create orchestration APIs on a distributed runtime with automatic scaling, fan-out, and built-in request tracking. The orchestration APIs can be invoked over HTTP or with the Python SDK.

Quickstart

Decorate your entrypoint with @application() and functions with @function(). Each function runs in its own isolated sandbox.

Example: City guide using OpenAI Agents with web search and code execution:

from agents import Agent, Runner
from agents.tool import WebSearchTool, function_tool
from tensorlake.applications import application, function, Image

# Define the image with necessary dependencies
FUNCTION_CONTAINER_IMAGE = Image(base_image="python:3.11-slim", name="city_guide_image").run(
    "pip install openai openai-agents"
)

@function_tool
@function(
    description="Gets the weather for a city using an OpenAI Agent with web search",
    secrets=["OPENAI_API_KEY"],
    image=FUNCTION_CONTAINER_IMAGE,
)
def get_weather_tool(city: str) -> str:
    """Uses an OpenAI Agent with WebSearchTool to find current weather."""
    agent = Agent(
        name="Weather Reporter",
        instructions="Use web search to find current weather in Fahrenheit for the city.",
        tools=[WebSearchTool()],  # Agent can search the web
    )
    result = Runner.run_sync(agent, f"City: {city}")
    return result.final_output.strip()

@application(tags={"type": "example", "use_case": "city_guide"})
@function(
    description="Creates a guide with temperature conversion using function_tool",
    secrets=["OPENAI_API_KEY"],
    image=FUNCTION_CONTAINER_IMAGE,
)
def city_guide_app(city: str) -> str:
    """Uses an OpenAI Agent with function_tool to run Python code for conversion."""

    @function_tool
    def convert_to_celsius_tool(python_code: str) -> float:
        """Converts Fahrenheit to Celsius - runs as Python code via Agent."""
        return float(eval(python_code))

    agent = Agent(
        name="Guide Creator",
        instructions="Using the appropriate tools, get the weather for the purposes of the guide. If the city uses Celsius, call convert_to_celsius_tool to convert the temperature, passing in the code needed to convert the temperature to Celsius. Create a friendly guide that references the temperature of the city in Celsius if the city typically uses Celsius, otherwise reference the temperature in Fahrenheit. Only reference Celsius or Farenheit, not both.",
        tools=[get_weather_tool, convert_to_celsius_tool],  # Agent can execute this Python function
    )
    result = Runner.run_sync(agent, f"City: {city}")
    return result.final_output.strip()

Deploy to Tensorlake

  1. Set your API keys:
export TENSORLAKE_API_KEY="your-api-key"
tl secrets set OPENAI_API_KEY "your-openai-key"
  2. Deploy:
tl deploy examples/readme_example/city_guide.py

Call via HTTP

# Invoke the application
curl https://api.tensorlake.ai/applications/city_guide_app \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY" \
  --json '"San Francisco"'
# Returns: {"request_id": "beae8736ece31ef9"}

# Get the result
curl https://api.tensorlake.ai/applications/city_guide_app/requests/{request_id}/output \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY"

# Stream results with SSE
curl https://api.tensorlake.ai/applications/city_guide_app \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY" \
  -H "Accept: text/event-stream" \
  --json '"San Francisco"'

Learn More

Download files

Download the file for your platform.

Source Distribution

  • tensorlake-0.4.32.tar.gz (2.2 MB) - Source

Built Distributions

  • tensorlake-0.4.32-py3-none-win_amd64.whl (11.5 MB) - Python 3, Windows x86-64
  • tensorlake-0.4.32-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.3 MB) - Python 3, manylinux: glibc 2.17+ x86-64
  • tensorlake-0.4.32-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (10.9 MB) - Python 3, manylinux: glibc 2.17+ ARM64
  • tensorlake-0.4.32-py3-none-macosx_11_0_arm64.whl (10.4 MB) - Python 3, macOS 11.0+ ARM64

File details

Details for the file tensorlake-0.4.32.tar.gz.

File metadata

  • Download URL: tensorlake-0.4.32.tar.gz
  • Size: 2.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for tensorlake-0.4.32.tar.gz

  • SHA256: b5bd4f642a534c5b3f3f68c644777032857f7083b041a0402f4c43b4de0d5d64
  • MD5: 7a19b9d67b0af16e2a9a85f6e1d5faa7
  • BLAKE2b-256: d00050cfc5853cc20d1f149a51123ee582b5f77bb7c25f1cc123c13075a9202f

Provenance

The following attestation bundles were made for tensorlake-0.4.32.tar.gz:

Publisher: publish_pypi.yaml on tensorlakeai/tensorlake

File details

Details for the file tensorlake-0.4.32-py3-none-win_amd64.whl.

File metadata

  • Download URL: tensorlake-0.4.32-py3-none-win_amd64.whl
  • Size: 11.5 MB
  • Tags: Python 3, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for tensorlake-0.4.32-py3-none-win_amd64.whl

  • SHA256: 2f18a90040dca6adf9e6c833c64a1f74e3ad89695f66f53e96062dd318e628a9
  • MD5: 012d512f24af7abada235c16c3809a81
  • BLAKE2b-256: 89c99300d74d82dd3782d08c405a71c6586dea64f6a3d69ff34a96fea1643f69

Provenance

The following attestation bundles were made for tensorlake-0.4.32-py3-none-win_amd64.whl:

Publisher: publish_pypi.yaml on tensorlakeai/tensorlake

File details

Details for the file tensorlake-0.4.32-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File hashes

Hashes for tensorlake-0.4.32-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl

  • SHA256: cf5abde3c7b055b1acde572737a5104705ea53c6a71c26e55ab1878e31ceeb9f
  • MD5: cc1536cb25bd43f7cbeb5b6e5269f5bf
  • BLAKE2b-256: 5be7ef3eead2ae5455c51929b001ca051981d3d5176c5e6b59bf2039c7f8134a

Provenance

The following attestation bundles were made for tensorlake-0.4.32-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:

Publisher: publish_pypi.yaml on tensorlakeai/tensorlake

File details

Details for the file tensorlake-0.4.32-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.

File hashes

Hashes for tensorlake-0.4.32-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

  • SHA256: 79d5b227ebc42bafb16393a06b8abff8813f9d83455db18176fe618748e46f02
  • MD5: 32befbac7ca9fe6898d00626075942cf
  • BLAKE2b-256: aa8cd27574181162512623b04a5e854f33ecd4e42fc6ec412c7675149248516f

Provenance

The following attestation bundles were made for tensorlake-0.4.32-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl:

Publisher: publish_pypi.yaml on tensorlakeai/tensorlake

File details

Details for the file tensorlake-0.4.32-py3-none-macosx_11_0_arm64.whl.

File hashes

Hashes for tensorlake-0.4.32-py3-none-macosx_11_0_arm64.whl

  • SHA256: 55e37f101184e6122d8007fb435e12ff772ee2a437dea47fd6f8eae28294842c
  • MD5: 70c1deb3099c276d54855bf957ddb6f8
  • BLAKE2b-256: 04800cc2586661e8971473f11b07d98ac49c3ff31e05510c713dc70565daa3ab

Provenance

The following attestation bundles were made for tensorlake-0.4.32-py3-none-macosx_11_0_arm64.whl:

Publisher: publish_pypi.yaml on tensorlakeai/tensorlake
