
Tensorlake SDK for Document Ingestion API and Serverless Applications



Build agents with sandboxes and serverless orchestration runtime


Tensorlake is a compute infrastructure platform for building agentic applications with sandboxes.

The Sandbox API creates MicroVM sandboxes that you can use to run agents, or as isolated environments for running tools or LLM-generated code.

In addition to stateful VMs, you can add long-running orchestration to agents using a serverless function runtime with fan-out support.

Sandboxes

Tensorlake Sandboxes are stateful Firecracker MicroVMs that provide instant execution environments for AI agents, letting you spin up millions of VMs with near-SSD filesystem performance.

Key capabilities

  • Fastest Filesystem I/O — Block-based storage achieving near-SSD speeds inside virtual machines. In SQLite benchmarks (2 vCPUs, 4 GB RAM), Tensorlake completes in 2.45s vs Vercel 3.00s (1.2×), E2B 3.92s (1.6×), Modal 4.66s (1.9×), and Daytona 5.51s (2.2×).
  • Fast startup — Sandboxes created in under a second via Lattice, a dynamic cluster scheduler.
  • Snapshots & cloning — Snapshot at any point to create durable memory and filesystem checkpoints; clone running sandboxes instantaneously across machines.
  • Auto suspend/resume — Sandboxes suspend when idle and resume in under a second without losing any memory or filesystem state.
  • Live migration — Sandboxes automatically move between machines during updates with only a brief pause of a few seconds.
  • Scale — Supports up to 5 million sandboxes in a single project.

Installation

pip install tensorlake

Setup

Sign up at cloud.tensorlake.ai and get your API key.

export TENSORLAKE_API_KEY="your-api-key"
tensorlake login
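
The same key works from the Python SDK. A minimal sketch, assuming you exported TENSORLAKE_API_KEY as above:

import os

from tensorlake.sandbox import SandboxClient

# Pass the exported key to the client explicitly.
client = SandboxClient.for_cloud(api_key=os.environ["TENSORLAKE_API_KEY"])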

Create Your First Sandbox (CLI)

Create a sandbox, run a command, and clean up:

# Create a sandbox
tensorlake sbx create --image python:3.11-slim

# Run a command inside it
tensorlake sbx exec <sandbox-id> -- python -c "print('Hello from the sandbox!')"

# Copy a file into the sandbox
tensorlake sbx cp ./my_script.py <sandbox-id>:/tmp/my_script.py

# Open an interactive terminal
tensorlake sbx ssh <sandbox-id>

# Terminate when done
tensorlake sbx terminate <sandbox-id>

Create a Sandbox Programmatically

from tensorlake.sandbox import SandboxClient

client = SandboxClient.for_cloud(api_key="your-api-key")

# Create a sandbox and connect to it
with client.create_and_connect(image="python:3.11-slim") as sandbox:
    # Run a command
    result = sandbox.run("python", ["-c", "print('Hello from the sandbox!')"])
    print(result.stdout)  # "Hello from the sandbox!"

    # Write and read files
    sandbox.write_file("/tmp/data.txt", b"some data")
    content = sandbox.read_file("/tmp/data.txt")

    # Start a long-running process
    proc = sandbox.start_process("python", ["-m", "http.server", "8080"])
    print(proc.pid)

# Sandbox is automatically terminated when the context manager exits
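
A common follow-on pattern is to push a local script into a sandbox and run it there. A minimal sketch using only the calls shown above; the local my_script.py and the /tmp path are illustrative:

from pathlib import Path

with client.create_and_connect(image="python:3.11-slim") as sandbox:
    # Copy a local script into the sandbox, then run it with the sandbox's Python.
    sandbox.write_file("/tmp/my_script.py", Path("my_script.py").read_bytes())
    result = sandbox.run("python", ["/tmp/my_script.py"])
    print(result.stdout)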

Snapshots

Save the state of a sandbox and restore it later:

# Snapshot a running sandbox
snapshot = client.snapshot_and_wait(sandbox_id)

# Later, create a new sandbox from the snapshot
with client.create_and_connect(snapshot_id=snapshot.snapshot_id) as sandbox:
    # Picks up right where you left off
    result = sandbox.run("ls", ["/tmp"])
    print(result.stdout)
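
Because a snapshot can seed any number of new sandboxes, one pattern is to build an environment once, snapshot it, and fan work out from that snapshot. A sketch, reusing the snapshot created above and assuming a hypothetical /tmp/run_task.py already exists in the snapshotted filesystem:

# Fan several tasks out over sandboxes restored from the same snapshot.
for task in ["task-a", "task-b", "task-c"]:
    with client.create_and_connect(snapshot_id=snapshot.snapshot_id) as sandbox:
        # /tmp/run_task.py is assumed to be part of the snapshot.
        result = sandbox.run("python", ["/tmp/run_task.py", task])
        print(task, result.stdout)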

Sandbox Pools

Pre-warm containers for fast startup:

# Create a pool with warm containers
pool = client.create_pool(
    image="python:3.11-slim",
    warm_containers=3,
)

# Claim a sandbox instantly from the pool
resp = client.claim(pool.pool_id)
sandbox = client.connect(resp.sandbox_id)
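
Once claimed, the sandbox behaves like any other connected sandbox. A minimal sketch using only the run call shown earlier:

# The claimed sandbox supports the same operations as one created directly.
result = sandbox.run("python", ["-c", "print('claimed from a warm pool')"])
print(result.stdout)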

Orchestrate

Create orchestration APIs on a distributed runtime with automatic scaling, fan-out, and built-in tracking. The orchestration APIs can be invoked via HTTP requests or the Python SDK.

Quickstart

Decorate your entrypoint with @application() and functions with @function(). Each function runs in its own isolated sandbox.

Example: City guide using OpenAI Agents with web search and code execution:

from agents import Agent, Runner
from agents.tool import WebSearchTool, function_tool
from tensorlake.applications import application, function, Image

# Define the image with necessary dependencies
FUNCTION_CONTAINER_IMAGE = Image(base_image="python:3.11-slim", name="city_guide_image").run(
    "pip install openai openai-agents"
)

@function_tool
@function(
    description="Gets the weather for a city using an OpenAI Agent with web search",
    secrets=["OPENAI_API_KEY"],
    image=FUNCTION_CONTAINER_IMAGE,
)
def get_weather_tool(city: str) -> str:
    """Uses an OpenAI Agent with WebSearchTool to find current weather."""
    agent = Agent(
        name="Weather Reporter",
        instructions="Use web search to find current weather in Fahrenheit for the city.",
        tools=[WebSearchTool()],  # Agent can search the web
    )
    result = Runner.run_sync(agent, f"City: {city}")
    return result.final_output.strip()

@application(tags={"type": "example", "use_case": "city_guide"})
@function(
    description="Creates a guide with temperature conversion using function_tool",
    secrets=["OPENAI_API_KEY"],
    image=FUNCTION_CONTAINER_IMAGE,
)
def city_guide_app(city: str) -> str:
    """Uses an OpenAI Agent with function_tool to run Python code for conversion."""

    @function_tool
    def convert_to_celsius_tool(python_code: str) -> float:
        """Converts Fahrenheit to Celsius - runs as Python code via Agent."""
        return float(eval(python_code))

    agent = Agent(
        name="Guide Creator",
        instructions="Using the appropriate tools, get the weather for the purposes of the guide. If the city uses Celsius, call convert_to_celsius_tool to convert the temperature, passing in the code needed to convert the temperature to Celsius. Create a friendly guide that references the temperature of the city in Celsius if the city typically uses Celsius, otherwise reference the temperature in Fahrenheit. Only reference Celsius or Farenheit, not both.",
        tools=[get_weather_tool, convert_to_celsius_tool],  # Agent can execute this Python function
    )
    result = Runner.run_sync(agent, f"City: {city}")
    return result.final_output.strip()

Deploy to Tensorlake

  1. Set your API keys:

export TENSORLAKE_API_KEY="your-api-key"
tl secrets set OPENAI_API_KEY "your-openai-key"

  2. Deploy:

tl deploy examples/readme_example/city_guide.py

Call via HTTP

# Invoke the application
curl https://api.tensorlake.ai/applications/city_guide_app \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY" \
  --json '"San Francisco"'
# Returns: {"request_id": "beae8736ece31ef9"}

# Get the result
curl https://api.tensorlake.ai/applications/city_guide_app/requests/{request_id}/output \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY"

# Stream results with SSE
curl https://api.tensorlake.ai/applications/city_guide_app \
  -H "Authorization: Bearer $TENSORLAKE_API_KEY" \
  -H "Accept: text/event-stream" \
  --json '"San Francisco"'
