
bashlet

Python SDK for bashlet - a sandboxed bash execution environment. This SDK allows you to create bashlet instances and provide them as tools for AI agents.

Features

  • Sandboxed Execution: Run shell commands in isolated environments
  • Sync & Async: Both synchronous and asynchronous clients
  • Multi-Framework Support: Generate tools for LangChain, OpenAI, Anthropic, and MCP
  • Session Management: Create persistent sessions for stateful operations
  • File Operations: Read, write, and list files in the sandbox
  • Type-Safe: Full type hints with py.typed marker

Installation

pip install bashlet

Make sure you also have the bashlet CLI itself installed:

cargo install bashlet

Optional Dependencies

Install with framework support:

# For LangChain
pip install "bashlet[langchain]"

# For OpenAI
pip install "bashlet[openai]"

# For Anthropic
pip install "bashlet[anthropic]"

# For MCP
pip install "bashlet[mcp]"

# For all frameworks
pip install "bashlet[all]"

Quick Start

Synchronous Client

from bashlet import Bashlet, Mount

bashlet = Bashlet(
    mounts=[Mount("./src", "/workspace")],
)

# Execute a command
result = bashlet.exec("ls -la /workspace")
print(result.stdout)
print(f"Exit code: {result.exit_code}")

Asynchronous Client

import asyncio
from bashlet import AsyncBashlet

async def main():
    bashlet = AsyncBashlet()
    result = await bashlet.exec("echo hello")
    print(result.stdout)

asyncio.run(main())
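Because AsyncBashlet's exec() is awaitable, several one-shot commands can be fanned out concurrently with asyncio.gather. A minimal sketch, written against any AsyncBashlet-like client (exec_many is a hypothetical helper, not part of the SDK):

```python
import asyncio

async def exec_many(client, commands):
    # Fan several exec() calls out concurrently. `client` is assumed to be
    # an AsyncBashlet-like object whose exec() is a coroutine; results come
    # back in the same order as `commands`.
    return await asyncio.gather(*(client.exec(cmd) for cmd in commands))
```

Each one-shot exec() runs in its own sandbox, so ordering between commands should only matter if they touch the same mounted host paths.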

Usage with AI Frameworks

LangChain

from langchain_openai import ChatOpenAI
from bashlet import Bashlet

bashlet = Bashlet(
    mounts=[{"host_path": "./project", "guest_path": "/workspace"}],
)

# Get LangChain tools
tools = bashlet.to_langchain_tools()

# Bind tools to LLM
llm = ChatOpenAI(model="gpt-4-turbo")
llm_with_tools = llm.bind_tools(tools.all())

# Use in agent
response = llm_with_tools.invoke("List files in /workspace")

OpenAI Function Calling

from openai import OpenAI
from bashlet import Bashlet

client = OpenAI()
bashlet = Bashlet()

# Get OpenAI tools
handler = bashlet.to_openai_tools()

# Create completion with tools
response = client.chat.completions.create(
    model="gpt-4-turbo",
    tools=handler.definitions,
    messages=[{"role": "user", "content": "List files in current directory"}],
)

# Handle tool calls
for tool_call in response.choices[0].message.tool_calls or []:
    result = handler.handle(
        tool_call.function.name,
        tool_call.function.arguments  # JSON string or dict
    )
    print(result)
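As the comment above notes, handler.handle() accepts the arguments as either a JSON string or a dict. If you ever dispatch tool calls yourself, the same normalization is a one-liner (normalize_arguments is a hypothetical helper, not part of the SDK):

```python
import json

def normalize_arguments(arguments):
    # OpenAI returns tool-call arguments as a JSON string; other code paths
    # may already hold a dict. Normalize to a dict before dispatching.
    if isinstance(arguments, str):
        return json.loads(arguments)
    return dict(arguments)
```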

Anthropic Tool Use

from anthropic import Anthropic
from bashlet import Bashlet

client = Anthropic()
bashlet = Bashlet()

# Get Anthropic tools
handler = bashlet.to_anthropic_tools()

# Create message with tools
response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=handler.definitions,
    messages=[{"role": "user", "content": "List files in current directory"}],
)

# Handle tool use
for block in response.content:
    if block.type == "tool_use":
        result = handler.handle(block.name, block.input)
        print(result)

MCP (Model Context Protocol)

import asyncio

from mcp.server import Server
from mcp.server.stdio import stdio_server
from bashlet import Bashlet

bashlet = Bashlet()
handler = bashlet.to_mcp_tools()

server = Server("bashlet-server")

@server.list_tools()
async def list_tools():
    return handler.definitions

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    result = handler.handle(name, arguments)
    return result.content

async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream)

asyncio.run(main())

Generic/Framework-Agnostic

from bashlet import Bashlet, create_generic_tools

bashlet = Bashlet()
tools = create_generic_tools(bashlet)

# Use with any framework
for tool in tools:
    print(f"Tool: {tool.name}")
    print(f"Description: {tool.description}")
    print(f"Parameters: {tool.parameters}")

# Execute a tool
exec_tool = next(t for t in tools if t.name == "bashlet_exec")
result = exec_tool.execute(command="echo hello")
print(result)
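Scanning the tool list on every call works, but for repeated dispatch it is tidier to index the tools by name once. A small sketch, assuming only the .name and .execute(**kwargs) attributes shown above (build_registry and call_tool are hypothetical helpers, not part of the SDK):

```python
def build_registry(tools):
    # Index GenericTool-like objects by name for O(1) dispatch.
    return {tool.name: tool for tool in tools}

def call_tool(registry, name, **kwargs):
    # Look up a tool by name and invoke it with keyword arguments.
    if name not in registry:
        raise KeyError(f"unknown tool: {name}")
    return registry[name].execute(**kwargs)
```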

API Reference

Bashlet / AsyncBashlet

bashlet = Bashlet(
    binary_path="bashlet",      # Path to bashlet binary
    preset=None,                # Default preset name
    mounts=None,                # Default mounts
    env_vars=None,              # Default environment variables
    workdir=None,               # Default working directory
    timeout=300,                # Command timeout in seconds
    config_path=None,           # Path to config file
)

Methods

Method | Description
exec(command, **options) | Execute a one-shot command
create_session(**options) | Create a persistent session
run_in_session(id, command) | Run command in session
terminate(session_id) | Terminate a session
list_sessions() | List all sessions
read_file(path) | Read file contents
write_file(path, content) | Write to file
list_dir(path) | List directory
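The file-operation methods compose naturally. A minimal round-trip sketch, written against any Bashlet-like client exposing the write_file/read_file methods listed above (write_and_verify is a hypothetical helper, not part of the SDK):

```python
def write_and_verify(client, path, content):
    # Write into the sandbox, then read back to confirm the write landed.
    client.write_file(path, content)
    return client.read_file(path) == content
```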

Tool Generators

Method | Returns | Framework
to_langchain_tools() | BashletLangChainTools | LangChain
to_openai_tools() | OpenAIToolHandler | OpenAI
to_anthropic_tools() | AnthropicToolHandler | Anthropic
to_mcp_tools() | MCPToolHandler | MCP
to_generic_tools() | list[GenericTool] | Any

Available Tools

Tool Name | Description
bashlet_exec | Execute shell commands
bashlet_read_file | Read file contents
bashlet_write_file | Write to files
bashlet_list_dir | List directory contents

Session Management

from bashlet import Bashlet

bashlet = Bashlet()

# Create a session
session_id = bashlet.create_session(
    name="my-session",
    ttl="1h",
    mounts=[{"host_path": "./data", "guest_path": "/data"}],
)

# Run commands in session (state persists)
bashlet.run_in_session(session_id, "cd /data && npm init -y")
bashlet.run_in_session(session_id, "npm install express")
result = bashlet.run_in_session(session_id, "cat package.json")
print(result.stdout)

# List sessions
sessions = bashlet.list_sessions()
for s in sessions:
    print(f"{s.id}: {s.name or 'unnamed'}")

# Terminate session
bashlet.terminate(session_id)
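If an exception is raised mid-session, the terminate() call above never runs. A context-manager sketch guarantees cleanup; it assumes only the create_session/terminate methods shown above (managed_session is a hypothetical helper, not part of the SDK):

```python
from contextlib import contextmanager

@contextmanager
def managed_session(client, **options):
    # Ensure terminate() runs even if a command in the session raises.
    session_id = client.create_session(**options)
    try:
        yield session_id
    finally:
        client.terminate(session_id)
```

Usage would then look like `with managed_session(bashlet, ttl="1h") as sid: bashlet.run_in_session(sid, "npm install express")`.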

Error Handling

from bashlet import (
    Bashlet,
    BashletError,
    CommandExecutionError,
    BinaryNotFoundError,
    TimeoutError,
)

bashlet = Bashlet()

try:
    result = bashlet.exec("some-command")
except CommandExecutionError as e:
    print(f"Command failed with exit code {e.exit_code}")
    print(f"stderr: {e.stderr}")
except TimeoutError as e:
    print(f"Command timed out after {e.timeout_seconds}s")
except BinaryNotFoundError as e:
    print(f"Bashlet not found at: {e.binary_path}")
except BashletError as e:
    print(f"Bashlet error: {e}")

License

MIT
