
Virtual bash environment for AI agents, backed by Supermemory.

Project description

supermemory-bash (Python)

A virtual bash environment for AI agents, backed by your Supermemory container. Files persist across sessions, and a built-in sgrep command does semantic search across the entire filesystem.

Install

pip install supermemory-bash

You'll need a Supermemory API key. Get one at https://supermemory.ai.

Quickstart

import asyncio
from supermemory_bash import create_bash

async def main():
    result = await create_bash(
        api_key="sm-...",
        container_tag="user_42",
    )
    bash = result.bash

    # Run any shell command:
    r = await bash.exec("echo 'hello' > /a.md && cat /a.md")
    print(r.stdout)  # "hello\n"

    # Files persist across sessions:
    r2 = await bash.exec("cat /a.md")
    print(r2.stdout)  # "hello\n"

    # Semantic search across the whole container:
    r3 = await bash.exec("sgrep 'authentication tokens'")
    print(r3.stdout)

asyncio.run(main())

Hand the bash tool to your LLM

create_bash returns a tool_description string ready to drop into your tool schema.

Anthropic

import anthropic
from supermemory_bash import create_bash

# from inside an async function (see the Quickstart):
result = await create_bash(api_key="sm-...", container_tag="user_42")
bash, tool_description = result.bash, result.tool_description

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    tools=[{
        "name": "bash",
        "description": tool_description,
        "input_schema": {
            "type": "object",
            "properties": {"cmd": {"type": "string"}},
            "required": ["cmd"],
        },
    }],
    messages=[{"role": "user", "content": "Find my notes about authentication."}],
)

# In your tool-use loop, call `await bash.exec(cmd)` and feed the result back.
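
One way to structure that loop (a sketch, not part of the library: run_turns and tool_result_block are hypothetical helpers; the message shapes follow Anthropic's tool-use protocol):

```python
def tool_result_block(tool_use_id: str, output: str) -> dict:
    """Build the tool_result content block the API expects back."""
    return {"type": "tool_result", "tool_use_id": tool_use_id, "content": output}

async def run_turns(client, bash, tools, messages, model="claude-sonnet-4-20250514"):
    """Call the model, execute its bash tool calls, and loop until it stops asking."""
    while True:
        response = client.messages.create(
            model=model, max_tokens=4096, tools=tools, messages=messages
        )
        if response.stop_reason != "tool_use":
            return response
        # Echo the assistant turn back, then answer every tool_use block.
        messages.append({"role": "assistant", "content": response.content})
        results = []
        for block in response.content:
            if block.type == "tool_use":
                r = await bash.exec(block.input["cmd"])
                results.append(tool_result_block(block.id, r.stdout))
        messages.append({"role": "user", "content": results})
```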

OpenAI

from openai import OpenAI
from supermemory_bash import create_bash

# from inside an async function (see the Quickstart):
result = await create_bash(api_key="sm-...", container_tag="user_42")
bash, tool_description = result.bash, result.tool_description

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Search my notes for auth."}],
    tools=[{
        "type": "function",
        "function": {
            "name": "bash",
            "description": tool_description,
            "parameters": {
                "type": "object",
                "properties": {"cmd": {"type": "string"}},
                "required": ["cmd"],
            },
        },
    }],
)
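
The matching loop for OpenAI tool calls (again a sketch, not the library's API; tool_message and run_turns are hypothetical helper names):

```python
import json

def tool_message(tool_call_id: str, output: str) -> dict:
    """Build the role="tool" message that answers one tool call."""
    return {"role": "tool", "tool_call_id": tool_call_id, "content": output}

async def run_turns(client, bash, tools, messages, model="gpt-4o"):
    """Call the model, execute any bash tool calls, and loop until it answers."""
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools
        )
        msg = response.choices[0].message
        if not msg.tool_calls:
            return msg
        messages.append(msg)  # the assistant turn carrying its tool_calls
        for call in msg.tool_calls:
            cmd = json.loads(call.function.arguments)["cmd"]
            r = await bash.exec(cmd)
            messages.append(tool_message(call.id, r.stdout))
```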

Options

await create_bash(
    api_key="sm-...",
    container_tag="user_42",        # one container per user / project
    base_url=None,                  # API override
    eager_load=True,                # warm path index at construction
    eager_content=True,             # also warm content cache
    cache_ttl_ms=150_000,           # 2.5 min. None = never expires. 0 = no cache.
    cwd="/home/user",               # default working directory
    env=None,                       # extra environment variables
)

For very large containers (10k+ docs), set eager_content=False to skip the content warm-up and pay one HTTP request per cat instead. Path resolution stays warm either way.

cache_ttl_ms controls how long the in-memory content cache is trusted before it revalidates. The default (2.5 minutes) assumes other writers may exist; single-writer apps can pass None for maximum speed.
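
For example, a large single-writer container might skip the content warm-up and keep the cache forever (a config sketch using the parameters above; archive_main is an illustrative tag):

```python
result = await create_bash(
    api_key="sm-...",
    container_tag="archive_main",   # illustrative tag
    eager_content=False,            # 10k+ docs: skip the content warm
    cache_ttl_ms=None,              # single writer: cache never expires
)
```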

Supported commands

The built-in shell interpreter handles the commands agents use most:

  • Files: cat, head, tail, touch, stat, tee
  • Directories: ls, mkdir, rmdir, pwd, cd
  • Management: rm, mv, cp
  • Search: grep (regex), sgrep (semantic)
  • Text: echo, printf, wc, sort, uniq, sed, cut, tr
  • Utility: find, test/[, basename, dirname, seq, date, true, false
  • Operators: pipes (|), redirects (>, >>), chaining (&&, ||, ;), variables ($VAR)
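
Those operators compose the way they do in real bash. A couple of lines you could hand to bash.exec (ordinary POSIX commands, shown here with a /tmp scratch path so they also run in a local shell):

```shell
# Count distinct lines through a pipe:
printf 'b\na\nb\n' | sort | uniq -c

# Redirect, append, then chain a read with &&:
echo one > /tmp/demo.txt && echo two >> /tmp/demo.txt && wc -l < /tmp/demo.txt
```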

What's not supported

  • chmod, utimes, symlinks — Supermemory has no permission/symlink model.
  • /dev/null redirects — not a real device.
  • For loops, while loops, if/then/fi — use && / || chaining instead.
  • Binary uploads — content is text-extracted server-side.
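
The usual substitute for if/then/fi is short-circuit chaining. For example, "print the file if it exists, otherwise say so" (/notes/auth.md is just an illustrative path):

```shell
# Note: in `A && B || C`, C also runs if B itself fails,
# so keep B to something that rarely fails (echo, cat).
test -f /notes/auth.md && cat /notes/auth.md || echo 'no auth notes yet'
```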

License

MIT


Download files

Download the file for your platform.

Source Distribution

supermemory_bash-0.0.1.tar.gz (25.2 kB, Source)

Built Distribution

supermemory_bash-0.0.1-py3-none-any.whl (21.3 kB, Python 3)

File details

Details for the file supermemory_bash-0.0.1.tar.gz.

File metadata

  • Download URL: supermemory_bash-0.0.1.tar.gz
  • Upload date:
  • Size: 25.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.8.12

File hashes

Hashes for supermemory_bash-0.0.1.tar.gz
  • SHA256: 1ea26ad98fb4fadfe33b5f7183bf028aa0250c5c7b6d4c970b73811ccce3554a
  • MD5: 0115d80ab802934d391cb9e661fb204d
  • BLAKE2b-256: e65998846c01d12d6af1a73547b8873cbddb48c961b499ea2df2f24f3e7d76d9

File details

Details for the file supermemory_bash-0.0.1-py3-none-any.whl.

File hashes

Hashes for supermemory_bash-0.0.1-py3-none-any.whl
  • SHA256: b62c5b7b792d97981bdab519eaf200750273201771ec8199dfdb0ae757cb160d
  • MD5: 96e063d649366a3f99067b711391d5ea
  • BLAKE2b-256: 3e1f94aa8e3b10a3a87743b66341723da256bd19e218871653ae8f062946a604
