
hindsight-embed

Hindsight embedded CLI - local memory operations with automatic daemon management.

This package provides a simple CLI for storing and recalling memories using Hindsight's memory engine. It automatically manages a background daemon for fast operations - no manual server setup required.

How It Works

hindsight-embed runs a background daemon so that dependency download and ML model loading happen once, not on every command:

  1. First command: Automatically starts a local daemon (first run downloads dependencies and loads ML models - can take 1-3 minutes)
  2. Subsequent commands: Near-instant responses (~1-2s) since daemon is already running
  3. Auto-shutdown: Daemon automatically exits after 5 minutes of inactivity

The daemon runs on localhost:8889 and uses an embedded PostgreSQL database (pg0) - everything stays local on your machine.

Installation

pip install hindsight-embed
# or with uvx (no install needed)
uvx hindsight-embed --help

Quick Start

# Interactive setup (recommended)
hindsight-embed configure

# Or set your LLM API key manually
export OPENAI_API_KEY=sk-...

# Store a memory (bank_id = "default")
hindsight-embed memory retain default "User prefers dark mode"

# Recall memories
hindsight-embed memory recall default "What are user preferences?"

Commands

configure

Interactive setup wizard:

hindsight-embed configure

This will:

  • Let you choose an LLM provider (OpenAI, Groq, Google, Ollama)
  • Configure your API key
  • Set the model and memory bank ID
  • Start the daemon with your configuration

memory retain

Store a memory:

hindsight-embed memory retain default "User prefers dark mode"
hindsight-embed memory retain default "Meeting on Monday" --context work
hindsight-embed memory retain myproject "API uses JWT authentication"

memory recall

Search memories:

hindsight-embed memory recall default "user preferences"
hindsight-embed memory recall default "upcoming events"

Use `-o json` for JSON output:

hindsight-embed memory recall default "user preferences" -o json
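The JSON form is convenient for scripting. A minimal sketch of extracting fields from a recall result; note the payload shape below is an assumption for illustration, not the documented schema — inspect your own `-o json` output first:

```shell
# Hypothetical payload shape: "results", "text", and "score" are assumed
# field names, not the documented schema.
sample='{"results":[{"text":"User prefers dark mode","score":0.91}]}'
printf '%s' "$sample" | python3 -c 'import json,sys
for r in json.load(sys.stdin)["results"]:
    print(r["text"])'
```

In practice you would pipe the CLI itself into the same filter, e.g. `hindsight-embed memory recall default "user preferences" -o json | ...`.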

memory reflect

Get contextual answers that synthesize multiple memories:

hindsight-embed memory reflect default "How should I set up the dev environment?"

bank list

List all memory banks:

hindsight-embed bank list

daemon

Manage the background daemon:

hindsight-embed daemon status    # Check if daemon is running
hindsight-embed daemon start     # Start the daemon
hindsight-embed daemon stop      # Stop the daemon
hindsight-embed daemon logs      # View last 50 lines of logs
hindsight-embed daemon logs -f   # Follow logs in real-time
hindsight-embed daemon logs -n 100  # View last 100 lines

Configuration

Interactive Setup

Run `hindsight-embed configure` for a guided setup that saves your choices to `~/.hindsight/embed`.

Environment Variables

| Variable | Description | Default |
|---|---|---|
| `HINDSIGHT_EMBED_LLM_API_KEY` | LLM API key (or use `OPENAI_API_KEY`) | Required |
| `HINDSIGHT_EMBED_LLM_PROVIDER` | LLM provider (`openai`, `groq`, `google`, `ollama`) | `openai` |
| `HINDSIGHT_EMBED_LLM_MODEL` | LLM model | `gpt-4o-mini` |
| `HINDSIGHT_EMBED_BANK_ID` | Default memory bank ID, used when not specified on the command line | `default` |

Note: All banks share a single pg0 database (pg0://hindsight-embed). Bank isolation happens within the database via the bank_id parameter passed to CLI commands.
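The variables above can also be set directly instead of running `configure`, for example in CI or a dotfile. The values below are placeholders:

```shell
# Configure the LLM backend via environment variables (placeholder values).
export HINDSIGHT_EMBED_LLM_PROVIDER=openai
export HINDSIGHT_EMBED_LLM_MODEL=gpt-4o-mini
export HINDSIGHT_EMBED_LLM_API_KEY=sk-...   # your real key here

hindsight-embed memory retain default "User prefers dark mode"
```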

Files

| Path | Description |
|---|---|
| `~/.hindsight/embed` | Configuration file |
| `~/.hindsight/config.env` | Alternative config file location |
| `~/.hindsight/daemon.log` | Daemon logs |
| `~/.hindsight/daemon.lock` | Daemon lock file (PID) |
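Since the lock file holds the daemon's PID, a shell script can check the daemon without invoking the CLI at all; a minimal sketch, assuming the file contains the PID as plain text:

```shell
# Report whether the PID recorded in the lock file is alive.
# `kill -0` only probes for existence; it never sends a real signal.
pid=$(cat ~/.hindsight/daemon.lock 2>/dev/null)
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
  echo "daemon running as PID $pid"
else
  echo "no running daemon"
fi
```

`hindsight-embed daemon status` remains the supported check; this is only a lightweight alternative for scripts.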

Use with AI Coding Assistants

This CLI is designed to work with AI coding assistants like Claude Code, Cursor, and Windsurf. Install the Hindsight skill:

curl -fsSL https://hindsight.vectorize.io/get-skill | bash

This will configure the LLM provider and install the skill to your assistant's skills directory.

Troubleshooting

Daemon won't start:

# Check logs for errors
hindsight-embed daemon logs

# Stop any stuck daemon and restart
hindsight-embed daemon stop
hindsight-embed daemon start

Slow first command: This is expected - the first command needs to download dependencies, start the daemon, and load ML models. First run can take 1-3 minutes depending on network speed. Subsequent commands will be fast (~1-2s).
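Scripts that cannot tolerate the cold-start delay can absorb it with a generic poll helper. This helper is not part of hindsight-embed, and it assumes `daemon status` exits non-zero while the daemon is still starting — verify that on your setup:

```shell
# Retry a command once per second until it succeeds or `timeout` seconds pass.
wait_for() {
  timeout=$1; shift
  elapsed=0
  while ! "$@"; do
    sleep 1
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
  done
  return 0
}

# Usage: give the first start up to 3 minutes before scripting against it.
# wait_for 180 hindsight-embed daemon status
```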

Change configuration:

# Re-run configure (automatically restarts daemon)
hindsight-embed configure

License

Apache 2.0
