# hindsight-embed

Hindsight embedded CLI: local memory operations with automatic daemon management, no separate server required.

This package provides a simple CLI for storing and recalling memories using Hindsight's memory engine. It automatically manages a background daemon for fast operations, so no manual server setup is required.
## How It Works

`hindsight-embed` uses a background daemon architecture for optimal performance:

- **First command:** automatically starts a local daemon. The first run downloads dependencies and loads ML models, which can take 1-3 minutes.
- **Subsequent commands:** near-instant responses (~1-2 s), since the daemon is already running.
- **Auto-shutdown:** the daemon automatically exits after 5 minutes of inactivity.

The daemon runs on `localhost:8889` and uses an embedded PostgreSQL database (pg0); everything stays local on your machine.
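As a quick sanity check that the daemon is up, you can probe the documented port. This is a sketch that assumes a netcat (`nc`) binary is available; the CLI's own `daemon status` subcommand is the supported way to check.

```shell
# Probe the daemon's documented port (localhost:8889).
# Prints "open" while the daemon is running, "closed" otherwise.
if nc -z 127.0.0.1 8889 2>/dev/null; then
  echo "daemon port 8889: open"
else
  echo "daemon port 8889: closed"
fi
```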
## Installation

```bash
pip install hindsight-embed

# or with uvx (no install needed)
uvx hindsight-embed --help
```
## Quick Start

```bash
# Interactive setup (recommended)
hindsight-embed configure

# Or set your LLM API key manually
export OPENAI_API_KEY=sk-...

# Store a memory (bank_id = "default")
hindsight-embed memory retain default "User prefers dark mode"

# Recall memories
hindsight-embed memory recall default "What are user preferences?"
```
## Commands

### configure

Interactive setup wizard:

```bash
hindsight-embed configure
```

This will:

- let you choose an LLM provider (OpenAI, Groq, Google, Ollama)
- configure your API key
- set the model and memory bank ID
- start the daemon with your configuration
### memory retain

Store a memory:

```bash
hindsight-embed memory retain default "User prefers dark mode"
hindsight-embed memory retain default "Meeting on Monday" --context work
hindsight-embed memory retain myproject "API uses JWT authentication"
```
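Since `memory retain` stores one memory per invocation, bulk loading is just a loop. A minimal sketch; the `retain_lines` helper name and the input file are illustrative, not part of the CLI:

```shell
# retain_lines BANK_ID < file
# Stores each non-empty input line as a separate memory in the given bank.
retain_lines() {
  bank="$1"
  while IFS= read -r line; do
    [ -n "$line" ] && hindsight-embed memory retain "$bank" "$line"
  done
}
```

Usage: `retain_lines default < notes.txt`, where `notes.txt` holds one memory per line.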
### memory recall

Search memories:

```bash
hindsight-embed memory recall default "user preferences"
hindsight-embed memory recall default "upcoming events"
```

Use `-o json` for JSON output:

```bash
hindsight-embed memory recall default "user preferences" -o json
```
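JSON output makes it easy to post-process results with tools like `jq`. The field path below (`.results[].text`) is an assumption, since the JSON schema isn't documented here; inspect the raw output once and adjust accordingly.

```shell
# recall_texts BANK_ID QUERY
# Hypothetical helper: prints just the text of each recalled memory.
# Requires jq; adjust '.results[].text' to the actual schema.
recall_texts() {
  hindsight-embed memory recall "$1" "$2" -o json | jq -r '.results[].text'
}
```

Usage: `recall_texts default "user preferences"`.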
### memory reflect

Get contextual answers that synthesize multiple memories:

```bash
hindsight-embed memory reflect default "How should I set up the dev environment?"
```
### bank list

List all memory banks:

```bash
hindsight-embed bank list
```
### daemon

Manage the background daemon:

```bash
hindsight-embed daemon status       # Check if the daemon is running
hindsight-embed daemon start        # Start the daemon
hindsight-embed daemon stop         # Stop the daemon
hindsight-embed daemon logs         # View the last 50 lines of logs
hindsight-embed daemon logs -f      # Follow logs in real time
hindsight-embed daemon logs -n 100  # View the last 100 lines
```
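In scripts, it can be handy to make sure the daemon is up before issuing a batch of commands. A sketch, assuming `daemon status` exits non-zero when no daemon is running; the exit-code contract isn't documented here, so verify it locally first:

```shell
# ensure_daemon: start the daemon only if `daemon status` reports it down.
ensure_daemon() {
  if ! hindsight-embed daemon status >/dev/null 2>&1; then
    hindsight-embed daemon start
  fi
}
```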
## Configuration

### Interactive Setup

Run `hindsight-embed configure` for a guided setup that saves to `~/.hindsight/embed`.
### Environment Variables

| Variable | Description | Default |
|---|---|---|
| `HINDSIGHT_EMBED_LLM_API_KEY` | LLM API key (or use `OPENAI_API_KEY`) | Required |
| `HINDSIGHT_EMBED_LLM_PROVIDER` | LLM provider (`openai`, `groq`, `google`, `ollama`) | `openai` |
| `HINDSIGHT_EMBED_LLM_MODEL` | LLM model | `gpt-4o-mini` |
| `HINDSIGHT_EMBED_BANK_ID` | Memory bank ID | `default` |
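For example, to point the CLI at a local Ollama instance instead of OpenAI (the model name below is illustrative; use whichever model you have pulled):

```shell
export HINDSIGHT_EMBED_LLM_PROVIDER=ollama
export HINDSIGHT_EMBED_LLM_MODEL=llama3.1   # illustrative; any pulled model
export HINDSIGHT_EMBED_BANK_ID=default
# Ollama typically needs no API key; for hosted providers, set:
# export HINDSIGHT_EMBED_LLM_API_KEY=...
```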
### Files

| Path | Description |
|---|---|
| `~/.hindsight/embed` | Configuration file |
| `~/.hindsight/config.env` | Alternative config file location |
| `~/.hindsight/daemon.log` | Daemon logs |
| `~/.hindsight/daemon.lock` | Daemon lock file (PID) |
## Use with AI Coding Assistants

This CLI is designed to work with AI coding assistants such as Claude Code, Cursor, and Windsurf. Install the Hindsight skill:

```bash
curl -fsSL https://hindsight.vectorize.io/get-skill | bash
```

This configures the LLM provider and installs the skill into your assistant's skills directory.
## Troubleshooting

**Daemon won't start:**

```bash
# Check logs for errors
hindsight-embed daemon logs

# Stop any stuck daemon and restart
hindsight-embed daemon stop
hindsight-embed daemon start
```

**Slow first command:** this is expected. The first command needs to download dependencies, start the daemon, and load ML models, which can take 1-3 minutes depending on network speed. Subsequent commands will be fast (~1-2 s).

**Change configuration:**

```bash
# Re-run configure (automatically restarts the daemon)
hindsight-embed configure
```
## License

Apache 2.0