
crewai-endee

Endee vector database integration for CrewAI agent memory

crewai-endee connects Endee to CrewAI, giving your agents persistent memory with dense, hybrid, and filtered retrieval.


Installation

Requires Python 3.9+.

pip install crewai-endee

This installs endee, endee_model, crewai, and fastembed automatically.

Add your embedding provider (only install the one you use):

pip install cohere       # Cohere
pip install openai       # OpenAI
pip install google-genai # Google Gemini

Connect to Endee

First, configure your embedding provider:

embedder_config = {
    "provider": "cohere",
    "config": {"model_name": "small", "api_key": "YOUR_COHERE_API_KEY"},
}
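The same shape works for the other providers. As a hypothetical example, an OpenAI configuration might look like the following (the model name is an assumption here; check your provider's documentation for valid values):

```python
# Hypothetical OpenAI variant of the embedder config above.
# "text-embedding-3-small" is an assumed model name; substitute your own.
openai_embedder_config = {
    "provider": "openai",
    "config": {
        "model_name": "text-embedding-3-small",
        "api_key": "YOUR_OPENAI_API_KEY",
    },
}
```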

With API token

Sign up at endee.io and get your token. See the Endee docs for details.

from crewai_endee import EndeeStorage

storage = EndeeStorage(
    index_name="my_index",
    api_token="YOUR_ENDEE_API_TOKEN",
)

Without API token (local)

Run the open-source Endee server locally. See github.com/endee-io/endee for setup instructions. Then omit api_token:

storage = EndeeStorage(
    index_name="my_index",
)
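A common pattern (not specific to this package) is to read the token from an environment variable so the same code runs against either Endee cloud or a local server. The variable name `ENDEE_API_TOKEN` below is an assumption, not something this package reads automatically:

```python
import os

# Resolve the token once; an unset variable falls through to local mode.
api_token = os.environ.get("ENDEE_API_TOKEN")

kwargs = {"index_name": "my_index"}
if api_token:
    kwargs["api_token"] = api_token

# storage = EndeeStorage(**kwargs)  # uncomment with crewai_endee installed
```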

Dense Mode

from crewai_endee import EndeeStorage
from crewai.memory.unified_memory import Memory

storage = EndeeStorage(
    index_name="demo_dense",
    api_token=ENDEE_API_TOKEN,
)

memory = Memory(storage=storage, embedder=embedder_config, llm="gemini/gemini-2.5-flash")

Hybrid Mode

Add sparse_model_name to enable dense + sparse (BM25):

hybrid_storage = EndeeStorage(
    index_name="demo_hybrid",
    api_token=ENDEE_API_TOKEN,
    sparse_model_name="endee/bm25",
)

hybrid_memory = Memory(storage=hybrid_storage, embedder=embedder_config, llm="gemini/gemini-2.5-flash")

CrewAI Integration

Pass the Memory object directly to Crew:

from crewai import LLM, Agent, Crew, Process, Task
from crewai.memory.unified_memory import Memory
from crewai_endee import EndeeStorage

# Storage + Memory
storage = EndeeStorage(index_name="crew_memory", api_token=ENDEE_API_TOKEN)
memory = Memory(storage=storage, embedder=embedder_config, llm="gemini/gemini-2.5-flash")

# LLM
llm = LLM(model="gemini/gemini-2.5-flash", api_key=GOOGLE_API_KEY)

# Agents
analyst = Agent(
    role="Software Analyst",
    goal="Extract and interpret programming language characteristics",
    backstory="You study programming language design, typing systems, and paradigms.",
    llm=llm,
)

classifier = Agent(
    role="Language Classifier",
    goal="Categorise languages by paradigm and typing discipline",
    backstory="You classify languages using the analyst's extracted data.",
    llm=llm,
)

validator = Agent(
    role="Quality Validator",
    goal="Cross-check facts and produce a quality report",
    backstory="You verify accuracy of classifications against known facts.",
    llm=llm,
)

# Tasks
analysis_task = Task(
    description="Analyse key characteristics of Python, Java, Go, Rust, and C++.",
    expected_output="Structured summary of each language's typing, paradigm, and key features.",
    agent=analyst,
)

classification_task = Task(
    description="Classify each language by paradigm and typing discipline.",
    expected_output="A table mapping each language to its paradigm and typing.",
    agent=classifier,
)

validation_task = Task(
    description="Cross-check the classification against known facts and flag errors.",
    expected_output="Quality report with accuracy score and corrections.",
    agent=validator,
)

# Crew with Endee-backed memory
crew = Crew(
    agents=[analyst, classifier, validator],
    tasks=[analysis_task, classification_task, validation_task],
    process=Process.sequential,
    memory=memory,
    verbose=True,
)

result = crew.kickoff()
print(result)

Querying After Crew Execution

After the crew runs, query memory directly — no LLM call needed:

matches = memory.recall("Who created Go?", limit=2)
for match in matches:
    print(f"  [{match.score:.3f}] {match.record.content[:100]}")
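Since each match exposes a `score`, one optional post-processing step (plain Python, not an Endee feature) is to drop weak matches below a threshold:

```python
def filter_matches(matches, min_score=0.5):
    """Keep only matches scoring at or above min_score (0.5 is illustrative)."""
    return [m for m in matches if m.score >= min_score]
```

For example, `strong = filter_matches(matches)` after the recall above keeps only the higher-confidence results.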

Memory Recall Across Sessions

A new crew can recall everything the previous crew stored — no re-processing:

recall_agent = Agent(
    role="Tech Knowledge Tester",
    goal="Answer technical questions using previously stored memory",
    backstory="You answer questions by recalling stored knowledge.",
    llm=llm,
)

recall_task = Task(
    description="Using stored memory, answer: What concurrency mechanism does Go provide?",
    expected_output="A concise factual answer.",
    agent=recall_agent,
)

recall_crew = Crew(
    agents=[recall_agent],
    tasks=[recall_task],
    memory=memory,  # Same Memory object -> same Endee index
)

result = recall_crew.kickoff()
print(result)

Hybrid Mode with CrewAI Memory

One-line change — add sparse_model_name. The CrewAI wiring is identical:

hybrid_storage = EndeeStorage(
    index_name="crew_hybrid_memory",
    api_token=ENDEE_API_TOKEN,
    sparse_model_name="endee/bm25",
)

hybrid_memory = Memory(storage=hybrid_storage, embedder=embedder_config, llm="gemini/gemini-2.5-flash")

crew = Crew(
    agents=[analyst],
    tasks=[analysis_task],
    memory=hybrid_memory,
)

API Reference

Constructor Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `index_name` | `str` | Yes | — | Unique Endee index name |
| `api_token` | `str` | No | `None` | Endee API token (omit for local mode) |
| `vector_dim` | `int` | No | `None` | Dense vector dimension (auto-detected from the first embedding if omitted) |
| `sparse_model_name` | `str` | No | `None` | `"endee/bm25"` for BM25, `"splade_pp"` for SPLADE, or any supported model key |

For space_type, precision, ef_con, and other Endee index options, see the Endee docs.

StorageBackend Methods

| Method | Description |
| --- | --- |
| `save(records)` | Save a list of `MemoryRecord` objects to the index |
| `search(query_embedding, ...)` | Vector similarity search returning `[(MemoryRecord, score)]` |
| `get_record(record_id)` | Retrieve a single record by ID |
| `update(record)` | Update an existing record |
| `delete(...)` | Delete records by IDs, scope, categories, or metadata |
| `count(scope_prefix)` | Count records in the index |
| `reset(scope_prefix)` | Delete the entire index |
| `list_records(...)` | List records in a scope |
| `list_scopes(parent)` | List child scopes |
| `list_categories(scope_prefix)` | List categories and counts |
| `get_scope_info(scope)` | Get scope metadata |

Utility Functions

from crewai_endee import list_supported_models

for name, config in list_supported_models().items():
    print(f"  {name}: {config['description']}")

Full Endee documentation: docs.endee.io | GitHub: endee-io/endee | CrewAI docs: docs.crewai.com

Project details


Download files

Download the file for your platform.

Source Distribution

crewai_endee-0.1.1b4.tar.gz (22.1 kB)


Built Distribution


crewai_endee-0.1.1b4-py3-none-any.whl (11.8 kB)


File details

Details for the file crewai_endee-0.1.1b4.tar.gz.

File metadata

  • Download URL: crewai_endee-0.1.1b4.tar.gz
  • Upload date:
  • Size: 22.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.8

File hashes

Hashes for crewai_endee-0.1.1b4.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `4f5603e439c6add4a0b1dded659555bb4fe7488064e8e5d8dc534d150f1b7fe0` |
| MD5 | `74b57c9b8269e41693cdd40e23121471` |
| BLAKE2b-256 | `0501046a67357f5ce080d81e3509bc6875617174f3ff7193492bd3e52eec44e5` |
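To verify a downloaded file against a published digest, the standard-library `hashlib` module is enough; the file name in the comment is the source distribution from this release:

```python
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Example check (run in your download directory):
# sha256_of("crewai_endee-0.1.1b4.tar.gz") == "4f5603e439c6add4a0b1dded659555bb4fe7488064e8e5d8dc534d150f1b7fe0"
```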


File details

Details for the file crewai_endee-0.1.1b4-py3-none-any.whl.

File metadata

  • Download URL: crewai_endee-0.1.1b4-py3-none-any.whl
  • Upload date:
  • Size: 11.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.8

File hashes

Hashes for crewai_endee-0.1.1b4-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `72441ceaf86c9aa14fa691fbeb4b52219516de1d89aead90bd9582bc02d2bce8` |
| MD5 | `a212b761a2be2638de4d2699a94eba97` |
| BLAKE2b-256 | `cf108f2d6fab3b68aec6b1ee53f76f5c14e8f1c5cee7c2591afdb1e1abf0b53d` |

