HecVec

List directories with a safe root, filter .txt/.md files, read them as text, and optionally chunk and push to Chroma — library only, no API.

Install

pip install hecvec

One-call pipeline (list → filter → token-chunk → Chroma):

pip install "hecvec[chroma]"

Optional chunking only (no Chroma):

pip install "hecvec[chunk]"

Usage

One-call pipeline (list → filter → chunk → Chroma)

Runs entirely in the library (no API). You need Chroma running (e.g. docker run -p 8000:8000 chromadb/chroma) and OPENAI_API_KEY set (in the environment or in a .env file; the library loads .env via python-dotenv when you use hecvec[chroma]).

import hecvec

# Class-style: use defaults, then slice
test = hecvec.Slicer()
result = test.slice(path="/path/to/folder")
# → {"files": N, "chunks": M, "collection": "hecvec"}

# Or call slice on the class (same flow)
result = hecvec.Slicer.slice(path="/path/to/folder")

Flow: resolve path → listdir → filter .txt/.md → token-chunk (200 tokens, cl100k_base) → embed with OpenAI → push to Chroma.
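The token-chunking step can be illustrated with a plain-Python sketch. This uses whitespace-split "tokens" as a stand-in for the cl100k_base encoding (which hecvec applies via tiktoken); the windowing logic is what chunk_size and chunk_overlap control:

```python
def token_chunk(text: str, chunk_size: int = 200, chunk_overlap: int = 0) -> list[str]:
    # Stand-in tokenizer: whitespace split instead of cl100k_base.
    tokens = text.split()
    step = chunk_size - chunk_overlap  # how far each window advances
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_size]
        if window:
            chunks.append(" ".join(window))
    return chunks

# 450 tokens with chunk_size=200, chunk_overlap=50: windows start at
# token 0, 150, and 300, so the text yields 3 chunks.
token_chunk("word " * 450, chunk_size=200, chunk_overlap=50)
```

Overlapping windows mean each chunk repeats the last chunk_overlap tokens of its predecessor, which keeps context intact across chunk boundaries at the cost of some duplicated embedding work.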

Optional config (instance or Slicer.slice(..., key=value)):

  • root, collection_name, chroma_host, chroma_port
  • embedding_model, chunk_size, chunk_overlap, encoding_name, batch_size
  • openai_api_key (or set OPENAI_API_KEY in the environment or in a .env file; optional dotenv_path to point to a specific .env)

Low-level building blocks

from pathlib import Path
from hecvec import ListDir, ListDirTextFiles, ReadText

root = Path("/path/to/repo")

# List all entries under a path (restricted to root)
lister = ListDir(root=root)
for rel in lister.listdir("."):
    print(rel)

# Only .txt and .md files, recursively
text_lister = ListDirTextFiles(root=root)
paths = text_lister.listdir_recursive_txt_md("docs")

# Read each file as text
reader = ReadText(paths)
for path, text in reader:
    print(path, len(text))
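The "restricted to root" behavior that ListDir provides can be illustrated with plain pathlib. This is a sketch of the idea (resolve the candidate path and reject anything that escapes the root), not hecvec's actual check:

```python
from pathlib import Path

def is_under_root(root: Path, candidate: str) -> bool:
    # Resolve both paths so "../" segments and symlinks are normalized,
    # then require the target to be the root itself or inside it.
    root = root.resolve()
    target = (root / candidate).resolve()
    return target == root or root in target.parents

root = Path("/tmp/safe")
is_under_root(root, "docs/notes.txt")  # True: stays inside the root
is_under_root(root, "../etc/passwd")   # False: escapes the root
```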

Chunking (optional)

With pip install hecvec[chunk]:

from hecvec import ListDirTextFiles, ReadText
from hecvec.chunking import chunk_documents

lister = ListDirTextFiles(root=root)
paths = lister.listdir_recursive_txt_md(".")
reader = ReadText(paths)
path_and_text = reader.read_all()
chunks = chunk_documents(path_and_text)
# list of {"path": "...", "chunk_index": 0, "content": "..."}
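The output shape can be reproduced with a naive fixed-width splitter (a sketch only; the real module splits recursively on separators such as paragraphs and lines via RecursiveCharacterTextSplitter before falling back to characters):

```python
def chunk_documents_sketch(path_and_text, chunk_size=80):
    # Fixed-width character splitting, just to show the output format:
    # one dict per chunk with the source path and a per-file chunk index.
    chunks = []
    for path, text in path_and_text:
        for i, start in enumerate(range(0, len(text), chunk_size)):
            chunks.append({
                "path": path,
                "chunk_index": i,
                "content": text[start:start + chunk_size],
            })
    return chunks

chunk_documents_sketch([("a.txt", "x" * 100)], chunk_size=80)
# → two chunks for a.txt, with chunk_index 0 and 1
```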

CLI

hecvec-listdir [path] [root]
# or
python -m hecvec.cli [path] [root]

Test the full pipeline (the method that does everything)

From the project root, with Chroma running and OPENAI_API_KEY set (e.g. in .env):

# Start Chroma (one terminal)
docker run -p 8000:8000 chromadb/chroma

# Run the test script (another terminal)
uv run python scripts/test_slice.py
# or: python scripts/test_slice.py

The script creates a temp folder with two .txt files, runs Slicer.slice(path=...), and prints PASS or FAIL with the result (files, chunks, collection).

Modular layout (easy to study)

Each step of the pipeline lives in its own module:

Module                 Responsibility
hecvec.env             Load .env and OPENAI_API_KEY
hecvec.listdir         List dirs under a safe root; filter by extension (.txt/.md)
hecvec.reading         Read files as text (UTF-8 / latin-1 / cp1252 fallback)
hecvec.token_splitter  Token-based chunking (TokenTextSplitter)
hecvec.chunking        Recursive-character chunking (RecursiveCharacterTextSplitter)
hecvec.embeddings      OpenAI embeddings (embed_texts)
hecvec.chroma_client   Chroma client; get/create collection; add documents
hecvec.chroma_list     List Chroma collections and counts
hecvec.pipeline        Orchestrator: Slicer and slice(path=...)

Example: use one step on its own:

from hecvec import embed_texts, token_chunk_text, list_collections

chunks = token_chunk_text("Some long document...", chunk_size=200)
vecs = embed_texts(chunks, api_key="sk-...")
names_and_counts = list_collections(host="localhost", port=8000)
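The encoding fallback described for hecvec.reading can be sketched the same way (an illustration, not the module's exact code; the trial order here is an assumption, with latin-1 last because it accepts every byte sequence):

```python
from pathlib import Path

def read_text_with_fallback(path: Path) -> str:
    data = path.read_bytes()
    for encoding in ("utf-8", "cp1252"):
        try:
            return data.decode(encoding)
        except UnicodeDecodeError:
            continue
    # latin-1 maps all 256 byte values, so it always succeeds as a last resort.
    return data.decode("latin-1")
```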

Development

From the repo root:

uv sync
uv run python -c "from hecvec import ListDir; print(ListDir('.').listdir('.'))"

License

MIT
