
HecVec

List directories with a safe root, filter .txt/.md files, read them as text, and optionally chunk and push to Chroma — library only, no API.

Install

pip install hecvec

One-call pipeline (list → filter → token-chunk → Chroma):

pip install hecvec[chroma]

Optional chunking only (no Chroma):

pip install hecvec[chunk]

Usage

One-call pipeline (list → filter → chunk → Chroma)

Runs entirely in the library (no API). You need Chroma running (e.g. docker run -p 8000:8000 chromadb/chroma) and OPENAI_API_KEY set (in the environment or in a .env file; the library loads .env via python-dotenv when you use hecvec[chroma]).

import hecvec

# Class-style: use defaults, then slice
slicer = hecvec.Slicer()
result = slicer.slice(path="/path/to/folder")
# → {"files": N, "chunks": M, "collection": "hecvec"}

# Or call slice on the class (same flow)
result = hecvec.Slicer.slice(path="/path/to/folder")

Flow: resolve path → listdir → filter .txt/.md → token-chunk (200 tokens, cl100k_base) → embed with OpenAI → push to Chroma.

Optional config (pass as keyword arguments to Slicer(...) or to Slicer.slice(..., key=value)):

  • root, collection_name, chroma_host, chroma_port
  • embedding_model, chunk_size, chunk_overlap, encoding_name, batch_size
  • openai_api_key (or set OPENAI_API_KEY in the environment or in a .env file; optional dotenv_path to point to a specific .env)
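For example, overriding a few of these defaults might look like the following. The keyword names come from the config list above; the values are illustrative, and this assumes Chroma is reachable on localhost:8000 and OPENAI_API_KEY is set in the environment:

```python
import hecvec

# Override a handful of the documented config keys; everything else keeps
# its default. Values here are examples, not recommendations.
slicer = hecvec.Slicer(
    collection_name="my-notes",
    chroma_host="localhost",
    chroma_port=8000,
    chunk_size=200,
    chunk_overlap=20,
)
result = slicer.slice(path="/path/to/folder")
print(result)  # {"files": ..., "chunks": ..., "collection": ...}
```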

Low-level building blocks

from pathlib import Path
from hecvec import ListDir, ListDirTextFiles, ReadText

root = Path("/path/to/repo")

# List all entries under a path (restricted to root)
lister = ListDir(root=root)
for rel in lister.listdir("."):
    print(rel)

# Only .txt and .md files, recursively
text_lister = ListDirTextFiles(root=root)
paths = text_lister.listdir_recursive_txt_md("docs")

# Read each file as text
reader = ReadText(paths)
for path, text in reader:
    print(path, len(text))

Chunking (optional)

With pip install hecvec[chunk]:

from hecvec import ListDirTextFiles, ReadText
from hecvec.chunking import chunk_documents

lister = ListDirTextFiles(root=root)
paths = lister.listdir_recursive_txt_md(".")
reader = ReadText(paths)
path_and_text = reader.read_all()
chunks = chunk_documents(path_and_text)
# list of {"path": "...", "chunk_index": 0, "content": "..."}
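To see what overlapping chunking does conceptually, here is a tiny standalone sketch of the sliding-window idea over a list of tokens. This is an illustration only, not hecvec's implementation (which uses TokenTextSplitter over tiktoken's cl100k_base encoding):

```python
def sliding_chunks(tokens, chunk_size=5, chunk_overlap=2):
    """Split a token list into windows of chunk_size that overlap by chunk_overlap."""
    step = chunk_size - chunk_overlap
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), step)]

# Each window starts chunk_size - chunk_overlap positions after the previous
# one, so consecutive chunks share chunk_overlap tokens of context.
for chunk in sliding_chunks(list(range(12))):
    print(chunk)
# → [0, 1, 2, 3, 4]
#   [3, 4, 5, 6, 7]
#   [6, 7, 8, 9, 10]
#   [9, 10, 11]
```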

CLI

hecvec-listdir [path] [root]
# or
python -m hecvec.cli [path] [root]

Test the full pipeline (the method that does everything)

From the project root, with Chroma running and OPENAI_API_KEY set (e.g. in .env):

# Start Chroma (one terminal)
docker run -p 8000:8000 chromadb/chroma

# Run the test script (another terminal)
uv run python scripts/test_slice.py
# or: python scripts/test_slice.py

The script creates a temp folder with two .txt files, runs Slicer.slice(path=...), and prints PASS or FAIL with the result (files, chunks, collection).

Modular layout (easy to study)

Each step of the pipeline lives in its own module:

  • hecvec.env: loads .env and OPENAI_API_KEY
  • hecvec.listdir: lists entries under a safe root; filters by extension (.txt/.md)
  • hecvec.reading: reads files as text (UTF-8 / latin-1 / cp1252 fallback)
  • hecvec.token_splitter: token-based chunking (TokenTextSplitter)
  • hecvec.chunking: recursive-character chunking (RecursiveCharacterTextSplitter)
  • hecvec.embeddings: OpenAI embeddings (embed_texts)
  • hecvec.chroma_client: Chroma client, get/create collection, add documents
  • hecvec.chroma_list: lists Chroma collections and counts
  • hecvec.pipeline: the orchestrator (Slicer and slice(path=...))
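The encoding fallback in hecvec.reading can be pictured as trying a list of encodings in order. A minimal standalone sketch of that idea (not hecvec's actual code):

```python
from pathlib import Path

def read_text_with_fallback(path, encodings=("utf-8", "latin-1", "cp1252")):
    """Try each encoding in order; return the first successful decode."""
    data = Path(path).read_bytes()
    for enc in encodings:
        try:
            return data.decode(enc)
        except UnicodeDecodeError:
            continue
    # Last resort: decode with replacement characters instead of raising.
    # (Note: latin-1 accepts any byte sequence, so in practice the chain
    # never gets past it; cp1252 is listed to mirror the order above.)
    return data.decode("utf-8", errors="replace")
```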

Example: use one step on its own:

from hecvec import embed_texts, token_chunk_text, list_collections

chunks = token_chunk_text("Some long document...", chunk_size=200)
vecs = embed_texts(chunks, api_key="sk-...")
names_and_counts = list_collections(host="localhost", port=8000)

Development

From the repo root:

uv sync
uv run python -c "from hecvec import ListDir; print(ListDir('.').listdir('.'))"

License

MIT
