
HecVec

List directories with a safe root, filter .txt/.md files, read them as text, and optionally chunk and push to Chroma — library only, no API.

Install

pip install hecvec

One-call pipeline (list → filter → token-chunk → Chroma):

pip install hecvec[chroma]

Optional chunking only (no Chroma):

pip install hecvec[chunk]

Usage

One-call pipeline (list → filter → chunk → Chroma)

Runs entirely in the library (no API). You need Chroma running (e.g. docker run -p 8000:8000 chromadb/chroma) and OPENAI_API_KEY set (in the environment or in a .env file; the library loads .env via python-dotenv when you use hecvec[chroma]).

import hecvec

# Class-style: use defaults, then slice
test = hecvec.Slicer()
result = test.slice(path="/path/to/folder")
# → {"files": N, "chunks": M, "collection": "hecvec"}

# Or call slice on the class (same flow)
result = hecvec.Slicer.slice(path="/path/to/folder")

Flow: resolve path → listdir → filter .txt/.md → token-chunk (200 tokens, cl100k_base) → embed with OpenAI → push to Chroma.

Optional config (pass to the Slicer(...) constructor or as keyword arguments to Slicer.slice(..., key=value)):

  • root, collection_name, chroma_host, chroma_port
  • embedding_model, chunk_size, chunk_overlap, encoding_name, batch_size
  • openai_api_key (or set OPENAI_API_KEY in the environment or in a .env file; optional dotenv_path to point to a specific .env)
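
The chunk_size / chunk_overlap pair behaves like a sliding token window: consecutive chunks share chunk_overlap tokens. A rough pure-Python illustration over a token list (not hecvec's actual splitter, which counts tokens with cl100k_base):

```python
def window_chunks(tokens, chunk_size=200, chunk_overlap=0):
    """Slide a fixed-size window over a token list; consecutive
    windows share `chunk_overlap` tokens."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), step)]

print(window_chunks(list(range(10)), chunk_size=4, chunk_overlap=1))
# [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9], [9]]
```

With the defaults (chunk_size=200, chunk_overlap=0) the windows simply tile the document with no shared tokens.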

Low-level building blocks

from pathlib import Path
from hecvec import ListDir, ListDirTextFiles, ReadText

root = Path("/path/to/repo")

# List all entries under a path (restricted to root)
lister = ListDir(root=root)
for rel in lister.listdir("."):
    print(rel)

# Only .txt and .md files, recursively
text_lister = ListDirTextFiles(root=root)
paths = text_lister.listdir_recursive_txt_md("docs")

# Read each file as text
reader = ReadText(paths)
for path, text in reader:
    print(path, len(text))
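
The "restricted to root" behavior can be pictured as resolving the requested path against the root and refusing anything that escapes it. A minimal sketch of that idea (illustrative only, not hecvec's implementation):

```python
from pathlib import Path

def resolve_under_root(root: Path, user_path: str) -> Path:
    """Resolve user_path against root and reject paths that
    escape the root (the 'safe root' idea, sketched)."""
    root = root.resolve()
    candidate = (root / user_path).resolve()
    candidate.relative_to(root)  # raises ValueError if outside root
    return candidate

print(resolve_under_root(Path("."), "docs"))  # stays inside the root
try:
    resolve_under_root(Path("."), "../../escape")
except ValueError:
    print("rejected: path escapes root")
```

Resolving before the containment check is what defeats `..` tricks: the comparison happens on absolute, normalized paths.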

Chunking (optional)

With pip install hecvec[chunk]:

from hecvec import ListDirTextFiles, ReadText
from hecvec.chunking import chunk_documents

lister = ListDirTextFiles(root=root)
paths = lister.listdir_recursive_txt_md(".")
reader = ReadText(paths)
path_and_text = reader.read_all()
chunks = chunk_documents(path_and_text)
# list of {"path": "...", "chunk_index": 0, "content": "..."}
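
To see that record shape without installing the chunking extra, here is a tiny character-window sketch that produces the same dict layout (hecvec itself uses a recursive-character splitter; this is only illustrative):

```python
def chunk_documents_sketch(path_and_text, chunk_size=50):
    """Illustrative fixed-width character chunking that emits the
    same record shape as hecvec.chunking.chunk_documents."""
    chunks = []
    for path, text in path_and_text:
        for i, start in enumerate(range(0, len(text), chunk_size)):
            chunks.append({
                "path": str(path),
                "chunk_index": i,
                "content": text[start:start + chunk_size],
            })
    return chunks

records = chunk_documents_sketch([("docs/a.txt", "x" * 120)], chunk_size=50)
print([r["chunk_index"] for r in records])  # [0, 1, 2]
```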

CLI

hecvec-listdir [path] [root]
# or
python -m hecvec.cli [path] [root]

Test the full pipeline (Slicer.slice)

From the project root, with Chroma running and OPENAI_API_KEY set (e.g. in .env):

# Start Chroma (one terminal)
docker run -p 8000:8000 chromadb/chroma

# Run the test script (another terminal)
uv run python scripts/test_slice.py
# or: python scripts/test_slice.py

The script creates a temp folder with two .txt files, runs Slicer.slice(path=...), and prints PASS or FAIL with the result (files, chunks, collection).

Modular layout (easy to study)

Each step of the pipeline lives in its own module:

Module                 Responsibility
hecvec.env             Load .env and OPENAI_API_KEY
hecvec.listdir         List dirs under a safe root; filter by extension (.txt/.md)
hecvec.reading         Read files as text (UTF-8 / latin-1 / cp1252 fallback)
hecvec.token_splitter  Token-based chunking (TokenTextSplitter)
hecvec.chunking        Recursive-character chunking (RecursiveCharacterTextSplitter)
hecvec.embeddings      OpenAI embeddings (embed_texts)
hecvec.chroma_client   Chroma client, get/create collection, add documents
hecvec.chroma_list     List Chroma collections and counts
hecvec.pipeline        Orchestrator: Slicer and slice(path=...)
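
The encoding fallback in hecvec.reading can be sketched as trying each codec in order (illustrative only; note that latin-1 accepts every byte value, so in this order it always succeeds before cp1252 is reached):

```python
def read_text_with_fallback(data: bytes) -> str:
    """Decode bytes by trying UTF-8, then latin-1, then cp1252,
    in the order the module table lists (sketch, not hecvec's code)."""
    for enc in ("utf-8", "latin-1", "cp1252"):
        try:
            return data.decode(enc)
        except UnicodeDecodeError:
            continue
    return data.decode("utf-8", errors="replace")  # last resort

print(read_text_with_fallback("café".encode("utf-8")))  # café
print(read_text_with_fallback(b"\xff"))  # invalid UTF-8, decoded via latin-1
```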

Example: use one step on its own:

from hecvec import embed_texts, token_chunk_text, list_collections

chunks = token_chunk_text("Some long document...", chunk_size=200)
vecs = embed_texts(chunks, api_key="sk-...")
names_and_counts = list_collections(host="localhost", port=8000)

Development

From the repo root:

uv sync
uv run python -c "from hecvec import ListDir; print(ListDir('.').listdir('.'))"

License

MIT
