
List directories (safe root), filter .txt/.md files, read as text, chunk, embed, and push to Chroma or PostgreSQL (pgvector).


HecVec

HecVec is a Python library that discovers .txt and .md files, chunks them (token, text, semantic, or LLM-based), embeds with OpenAI or local Stanford GloVe vectors, and stores vectors in Chroma or PostgreSQL (pgvector). It is library-only — no HTTP API. All work runs in-process.




Install

pip install hecvec

A vector database backend must be reachable (server="chroma", server="chroma_cloud", or server="pgvector"). There is no local/ephemeral mode.

The full pipeline (list → verify backend is up → read → chunk → embed → write vectors) is described under Workflow below.


Requirements to run the pipeline

To use the full Slicer.slice(...) pipeline you need:

  1. Python 3.9–3.13.
  2. Dependencies installed via pip install hecvec.
  3. OpenAI API key for embeddings when using OpenAI models (and for semantic / llm chunking). Set OPENAI_API_KEY in the environment or in a .env file (see Environment and API key). Not required if you use GloVe embeddings (see below).
  4. Vector DB backend:
    • Chroma self-hosted: server listening at host:port (default localhost:8000).
    • Chroma Cloud: valid cloud API key (+ optional tenant/database).
    • pgvector: PostgreSQL reachable at host:port (default 5432 for pgvector mode), with a target database; the SDK creates extension/table automatically.

Local GloVe embeddings (no OpenAI)

Set embedding_model to glove-6B-50d or glove-6B-100d. The pipeline mean-pools Stanford GloVe word vectors per chunk (same tokenization as the companion slicer doc: lowercase, \b[a-z']+\b). Vectors are always cached under ~/.slicer/models/.

First run: the library downloads the needed vectors from Stanford’s glove.6B.zip using HTTP Range requests rather than fetching the whole archive. This requires outbound HTTPS access to nlp.stanford.edu. Respect Stanford’s terms of use. Later runs load from disk only.

Collection dimension is 50 or 100 respectively — do not mix with OpenAI collections of the same name.
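The mean-pooling step described above can be sketched in a few lines. This is an illustrative stand-in, not the shipped loader: a toy vocabulary dict replaces the cached Stanford files, and the real implementation lives in hecvec.glove_embeddings.

```python
import re

# Toy stand-in for a loaded GloVe vocabulary: word -> vector.
# The real pipeline loads these from the files cached under ~/.slicer/models/.
VOCAB = {
    "hello": [0.1, 0.2],
    "world": [0.3, 0.4],
}

TOKEN_RE = re.compile(r"\b[a-z']+\b")  # the tokenization described above

def glove_mean_pool(chunk: str, vocab: dict[str, list[float]]) -> list[float]:
    """Lowercase, tokenize, look up known words, and mean-pool their vectors."""
    tokens = TOKEN_RE.findall(chunk.lower())
    vectors = [vocab[t] for t in tokens if t in vocab]
    if not vectors:
        # All-OOV chunk: fall back to a zero vector of the model dimension.
        return [0.0] * len(next(iter(vocab.values())))
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
```

Out-of-vocabulary words are simply skipped, which is why short, symbol-heavy chunks can embed poorly with GloVe.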


Embedding models and providers

Set embedding_model on Slicer / Slicer.slice(...) / Slicer.slice_path(...). The provider is chosen automatically from that string (infer_embedding_provider).

  • OpenAI: any model id whose name starts with text-embedding-, text-search-, or text-similarity- (passed through to the OpenAI API). Common examples: text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002. Requires an API key (llm_token or OPENAI_API_KEY). Vector size is model-dependent (e.g. 1536 / 3072 for many text-embedding-3-* models — see OpenAI docs).
  • GloVe (local): glove-6B-50d or glove-6B-100d (case-insensitive). No API key. Vector size 50 or 100.

Other ids (e.g. Gemini-style names) are rejected with ValueError until supported.
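The prefix rule above can be sketched as follows. This is an illustrative stand-in; the shipped function is hecvec.embeddings.infer_embedding_provider.

```python
# Prefixes and model ids documented in the provider table above.
OPENAI_PREFIXES = ("text-embedding-", "text-search-", "text-similarity-")
GLOVE_MODELS = {"glove-6b-50d", "glove-6b-100d"}  # matched case-insensitively

def infer_provider(embedding_model: str) -> str:
    """Return 'openai' or 'glove', or raise ValueError for unsupported ids."""
    name = embedding_model.lower()
    if name.startswith(OPENAI_PREFIXES):
        return "openai"
    if name in GLOVE_MODELS:
        return "glove"
    raise ValueError(f"Unsupported embedding model: {embedding_model!r}")
```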


Workflow

The main entry point is Slicer.slice(path=..., **kwargs). It runs seven logged steps, numbered 0–6:

Step Description
0 Resolve path, resolve collection name (base_name + _ + chunking_method).
1 Discover files: single .txt/.md file or recursive list under a directory.
2 Backend connectivity check: connect to the configured backend and fail fast before read/chunk/embed. The client/connection is reused for final write.
3 Read file contents as text (UTF-8 with fallbacks).
4 Chunk using the chosen method (token, text, semantic, or llm).
5 Generate embeddings. Provider is inferred from embedding_model (e.g. text-embedding-3-small → OpenAI; glove-6B-50d → local GloVe).
6 Write vectors to backend. Chroma appends to existing collection names; pgvector writes into the configured table_name.

Progress is logged from [0/6] through [6/6] with per-step timings.
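The step sequence and its log markers can be sketched as below. The stage bodies are stubbed out; the real orchestration is hecvec.pipeline.Slicer.

```python
def run_pipeline(path: str) -> list[str]:
    """Emit the [k/6] progress markers for the seven documented stages."""
    steps = [
        "resolve path and collection name",
        "discover .txt/.md files",
        "check backend connectivity",
        "read files as text",
        "chunk documents",
        "generate embeddings",
        "write vectors to backend",
    ]
    logs = []
    for k, label in enumerate(steps):
        # ... real work for this stage would run here ...
        line = f"[{k}/6] {label}"
        logs.append(line)
        print(line)
    return logs
```

Because the connectivity check is step 2, a misconfigured backend fails before any reading, chunking, or embedding work is done.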


Quick start

import hecvec

# Default: token chunking, Chroma at localhost:8000
result = hecvec.Slicer.slice(
    path="/path/to/folder_or_file",
    embedding_model="text-embedding-3-small",
)
# → {"files": N, "chunks": M, "collection": "folder_or_file_name_token_cs200"}

# Custom host/port and semantic chunking
result = hecvec.Slicer.slice(
    path="/path/to/docs",
    host="localhost",
    port=8000,
    embedding_model="text-embedding-3-small",
    chunking_method="semantic",
)

Or use an instance:

slicer = hecvec.Slicer(
    host="chroma",  # e.g. Docker Compose service name (see `.devcontainer/docker-compose.yml`)
    port=8000,
    embedding_model="text-embedding-3-small",
    chunking_method="token",
)
result = slicer.slice(path="/data/myfile.txt")

Run the test script (from repo root, with Chroma running and OPENAI_API_KEY set):

# Terminal 1: start Chroma
docker run -p 8000:8000 -v ./chroma-data:/chroma/chroma chromadb/chroma

# Terminal 2: run pipeline
uv run python scripts/test_slice.py
# Or with a path:
uv run python scripts/test_slice.py /path/to/file_or_folder

Parameters

All of these can be passed to Slicer(...) or to Slicer.slice(..., key=value).

Parameter Default Description
path (required) File or directory to process (.txt/.md only).
root path.parent (file) or path (dir) Safe root for resolving paths (used when listing under a directory).
collection_name "hecvec" Base name for the logical collection. If "hecvec", it is replaced by the file stem or directory name; the final name includes method + chunk size. Full config is recorded in collection/collections_info.md when a new collection is created.
server "chroma" Backend server: "chroma" | "chroma_cloud" | "pgvector".
host "localhost" Host for server="chroma" or server="pgvector".
port 8000 Port for server="chroma" (pgvector defaults to 5432 if left at 8000).
user None Optional auth user. For chroma: Basic Auth username (use with password). For pgvector: PostgreSQL user.
password None Optional auth password. For chroma: Basic Auth password. For pgvector: PostgreSQL password, or omit and set PGPASSWORD in .env — if both are missing, pgvector raises an error.
cloud_api_key None Chroma Cloud API key (only for server="chroma_cloud"). If not passed, hecvec reads it from .env/env.
cloud_tenant None Optional Chroma Cloud tenant (only for server="chroma_cloud"). If not passed, hecvec reads CHROMA_TENANT from .env/env.
cloud_database None Optional Chroma Cloud database (only for server="chroma_cloud"). If not passed, hecvec reads CHROMA_DATABASE from .env/env.
database None PostgreSQL database name (required for server="pgvector").
table_name None Physical table name (required for server="pgvector").
chunking_method "token" Chunking strategy: "token" | "text" | "semantic" | "llm". See Chunking methods.
chunk_size 200 Target chunk size (tokens for token, characters for text; also used by llm).
chunk_overlap 0 Overlap between consecutive chunks.
encoding_name "cl100k_base" Tiktoken encoding for token chunking.
embedding_model "text-embedding-3-small" Embedding model id; provider is inferred (OpenAI for text-embedding-*; GloVe for glove-6B-50d / glove-6B-100d). Aliases: llm_model, embeding_model (deprecated typo).
batch_size 100 Batch size for OpenAI embedding API calls (ignored for GloVe).
llm_token from env / .env OpenAI API key for OpenAI embeddings and for semantic / llm chunking. Not required for GloVe-only runs with token or text chunking.
dotenv_path None Path to .env file for loading OPENAI_API_KEY.

API reference: methods and parameters

Public methods and functions with their parameters. All are available from import hecvec unless a submodule is noted.

Pipeline

Slicer(root=None, collection_name="hecvec", server="chroma", host="localhost", port=8000, user=None, password=None, cloud_api_key=None, cloud_tenant=None, cloud_database=None, database=None, table_name=None, embedding_model="text-embedding-3-small", chunk_size=200, chunk_overlap=0, encoding_name="cl100k_base", batch_size=100, llm_token=None, dotenv_path=None)

Parameter Type Default Description
root str | Path | None None (→ cwd) Safe root for path resolution.
collection_name str "hecvec" Base collection name; see Collection naming.
server DbType "chroma" Backend server: "chroma" | "chroma_cloud" | "pgvector".
host str "localhost" Host when server="chroma" or server="pgvector".
port int 8000 Port when server="chroma"; pgvector uses 5432 default when not overridden.
user str | None None Chroma Basic Auth user or PostgreSQL user (by backend).
password str | None None Chroma Basic Auth password. For pgvector: password or read from .env (see pgvector).
cloud_api_key str | None None Chroma Cloud API key (required when server="chroma_cloud").
cloud_tenant str | None None Optional Chroma Cloud tenant. If provided, cloud_database must also be provided.
cloud_database str | None None Optional Chroma Cloud database. If provided, cloud_tenant must also be provided.
database str | None None PostgreSQL database (required when server="pgvector").
table_name str | None None pgvector physical table name (required when server="pgvector").
embedding_model str "text-embedding-3-small" Embedding model id; provider is inferred (see infer_embedding_provider). Aliases: llm_model, embeding_model (deprecated typo).
chunk_size int 200 Chunk size (tokens or chars by method).
chunk_overlap int 0 Overlap between chunks.
encoding_name str "cl100k_base" Tiktoken encoding.
batch_size int 100 Embedding batch size.
llm_token str | None None OpenAI API key for embeddings; else OPENAI_API_KEY from env / .env.
dotenv_path str | Path | None None Path to .env file.

Slicer.slice(path, *, root=None, collection_name="hecvec", server="chroma", host="localhost", port=8000, user=None, password=None, cloud_api_key=None, cloud_tenant=None, cloud_database=None, database=None, table_name=None, embedding_model="text-embedding-3-small", chunk_size=200, chunk_overlap=0, encoding_name="cl100k_base", batch_size=100, chunking_method="token", llm_token=None, dotenv_path=None)

Same parameters as above, plus:

Parameter Type Default Description
path str | Path (required) File or directory to process (.txt/.md).

Returns: dict with files, chunks, collection, and optionally message (e.g. when collection already exists).


Listing and reading

ListDir(root)

Parameter Type Description
root str | Path Root directory; all listed paths are under this.

ListDir.listdir(path=".") → list[str]
List one level under path (relative to root). Returns sorted relative path strings (dirs first, then files).

Parameter Type Default Description
path str | Path "." Path under root.

ListDir.listdir_recursive(path=".", max_depth=None) → list[str]
List all entries under path recursively.

Parameter Type Default Description
path str | Path "." Path under root.
max_depth int | None None Max depth; None = unlimited.

ListDirTextFiles(root, allowed_extensions=(".txt", ".md"))
Subclass of ListDir that filters to .txt/.md only.

ListDirTextFiles.filter_txt_md(relative_paths) → list[Path]
From relative path strings, return full paths of files with allowed extensions.

ListDirTextFiles.listdir_txt_md(path=".") → list[Path]
One-level list of .txt/.md files under path.

ListDirTextFiles.listdir_recursive_txt_md(path=".", max_depth=None) → list[Path]
Recursive list of .txt/.md files under path.

ReadText(paths, encoding="utf-8")

Parameter Type Default Description
paths list[str] | list[Path] (required) File paths to read.
encoding str "utf-8" Preferred encoding; fallbacks are latin-1, cp1252.

ReadText.read_all() → list[tuple[Path, str]]
Read all files; returns (path, text) pairs. Skips non-files and unreadable paths.

ReadText is iterable: for path, text in reader: yields (path, text).


Chunking

chunk_text(text, chunk_size=400, chunk_overlap=0, separators=None) → list[str]
Single-document recursive character split. Requires hecvec[chunk].

Note: Slicer.slice does not expose separators directly; it uses the defaults from the low-level chunker.

Parameter Type Default Description
text str (required) Document text.
chunk_size int 400 Max characters per chunk.
chunk_overlap int 0 Overlap.
separators list[str] | None None Split order; default ["\n\n\n", "\n\n", "\n", ". ", " ", ""].
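A minimal recursive splitter over the default separator order might look like this. It is a sketch only: the shipped chunker (hecvec[chunk]) wraps a full recursive character splitter and also handles chunk_overlap, which is omitted here.

```python
DEFAULT_SEPARATORS = ["\n\n\n", "\n\n", "\n", ". ", " ", ""]

def recursive_split(text: str, chunk_size: int = 400,
                    separators=DEFAULT_SEPARATORS) -> list[str]:
    """Split on the coarsest separator that fits, recursing to finer ones."""
    if len(text) <= chunk_size:
        return [text] if text else []
    sep, *rest = separators
    if sep == "":
        # Last resort: hard cut at chunk_size characters.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    chunks, current = [], ""
    for piece in text.split(sep):
        candidate = current + sep + piece if current else piece
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            if len(piece) > chunk_size:
                # Piece itself too big: retry with the next, finer separator.
                chunks.extend(recursive_split(piece, chunk_size, rest))
                current = ""
            else:
                current = piece
    if current:
        chunks.append(current)
    return chunks
```

The coarse-to-fine order is why paragraph boundaries survive when they fit, and only oversized paragraphs get broken at lines, sentences, or words.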

chunk_documents(path_and_texts, chunk_size=400, chunk_overlap=0, separators=None) → list[dict]
Multiple documents, recursive character split. Each dict: {"path", "chunk_index", "content"}. Requires hecvec[chunk].

Note: Slicer.slice does not expose separators directly; it uses the defaults from the low-level chunker.

token_chunk_text(text, chunk_size=200, chunk_overlap=0, encoding_name="cl100k_base") → list[str]
Single-document token split (tiktoken).

Parameter Type Default Description
text str (required) Document text.
chunk_size int 200 Max tokens per chunk.
chunk_overlap int 0 Overlap.
encoding_name str "cl100k_base" Tiktoken encoding.

token_chunk_documents(path_and_texts, chunk_size=200, chunk_overlap=0, encoding_name="cl100k_base") → tuple[list[str], list[str]]
Multiple documents, token split. Returns (ids, documents) with ids like chunk_0, chunk_1, ...
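The chunk_size/chunk_overlap interaction is a plain sliding window over the token stream. A sketch, using a pre-tokenized list in place of tiktoken's cl100k_base tokens so the example has no dependencies:

```python
def sliding_window_chunks(tokens: list[str], chunk_size: int,
                          chunk_overlap: int = 0) -> list[list[str]]:
    """Windows of chunk_size tokens, each starting chunk_overlap tokens
    before the previous one ends."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    chunks, i = [], 0
    while i < len(tokens):
        chunks.append(tokens[i:i + chunk_size])
        if i + chunk_size >= len(tokens):
            break  # final window reached the end; no redundant tail window
        i += step
    return chunks
```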

chunk_documents_by_method(path_and_texts, method="token", *, chunk_size=200, chunk_overlap=0, encoding_name="cl100k_base", separators=None, openai_api_key=None, semantic_max_chunk_size=400, semantic_min_chunk_size=50, llm_model="gpt-4o-mini") → tuple[list[str], list[str]]
Chunk by method: "token" | "text" | "semantic" | "llm". Returns (ids, documents).

Note: this is a low-level helper with advanced knobs. Slicer.slice forwards only: chunk_size, chunk_overlap, encoding_name, chunking_method (as method), and (when needed) openai_api_key.
So you only need chunk_size + chunk_overlap at the Slicer level; separators, semantic_max_chunk_size, semantic_min_chunk_size, and llm_model stay at their defaults unless you call this helper directly.

Parameter Type Default Description
path_and_texts list[tuple[Path, str]] (required) (path, text) pairs.
method ChunkingMethod "token" "token" | "text" | "semantic" | "llm".
chunk_size int 200 Used by token, text, llm.
chunk_overlap int 0 Used by token, text.
encoding_name str "cl100k_base" Token method.
separators list[str] | None None Text method only.
openai_api_key str | None None Required for semantic/llm.
semantic_max_chunk_size int 400 Semantic method.
semantic_min_chunk_size int 50 Semantic method.
llm_model str "gpt-4o-mini" LLM method.

Embeddings and Chroma

embed_texts(texts, *, api_key=None, embedding_model="text-embedding-3-small", batch_size=100) → list[list[float]]
Embeddings for a list of strings. Provider is inferred from embedding_model. For OpenAI, api_key is required; for GloVe, omit api_key (cache: ~/.slicer/models).

embed_texts_glove(texts, *, model="glove-6B-50d") → list[list[float]]
Lower-level GloVe batch embed (same cache and pooling as the pipeline).

infer_embedding_provider(embedding_model) → "openai" | "glove"
Returns the provider for a model name, or raises ValueError if unknown / unsupported (e.g. Gemini ids are rejected until supported).

Parameter Type Default Description
texts list[str] (required) Texts to embed.
api_key str | None None Required for OpenAI; unused for GloVe.
embedding_model str "text-embedding-3-small" Model id. Aliases when calling: llm_model, model.
batch_size int 100 OpenAI request batch size.
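batch_size just slices the input list before the API calls; each slice becomes one OpenAI embeddings request. A sketch of the slicing (no OpenAI client is imported here):

```python
def batched(texts: list[str], batch_size: int = 100) -> list[list[str]]:
    """Slices of at most batch_size texts, in input order."""
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
```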

get_client(host="localhost", port=8000, user=None, password=None) → Chroma HttpClient
Connects to the Chroma server at host:port. Raises if nothing is listening. If user and password are provided, uses Chroma Basic Auth.

Parameter Type Default Description
host str "localhost" Chroma server host.
port int 8000 Chroma server port.
user str | None None Basic Auth username for Chroma. Use together with password.
password str | None None Basic Auth password for Chroma. Use together with user.

get_or_create_collection(client, name, metadata=None)
Get or create a Chroma collection (default cosine similarity). metadata default: {"hnsw:space": "cosine"}.

add_documents(client, collection_name, ids, embeddings, documents) → dict
Add documents to a collection. Returns {"collection_existed": bool}.

list_collections(host="localhost", port=8000, *, server="chroma", ...) → list[tuple[str, int]]
List collection names and document counts: [(name, count), ...].

  • Self-hosted (server="chroma", default): uses host / port (and optional user / password). Same as Slicer(..., server="chroma").
  • Chroma Cloud (server="chroma_cloud"): pass cloud_api_key (or use .env / env vars); host and port are not used.
  • pgvector (server="pgvector"): uses host / port / user / password + required database and table_name; returns table counts for that table name (and legacy prefixed tables, if present).

If you wrote to Cloud with Slicer.slice(..., server="chroma_cloud"), list with list_collections(server="chroma_cloud", dotenv_path="...") (or pass the same cloud kwargs), not with host="localhost", port=8000.


Environment

load_dotenv_if_available(dotenv_path=None)
Load .env into os.environ if python-dotenv is installed. No-op otherwise.

load_openai_key(dotenv_path=None) → str | None
Load .env if available, then return os.environ.get("OPENAI_API_KEY").


Chunking methods

Method Description Requires
token Split by token count (tiktoken, cl100k_base). Fast and deterministic.
text Recursive character splitter (paragraph → line → sentence, etc.).
semantic Embed small segments, then group by similarity (dynamic programming) into larger chunks. OPENAI_API_KEY
llm Use an LLM to choose split points for thematic sections. OPENAI_API_KEY

Use chunking_method="token" or "text" to avoid API calls during chunking. Use "semantic" or "llm" for more coherent, topic-aware chunks (at the cost of extra OpenAI usage).


Self-hosted Chroma server (server="chroma")

Use this when you run Chroma yourself (Docker, EC2, devcontainer, etc). hecvec connects to a Chroma server over HTTP using host and port.

Start a server (e.g. Docker):

docker run -p 8000:8000 -v ./chroma-data:/chroma/chroma chromadb/chroma

-v is a Docker bind mount: -v ./chroma-data:/chroma/chroma maps your local ./chroma-data directory into the container.

In practice, “persistent data” means Chroma’s database files (collections, vectors, metadata) are written to disk and survive docker stop / docker start (and even container recreation), so reruns can append without losing history.

Parameters used by server="chroma":

  • Required: host, port
  • Optional: user, password (Basic Auth header; both-or-neither)

Authentication (recommended)

Modern Chroma releases (the chromadb/chroma:latest image) do not reliably support the older CHROMA_SERVER_AUTHN_* env-var based auth providers. If you need enforced auth on a self-hosted instance (e.g. EC2), the simplest reliable pattern is:

  • Run Chroma on a private network interface (or at least don’t expose it directly).
  • Put a reverse proxy in front that enforces Basic Auth (or token/JWT) and TLS.

When you pass user= and password= to hecvec.Slicer(...), hecvec will send a standard HTTP Authorization: Basic ... header, which works with a proxy-enforced Basic Auth setup.
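Concretely, a Basic Auth header is just base64 of "user:password". A sketch of what an equivalent header looks like (the Chroma client builds this for you from user/password):

```python
import base64

def basic_auth_header(user: str, password: str) -> dict[str, str]:
    """Standard HTTP Basic Auth header for a user/password pair."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}
```

Because the credentials are only base64-encoded, not encrypted, this is exactly why the reverse-proxy setup above should also terminate TLS.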

Two common ways to run Chroma persistently:

  1. Plain Docker on the host: run the docker run ... -v ./chroma-data:... command above.
  2. Inside the provided devcontainer: use the compose setup in .devcontainer/docker-compose.yml (the devcontainer “compose” config in this repo). It starts a chroma service with a persistent Docker volume (chroma-data) mounted at /chroma/data and IS_PERSISTENT=TRUE, so reopening the devcontainer keeps your vectors.

Containers: If the app runs in a devcontainer and Chroma is in the same Docker Compose stack, use the service name as host (in this repo: host="chroma"). If Chroma is on the host and the app in the container, use host="host.docker.internal".


Chroma Cloud (server="chroma_cloud")

Use this when you want a managed Chroma deployment with built-in auth and TLS. hecvec uses the Chroma Python client’s Cloud client under the hood.

Parameters used by server="chroma_cloud":

  • Required: cloud_api_key (or set via .env/env)
  • Optional: cloud_tenant and cloud_database (either provide both, or omit both)
  • Ignored: host, port, user, password

Environment / .env variables (loaded automatically):

  • CHROMA_CLOUD_API_KEY=... (preferred) or CHROMA_API_KEY=...
  • CHROMA_TENANT=... (optional; used when cloud_tenant not passed)
  • CHROMA_DATABASE=... (optional; used when cloud_database not passed)

Example:

import hecvec

# Minimal: only API key needed (tenant/database optional depending on your Cloud setup)
slicer = hecvec.Slicer(server="chroma_cloud", cloud_api_key="ck-...")
result = slicer.slice(path="/path/to/docs")

PostgreSQL pgvector (server="pgvector")

Use this when you want PostgreSQL as the vector store.

Requirements:

  • PostgreSQL reachable with credentials.
  • pgvector extension available in that PostgreSQL instance.
  • Python dependency: included in pip install hecvec (no extra install needed).

The SDK handles these automatically at write time:

  • CREATE EXTENSION IF NOT EXISTS vector
  • create table (if missing) for the resolved collection
  • batched writes and HNSW index creation

Parameters used by server="pgvector":

  • Required: database, table_name
  • Reused: host, port, user, password (password may be omitted if set via .env; see below)
  • Ignored: cloud_api_key, cloud_tenant, cloud_database

Password from .env: if you do not pass password=, hecvec loads .env (via dotenv_path or the default) and uses PGPASSWORD. If password is still missing after that, a ValueError is raised. A passed-in password= always wins.
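The precedence just described can be sketched as follows (illustrative only; the env dict parameter is an assumption for testability, not part of the hecvec API):

```python
import os
from typing import Optional

def resolve_pg_password(password: Optional[str], env: dict = os.environ) -> str:
    """Explicit password= wins; else PGPASSWORD from env/.env; else error."""
    if password is not None:
        return password
    from_env = env.get("PGPASSWORD")
    if from_env:
        return from_env
    raise ValueError("pgvector requires password= or PGPASSWORD in .env/env")
```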

Table behavior (current): single configured table

  • Final table name:
    • "{table_name}"
  • No automatic _collection_name suffixing.

Example:

import hecvec

slicer = hecvec.Slicer(
    server="pgvector",
    host="localhost",
    port=5432,
    user="postgres",
    password="postgres",
    database="hecvec",
    table_name="embeddings",
    embedding_model="glove-6B-50d",
)

result = slicer.slice(
    path="/path/to/docs",
    collection_name="lotr_test2",
    chunking_method="text",
    chunk_size=200,
)
print(result)

Environment and API key

  • OpenAI: The pipeline (and semantic / llm chunking) needs an API key. It is read in this order:

    1. Argument llm_token=...
    2. Environment variable OPENAI_API_KEY
    3. A .env file in the current working directory (loaded via python-dotenv when you use hecvec)
  • .env: Create a .env in your project root (or set dotenv_path to point to one):

    OPENAI_API_KEY=sk-...
    
  • PostgreSQL (pgvector) password: you can omit password= in code and put it in .env, for example:

    PGPASSWORD=your_password
    
  • Do not commit .env or expose the key in logs or source code.


Collection naming

  • If you pass collection_name="hecvec" (default), the base name is taken from the input:

    • Single file: path.stem (e.g. mydoc)
    • Directory: path.name (e.g. docs)
  • The final collection name is always:

    {base_name}_{chunking_method}_cs{chunk_size}

    Examples:

    • token: mydoc_token_cs200
    • text: mydoc_text_cs400
    • llm/semantic: mydoc_llm_cs200
  • Detailed collection configuration is persisted to:

    • collection/collections_info.md
    • A new row is appended only when a collection is newly created.
  • If a collection with that name already exists, the pipeline appends new chunks to it and increments append_runs in collection/collections_info.md when possible.
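The naming rule above, as code. This is a hedged stand-in for the resolution done inside Slicer; the real code checks whether the path is a directory, while this sketch uses the file suffix as a heuristic.

```python
from pathlib import Path

def resolve_collection_name(path: Path, collection_name: str,
                            chunking_method: str, chunk_size: int) -> str:
    """{base_name}_{chunking_method}_cs{chunk_size}, deriving base from the
    input when collection_name is the "hecvec" default."""
    if collection_name == "hecvec":
        base = path.stem if path.suffix else path.name  # file stem or dir name
    else:
        base = collection_name
    return f"{base}_{chunking_method}_cs{chunk_size}"
```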


Building blocks

You can use the pipeline step-by-step.

List and read:

from pathlib import Path
from hecvec import ListDir, ListDirTextFiles, ReadText

root = Path("/path/to/repo")
lister = ListDir(root=root)
for rel in lister.listdir("."):
    print(rel)

text_lister = ListDirTextFiles(root=root)
paths = text_lister.listdir_recursive_txt_md("docs")
reader = ReadText(paths)
for path, text in reader:
    print(path, len(text))

Chunk only (e.g. recursive character, with hecvec[chunk]):

from hecvec import ListDirTextFiles, ReadText
from hecvec.chunking import chunk_documents

paths = ListDirTextFiles(root=root).listdir_recursive_txt_md(".")
path_and_text = ReadText(paths).read_all()
chunks = chunk_documents(path_and_text)  # list of {"path", "chunk_index", "content"}

Token chunk + embed + list Chroma collections:

from hecvec import token_chunk_text, embed_texts, list_collections

chunks = token_chunk_text("Some long document...", chunk_size=200)
vecs = embed_texts(chunks, api_key="sk-...", embedding_model="text-embedding-3-small")
# Self-hosted Chroma on localhost:8000
names_and_counts = list_collections(host="localhost", port=8000)
# Chroma Cloud (same backend as Slicer with server="chroma_cloud")
names_and_counts = list_collections(server="chroma_cloud", dotenv_path=".env")
# pgvector tables (table name + counts)
names_and_counts = list_collections(
    server="pgvector",
    host="localhost",
    port=5432,
    user="postgres",
    password="postgres",
    database="hecvec",
    table_name="embeddings",
)

CLI (list directory under a root):

hecvec-listdir [path] [root]
# or
python -m hecvec.cli [path] [root]

Module layout

Module Responsibility
hecvec.env Load .env and OPENAI_API_KEY
hecvec.listdir List dirs under a safe root; filter .txt/.md
hecvec.reading Read files as text (UTF-8 / latin-1 / cp1252 fallback)
hecvec.token_splitter Token-based chunking (tiktoken)
hecvec.chunking Recursive character chunking (chunk_documents, chunk_text)
hecvec.chunkers Multi-method chunking: token, text, semantic, llm
hecvec.embeddings embed_texts, infer_embedding_provider
hecvec.glove_embeddings embed_texts_glove, load_glove_vocab, …
hecvec.chroma_client Chroma client, get/create collection, add documents
hecvec.backend_list List backend collections/tables and counts
hecvec.pgvector_client pgvector connection, table bootstrap, batched writes
hecvec.pipeline Orchestrator: Slicer and slice(path=...)

Development

From the repo root:

uv sync
uv run python -c "from hecvec import ListDir; print(ListDir('.').listdir('.'))"

License

MIT
