Veeksha

A framework for holistic evaluation of LLM inference systems

Veeksha is a high-fidelity benchmarking framework for LLM inference systems. Whether you're optimizing a production deployment, comparing serving backends, or running capacity planning experiments, Veeksha lets you measure what matters to you: realistic multi-turn conversations, agentic workflows, high-frequency stress tests, or targeted microbenchmarks. One tool, any workload.

From isolated requests to complex agentic sessions, Veeksha captures the full breadth of modern LLM workloads.

👉 Why Veeksha? — Learn what sets Veeksha apart
📚 Documentation — Full guides and API reference

Quick start

In a fresh environment (Python 3.14t recommended for true parallelism):

Install from PyPI:

pip install veeksha

Run a benchmark against an OpenAI-compatible endpoint:

python -Xgil=0 -m veeksha.benchmark \
    --client-type openai_chat_completions \
    --openai-chat-completions-client-api-base http://localhost:8000/v1 \
    --openai-chat-completions-client-model meta-llama/Llama-3.2-1B-Instruct \
    --traffic-scheduler-type rate \
    --rate-traffic-scheduler-interval-generator-type poisson \
    --rate-traffic-scheduler-poisson-interval-generator-arrival-rate 5.0 \
    --runtime-benchmark-timeout 60

Or use a YAML configuration file:

python -Xgil=0 -m veeksha.benchmark --benchmark-config-from-file my_benchmark.veeksha.yml
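The exact configuration schema is defined by Veeksha itself; as a rough sketch, assuming the YAML keys mirror the CLI flag groups shown above (an unverified assumption), my_benchmark.veeksha.yml might look like:

# my_benchmark.veeksha.yml -- illustrative only: key names assume a direct
# mapping from the CLI flags above and may differ from the real schema
client:
  type: openai_chat_completions
  api_base: http://localhost:8000/v1
  model: meta-llama/Llama-3.2-1B-Instruct
traffic_scheduler:
  type: rate
  interval_generator:
    type: poisson
    arrival_rate: 5.0
runtime:
  benchmark_timeout: 60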

Installation from source

git clone https://github.com/project-vajra/veeksha.git
cd veeksha

# Install uv if needed
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create environment (Python 3.14t recommended for true parallelism)
uv venv --python 3.14t
source .venv/bin/activate
uv pip install -e .
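
To confirm the editable install works, you can invoke the benchmark module directly; a minimal smoke test (assuming the standard argparse --help flag is exposed):

python -Xgil=0 -m veeksha.benchmark --help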

Download files

Download the file for your platform.

Source Distribution

veeksha-0.2.1.tar.gz (567.5 kB)

Built Distribution

veeksha-0.2.1-py3-none-any.whl (235.8 kB)

File details

Details for the file veeksha-0.2.1.tar.gz.

File metadata

  • Download URL: veeksha-0.2.1.tar.gz
  • Size: 567.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for veeksha-0.2.1.tar.gz:

  • SHA256: 757b024793d36347cf6ab936baad6432e6790167121aedc141245972c323bb8a
  • MD5: a8728f943329b6f015fade578b2ebfbb
  • BLAKE2b-256: 377537fdb9621556aeb4ad6632529222dbdb38ba46dd550b3babc3243fa5e03e

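To check a downloaded archive against these digests locally, standard tooling is enough; a minimal sketch using pip download and sha256sum (neither is Veeksha-specific):

# fetch the sdist without dependencies, then compare its SHA256 digest
pip download veeksha==0.2.1 --no-deps --no-binary :all:
sha256sum veeksha-0.2.1.tar.gz
# Expected: 757b024793d36347cf6ab936baad6432e6790167121aedc141245972c323bb8a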

Provenance

The following attestation bundles were made for veeksha-0.2.1.tar.gz:

Publisher: publish_release.yml on project-vajra/veeksha


File details

Details for the file veeksha-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: veeksha-0.2.1-py3-none-any.whl
  • Size: 235.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for veeksha-0.2.1-py3-none-any.whl:

  • SHA256: ac3c03aa5b078b341e5209fefe7aae5b5d3c404e9c7d7cad1d32c2a492df3670
  • MD5: 70c87393494737c73bc951215f501d6d
  • BLAKE2b-256: 3bb9c30f7789592ff0607311662a1eb20dbdb33f81eb132566f118fa36d83ec2

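To enforce these digests at install time, pip's hash-checking mode can be used; a minimal sketch, with a hypothetical requirements.txt carrying both digests from this page:

# requirements.txt -- illustrative; both hashes are taken from this page
veeksha==0.2.1 \
    --hash=sha256:757b024793d36347cf6ab936baad6432e6790167121aedc141245972c323bb8a \
    --hash=sha256:ac3c03aa5b078b341e5209fefe7aae5b5d3c404e9c7d7cad1d32c2a492df3670

# install with hash checking enabled
pip install --require-hashes -r requirements.txt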

Provenance

The following attestation bundles were made for veeksha-0.2.1-py3-none-any.whl:

Publisher: publish_release.yml on project-vajra/veeksha

