
GPU Cluster Health Management



Built on Kubernetes, Konduktor combines existing open source tools into a platform that makes it easy for ML researchers to submit batch jobs and for administrative/infra teams to manage GPU clusters.

How it works

Konduktor uses a combination of open source projects. Where tools exist under MIT, Apache, or another compatible open license, we use and, where possible, contribute to that tool. Where we see gaps in the tooling, we build it ourselves.

Architecture

Konduktor can be self-hosted on any certified Kubernetes distribution, or managed by us. Contact us at founders@trainy.ai if you are interested in the managed version. We're focused on tooling for clusters with NVIDIA cards for now, but in the future we may expand our scope to support other accelerators.


For ML researchers

  • Konduktor CLI & SDK - a user-friendly batch job framework: users only specify the resource requirements of their job and a launch script, making it simple to scale work across multiple nodes. Works with most ML application frameworks out of the box.
num_nodes: 100

resources:
  accelerators: H100:8
  cloud: kubernetes
  labels:
    kueue.x-k8s.io/queue-name: user-queue
    kueue.x-k8s.io/priority-class: low-priority

run: |
  torchrun \
  --nproc_per_node 8 \
  --rdzv_id=1 --rdzv_endpoint=$master_addr:1234 \
  --rdzv_backend=c10d --nnodes $num_nodes \
  torch_ddp_benchmark.py --distributed-backend nccl
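
A spec like the one above is typically saved to a file and submitted through the CLI. A sketch, assuming the spec is saved as task.yaml (the file name is illustrative; check konduktor --help for the exact subcommands available in your version):

# Submit the job spec to the cluster; Kueue admits it
# against the queue named in the kueue.x-k8s.io/queue-name label
konduktor launch task.yaml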

For cluster administrators

  • DCGM Exporter, GPU operator, Network Operator - For installing NVIDIA driver, container runtime, and exporting node health metrics.
  • Kueue - centralized creation of job queues, gang-scheduling, and resource quotas and sharing across projects.
  • Prometheus - For publishing metrics about node health and workload queues.
  • OpenTelemetry - For pushing logs from each node.
  • Grafana, Loki - Dashboards and visualizations for the metrics/logging stack.
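
The user-queue label in the job spec above maps to a Kueue LocalQueue, which admins back with a ClusterQueue that holds the shared resource quota. A minimal sketch of those admin-side objects, assuming a single default ResourceFlavor and illustrative quota values (names here are examples, not Konduktor defaults):

apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  namespaceSelector: {}  # admit workloads from all namespaces
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
    flavors:
    - name: default-flavor
      resources:
      - name: "cpu"
        nominalQuota: 1000
      - name: "memory"
        nominalQuota: 4000Gi
      - name: "nvidia.com/gpu"
        nominalQuota: 800
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: user-queue      # matches the queue-name label in the job spec
  namespace: default
spec:
  clusterQueue: cluster-queue

Jobs labeled with kueue.x-k8s.io/queue-name: user-queue are gang-scheduled against the ClusterQueue's quota, so multi-node jobs only start once all their pods can be admitted.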

Community & Support

Development Setup

Prerequisites

  • Python 3.9+ (3.10+ recommended)
  • Poetry for dependency management (installation guide)
  • kubectl and access to a Kubernetes cluster (for integration/smoke tests)

Quick Start

# Clone the repository
git clone https://github.com/Trainy-ai/konduktor.git
cd konduktor

# Install dependencies (including dev tools)
poetry install --with dev

# Verify installation
poetry run konduktor --help

Running Tests

# Run unit tests
poetry run pytest tests/unit_tests/ -v

# Run smoke tests (requires Kubernetes cluster)
poetry run pytest tests/smoke_tests/ -v

Code Formatting

All code must pass linting before being merged. Run the format script to auto-fix issues:

bash format.sh

This runs:

  • ruff - Python linter and formatter
  • mypy - Static type checking

Local Kubernetes Cluster (Optional)

For running smoke tests locally, you can set up a kind cluster:

# Install kind and set up a local cluster with JobSet and Kueue
bash tests/kind_install.sh
