GPU Cluster Health Management

Trainy Konduktor Logo

Built on Kubernetes, Konduktor uses existing open source tools to build a platform that makes it easy for ML researchers to submit batch jobs and for administrative/infra teams to manage GPU clusters.

How it works

Konduktor combines a number of open source projects. Where a tool exists under an MIT, Apache, or other compatible open license, we use it and contribute back. Where we see gaps in the tooling, we build it ourselves.

Architecture

Konduktor can be self-hosted and run on any certified Kubernetes distribution, or managed by us. Contact us at founders@trainy.ai if you are interested in the managed version. We're focused on tooling for clusters with NVIDIA cards for now, but in the future we may expand our scope to support other accelerators.

[architecture diagram]

For ML researchers

  • Konduktor CLI & SDK - a user-friendly batch job framework where users only specify the resource requirements of their job and a launch script, making it simple to scale work across multiple nodes. Works with most ML application frameworks out of the box.
num_nodes: 100

resources:
  accelerators: H100:8
  cloud: kubernetes
  labels:
    kueue.x-k8s.io/queue-name: user-queue
    kueue.x-k8s.io/priority-class: low-priority

run: |
  torchrun \
  --nproc_per_node 8 \
  --rdzv_id=1 --rdzv_endpoint=$master_addr:1234 \
  --rdzv_backend=c10d --nnodes $num_nodes \
  torch_ddp_benchmark.py --distributed-backend nccl
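To make the resource semantics of the spec above concrete: `accelerators: H100:8` requests eight H100s per node, so with `num_nodes: 100` the job as a whole asks for 800 GPUs. The sketch below restates the example as a plain dict and derives that total; the `summarize_task` helper is our own illustration, not part of the Konduktor SDK.

```python
# The task spec from the example above, expressed as a plain dict.
task_spec = {
    "num_nodes": 100,
    "resources": {
        "accelerators": "H100:8",  # <GPU type>:<count per node>
        "cloud": "kubernetes",
        "labels": {
            "kueue.x-k8s.io/queue-name": "user-queue",
            "kueue.x-k8s.io/priority-class": "low-priority",
        },
    },
    "run": "torchrun ... torch_ddp_benchmark.py --distributed-backend nccl",
}

def summarize_task(spec: dict) -> dict:
    """Derive total GPU demand from the spec's resource requirements.

    Illustrative helper only -- the real Konduktor SDK may parse specs
    differently.
    """
    gpu_type, _, count = spec["resources"]["accelerators"].partition(":")
    gpus_per_node = int(count) if count else 1
    return {
        "gpu_type": gpu_type,
        "gpus_per_node": gpus_per_node,
        "total_gpus": spec["num_nodes"] * gpus_per_node,
    }

print(summarize_task(task_spec))
# {'gpu_type': 'H100', 'gpus_per_node': 8, 'total_gpus': 800}
```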

For cluster administrators

  • DCGM Exporter, GPU Operator, Network Operator - For installing the NVIDIA driver and container runtime, and for exporting node health metrics.
  • Kueue - For centralized creation of job queues, gang scheduling, and resource quotas shared across projects.
  • Prometheus - For publishing metrics about node health and workload queues.
  • OpenTelemetry - For pushing logs from each node.
  • Grafana, Loki - Visualizations for the metrics/logging stack.
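The `kueue.x-k8s.io/queue-name: user-queue` label in the task example maps onto Kueue objects that an administrator creates. A minimal illustrative setup might look like the following; the queue names match the example, but the flavor name and quota numbers are placeholders you would size to your cluster:

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  namespaceSelector: {}  # admit workloads from any namespace
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
    flavors:
    - name: default-flavor
      resources:
      - name: "cpu"
        nominalQuota: 1000
      - name: "memory"
        nominalQuota: 4Ti
      - name: "nvidia.com/gpu"
        nominalQuota: 800  # placeholder quota
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: user-queue  # the queue referenced by the task labels above
spec:
  clusterQueue: cluster-queue
```

Jobs submitted with the `user-queue` label are then admitted only when the ClusterQueue has quota available, which is how gang scheduling and fair sharing across projects are enforced.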

Community & Support

Development Setup

Prerequisites

  • Python 3.9+ (3.10+ recommended)
  • Poetry for dependency management
  • kubectl and access to a Kubernetes cluster (for integration/smoke tests)

Quick Start

# Clone the repository
git clone https://github.com/Trainy-ai/konduktor.git
cd konduktor

# Install dependencies (including dev tools)
poetry install --with dev

# Verify installation
poetry run konduktor --help

Running Tests

# Run unit tests
poetry run pytest tests/unit_tests/ -v

# Run smoke tests (requires Kubernetes cluster)
poetry run pytest tests/smoke_tests/ -v

Code Formatting

All code must pass linting before being merged. Run the format script to auto-fix issues:

bash format.sh

This runs:

  • ruff - Python linter and formatter
  • mypy - Static type checking

Local Kubernetes Cluster (Optional)

For running smoke tests locally, you can set up a kind cluster:

# Install kind and set up a local cluster with JobSet and Kueue
bash tests/kind_install.sh
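Before running the smoke tests, it can help to confirm the required CLI tools are on your PATH. This is a small preflight sketch of our own (not part of the Konduktor test suite); the tool names come from the setup steps above:

```python
import shutil

def missing_tools(tools: list[str]) -> list[str]:
    """Return the subset of required CLI tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# Tools referenced in the setup steps above; 'kind' is only needed
# for the local-cluster route.
required = ["poetry", "kubectl", "kind"]
gaps = missing_tools(required)
if gaps:
    print(f"missing tools: {', '.join(gaps)}")
else:
    print("all required tools found")
```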

