
Pyxis

High-performance, vendor-agnostic LLM inference library.

Status snapshot (2026-02-13)

  • Sprints 1-5 are complete (see docs/SPRINT_CHECKLIST.md).

Quick start (local, 3 processes)

  1. Install deps (in a venv):

    • pip install -e .

    Model downloads follow your local HuggingFace/Transformers cache settings.

  2. Start the core worker:

    • python scripts/run_core.py --model TinyLlama/TinyLlama-1.1B-Chat-v1.0

  3. Start the tokenizer + detokenizer worker:

    • python scripts/run_tokenizer.py --model TinyLlama/TinyLlama-1.1B-Chat-v1.0

  4. Start the HTTP API:

    • python scripts/run_api.py --host 127.0.0.1 --port 8000

  5. Verify streaming:

    • python scripts/verify_api_real.py
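
Once the three processes are up, a client can talk to the API directly. The sketch below is illustrative, not project code: the endpoint path and SSE framing are documented later in this README, but the request-body field names (`model`, `messages`, `stream`) are assumed to follow the OpenAI chat-completions convention.

```python
# Minimal streaming client sketch. Assumes an OpenAI-style request body;
# only the endpoint path and SSE chunk format are confirmed by these docs.
import json
import urllib.request

API_URL = "http://127.0.0.1:8000/v1/chat/completions"  # host/port from the quick start


def build_payload(prompt: str) -> dict:
    """OpenAI-style chat request; stream=True asks for SSE chunks."""
    return {
        "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }


def stream_chat(prompt: str) -> None:
    """POST the prompt and print streamed delta text until [DONE]."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            line = raw.decode().strip()
            if not line.startswith("data:"):
                continue  # skip blank keep-alive lines
            data = line[len("data:"):].strip()
            if data == "[DONE]":
                break
            chunk = json.loads(data)
            delta = chunk["choices"][0].get("delta", {})
            print(delta.get("content", ""), end="", flush=True)


# stream_chat("Say hello in one sentence.")  # requires all three workers running
```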

Useful environment variables

  • PYXIS_MODEL_PATH: model name/path for scripts/run_core.py (defaults to TinyLlama/TinyLlama-1.1B-Chat-v1.0)
  • PYXIS_TOKENIZER_PATH: tokenizer name/path for scripts/run_tokenizer.py (defaults to TinyLlama/TinyLlama-1.1B-Chat-v1.0)
  • PYXIS_TOKENIZER_INGRESS: API → tokenizer IPC address override
  • PYXIS_DETOK_TO_API: detok → API IPC address override
  • PYXIS_CORE_REQUEST_QUEUE_SIZE: max queued generation requests inside core worker (default 1024)
  • PYXIS_MAX_INFLIGHT_REQUESTS: max concurrent API streaming requests before 429 overloaded (default 128)
  • PYXIS_PER_REQUEST_QUEUE_MAXSIZE: per-request detok queue bound in API (default 128)
  • PYXIS_STREAM_IDLE_TIMEOUT_S: stream idle timeout before API emits an error chunk (default 30)
  • PYXIS_TOKENIZER_READY_WAIT_S: wait for tokenizer ingress readiness per enqueue (default 1.0)
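
As a sketch of how these knobs resolve, the snippet below reads each variable with its documented default. The variable names and defaults come from the list above; the parsing helpers themselves are illustrative, not the library's actual code.

```python
# Sketch: resolving the Pyxis tuning knobs with their documented defaults.
# Names/defaults are from the README; the helpers are illustrative only.
import os


def env_int(name: str, default: int) -> int:
    """Read an integer knob, falling back to its documented default."""
    return int(os.environ.get(name, default))


def env_float(name: str, default: float) -> float:
    """Read a float knob, falling back to its documented default."""
    return float(os.environ.get(name, default))


MODEL_PATH = os.environ.get("PYXIS_MODEL_PATH", "TinyLlama/TinyLlama-1.1B-Chat-v1.0")
CORE_REQUEST_QUEUE_SIZE = env_int("PYXIS_CORE_REQUEST_QUEUE_SIZE", 1024)
MAX_INFLIGHT_REQUESTS = env_int("PYXIS_MAX_INFLIGHT_REQUESTS", 128)
PER_REQUEST_QUEUE_MAXSIZE = env_int("PYXIS_PER_REQUEST_QUEUE_MAXSIZE", 128)
STREAM_IDLE_TIMEOUT_S = env_float("PYXIS_STREAM_IDLE_TIMEOUT_S", 30.0)
TOKENIZER_READY_WAIT_S = env_float("PYXIS_TOKENIZER_READY_WAIT_S", 1.0)
```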

Smoke / integration scripts

  • python scripts/verify_ingestion.py: API-less ingestion test (tokenizer → core request)
  • python scripts/verify_api_ingress.py: API streaming test with a mocked core
  • python scripts/verify_api_real.py: API streaming test with real core+tokenizer

Realtime usage

  • Interactive chat REPL:
    • python scripts/chat_repl.py
  • End-to-end realtime harness (starts services, checks streaming/cancel/backpressure):
    • powershell -ExecutionPolicy Bypass -File scripts/test_realtime.ps1 -SkipInstall

Architecture (high level)

HTTP API → TokenizerWorker → CoreWorker → TokenizerWorker (detok) → HTTP streaming response

POST /v1/chat/completions streams SSE (text/event-stream) with OpenAI-style chat.completion.chunk payloads and a final [DONE].
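
The stream format above can be consumed with a few lines of parsing. This is a sketch: the `data:` prefix, `chat.completion.chunk` object, and `[DONE]` sentinel are from the description above, while the sample payload is illustrative rather than captured output.

```python
# Sketch: decoding an SSE text/event-stream of chat.completion.chunk payloads.
# The sample lines are illustrative, not real Pyxis output.
import json
from typing import Iterable, Iterator


def iter_chunks(lines: Iterable[str]) -> Iterator[dict]:
    """Yield decoded chunk dicts from `data:` lines, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # blank lines separate SSE events; skip them
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            return
        yield json.loads(data)


sample = [
    'data: {"object": "chat.completion.chunk", "choices": [{"delta": {"content": "Hi"}}]}',
    "",
    "data: [DONE]",
]
chunks = list(iter_chunks(sample))
# chunks[0]["choices"][0]["delta"]["content"] == "Hi"
```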

GET /health includes readiness and API stage latency snapshots (stage_latency_ms).
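
A small helper can poll that endpoint and extract the latency snapshot. Only the `/health` path and the `stage_latency_ms` key are documented; the surrounding response shape is an assumption here.

```python
# Sketch: polling GET /health and pulling out the stage latency snapshot.
# Only the path and the `stage_latency_ms` key are documented; the rest
# of the response shape is assumed.
import json
import urllib.request


def fetch_health(base_url: str = "http://127.0.0.1:8000") -> dict:
    """GET /health from a running Pyxis API process."""
    with urllib.request.urlopen(f"{base_url}/health") as resp:
        return json.load(resp)


def stage_latencies(health: dict) -> dict:
    """Return the stage_latency_ms snapshot, or {} if absent."""
    return health.get("stage_latency_ms", {})


# Example (requires the API process from the quick start):
#   print(stage_latencies(fetch_health()))
```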

See docs/ARCHITECTURE.md for details.

Session notes and recent implementation memory are tracked in docs/MEMORY.md.

Download files

Source Distribution

  • pyxislm-0.1.0.tar.gz (26.4 kB)

Built Distribution

  • pyxislm-0.1.0-py3-none-any.whl (21.9 kB)

File details

Details for the file pyxislm-0.1.0.tar.gz.

File metadata

  • Download URL: pyxislm-0.1.0.tar.gz
  • Size: 26.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.11

File hashes

Hashes for pyxislm-0.1.0.tar.gz:

  • SHA256: 7ff03e596a2a2771fdcee133e7f24abc3ad3d9e662d7c18221d9bd850829895d
  • MD5: 37e780240977c36c0ef7d36de2944336
  • BLAKE2b-256: 6cd412aa7ced2c74f781193eb083b482adfb42b00eef3b185de13eda4fc150a4
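
The published digests let you verify a download before installing. A minimal sketch using only the standard library; the filename matches the source distribution above:

```python
# Sketch: verifying a downloaded artifact against its published SHA256 digest.
import hashlib


def sha256_of(path: str) -> str:
    """Stream the file in 64 KiB blocks and return its hex SHA256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 16), b""):
            h.update(block)
    return h.hexdigest()


EXPECTED = "7ff03e596a2a2771fdcee133e7f24abc3ad3d9e662d7c18221d9bd850829895d"
# assert sha256_of("pyxislm-0.1.0.tar.gz") == EXPECTED
```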

File details

Details for the file pyxislm-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: pyxislm-0.1.0-py3-none-any.whl
  • Size: 21.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.11

File hashes

Hashes for pyxislm-0.1.0-py3-none-any.whl:

  • SHA256: 486f57e9fdc46ad7a1fc8ab8530767beab28cc7f8f01e0818657e31dd5a74c1c
  • MD5: 6c3e75d5935738e9d6f7094f6fe9134f
  • BLAKE2b-256: 54640dda45c1656dc6f2253514a968c7ed3b8fe6da02f64d0a72fcf8dbbc11fe
