
A Python library for knit_space operations.


KnitSpace LLM Ranker: Automated LLM Testing Harness

KnitSpace is an automated testing harness designed to evaluate and compare the capabilities of various Large Language Models (LLMs) across a diverse set of tasks. It provides a comprehensive framework for researchers and developers to assess LLM performance in areas such as problem-solving, knowledge retrieval, coding proficiency, and safety.

🔑 Key Features

  • Multi-LLM Support: Integrates with OpenAI, Google, Cohere, Mistral, and more.
  • Diverse Test Suite: Includes mathematical reasoning, coding tasks, knowledge tests (MMLU), long-context, instruction-following, and obfuscation-based tests.
  • Elo Rating System: Scores models using task difficulty and a cognitive cost metric ("S-value") for nuanced benchmarking.
  • Secure Code Execution: Uses Docker containers to safely execute LLM-generated Python/JS code.
  • Text Obfuscation: Tests reasoning under character-mapped distortions.
  • Interactive Review: Launch a web-based viewer for test results.
  • Extensible: Easily add new LLM providers and new types of tests.

🧱 Core Components

📁 knit_space/models.py

  • Unified interface for all LLM providers.
  • Abstract Model class + subclasses like OpenAIModel, GeminiModel, etc.
  • Manages API initialization, inference calls, and model metadata.

📁 knit_space/tests/

  • Contains all test definitions.
  • base.py defines:
    • QAItem: A test prompt, answer, and scoring logic.
    • AbstractQATest: Base class for all test sets.
    • TestRegistry: Auto-discovers test modules.
  • Includes test types: math, coding, chess, long-context, MMLU, etc.
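To make the `base.py` building blocks concrete, here is a minimal sketch of what a `QAItem` could look like. The field names and the default exact-match scorer are illustrative assumptions; the real class in `knit_space/tests/base.py` may differ.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative stand-in for knit_space.tests.base.QAItem; field names
# and the scoring hook are assumptions, not the library's actual API.
@dataclass
class QAItem:
    prompt: str    # text sent to the model
    answer: str    # reference answer
    # Per-item scoring logic; exact match (after stripping) by default.
    scorer: Callable[[str, str], bool] = lambda got, want: got.strip() == want.strip()

    def score(self, model_output: str) -> bool:
        return self.scorer(model_output, self.answer)

item = QAItem(prompt="What is 2 + 2?", answer="4")
print(item.score("4 "))  # whitespace is stripped before comparison
```

Bundling the scoring callable with each item is what lets one marker handle heterogeneous tests (exact match for math, fuzzier checks for free-form answers).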

📁 knit_space/marker.py

  • Evaluates model responses.
  • Uses QAItem scoring logic and tracks correctness.
  • Implements Elo scoring using both test difficulty and S-value.
  • Launches Flask server to review test results interactively.
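The exact weighting the marker applies is not documented here, so the following is only a sketch of a standard Elo update in which the task is treated as an opponent rated at its difficulty, and the K-factor is scaled by the S-value. The S-value scaling is an assumption for illustration, not the harness's documented formula.

```python
def elo_update(model_rating: float, task_difficulty: float,
               solved: bool, s_value: float = 1.0, k: float = 32.0) -> float:
    """Standard Elo update treating the task as an opponent rated at its
    difficulty. Scaling K by the S-value (cognitive cost) is an
    illustrative assumption, not the harness's documented formula."""
    # Expected probability of the model "beating" the task.
    expected = 1.0 / (1.0 + 10 ** ((task_difficulty - model_rating) / 400.0))
    actual = 1.0 if solved else 0.0
    return model_rating + k * s_value * (actual - expected)

# Solving a task rated equal to the model (expected score 0.5) gains rating.
print(elo_update(1500, 1500, solved=True))
```

The appeal of this scheme is that beating a hard task (high difficulty, high S-value) moves the rating far more than beating an easy one.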

📁 knit_space/utils/code_executor.py

  • Runs Python and JS code from models inside Docker safely.
  • Accepts test cases (input/output pairs) for correctness validation.
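The harness runs untrusted code inside Docker; the simplified stand-in below uses a bare subprocess with a timeout purely to show the input/output validation idea. A subprocess is NOT a sandbox, and the function names here are illustrative, not the `code_executor` API.

```python
import subprocess
import sys

def run_python(code: str, stdin_text: str, timeout: float = 5.0) -> str:
    """Run a code string in a fresh interpreter and capture stdout.
    The real harness isolates execution in a Docker container; this
    bare subprocess (used for brevity) is NOT a sandbox."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        input=stdin_text, capture_output=True, text=True, timeout=timeout,
    )
    return proc.stdout

def passes_cases(code: str, cases: list) -> bool:
    # Each case is an (input, expected_output) pair, as the executor accepts.
    return all(run_python(code, inp).strip() == out for inp, out in cases)

doubler = "print(int(input()) * 2)"
print(passes_cases(doubler, [("3", "6"), ("10", "20")]))
```

Swapping `subprocess.run` for a `docker run` invocation (or the Docker SDK) gives the isolation the real executor provides while keeping the same test-case loop.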

📁 knit_space/obscurers/

  • Tools for generating challenging input variants.
  • CharObfuscator: Replaces characters using a bijective map to test reasoning under noise.
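A bijective character map is invertible by construction, which is what makes the obfuscated prompts solvable in principle. The sketch below illustrates the idea over lowercase letters only; the real `CharObfuscator`'s alphabet and mapping scheme may differ.

```python
import random
import string

def make_char_map(seed: int = 0) -> dict:
    """Build a bijective map over lowercase letters (illustrative; the
    real CharObfuscator's alphabet and scheme may differ)."""
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.Random(seed).shuffle(shuffled)  # seeded for reproducibility
    return dict(zip(letters, shuffled))

def obfuscate(text: str, mapping: dict) -> str:
    # Unmapped characters (spaces, punctuation) pass through unchanged.
    return "".join(mapping.get(c, c) for c in text)

def deobfuscate(text: str, mapping: dict) -> str:
    inverse = {v: k for k, v in mapping.items()}  # bijection -> invertible
    return "".join(inverse.get(c, c) for c in text)

m = make_char_map()
garbled = obfuscate("what is two plus two?", m)
print(deobfuscate(garbled, m))  # round-trips to the original
```

Because the map is a bijection, a model given the mapping can in principle recover the original question; the test measures how well it reasons through that indirection.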

🐍 verify-auto.py

  • Main script to run tests.
  • Configures model, loads test classes, and executes tests.
  • Starts web server for results review.

⚙️ Setup

1. Prerequisites

  • Python 3.8+
  • Docker (for coding tasks)
  • Git

2. Installation

git clone https://github.com/C-you-know/Action-Based-LLM-Testing-Harness
cd KnitSpace-LLM-Ranker

python -m venv venv
source venv/bin/activate  # (Windows: venv\Scripts\activate)

pip install -r requirements.txt  # Or manually install dependencies

3. API Key Setup

Set the following environment variables based on the providers you wish to use:

export OPENAI_API_KEY="..."
export GEMINI_API_KEY="..."
export MISTRAL_API_KEY="..."
export COHERE_API_KEY="..."
# Cloudflare-specific
export CLOUDFLARE_API_KEY="..."
export CLOUDFLARE_ACCOUNT_ID="..."

🚀 Running Tests

Run via verify-auto.py

  1. Configure:

    • Choose model/provider in verify-auto.py
    • Select tests in test_cases list
  2. Run:

    python verify-auto.py
    
  3. View:

    • Console logs test stats
    • Web UI opens at http://localhost:8000

Debug Test Inputs (optional)

Use QA-test.py to inspect generated test data without invoking an LLM:

python QA-test.py

🔌 Extending the Harness

➕ Adding New LLM Providers

  1. Subclass Model in knit_space/models.py

  2. Implement:

    • _initialize_client()
    • inference(...)
  3. Update:

    • PROVIDER_CLASS_MAP
    • _get_api_key_for_provider() and optionally _list_api_models()
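The steps above can be sketched as follows. The method names (`_initialize_client`, `inference`) and `PROVIDER_CLASS_MAP` come from this README, but the base-class shape is a minimal stand-in, so the real signatures in `knit_space/models.py` may differ.

```python
from abc import ABC, abstractmethod

# Minimal stand-in for the Model base class in knit_space/models.py;
# the real constructor and signatures may differ.
class Model(ABC):
    def __init__(self, model_name: str):
        self.model_name = model_name
        self._client = self._initialize_client()

    @abstractmethod
    def _initialize_client(self):
        """Construct and return the provider's SDK client."""

    @abstractmethod
    def inference(self, prompt: str) -> str:
        """Send a prompt to the provider and return the completion."""

class EchoModel(Model):
    """Toy provider: the 'client' is trivial and inference echoes the prompt."""
    def _initialize_client(self):
        return None  # a real provider would build its API client here

    def inference(self, prompt: str) -> str:
        return f"[{self.model_name}] {prompt}"

# Step 3: register the new provider so the harness can look it up.
PROVIDER_CLASS_MAP = {"echo": EchoModel}

model = PROVIDER_CLASS_MAP["echo"]("echo-1")
print(model.inference("ping"))
```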

🧪 Adding New Test Types

  1. Create a new file in knit_space/tests/
  2. Subclass AbstractQATest
  3. Implement generate() to yield QAItems
  4. Optionally register using @register_test()
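A new test type following these steps might look like the sketch below. `AbstractQATest` and `QAItem` are stand-ins with assumed shapes (and the `@register_test()` decorator is omitted); the real base classes in `knit_space/tests/base.py` may differ.

```python
from dataclasses import dataclass
from typing import Iterator

# Stand-ins for knit_space.tests.base; field and method names are assumptions.
@dataclass
class QAItem:
    prompt: str
    answer: str

class AbstractQATest:
    def generate(self) -> Iterator[QAItem]:
        raise NotImplementedError

class AdditionTest(AbstractQATest):
    """Steps 2-3: subclass AbstractQATest and yield QAItems from generate()."""
    def generate(self) -> Iterator[QAItem]:
        for a in range(1, 4):
            for b in range(1, 4):
                yield QAItem(prompt=f"What is {a} + {b}?", answer=str(a + b))

items = list(AdditionTest().generate())
print(len(items), "items; first:", items[0].prompt, "->", items[0].answer)
```

Generating items lazily from `generate()` keeps large test sets (e.g. long-context tests) cheap until they are actually run.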

📦 Install as a Package

You can also install this project as a pip package from PyPI:

pip install ks-llm-ranker


