Automated tests for the CodeMie backend and UI

CodeMie Test Harness

End-to-end, integration, and UI test suite for CodeMie services. Test LLM assistants, workflows, tools, and integrations with a user-friendly CLI or pytest.

Quick Start

# Install and launch interactive mode
pip install codemie-test-harness
codemie-test-harness

# Or run without installing
uvx codemie-test-harness

First time? The CLI guides you through setup. Just select Configuration → Setup and follow the prompts.

Running tests? Choose Run Tests → Run Test Suite → smoke for a quick 5-10 minute validation.

Requirements

  • Python: 3.9 or higher
  • Platform: Linux, macOS, Windows (WSL recommended)
  • For UI tests: Playwright browsers (playwright install)
  • For most test suites: AWS credentials (see configuration below)

⚠️ AWS Credentials Required: Most test suites (smoke, api, ui, opensource, enterprise) require AWS credentials to load integration settings from Parameter Store. Exception: the sanity suite works without AWS credentials. See AWS Credentials Setup for details.

Part 1: Interactive CLI (Recommended)

The easiest way to use the test harness. No command memorization required - just navigate menus and follow prompts.

For developers: See Part 2: Running with pytest for direct pytest commands and .env configuration.

For technical details: See CLAUDE.md in the repository for architecture and development patterns.

Installation

Option 1: Install with pip (persistent installation)

pip install codemie-test-harness

Option 2: Run with uvx (no installation needed)

uvx codemie-test-harness

Verify installation:

codemie-test-harness --help

Getting Started

Launch the interactive CLI:

codemie-test-harness

You'll see the main menu:

╔═══════════════════════════════════════════════╗
║    CodeMie Test Harness - Interactive Mode    ║
╚═══════════════════════════════════════════════╝

? What would you like to do?
  🚀 Run Tests
❯ ⚙️ Configuration
  🤖 Chat with Assistant
  🔄 Execute Workflow
  ❌ Exit

First-time setup: Navigate to Configuration → Setup (Quick Configuration) to configure your environment.

Navigation Tips

Before diving into configuration, here's how to navigate the interactive CLI:

  • Arrow Keys: Navigate menu options
  • Enter: Select an option
  • Ctrl+C: Cancel/Exit at any time
  • Back Options: Every submenu has a "Back" option
  • Menu Loops: Configuration menus loop until you select "Back"
  • Smart Defaults: Pre-filled values for common configurations
  • Validation: Input validation prevents invalid values

Configuration

The interactive CLI guides you through configuration with smart defaults and validation.

Quick Setup Wizard

Select: Configuration → Setup (Quick Configuration)

The wizard will guide you through:

1. Select Environment

? Select environment:
  Localhost - http://localhost:8080
❯ Preview - https://codemie-preview.lab.epam.com/code-assistant-api
  Production - https://codemie.lab.epam.com/code-assistant-api
  Custom - Enter URL manually

2. Authentication Setup

The wizard automatically detects localhost and skips authentication setup. For remote environments (Preview/Production), it prompts for:

  • Auth Server URL (default provided)
  • Client ID (default provided)
  • Realm Name (default provided)
  • Client Secret
  • Optional: Username/Password authentication

3. AWS Credentials (Optional)

Configure AWS credentials to access integration settings from Parameter Store:

? How would you like to configure AWS credentials?
  📁 Use existing AWS profile
  ➕ Create new AWS profile
  🔑 Enter access keys directly
  ⏭️ Skip AWS configuration
  ⬅️ Back

4. Summary & Confirmation

The wizard displays all configured values with masked sensitive data.

Configuration for Localhost

Minimal localhost setup:

  1. Select Configuration → Setup
  2. Choose Localhost environment
  3. Authentication is automatically skipped ✓
  4. Configure AWS credentials (see requirements below)
  5. Done! Ready to run tests

What you need:

  • CodeMie API running on localhost:8080
  • AWS credentials for integration settings

AWS Credentials Requirements:

  • REQUIRED for all test suites (smoke, api, ui, opensource, enterprise) - Test integrations (GitLab, JIRA, Confluence, etc.)
  • EXCEPTION: Sanity suite does not require AWS - Only tests assistants/workflows/datasources without integrations
  • Provides automatic loading of integration credentials from Parameter Store
  • Without AWS: You can manually configure integrations in the config file

Configuration for Preview/Production

Remote environment setup:

  1. Select Configuration → Setup
  2. Choose Preview or Production environment
  3. Configure authentication:
    • Accept default Auth Server URL or enter custom
    • Accept default Client ID or enter custom
    • Accept default Realm Name or enter custom
    • Enter Client Secret
    • Optional: Configure username/password
  4. Configure AWS credentials (see requirements below)
  5. Done! Ready to run tests

AWS Credentials Requirements:

  • REQUIRED for all test suites (smoke, api, ui, opensource, enterprise) - Test integrations (GitLab, JIRA, Confluence, etc.)
  • EXCEPTION: Sanity suite does not require AWS - Only tests assistants/workflows/datasources without integrations
  • Provides automatic loading of integration credentials from Parameter Store

AWS Credentials Setup

AWS credentials enable automatic loading of integration settings (GitLab, JIRA, Confluence, etc.) from Parameter Store.

When do you need AWS?

  • Required: smoke, api, ui, opensource, enterprise suites
  • ⏭️ Not required: sanity suite (no integrations)
  • ⚠️ Alternative: Manual configuration in config file (tedious for 86+ variables)

Navigate to: Configuration → AWS Management

Option 1: Use Existing AWS Profile (Recommended)

Select your profile from ~/.aws/credentials:

? Select AWS profile:
❯ default
  codemie-prod
  codemie-dev

Option 2: Create New AWS Profile

The CLI guides you through:

  1. Enter profile name (e.g., "codemie")
  2. Enter AWS Access Key ID
  3. Enter AWS Secret Access Key
  4. Profile is saved to ~/.aws/credentials with secure permissions
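
For reference, a profile created this way ends up in ~/.aws/credentials in the standard AWS shared-credentials format (the profile name and key values below are placeholders):

```ini
[codemie]
aws_access_key_id = <your_aws_access_key>
aws_secret_access_key = <your_aws_secret_key>
```

Any tool that honors AWS_PROFILE (including the test harness and the aws CLI) can then pick up this profile by name.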

Option 3: Enter Access Keys Directly

Keys are stored in the test harness configuration file (~/.codemie/test-harness.json).

Option 4: Remove AWS Credentials

Clears all AWS configuration from the test harness.

Integrations Management

Manage credentials for 86+ integration variables across 10 categories.

Navigate to: Configuration → Integrations Management

Features:

  1. 📋 View Current Integrations - See all configured integrations (masked or real values)
  2. 📂 View Categories - List all integration categories:
    • Version Control (GitLab, GitHub)
    • Project Management (JIRA Server/Cloud, Confluence Server/Cloud)
    • Cloud Providers (AWS, Azure, GCP)
    • Code Quality (SonarQube, SonarCloud)
    • DevOps (Azure DevOps)
    • Access Management (Keycloak)
    • Notifications (Email, OAuth, Telegram)
    • Data Management (MySQL, PostgreSQL, MSSQL, LiteLLM, Elasticsearch)
    • IT Service (ServiceNow)
    • Quality Assurance (Report Portal, Kubernetes)
  3. ⚙️ Setup by Category - Interactive wizard for specific category
  4. ✅ Validate Integrations - Check configuration completeness

Example: Setup GitLab Integration

  1. Select Configuration → Integrations Management → Setup by Category
  2. Choose Version Control
  3. Enter values for each prompt (or press Enter to skip):
    GITLAB_URL: https://gitlab.example.com
    GITLAB_TOKEN: **********************
    GITLAB_PROJECT: https://gitlab.example.com/group/project
    GITLAB_PROJECT_ID: 12345
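
Before running gitlab-marked tests, it can be worth checking that the variables from the prompts above are actually set. A minimal sketch (the helper below is illustrative, not part of the harness):

```python
import os

# Variable names taken from the setup prompts above.
REQUIRED_GITLAB_VARS = [
    "GITLAB_URL",
    "GITLAB_TOKEN",
    "GITLAB_PROJECT",
    "GITLAB_PROJECT_ID",
]

def missing_gitlab_vars(env=os.environ):
    """Return the GitLab variables that are unset or empty."""
    return [name for name in REQUIRED_GITLAB_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_gitlab_vars()
    if missing:
        print("Missing GitLab configuration:", ", ".join(missing))
    else:
        print("GitLab integration variables look complete.")
```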
    

Running Tests

Test Suites (Recommended)

Navigate to: Run Tests → Run Test Suite

Choose from 8 predefined test suites optimized for different use cases:

| Suite      | Use Case        | Description                                           | Workers | Reruns | Time      |
|------------|-----------------|-------------------------------------------------------|---------|--------|-----------|
| sanity     | DevOps CI/CD    | Fastest - API sanity checks for deployment validation | 8       | 2      | ~2 min    |
| smoke      | Local Dev       | All smoke tests (API + UI) for rapid feedback         | 8       | 2      | 8-12 min  |
| smoke-api  | Local Dev       | API-only smoke tests - fast backend validation        | 8       | 2      | 3-5 min   |
| smoke-ui   | Local Dev       | UI-only smoke tests - critical user paths             | 4       | 2      | 3-5 min   |
| api        | QA Regression   | Full API regression (parallel-safe tests)             | 10      | 2      | 30-45 min |
| ui         | QA UI Tests     | Full UI regression with Playwright                    | 4       | 2      | 20-30 min |
| opensource | Feature Testing | Non-enterprise (open-source) features                 | 10      | 2      | 25-35 min |
| enterprise | Feature Testing | Enterprise-only features                              | 10      | 2      | 15-25 min |

Interactive Flow:

  1. Select suite from the list with descriptions
  2. Configure number of parallel workers (default provided)
  3. Configure number of reruns on failure (default provided)
  4. Review execution summary with marks, workers, and reruns
  5. Confirm to start execution
  6. Tests run with live output
  7. See completion status

Example: Running Smoke Tests

? Select a test suite:
❯ smoke        - Quick smoke tests for local development
  sanity       - Sanity check for DevOps CI/CD pipelines
  api          - Full API regression suite
  [...]

? Number of parallel workers: 8
? Number of test reruns on failure: 2

Running test suite: smoke
Description: Quick smoke tests for local development
Marks: smoke
Workers: 8
Reruns: 2

? Proceed with test execution? Yes

[pytest output...]

✓ Test execution completed!

Running by Custom Marks

Navigate to: Run Tests → Run with Custom Marks

Interactive Flow:

  1. Optional: View available marks first
    • Choose format: List view (simple) or Table view (with file details)
  2. Enter pytest marks expression with logical operators
  3. Configure workers and reruns
  4. Review summary and confirm
  5. Execute tests

Common Mark Examples:

# Single mark
api

# Multiple marks with AND
smoke and api

# Multiple marks with OR
jira or confluence

# Exclude marks with NOT
api and not ui

# Complex expressions with parentheses
(gitlab or github) and code_kb

# Multiple exclusions
api and not (ui or not_for_parallel_run)
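
Conceptually, pytest evaluates a -m expression as a boolean formula over each test's mark names. The toy evaluator below mimics that behavior for the expressions above (an illustration of the semantics, not pytest's actual implementation):

```python
import ast

def matches(expression: str, marks: set) -> bool:
    """Evaluate a pytest-style -m expression against a set of mark names."""
    tree = ast.parse(expression, mode="eval")

    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BoolOp):          # "and" / "or"
            results = [walk(v) for v in node.values]
            return all(results) if isinstance(node.op, ast.And) else any(results)
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.Not):
            return not walk(node.operand)         # "not"
        if isinstance(node, ast.Name):            # a bare mark name
            return node.id in marks
        raise ValueError("unsupported syntax in mark expression")

    return walk(tree)

# A test marked both `api` and `gitlab`:
print(matches("api and not ui", {"api", "gitlab"}))            # True
print(matches("(gitlab or github) and code_kb", {"gitlab"}))   # False
```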

📋 Available Marks by Category:

Before running custom marks, view all available marks:

codemie-test-harness marks          # List view
codemie-test-harness marks --verbose # Detailed view with file locations

🏗️ Architecture

  • api - API integration tests
  • ui - UI tests with Playwright
  • mcp - Model Context Protocol tests
  • plugin - Plugin functionality tests

💨 Speed

  • smoke - Quick smoke tests
  • sanity - Sanity checks (fastest, no AWS required)

🔐 License

  • enterprise - Enterprise features
  • opensource - Non-enterprise features (implied by absence of enterprise)

🔗 Integrations

  • gitlab, github, git - Version control systems
  • jira, jira_cloud - JIRA integrations
  • confluence, confluence_cloud - Confluence integrations
  • ado - Azure DevOps
  • servicenow - ServiceNow

📚 Knowledge Bases

  • jira_kb - JIRA knowledge base tests
  • confluence_kb - Confluence knowledge base tests
  • code_kb - Code knowledge base tests

🤖 Features

  • assistant - Assistant functionality
  • workflow - Workflow execution
  • llm - LLM model tests
  • datasource - Datasource management
  • conversations - Conversation API

⚠️ Special

  • not_for_parallel_run - Sequential execution required
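
pytest only recognizes marks that are registered; by common convention they live under a markers key in pytest.ini (whether this harness registers its marks that way is an assumption, and the sample content below is made up). A sketch of parsing such a section:

```python
import configparser

# Hypothetical excerpt of a pytest.ini markers section.
SAMPLE_PYTEST_INI = """\
[pytest]
markers =
    smoke: quick smoke tests
    sanity: sanity checks for CI/CD
    api: API integration tests
"""

def registered_marks(ini_text):
    """Parse mark names and descriptions from a [pytest] markers section."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    marks = {}
    for line in parser.get("pytest", "markers").strip().splitlines():
        name, _, description = line.partition(":")
        marks[name.strip()] = description.strip()
    return marks

print(registered_marks(SAMPLE_PYTEST_INI))
```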

Interactive Features

Configuration Management

List Settings

  • View all configured settings
  • Sensitive values are masked by default
  • See total count of configured values

Set Specific Value

  • Set any configuration key manually
  • Autocomplete suggestions for common keys
  • Secure password input for sensitive values

Get Specific Value

  • View a single configuration value
  • Shows masked value for sensitive keys

Unset Specific Value

  • Remove specific configuration keys
  • Confirmation prompt before removal

Assistant Chat

Navigate to: Chat with Assistant

Features:

  • Start new conversations or continue existing ones
  • Stream responses in real-time
  • Langfuse tracing support
  • Interactive message input

Usage:

  1. Enter assistant ID
  2. Optional: Enter conversation ID to continue previous chat
  3. Optional: Enable streaming or Langfuse tracing
  4. Type your message
  5. View assistant response
  6. Continue conversation

Workflow Execution

Navigate to: Execute Workflow

Features:

  • Execute workflows by ID
  • Provide user input
  • Custom execution IDs
  • View execution results

Usage:

  1. Enter workflow ID
  2. Optional: Provide user input for the workflow
  3. Optional: Specify custom execution ID
  4. Execute and view results

Configuration File Reference

All configuration is stored in: ~/.codemie/test-harness.json

Priority Order (highest to lowest):

  1. CLI flags (temporary, for single run)
  2. Environment variables (from .env file)
  3. Configuration file (~/.codemie/test-harness.json)
  4. AWS Parameter Store (if AWS credentials configured)
  5. Default values (built-in)
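
The priority order above behaves like a chain of lookups where the first layer that defines a key wins. Python's collections.ChainMap implements exactly that first-hit semantics; a minimal sketch (the layer contents are made up for illustration):

```python
from collections import ChainMap

# Highest priority first, mirroring the order above.
cli_flags = {"WORKERS": "4"}
env_vars = {"WORKERS": "8", "CODEMIE_API_DOMAIN": "http://localhost:8080"}
config_file = {"CODEMIE_API_DOMAIN": "https://codemie-preview.lab.epam.com/code-assistant-api"}
parameter_store = {"GITLAB_TOKEN": "from-parameter-store"}
defaults = {"WORKERS": "1", "HEADLESS": "True"}

settings = ChainMap(cli_flags, env_vars, config_file, parameter_store, defaults)

print(settings["WORKERS"])             # "4" - the CLI flag wins
print(settings["CODEMIE_API_DOMAIN"])  # env var beats the config file
print(settings["HEADLESS"])            # falls through to the defaults
```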

Viewing the file:

cat ~/.codemie/test-harness.json | jq

Manual editing (advanced):

# Backup first
cp ~/.codemie/test-harness.json ~/.codemie/test-harness.json.backup

# Edit with your preferred editor
nano ~/.codemie/test-harness.json

Resetting configuration:

rm ~/.codemie/test-harness.json
codemie-test-harness  # Start fresh

Complete Example: First Test Run

Here's a complete walkthrough for first-time users:

1. Install

pip install codemie-test-harness

2. Launch interactive mode

codemie-test-harness

3. Configure (first time only)

Select: ⚙️ Configuration
Select: Setup (Quick Configuration)
Choose: Preview environment
Accept defaults for Auth Server, Client ID, Realm
Enter: Your Client Secret
Choose: Use existing AWS profile (or enter credentials)
Confirm configuration

4. Run tests

Select: 🚀 Run Tests
Select: Run Test Suite
Choose: smoke (Quick smoke tests)
Workers: 8 (press Enter for default)
Reruns: 2 (press Enter for default)
Confirm: Yes
Wait: 5-10 minutes
Result: See test results!

5. Explore results

  • Check ~/.codemie/test-harness.json for saved configuration
  • Test results are displayed in terminal
  • If ReportPortal is configured, view results there

Part 2: Running with pytest

For contributors working from the repository or users preferring direct pytest commands.

Installation for Contributors

1. Clone the repository:

git clone <repository-url>
cd test-harness

2. Install with Poetry:

poetry install

3. Install Playwright browsers (for UI tests):

playwright install

Configuration with .env File

Create a .env file in the codemie_test_harness directory.

Configuration for Localhost

Minimal localhost setup:

CODEMIE_API_DOMAIN=http://localhost:8080
TEST_USER_FULL_NAME=dev-codemie-user

# AWS credentials (see requirements below)
# Option 1: Use AWS profile
AWS_PROFILE=my-profile-name

# Option 2: Direct credentials
AWS_ACCESS_KEY=<your_aws_access_key>
AWS_SECRET_KEY=<your_aws_secret_key>

AWS Credentials Requirements:

  • REQUIRED for all test suites (smoke, api, ui, opensource, enterprise) - Test integrations (GitLab, JIRA, Confluence, etc.)
  • EXCEPTION: Sanity suite does not require AWS - Only tests assistants/workflows/datasources without integrations
  • Provides automatic loading of integration credentials from Parameter Store
  • Without AWS: You can manually configure integrations in .env

Configuration for Preview/Production

Full remote setup:

# API Configuration
CODEMIE_API_DOMAIN=https://codemie-preview.lab.epam.com/code-assistant-api

# Authentication
AUTH_SERVER_URL=https://auth.codemie.lab.epam.com/
AUTH_CLIENT_ID=codemie-preview-sdk
AUTH_REALM_NAME=codemie-prod
AUTH_USERNAME=<username>
AUTH_PASSWORD=<password>

# AWS credentials (see requirements below)
AWS_PROFILE=codemie-preview
# OR
# AWS_ACCESS_KEY=<key>
# AWS_SECRET_KEY=<secret>

# Optional: Test configuration
DEFAULT_TIMEOUT=60
CLEANUP_DATA=True

# Optional: UI Testing
FRONTEND_URL=https://codemie-preview.lab.epam.com
HEADLESS=True

AWS Credentials Requirements:

  • REQUIRED for all test suites (smoke, api, ui, opensource, enterprise) - Test integrations (GitLab, JIRA, Confluence, etc.)
  • EXCEPTION: Sanity suite does not require AWS - Only tests assistants/workflows/datasources without integrations
  • Provides automatic loading of integration credentials from Parameter Store

Integration Configuration

You can manually configure integrations in .env or use AWS Parameter Store.

Version Control:

GIT_ENV=gitlab  # or github

# GitLab
GITLAB_URL=https://gitlab.example.com
GITLAB_TOKEN=<token>
GITLAB_PROJECT=https://gitlab.example.com/group/project
GITLAB_PROJECT_ID=12345

# GitHub
GITHUB_URL=https://github.com
GITHUB_TOKEN=<token>
GITHUB_PROJECT=https://github.com/org/repo

Project Management:

# JIRA Server
JIRA_URL=https://jira.example.com
JIRA_TOKEN=<token>
JIRA_JQL=project = 'PROJECT' and status = 'Open'

# JIRA Cloud
JIRA_CLOUD_URL=https://company.atlassian.net
JIRA_CLOUD_EMAIL=user@company.com
JIRA_CLOUD_TOKEN=<api_token>

# Confluence
CONFLUENCE_URL=https://confluence.example.com
CONFLUENCE_TOKEN=<token>
CONFLUENCE_CQL=space = 'SPACE' and type = page

Credential Priority:

  1. Environment variables in .env (highest)
  2. AWS Parameter Store
  3. Default values

Running Tests with pytest

Test Suites with pytest

Run the same test suites using pytest directly:

Smoke Tests (Local Development)

# All smoke tests (API + UI)
pytest -n 8 -m "smoke" --reruns 2

# API smoke tests only (fast backend validation)
pytest -n 8 -m "smoke and api and not ui" --reruns 2

# UI smoke tests only (critical user paths)
pytest -n 4 -m "smoke and ui" --reruns 2

Sanity Tests (CI/CD)

pytest -n 8 -m "sanity" --reruns 2

Full API Regression

# Parallel-safe tests
pytest -n 10 -m "api" --reruns 2

# Sequential tests (run separately)
pytest -m "api and not_for_parallel_run" --reruns 2

UI Tests

pytest -n 4 -m "ui" --reruns 2

Open Source Features

pytest -n 10 -m "not enterprise and api" --reruns 2

Enterprise Features

pytest -n 10 -m "enterprise" --reruns 2

Test Suite Comparison

| Suite      | CLI Command                         | pytest Command                                      |
|------------|-------------------------------------|-----------------------------------------------------|
| sanity     | codemie-test-harness run sanity     | pytest -n 8 -m "sanity" --reruns 2                  |
| smoke      | codemie-test-harness run smoke      | pytest -n 8 -m "smoke" --reruns 2                   |
| api        | codemie-test-harness run api        | pytest -n 10 -m "api" --reruns 2                    |
| ui         | codemie-test-harness run ui         | pytest -n 4 -m "ui" --reruns 2                      |
| opensource | codemie-test-harness run opensource | pytest -n 10 -m "not enterprise and api" --reruns 2 |
| enterprise | codemie-test-harness run enterprise | pytest -n 10 -m "enterprise" --reruns 2             |

Custom Mark Selection with pytest

Single Mark:

pytest -n 8 -m "gitlab" --reruns 2

AND Operator (both marks required):

pytest -n 8 -m "api and jira" --reruns 2
pytest -n 4 -m "gitlab and code_kb" --reruns 2

OR Operator (either mark):

pytest -n 8 -m "jira or confluence" --reruns 2
pytest -n 6 -m "jira_kb or confluence_kb" --reruns 2

NOT Operator (exclude marks):

pytest -n 10 -m "api and not ui" --reruns 2
pytest -n 8 -m "not not_for_parallel_run" --reruns 2

Complex Expressions:

# Multiple conditions with parentheses
pytest -n 8 -m "(gitlab or github) and code_kb" --reruns 2

# Exclude multiple marks
pytest -n 10 -m "api and not (ui or not_for_parallel_run)" --reruns 2

# Knowledge base tests only
pytest -n 8 -m "(jira_kb or confluence_kb or code_kb)" --reruns 2

Common Testing Scenarios

Testing Specific Integrations:

# GitLab integration
pytest -n 8 -m "api and gitlab" --reruns 2

# JIRA integration
pytest -n 8 -m "api and jira" --reruns 2

# Confluence integration
pytest -n 8 -m "api and confluence" --reruns 2

# All Git providers
pytest -n 8 -m "gitlab or github or git" --reruns 2

Testing Specific Components:

# Workflows
pytest -n 8 -m "api and workflow" --reruns 2

# Assistants
pytest -n 8 -m "api and assistant" --reruns 2

# LLM models
pytest -n 8 -m "api and llm" --reruns 2

# MCP (Model Context Protocol)
pytest -n 8 -m "api and mcp" --reruns 2

# Plugins
pytest -n 8 -m "api and plugin" --reruns 2

Testing Without Full Backend:

# Exclude plugin tests (when NATS is not running)
pytest -n 8 -m "api and not plugin" --reruns 2

# Exclude MCP tests (when mcp-connect is not running)
pytest -n 8 -m "api and not mcp" --reruns 2

# Exclude both
pytest -n 8 -m "api and not (plugin or mcp)" --reruns 2

pytest Flags Explained

| Flag                | Description                                       | Example             |
|---------------------|---------------------------------------------------|---------------------|
| -n <number>         | Number of parallel workers (pytest-xdist)         | -n 8                |
| -m "<expression>"   | Select tests by marks                             | -m "api and not ui" |
| --reruns <number>   | Retry failed tests N times (pytest-rerunfailures) | --reruns 2          |
| --count <number>    | Run each test N times (pytest-repeat)             | --count 50          |
| --timeout <seconds> | Per-test timeout in seconds (pytest-timeout)      | --timeout 600       |
| -v                  | Verbose output                                    | -v                  |
| -s                  | Show print statements                             | -s                  |
| -x                  | Stop on first failure                             | -x                  |
| --lf                | Run last failed tests                             | --lf                |
| --reportportal      | Report results to ReportPortal                    | --reportportal      |

Test Timeout Configuration

Control per-test timeout to prevent hanging tests.

In .env file:

TEST_TIMEOUT=600  # 10 minutes per test

Via pytest command:

# Set timeout for this run
pytest -n 8 -m "api" --timeout 900 --reruns 2

# Disable timeout (debugging only)
pytest -m "slow_tests" --timeout 0

Default: 300 seconds (5 minutes) per test

When a test exceeds the timeout:

  • Test is terminated immediately
  • Marked as FAILED with timeout message
  • Stack trace shows where execution stopped
  • Other tests continue normally
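
On Unix, pytest-timeout's signal method implements this with SIGALRM. The standalone sketch below shows the same mechanism (a Unix-only illustration of the idea, not the plugin's actual code):

```python
import signal
import time

class TestTimeout(Exception):
    """Raised when a callable exceeds its wall-clock budget."""

def run_with_timeout(func, seconds):
    """Run func, raising TestTimeout if it runs longer than `seconds`."""
    def on_alarm(signum, frame):
        raise TestTimeout(f"exceeded {seconds}s")

    previous = signal.signal(signal.SIGALRM, on_alarm)
    signal.alarm(seconds)                       # schedule the alarm
    try:
        return func()
    finally:
        signal.alarm(0)                         # cancel any pending alarm
        signal.signal(signal.SIGALRM, previous)

try:
    run_with_timeout(lambda: time.sleep(2), seconds=1)
except TestTimeout as exc:
    print("test failed:", exc)
```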

UI Tests with Playwright

Install browsers (one-time):

playwright install

Run UI tests:

pytest -n 4 -m "ui" --reruns 2

Headless mode:

Set HEADLESS=True in .env or:

HEADLESS=True pytest -n 4 -m "ui" --reruns 2
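
Environment values arrive as strings, so a harness typically normalizes HEADLESS to a boolean before handing it to Playwright's launch(headless=...) parameter. A hypothetical helper (the accepted truthy spellings are an assumption, not the harness's documented logic):

```python
import os

TRUTHY = {"1", "true", "yes", "on"}

def headless_enabled(env=os.environ, default=True):
    """Interpret the HEADLESS environment variable as a boolean."""
    raw = env.get("HEADLESS")
    if raw is None:
        return default
    return raw.strip().lower() in TRUTHY

# e.g. browser = playwright.chromium.launch(headless=headless_enabled())
print(headless_enabled({"HEADLESS": "True"}))   # True
print(headless_enabled({"HEADLESS": "false"}))  # False
```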

ReportPortal Integration

Configure in .env:

RP_ENDPOINT=https://reportportal.example.com
RP_PROJECT=codemie_tests
RP_API_KEY=<api_key>

Run with ReportPortal:

pytest -n 10 -m "api" --reruns 2 --reportportal

Troubleshooting

Common Issues

"Command not found: codemie-test-harness"

  • Run pip install codemie-test-harness or use uvx codemie-test-harness
  • Check that pip's bin directory is in your PATH

"Authentication failed"

  • Verify AUTH_CLIENT_SECRET is correct
  • Check AUTH_SERVER_URL is accessible
  • For localhost, authentication is automatically skipped

"AWS Parameter Store access denied"

  • Verify AWS credentials with aws sts get-caller-identity
  • Check that your AWS user has Parameter Store read permissions
  • Required path: /codemie/autotests/integrations/*
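
One way to picture the layout: each parameter under the path above holds one integration variable. The sketch below assumes the parameter's leaf name equals the variable name (an assumption about the layout; the actual fetch would go through boto3's SSM client, which is omitted here):

```python
# The documented base path for integration parameters.
BASE_PATH = "/codemie/autotests/integrations/"

def to_env_name(parameter_name):
    """Map a full Parameter Store name to an integration variable name."""
    if not parameter_name.startswith(BASE_PATH):
        raise ValueError(f"unexpected parameter path: {parameter_name}")
    return parameter_name[len(BASE_PATH):]

print(to_env_name("/codemie/autotests/integrations/GITLAB_TOKEN"))  # GITLAB_TOKEN
```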

"Tests hanging or timing out"

  • Check DEFAULT_TIMEOUT in configuration (default: 300 seconds)
  • Increase timeout: pytest --timeout 600 -m "slow_tests"
  • For debugging, disable timeout: pytest --timeout 0

"Playwright browser not found"

  • Run playwright install to download browsers
  • For specific browser: playwright install chromium

"Integration tests failing"

  • Verify integration credentials in Configuration → Integrations Management
  • Run validation: Configuration → Integrations Management → Validate Integrations
  • Check if integration services (GitLab, JIRA, etc.) are accessible

Stopping tests mid-run

  • Press Ctrl+C to gracefully stop pytest
  • Running tests will complete their current test
  • Cleanup happens automatically even on interrupt

Configuration not persisting

  • Configuration is stored in ~/.codemie/test-harness.json
  • Check file permissions: ls -la ~/.codemie/test-harness.json
  • Use Configuration → List Settings to verify saved values

Quick Reference Card

Most Common Commands

# Interactive mode (easiest)
codemie-test-harness

# Quick test runs
codemie-test-harness run sanity     # Fastest (2 min, no AWS)
codemie-test-harness run smoke      # Quick (5-10 min)
codemie-test-harness run api        # Full regression (30-45 min)

# Configuration
codemie-test-harness config list    # View all settings
codemie-test-harness marks          # List available marks

# With pytest
pytest -n 8 -m "smoke" --reruns 2                    # Smoke tests
pytest -n 10 -m "api" --reruns 2                     # API tests
pytest -n 4 -m "ui" --reruns 2                       # UI tests
pytest -m "api and gitlab" --reruns 2                # GitLab tests

Files & Paths

~/.codemie/test-harness.json        # Configuration file
~/.aws/credentials                  # AWS credentials
codemie_test_harness/.env           # Environment variables (for pytest)

Quick Setup

# Install
pip install codemie-test-harness

# First run
codemie-test-harness                # Interactive setup wizard

# Or quick config
codemie-test-harness config set CODEMIE_API_DOMAIN http://localhost:8080
codemie-test-harness config set AWS_PROFILE my-profile

Support

For issues, questions, or contributions:
