CodeMie Test Harness
End-to-end, integration, and UI test suite for CodeMie services. Test LLM assistants, workflows, tools, and integrations with a user-friendly CLI or pytest.
Quick Start
# Install and launch interactive mode
pip install codemie-test-harness
codemie-test-harness
# Or run without installing
uvx codemie-test-harness
First time? The CLI guides you through setup. Just select Configuration → Setup and follow the prompts.
Running tests? Choose Run Tests → Run Test Suite → smoke for a quick validation run (roughly 8-12 minutes).
Requirements
- Python: 3.9 or higher
- Platform: Linux, macOS, Windows (WSL recommended)
- For UI tests: Playwright browsers (`playwright install`)
- For most test suites: AWS credentials (see configuration below)
⚠️ AWS Credentials Required: most test suites (smoke, api, ui, opensource, enterprise) require AWS credentials to load integration settings from Parameter Store. Exception: the `sanity` suite works without AWS credentials. See AWS Credentials Setup for details.
Table of Contents
- Quick Start
- Requirements
- Part 1: Interactive CLI (Recommended)
- Complete Example: First Test Run
- Part 2: Running with pytest
- Troubleshooting
- Quick Reference Card
- Support
Part 1: Interactive CLI (Recommended)
The easiest way to use the test harness. No command memorization required - just navigate menus and follow prompts.
For developers: See Part 2: Running with pytest for direct pytest commands and `.env` configuration.
For technical details: See `CLAUDE.md` in the repository for architecture and development patterns.
Installation
Option 1: Install with pip (persistent installation)
pip install codemie-test-harness
Option 2: Run with uvx (no installation needed)
uvx codemie-test-harness
Verify installation:
codemie-test-harness --help
Getting Started
Launch the interactive CLI:
codemie-test-harness
You'll see the main menu:
╔═══════════════════════════════════════════════╗
║ CodeMie Test Harness - Interactive Mode ║
╚═══════════════════════════════════════════════╝
? What would you like to do?
🚀 Run Tests
❯ ⚙️ Configuration
🤖 Chat with Assistant
🔄 Execute Workflow
❌ Exit
First-time setup: Navigate to Configuration → Setup (Quick Configuration) to configure your environment.
Navigation Tips
Before diving into configuration, here's how to navigate the interactive CLI:
- Arrow Keys: Navigate menu options
- Enter: Select an option
- Ctrl+C: Cancel/Exit at any time
- Back Options: Every submenu has a "Back" option
- Menu Loops: Configuration menus loop until you select "Back"
- Smart Defaults: Pre-filled values for common configurations
- Validation: Input validation prevents invalid values
Configuration
The interactive CLI guides you through configuration with smart defaults and validation.
Quick Setup Wizard
Select: Configuration → Setup (Quick Configuration)
The wizard will guide you through:
1. Select Environment
? Select environment:
Localhost - http://localhost:8080
❯ Preview - https://codemie-preview.lab.epam.com/code-assistant-api
Production - https://codemie.lab.epam.com/code-assistant-api
Custom - Enter URL manually
2. Authentication Setup
The wizard automatically detects localhost and skips authentication setup. For remote environments (Preview/Production), it prompts for:
- Auth Server URL (default provided)
- Client ID (default provided)
- Realm Name (default provided)
- Client Secret
- Optional: Username/Password authentication
3. AWS Credentials (Optional)
Configure AWS credentials to access integration settings from Parameter Store:
? How would you like to configure AWS credentials?
📁 Use existing AWS profile
➕ Create new AWS profile
🔑 Enter access keys directly
⏭️ Skip AWS configuration
⬅️ Back
4. Summary & Confirmation
The wizard displays all configured values with masked sensitive data.
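The masking behavior can be sketched with a small helper (illustrative only; the harness's actual masking logic may differ, and the key names below are examples):

```python
SENSITIVE_HINTS = ("secret", "token", "password", "key")

def mask(key: str, value: str) -> str:
    """Mask values whose key looks sensitive, keeping a short suffix for recognition."""
    if any(hint in key.lower() for hint in SENSITIVE_HINTS):
        if len(value) <= 4:
            return "*" * len(value)
        return "*" * (len(value) - 4) + value[-4:]
    return value

config = {
    "CODEMIE_API_DOMAIN": "http://localhost:8080",
    "AUTH_CLIENT_SECRET": "s3cr3t-value-1234",
}
for k, v in config.items():
    print(f"{k} = {mask(k, v)}")  # the secret prints as *************1234
```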
Configuration for Localhost
Minimal localhost setup:
- Select Configuration → Setup
- Choose Localhost environment
- Authentication is automatically skipped ✓
- Configure AWS credentials (see requirements below)
- Done! Ready to run tests
What you need:
- CodeMie API running on localhost:8080
- AWS credentials for integration settings
AWS Credentials Requirements:
- REQUIRED for the smoke, api, ui, opensource, and enterprise suites, which test integrations (GitLab, JIRA, Confluence, etc.)
- EXCEPTION: the sanity suite does not require AWS; it only tests assistants/workflows/datasources, without integrations
- Enables automatic loading of integration credentials from Parameter Store
- Without AWS: you can manually configure integrations in the config file
Configuration for Preview/Production
Remote environment setup:
- Select Configuration → Setup
- Choose Preview or Production environment
- Configure authentication:
- Accept default Auth Server URL or enter custom
- Accept default Client ID or enter custom
- Accept default Realm Name or enter custom
- Enter Client Secret
- Optional: Configure username/password
- Configure AWS credentials (see requirements below)
- Done! Ready to run tests
AWS Credentials Requirements:
- REQUIRED for the smoke, api, ui, opensource, and enterprise suites, which test integrations (GitLab, JIRA, Confluence, etc.)
- EXCEPTION: the sanity suite does not require AWS; it only tests assistants/workflows/datasources, without integrations
- Enables automatic loading of integration credentials from Parameter Store
AWS Credentials Setup
AWS credentials enable automatic loading of integration settings (GitLab, JIRA, Confluence, etc.) from Parameter Store.
When do you need AWS?
- ✅ Required: `smoke`, `api`, `ui`, `opensource`, `enterprise` suites
- ⏭️ Not required: `sanity` suite (no integrations)
- ⚠️ Alternative: manual configuration in the config file (tedious for 86+ variables)
Navigate to: Configuration → AWS Management
Option 1: Use Existing AWS Profile (Recommended)
Select your profile from ~/.aws/credentials:
? Select AWS profile:
❯ default
codemie-prod
codemie-dev
Option 2: Create New AWS Profile
The CLI guides you through:
- Enter profile name (e.g., "codemie")
- Enter AWS Access Key ID
- Enter AWS Secret Access Key
- Profile is saved to `~/.aws/credentials` with secure permissions
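"Secure permissions" here means the credentials file is restricted to the owner. A sketch of what that looks like (illustrative, not the CLI's actual code; the demo writes to a local file rather than the real `~/.aws/credentials`):

```python
import configparser
import os
import stat
from pathlib import Path

def save_aws_profile(credentials_path: Path, profile: str,
                     access_key: str, secret_key: str) -> None:
    """Add or update an AWS profile and restrict the file to the owner (0600)."""
    config = configparser.ConfigParser()
    if credentials_path.exists():
        config.read(credentials_path)
    config[profile] = {
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }
    credentials_path.parent.mkdir(parents=True, exist_ok=True)
    with open(credentials_path, "w") as fh:
        config.write(fh)
    # 0600: owner read/write only, so other local users cannot read the keys
    os.chmod(credentials_path, stat.S_IRUSR | stat.S_IWUSR)

# Demo against a temporary local path
demo = Path("demo_credentials")
save_aws_profile(demo, "codemie", "AKIA-EXAMPLE", "example-secret")
print(oct(demo.stat().st_mode & 0o777))  # 0o600 on POSIX systems
```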
Option 3: Enter Access Keys Directly
Keys are stored in the test harness configuration file (~/.codemie/test-harness.json).
Option 4: Remove AWS Credentials
Clears all AWS configuration from the test harness.
Integrations Management
Manage credentials for 86+ integration variables across 10 categories.
Navigate to: Configuration → Integrations Management
Features:
- 📋 View Current Integrations - See all configured integrations (masked or real values)
- 📂 View Categories - List all integration categories:
- Version Control (GitLab, GitHub)
- Project Management (JIRA Server/Cloud, Confluence Server/Cloud)
- Cloud Providers (AWS, Azure, GCP)
- Code Quality (SonarQube, SonarCloud)
- DevOps (Azure DevOps)
- Access Management (Keycloak)
- Notifications (Email, OAuth, Telegram)
- Data Management (MySQL, PostgreSQL, MSSQL, LiteLLM, Elasticsearch)
- IT Service (ServiceNow)
- Quality Assurance (Report Portal, Kubernetes)
- ⚙️ Setup by Category - Interactive wizard for specific category
- ✅ Validate Integrations - Check configuration completeness
Example: Setup GitLab Integration
- Select Configuration → Integrations Management → Setup by Category
- Choose Version Control
- Enter values for each prompt (or press Enter to skip):
GITLAB_URL: https://gitlab.example.com GITLAB_TOKEN: ********************** GITLAB_PROJECT: https://gitlab.example.com/group/project GITLAB_PROJECT_ID: 12345
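The "Validate Integrations" check boils down to verifying that every variable a category needs is set and non-empty. A minimal sketch of that idea (hypothetical helper, not the harness's implementation):

```python
# Variables the GitLab category expects, per the prompts above
REQUIRED_GITLAB_VARS = ["GITLAB_URL", "GITLAB_TOKEN", "GITLAB_PROJECT", "GITLAB_PROJECT_ID"]

def missing_vars(settings: dict, required: list) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not settings.get(name)]

settings = {
    "GITLAB_URL": "https://gitlab.example.com",
    "GITLAB_TOKEN": "glpat-xxxx",
    "GITLAB_PROJECT": "https://gitlab.example.com/group/project",
    # GITLAB_PROJECT_ID deliberately left unset
}
print(missing_vars(settings, REQUIRED_GITLAB_VARS))  # → ['GITLAB_PROJECT_ID']
```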
Running Tests
Test Suites (Recommended)
Navigate to: Run Tests → Run Test Suite
Choose from 8 predefined test suites optimized for different use cases:
| Suite | Use Case | Description | Workers | Reruns | Time |
|---|---|---|---|---|---|
| sanity | DevOps CI/CD | Fastest - API sanity checks for deployment validation | 8 | 2 | ~2 min |
| smoke | Local Dev | All smoke tests (API + UI) for rapid feedback | 8 | 2 | 8-12 min |
| smoke-api | Local Dev | API-only smoke tests - fast backend validation | 8 | 2 | 3-5 min |
| smoke-ui | Local Dev | UI-only smoke tests - critical user paths | 4 | 2 | 3-5 min |
| api | QA Regression | Full API regression (parallel-safe tests) | 10 | 2 | 30-45 min |
| ui | QA UI Tests | Full UI regression with Playwright | 4 | 2 | 20-30 min |
| opensource | Feature Testing | Non-enterprise (open-source) features | 10 | 2 | 25-35 min |
| enterprise | Feature Testing | Enterprise-only features | 10 | 2 | 15-25 min |
Interactive Flow:
- Select suite from the list with descriptions
- Configure number of parallel workers (default provided)
- Configure number of reruns on failure (default provided)
- Review execution summary with marks, workers, and reruns
- Confirm to start execution
- Tests run with live output
- See completion status
Example: Running Smoke Tests
? Select a test suite:
❯ smoke - Quick smoke tests for local development
sanity - Sanity check for DevOps CI/CD pipelines
api - Full API regression suite
[...]
? Number of parallel workers: 8
? Number of test reruns on failure: 2
Running test suite: smoke
Description: Quick smoke tests for local development
Marks: smoke
Workers: 8
Reruns: 2
? Proceed with test execution? Yes
[pytest output...]
✓ Test execution completed!
Running by Custom Marks
Navigate to: Run Tests → Run with Custom Marks
Interactive Flow:
- Optional: View available marks first
- Choose format: List view (simple) or Table view (with file details)
- Enter pytest marks expression with logical operators
- Configure workers and reruns
- Review summary and confirm
- Execute tests
Common Mark Examples:
# Single mark
api
# Multiple marks with AND
smoke and api
# Multiple marks with OR
jira or confluence
# Exclude marks with NOT
api and not ui
# Complex expressions with parentheses
(gitlab or github) and code_kb
# Multiple exclusions
api and not (ui or not_for_parallel_run)
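A mark expression is just a boolean expression over the set of marks on each test. This toy evaluator makes the semantics concrete (for illustration only; pytest uses its own expression parser, not `eval`):

```python
import re

def matches(expression: str, marks: set) -> bool:
    """Evaluate a pytest-style -m expression against a set of mark names."""
    def repl(m):
        word = m.group(0)
        if word in ("and", "or", "not"):  # keep boolean operators as-is
            return word
        return str(word in marks)         # substitute membership for each mark name
    return eval(re.sub(r"[A-Za-z_][A-Za-z0-9_]*", repl, expression))

print(matches("api and not ui", {"api", "smoke"}))            # → True
print(matches("(gitlab or github) and code_kb", {"github"}))  # → False
```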
📋 Available Marks by Category:
Before running custom marks, view all available marks:
codemie-test-harness marks # List view
codemie-test-harness marks --verbose # Detailed view with file locations
🏗️ Architecture
- `api` - API integration tests
- `ui` - UI tests with Playwright
- `mcp` - Model Context Protocol tests
- `plugin` - Plugin functionality tests
💨 Speed
- `smoke` - Quick smoke tests
- `sanity` - Sanity checks (fastest, no AWS required)
🔐 License
- `enterprise` - Enterprise features
- `opensource` - Non-enterprise features (implied by absence of `enterprise`)
🔗 Integrations
- `gitlab`, `github`, `git` - Version control systems
- `jira`, `jira_cloud` - JIRA integrations
- `confluence`, `confluence_cloud` - Confluence integrations
- `ado` - Azure DevOps
- `servicenow` - ServiceNow
📚 Knowledge Bases
- `jira_kb` - JIRA knowledge base tests
- `confluence_kb` - Confluence knowledge base tests
- `code_kb` - Code knowledge base tests
🤖 Features
- `assistant` - Assistant functionality
- `workflow` - Workflow execution
- `llm` - LLM model tests
- `datasource` - Datasource management
- `conversations` - Conversation API
⚠️ Special
- `not_for_parallel_run` - Sequential execution required
Interactive Features
Configuration Management
List Settings
- View all configured settings
- Sensitive values are masked by default
- See total count of configured values
Set Specific Value
- Set any configuration key manually
- Autocomplete suggestions for common keys
- Secure password input for sensitive values
Get Specific Value
- View a single configuration value
- Shows masked value for sensitive keys
Unset Specific Value
- Remove specific configuration keys
- Confirmation prompt before removal
Assistant Chat
Navigate to: Chat with Assistant
Features:
- Start new conversations or continue existing ones
- Stream responses in real-time
- Langfuse tracing support
- Interactive message input
Usage:
- Enter assistant ID
- Optional: Enter conversation ID to continue previous chat
- Optional: Enable streaming or Langfuse tracing
- Type your message
- View assistant response
- Continue conversation
Workflow Execution
Navigate to: Execute Workflow
Features:
- Execute workflows by ID
- Provide user input
- Custom execution IDs
- View execution results
Usage:
- Enter workflow ID
- Optional: Provide user input for the workflow
- Optional: Specify custom execution ID
- Execute and view results
Configuration File Reference
All configuration is stored in: ~/.codemie/test-harness.json
Priority Order (highest to lowest):
1. CLI flags (temporary, for a single run)
2. Environment variables (from `.env` file)
3. Configuration file (`~/.codemie/test-harness.json`)
4. AWS Parameter Store (if AWS credentials configured)
5. Default values (built-in)
Viewing the file:
cat ~/.codemie/test-harness.json | jq
Manual editing (advanced):
# Backup first
cp ~/.codemie/test-harness.json ~/.codemie/test-harness.json.backup
# Edit with your preferred editor
nano ~/.codemie/test-harness.json
Resetting configuration:
rm ~/.codemie/test-harness.json
codemie-test-harness # Start fresh
Complete Example: First Test Run
Here's a complete walkthrough for first-time users:
1. Install
pip install codemie-test-harness
2. Launch interactive mode
codemie-test-harness
3. Configure (first time only)
Select: ⚙️ Configuration
Select: Setup (Quick Configuration)
Choose: Preview environment
Accept defaults for Auth Server, Client ID, Realm
Enter: Your Client Secret
Choose: Use existing AWS profile (or enter credentials)
Confirm configuration
4. Run tests
Select: 🚀 Run Tests
Select: Run Test Suite
Choose: smoke (Quick smoke tests)
Workers: 8 (press Enter for default)
Reruns: 2 (press Enter for default)
Confirm: Yes
Wait: 8-12 minutes
Result: See test results!
5. Explore results
- Check `~/.codemie/test-harness.json` for saved configuration
- Test results are displayed in the terminal
- If ReportPortal is configured, view results there
Part 2: Running with pytest
For contributors working from the repository or users preferring direct pytest commands.
Installation for Contributors
1. Clone the repository:
git clone <repository-url>
cd test-harness
2. Install with Poetry:
poetry install
3. Install Playwright browsers (for UI tests):
playwright install
Configuration with .env File
Create a .env file in the codemie_test_harness directory.
Configuration for Localhost
Minimal localhost setup:
CODEMIE_API_DOMAIN=http://localhost:8080
TEST_USER_FULL_NAME=dev-codemie-user
# AWS credentials (see requirements below)
# Option 1: Use AWS profile
AWS_PROFILE=my-profile-name
# Option 2: Direct credentials
AWS_ACCESS_KEY=<your_aws_access_key>
AWS_SECRET_KEY=<your_aws_secret_key>
AWS Credentials Requirements:
- REQUIRED for the smoke, api, ui, opensource, and enterprise suites, which test integrations (GitLab, JIRA, Confluence, etc.)
- EXCEPTION: the sanity suite does not require AWS; it only tests assistants/workflows/datasources, without integrations
- Enables automatic loading of integration credentials from Parameter Store
- Without AWS: you can manually configure integrations in `.env`
Configuration for Preview/Production
Full remote setup:
# API Configuration
CODEMIE_API_DOMAIN=https://codemie-preview.lab.epam.com/code-assistant-api
# Authentication
AUTH_SERVER_URL=https://auth.codemie.lab.epam.com/
AUTH_CLIENT_ID=codemie-preview-sdk
AUTH_REALM_NAME=codemie-prod
AUTH_USERNAME=<username>
AUTH_PASSWORD=<password>
# AWS credentials (see requirements below)
AWS_PROFILE=codemie-preview
# OR
# AWS_ACCESS_KEY=<key>
# AWS_SECRET_KEY=<secret>
# Optional: Test configuration
DEFAULT_TIMEOUT=60
CLEANUP_DATA=True
# Optional: UI Testing
FRONTEND_URL=https://codemie-preview.lab.epam.com
HEADLESS=True
AWS Credentials Requirements:
- REQUIRED for the smoke, api, ui, opensource, and enterprise suites, which test integrations (GitLab, JIRA, Confluence, etc.)
- EXCEPTION: the sanity suite does not require AWS; it only tests assistants/workflows/datasources, without integrations
- Enables automatic loading of integration credentials from Parameter Store
Integration Configuration
You can manually configure integrations in .env or use AWS Parameter Store.
Version Control:
GIT_ENV=gitlab # or github
# GitLab
GITLAB_URL=https://gitlab.example.com
GITLAB_TOKEN=<token>
GITLAB_PROJECT=https://gitlab.example.com/group/project
GITLAB_PROJECT_ID=12345
# GitHub
GITHUB_URL=https://github.com
GITHUB_TOKEN=<token>
GITHUB_PROJECT=https://github.com/org/repo
Project Management:
# JIRA Server
JIRA_URL=https://jira.example.com
JIRA_TOKEN=<token>
JIRA_JQL=project = 'PROJECT' and status = 'Open'
# JIRA Cloud
JIRA_CLOUD_URL=https://company.atlassian.net
JIRA_CLOUD_EMAIL=user@company.com
JIRA_CLOUD_TOKEN=<api_token>
# Confluence
CONFLUENCE_URL=https://confluence.example.com
CONFLUENCE_TOKEN=<token>
CONFLUENCE_CQL=space = 'SPACE' and type = page
Credential Priority:
1. Environment variables in `.env` (highest)
2. AWS Parameter Store
3. Default values
Running Tests with pytest
Test Suites with pytest
Run the same test suites using pytest directly:
Smoke Tests (Local Development)
# All smoke tests (API + UI)
pytest -n 8 -m "smoke" --reruns 2
# API smoke tests only (fast backend validation)
pytest -n 8 -m "smoke and api and not ui" --reruns 2
# UI smoke tests only (critical user paths)
pytest -n 4 -m "smoke and ui" --reruns 2
Sanity Tests (CI/CD)
pytest -n 8 -m "sanity" --reruns 2
Full API Regression
# Parallel-safe tests
pytest -n 10 -m "api" --reruns 2
# Sequential tests (run separately)
pytest -m "api and not_for_parallel_run" --reruns 2
UI Tests
pytest -n 4 -m "ui" --reruns 2
Open Source Features
pytest -n 10 -m "not enterprise and api" --reruns 2
Enterprise Features
pytest -n 10 -m "enterprise" --reruns 2
Test Suite Comparison
| Suite | CLI Command | pytest Command |
|---|---|---|
| sanity | `codemie-test-harness run sanity` | `pytest -n 8 -m "sanity" --reruns 2` |
| smoke | `codemie-test-harness run smoke` | `pytest -n 8 -m "smoke" --reruns 2` |
| api | `codemie-test-harness run api` | `pytest -n 10 -m "api" --reruns 2` |
| ui | `codemie-test-harness run ui` | `pytest -n 4 -m "ui" --reruns 2` |
| opensource | `codemie-test-harness run opensource` | `pytest -n 10 -m "not enterprise and api" --reruns 2` |
| enterprise | `codemie-test-harness run enterprise` | `pytest -n 10 -m "enterprise" --reruns 2` |
Custom Mark Selection with pytest
Single Mark:
pytest -n 8 -m "gitlab" --reruns 2
AND Operator (both marks required):
pytest -n 8 -m "api and jira" --reruns 2
pytest -n 4 -m "gitlab and code_kb" --reruns 2
OR Operator (either mark):
pytest -n 8 -m "jira or confluence" --reruns 2
pytest -n 6 -m "jira_kb or confluence_kb" --reruns 2
NOT Operator (exclude marks):
pytest -n 10 -m "api and not ui" --reruns 2
pytest -n 8 -m "not not_for_parallel_run" --reruns 2
Complex Expressions:
# Multiple conditions with parentheses
pytest -n 8 -m "(gitlab or github) and code_kb" --reruns 2
# Exclude multiple marks
pytest -n 10 -m "api and not (ui or not_for_parallel_run)" --reruns 2
# Knowledge base tests only
pytest -n 8 -m "(jira_kb or confluence_kb or code_kb)" --reruns 2
Common Testing Scenarios
Testing Specific Integrations:
# GitLab integration
pytest -n 8 -m "api and gitlab" --reruns 2
# JIRA integration
pytest -n 8 -m "api and jira" --reruns 2
# Confluence integration
pytest -n 8 -m "api and confluence" --reruns 2
# All Git providers
pytest -n 8 -m "gitlab or github or git" --reruns 2
Testing Specific Components:
# Workflows
pytest -n 8 -m "api and workflow" --reruns 2
# Assistants
pytest -n 8 -m "api and assistant" --reruns 2
# LLM models
pytest -n 8 -m "api and llm" --reruns 2
# MCP (Model Context Protocol)
pytest -n 8 -m "api and mcp" --reruns 2
# Plugins
pytest -n 8 -m "api and plugin" --reruns 2
Testing Without Full Backend:
# Exclude plugin tests (when NATS is not running)
pytest -n 8 -m "api and not plugin" --reruns 2
# Exclude MCP tests (when mcp-connect is not running)
pytest -n 8 -m "api and not mcp" --reruns 2
# Exclude both
pytest -n 8 -m "api and not (plugin or mcp)" --reruns 2
pytest Flags Explained
| Flag | Description | Example |
|---|---|---|
| `-n <number>` | Number of parallel workers (pytest-xdist) | `-n 8` |
| `-m "<expression>"` | Select tests by marks | `-m "api and not ui"` |
| `--reruns <number>` | Retry failed tests N times (pytest-rerunfailures) | `--reruns 2` |
| `--count <number>` | Run each test N times (pytest-repeat) | `--count 50` |
| `--timeout <seconds>` | Per-test timeout in seconds (pytest-timeout) | `--timeout 600` |
| `-v` | Verbose output | `-v` |
| `-s` | Show print statements | `-s` |
| `-x` | Stop on first failure | `-x` |
| `--lf` | Run last failed tests | `--lf` |
| `--reportportal` | Report results to ReportPortal | `--reportportal` |
Test Timeout Configuration
Control per-test timeout to prevent hanging tests.
In .env file:
TEST_TIMEOUT=600 # 10 minutes per test
Via pytest command:
# Set timeout for this run
pytest -n 8 -m "api" --timeout 900 --reruns 2
# Disable timeout (debugging only)
pytest -m "slow_tests" --timeout 0
Default: 300 seconds (5 minutes) per test
When a test exceeds the timeout:
- Test is terminated immediately
- Marked as FAILED with timeout message
- Stack trace shows where execution stopped
- Other tests continue normally
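Conceptually, a per-test timeout runs the test body with a deadline and converts a missed deadline into a failure. A rough, self-contained sketch of that behavior (pytest-timeout itself uses signals or a watchdog thread; this is only an illustration):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def run_with_timeout(fn, seconds):
    """Run fn and report a failure if it does not return within `seconds`."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return ("passed", future.result(timeout=seconds))
        except FutureTimeout:
            return ("failed", "timeout exceeded")

print(run_with_timeout(lambda: "ok", seconds=1.0))                    # ('passed', 'ok')
print(run_with_timeout(lambda: time.sleep(1) or "late", seconds=0.1)) # ('failed', 'timeout exceeded')
```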
UI Tests with Playwright
Install browsers (one-time):
playwright install
Run UI tests:
pytest -n 4 -m "ui" --reruns 2
Headless mode:
Set HEADLESS=True in .env or:
HEADLESS=True pytest -n 4 -m "ui" --reruns 2
ReportPortal Integration
Configure in .env:
RP_ENDPOINT=https://reportportal.example.com
RP_PROJECT=codemie_tests
RP_API_KEY=<api_key>
Run with ReportPortal:
pytest -n 10 -m "api" --reruns 2 --reportportal
Troubleshooting
Common Issues
"Command not found: codemie-test-harness"
- Run `pip install codemie-test-harness` or use `uvx codemie-test-harness`
- Check that pip's bin directory is in your PATH
"Authentication failed"
- Verify AUTH_CLIENT_SECRET is correct
- Check AUTH_SERVER_URL is accessible
- For localhost, authentication is automatically skipped
"AWS Parameter Store access denied"
- Verify AWS credentials with `aws sts get-caller-identity`
- Check that your AWS user has Parameter Store read permissions
- Required path: `/codemie/autotests/integrations/*`
"Tests hanging or timing out"
- Check DEFAULT_TIMEOUT in configuration (default: 300 seconds)
- Increase timeout: `pytest --timeout 600 -m "slow_tests"`
- For debugging, disable timeout: `pytest --timeout 0`
"Playwright browser not found"
- Run `playwright install` to download browsers
- For a specific browser: `playwright install chromium`
"Integration tests failing"
- Verify integration credentials in Configuration → Integrations Management
- Run validation: Configuration → Integrations Management → Validate Integrations
- Check if integration services (GitLab, JIRA, etc.) are accessible
Stopping tests mid-run
- Press `Ctrl+C` to gracefully stop pytest
- Running workers complete their current test before stopping
- Cleanup happens automatically even on interrupt
Configuration not persisting
- Configuration is stored in `~/.codemie/test-harness.json`
- Check file permissions: `ls -la ~/.codemie/test-harness.json`
- Use Configuration → List Settings to verify saved values
Quick Reference Card
Most Common Commands
# Interactive mode (easiest)
codemie-test-harness
# Quick test runs
codemie-test-harness run sanity # Fastest (2 min, no AWS)
codemie-test-harness run smoke # Quick (5-10 min)
codemie-test-harness run api # Full regression (30-45 min)
# Configuration
codemie-test-harness config list # View all settings
codemie-test-harness marks # List available marks
# With pytest
pytest -n 8 -m "smoke" --reruns 2 # Smoke tests
pytest -n 10 -m "api" --reruns 2 # API tests
pytest -n 4 -m "ui" --reruns 2 # UI tests
pytest -m "api and gitlab" --reruns 2 # GitLab tests
Files & Paths
~/.codemie/test-harness.json # Configuration file
~/.aws/credentials # AWS credentials
codemie_test_harness/.env # Environment variables (for pytest)
Quick Setup
# Install
pip install codemie-test-harness
# First run
codemie-test-harness # Interactive setup wizard
# Or quick config
codemie-test-harness config set CODEMIE_API_DOMAIN http://localhost:8080
codemie-test-harness config set AWS_PROFILE my-profile
Support
For issues, questions, or contributions:
- Create an issue in the repository
- Contact: Anton Yeromin (anton_yeromin@epam.com)