
Autotest for CodeMie backend and UI

This project has been archived by its maintainers; no new releases are expected.


CodeMie Python Autotests

End-to-end, integration, and UI test suite for CodeMie services. This repository exercises CodeMie APIs (LLM, assistants, workflows, tools) and common integrations.

The project is designed for high-parallel execution with pytest-xdist, resilient runs with pytest-rerunfailures, and optional reporting to ReportPortal.

Table of Contents

  • Overview
  • Prerequisites
  • Installation
  • Quick start
  • Configuration
    • Custom environment (direct env vars)
    • Predefined environments (PREVIEW/AZURE/GCP/AWS/PROD/LOCAL)
    • Local with custom GitLab/GitHub/Jira/Confluence tokens
  • Running tests
    • Common options and markers
    • UI tests (Playwright)
    • ReportPortal integration
  • Makefile targets
  • Troubleshooting

Overview

This repository contains pytest-based suites to validate CodeMie capabilities:

  • Service tests for assistants, workflows, tasks, integrations, and datasources
  • Workflow tests for direct and virtual assistant tools
  • E2E/regression packs
  • UI tests powered by Playwright

Project layout highlights:

  • tests/ — test suites, fixtures, utilities, and test data
  • pyproject.toml — dependencies (Poetry)
  • pytest.ini — pytest configuration and ReportPortal defaults
  • Makefile — helper targets for install/lint/build/publish

Prerequisites

  • Python 3.12+
  • Poetry (recommended) or pip
  • For UI tests: Playwright browsers installed
  • Access to the required CodeMie environment and credentials

Installation

Choose one of the following:

  1. With Poetry (recommended)
poetry install
  2. With pip
pip install codemie-autotests

Tip: use a virtual environment; Poetry creates one automatically per project.

Quick start

Run a small sanity pack (smoke) against a custom environment using exported variables:

export AUTH_SERVER_URL=<auth_server_url>
export AUTH_CLIENT_ID=<client_id>
export AUTH_CLIENT_SECRET=<client_secret>
export AUTH_REALM_NAME=<realm_name>
export CODEMIE_API_DOMAIN=<codemie_api_domain_url>

pytest -n 8 -m smoke --reruns 2

Or pass variables inline:

AUTH_SERVER_URL=<auth_server_url> \
AUTH_CLIENT_ID=<client_id> \
AUTH_CLIENT_SECRET=<client_secret> \
AUTH_REALM_NAME=<realm_name> \
CODEMIE_API_DOMAIN=<codemie_api_domain_url> \
pytest -n 8 -m "smoke or mcp or plugin" --reruns 2
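The AUTH_* variables describe an OAuth2 client-credentials flow against the auth server. A minimal sketch of fetching a token with the standard library (the Keycloak-style endpoint path is an assumption, not the suite's actual client code):

```python
import json
import os
from urllib import parse, request


def token_url(server, realm):
    # Keycloak-style token endpoint; the path layout is an assumption.
    return f"{server.rstrip('/')}/realms/{realm}/protocol/openid-connect/token"


def get_token():
    # Client-credentials grant using the same env vars the suite expects.
    data = parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": os.environ["AUTH_CLIENT_ID"],
        "client_secret": os.environ["AUTH_CLIENT_SECRET"],
    }).encode()
    url = token_url(os.environ["AUTH_SERVER_URL"], os.environ["AUTH_REALM_NAME"])
    with request.urlopen(request.Request(url, data=data)) as resp:
        return json.load(resp)["access_token"]
```
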

Configuration

Custom environment (direct env vars)

Provide the following environment variables for ad-hoc runs:

  • AUTH_SERVER_URL, AUTH_CLIENT_ID, AUTH_CLIENT_SECRET, AUTH_REALM_NAME
  • CODEMIE_API_DOMAIN

Predefined environments (PREVIEW, AZURE, GCP, AWS, PROD, LOCAL)

For runs targeting predefined environments, create a .env file in project root. If you provide AWS credentials, the suite will fetch additional values from AWS Systems Manager Parameter Store and recreate .env accordingly.

ENV=local

AWS_ACCESS_KEY=<aws_access_key>
AWS_SECRET_KEY=<aws_secret_key>
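How the suite might rebuild .env from AWS Systems Manager Parameter Store can be sketched as follows; the parameter path and naming convention here are assumptions, not the suite's actual implementation:

```python
def render_env(params):
    # Serialize a name -> value mapping into .env format (sorted for stable diffs).
    return "\n".join(f"{k}={v}" for k, v in sorted(params.items())) + "\n"


def fetch_params(path="/codemie/autotests/"):
    # Pull decrypted parameters from AWS SSM Parameter Store; the path is hypothetical.
    import boto3  # only required when AWS credentials are provided

    ssm = boto3.client("ssm")
    pages = ssm.get_paginator("get_parameters_by_path").paginate(
        Path=path, Recursive=True, WithDecryption=True
    )
    return {
        p["Name"].rsplit("/", 1)[-1].upper(): p["Value"]
        for page in pages
        for p in page["Parameters"]
    }
```
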

Now you can run full or subset packs. Examples:

# All tests (-n controls the number of workers)
pytest -n 10 --reruns 2

# E2E + regression only
pytest -n 10 -m "e2e or regression" --reruns 2

Note: "--reruns 2" uses pytest-rerunfailures to retry failed tests, improving resilience in flaky environments.

Local with custom GitLab, GitHub, Jira, and Confluence tokens

  1. Start from a .env populated via AWS (optional)
  2. Replace the tokens below with your personal values
  3. Important: after replacing tokens, remove AWS_ACCESS_KEY and AWS_SECRET_KEY from .env; otherwise the suite will regenerate .env from AWS and overwrite your changes
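Step 3 can be automated with a small helper that drops the AWS credential lines from .env (a convenience sketch, not part of the suite):

```python
def strip_aws_keys(env_text):
    # Remove AWS credential lines so .env is not regenerated on the next run.
    kept = [
        line for line in env_text.splitlines()
        if not line.startswith(("AWS_ACCESS_KEY", "AWS_SECRET_KEY"))
    ]
    return "\n".join(kept)
```
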

Full .env example:

ENV=local
PROJECT_NAME=codemie
GIT_ENV=gitlab # required for e2e tests only
DEFAULT_TIMEOUT=60
CLEANUP_DATA=True
LANGFUSE_TRACES_ENABLED=False

CODEMIE_API_DOMAIN=http://localhost:8080

FRONTEND_URL=https://localhost:5173/
HEADLESS=False

NATS_URL=nats://localhost:4222

TEST_USER_FULL_NAME=dev-codemie-user

GITLAB_URL=<gitlab_url>
GITLAB_TOKEN=<gitlab_token>
GITLAB_PROJECT=<gitlab_project>
GITLAB_PROJECT_ID=<gitlab_project_id>

GITHUB_URL=<github_url>
GITHUB_TOKEN=<github_token>
GITHUB_PROJECT=<github_project>

JIRA_URL=<jira_url>
JIRA_TOKEN=<jira_token>
JQL="project = 'EPMCDME' and issuetype = 'Epic' and status = 'Closed'"

CONFLUENCE_URL=<confluence_url>
CONFLUENCE_TOKEN=<confluence_token>
CQL="space = EPMCDME and type = page and title = 'AQA Backlog Estimation'"

RP_API_KEY=<report_portal_api_key>

Running tests

Common options and markers

  • Parallelism: -n (pytest-xdist). Example: -n 10 or -n auto
  • Reruns: --reruns
  • Marker selection: -m "expr"

Common markers used in this repo include:

  • smoke
  • mcp
  • plugin
  • e2e
  • regression
  • ui
  • jira_kb, confluence_kb, code_kb
  • gitlab, github, git

Examples:

# Combine markers
pytest -n 8 -m "smoke or mcp" --reruns 2

# Only mcp
pytest -n 8 -m mcp --reruns 2

# Specific integrations
pytest -n 10 -m "jira_kb or github" --reruns 2
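In test code, these markers are attached with pytest.mark decorators; a minimal sketch (the test itself is hypothetical):

```python
import pytest


@pytest.mark.smoke
@pytest.mark.mcp
def test_assistant_responds():
    # Selected by -m "smoke or mcp"; the body is illustrative only.
    assert True
```
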

UI tests (Playwright)

Install browsers once:

playwright install

Then run UI pack:

pytest -n 4 -m ui --reruns 2

Playwright docs: https://playwright.dev/python/docs/intro

ReportPortal integration

pytest.ini is preconfigured with rp_endpoint, rp_project, and a default rp_launch. To publish results:

  1. Set RP_API_KEY in .env
  2. Add the flag:
pytest -n 10 -m "e2e or regression" --reruns 2 --reportportal
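The preconfigured keys in pytest.ini look roughly like this (values are placeholders, not the project's real endpoint or launch name):

```ini
[pytest]
rp_endpoint = https://reportportal.example.com
rp_project = codemie
rp_launch = codemie-autotests
```
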

If you need access to the ReportPortal project, contact: Anton Yeromin (anton_yeromin@epam.com).

Makefile targets

  • install — poetry install
  • ruff — lint and format with Ruff
  • ruff-format — format only
  • ruff-fix — apply autofixes
  • build — poetry build
  • publish — poetry publish

Example:

make install
make ruff

Troubleshooting

  • Playwright not installed: Run playwright install.
  • Headless issues locally: Set HEADLESS=True in .env for CI or False for local debugging.
  • Env values keep reverting: Ensure AWS_ACCESS_KEY and AWS_SECRET_KEY are removed after manual edits to .env.
  • Authentication failures: Verify AUTH_* variables and CODEMIE_API_DOMAIN are correct for the target environment.
  • Slow or flaky runs: Reduce -n, increase timeouts, and/or use --reruns.
