appium-pytest-kit

Reusable Appium 2.x + pytest mobile test framework

appium-pytest-kit is a reusable Appium 2.x + pytest plugin library for Python 3.11+. Install it once, generate a .env, and start writing mobile tests with zero boilerplate.

pip install appium-pytest-kit
appium-pytest-kit-init --framework --root my-project
# optional: scaffold + install extras in one step
appium-pytest-kit-init --framework --root my-project --install-extras all

Full documentation: DOCUMENTATION.md · docs/


What it gives you

Zero-config fixtures: driver, waiter, actions, page_factory — just add one to your test function
Auto failure artifacts: screenshot + page source + device logs + session log captured automatically on failure
Artifact redaction: optional redaction for text artifacts + optional screenshot placeholder mode
3-tier device resolution: explicit settings → named profile → auto-detect via adb/xcrun
Session modes: clean (per-test) · clean-session (shared) · debug (keep alive)
Retry support: session reused across retry attempts — no restart cost between tries
Flake quality gates: scripts/check_flake_thresholds.py can fail CI when flake budgets are exceeded
Performance checks: optional perf telemetry (perf-summary.json / perf-trend.json) + soft budgets
xdist parallelism: worker-safe capability port isolation + per-worker managed Appium ports
Quarantine lane: @pytest.mark.quarantine tests can be isolated from default lanes
Fail-fast: --app-fail-fast stops the suite after retries are exhausted, not before
Explicit waits: WaitTimeoutError with structured .locator and .timeout context
High-level actions: tap, type, swipe, scroll, assertions — all wait-safe
API endpoint checks: lightweight ApiClient for backend assertions in the same pytest run
App reset primitives: clear app data, reset permissions, and reinstall-app helpers in actions
Page + flow objects: scaffold generates pages/ and flows/ with base classes ready to extend
Extension hooks: override settings, inject capabilities, run code after driver creation
CLI scaffold: one command generates a full project structure
Data-driven tests: load test cases from YAML/JSON, cross-platform parametrize helpers
Visual regression: screenshot comparison with baseline management and diff images
Soft assertions: collect multiple failures in one test — critical for form validation flows
Cloud device farms: BrowserStack, Sauce Labs, AWS Device Farm with one config switch
Locator healing: fallback chains + registry — automatic recovery when locators break
Test data factories: unique emails, usernames, passwords per run — xdist-safe, seedable
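Each feature has a dedicated section or doc below. As a taste of the factory idea, here is a minimal, hypothetical sketch of a seedable, worker-aware data factory (illustrative only — not the library's DataFactory API): mixing the xdist worker id into both the seed and the generated value keeps parallel runs collision-free while staying reproducible.

```python
import random
import string


class SimpleDataFactory:
    """Sketch: seedable, xdist-safe unique value generator (hypothetical)."""

    def __init__(self, seed: int = 0, worker_id: str = "gw0") -> None:
        # Mixing the worker id into the seed gives each xdist worker its
        # own deterministic stream, so parallel runs never collide.
        self._rng = random.Random(f"{seed}-{worker_id}")
        self._worker = worker_id
        self._counter = 0

    def unique_email(self, domain: str = "example.com") -> str:
        self._counter += 1
        suffix = "".join(self._rng.choices(string.ascii_lowercase, k=6))
        # Worker id + per-factory counter guarantee uniqueness across workers.
        return f"{self._worker}.user{self._counter}.{suffix}@{domain}"


factory = SimpleDataFactory(seed=42, worker_id="gw1")
print(factory.unique_email())  # e.g. gw1.user1.<random>@example.com
```

The same seed always reproduces the same values, which makes a flaky-test repro deterministic.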

Dependencies

All required dependencies are installed automatically with pip install appium-pytest-kit. You do not need a separate requirements.txt.

Auto-installed Version Purpose
Appium-Python-Client ≥ 4.0.0 Appium WebDriver client
pydantic-settings ≥ 2.3.0 .env and env var loading
pytest ≥ 8.2.0 Test runner integration

Optional extras (install only what you need):

pip install "appium-pytest-kit[yaml]"    # device profile YAML support + data-driven tests
pip install "appium-pytest-kit[allure]"  # Allure report attachments
pip install "appium-pytest-kit[retry]"   # pytest-retry for flaky test handling
pip install "appium-pytest-kit[xdist]"   # pytest-xdist parallel execution
pip install "appium-pytest-kit[visual]"  # visual regression screenshot comparison
pip install "appium-pytest-kit[all]"     # all optional extras
Extra Installs When you need it
[yaml] PyYAML ≥ 6.0 Named device profiles in data/devices.yaml + YAML test data files
[allure] allure-pytest ≥ 2.13.0 Screenshots + page source in Allure reports
[retry] pytest-retry ≥ 0.6.0 Retry flaky tests while reusing the same Appium session
[xdist] pytest-xdist ≥ 3.6.0 Run tests in parallel workers safely
[visual] Pillow ≥ 10.0.0 Visual regression screenshot comparison

Installation

From PyPI

pip install appium-pytest-kit

From GitHub

pip install git+https://github.com/gianlucasoare/appium-pytest-kit.git

Local clone (development)

git clone https://github.com/gianlucasoare/appium-pytest-kit.git
cd appium-pytest-kit
pip install -e ".[dev]"

Environment doctor

Validate your local setup (tools, drivers, config, Appium reachability):

appium-pytest-kit-doctor
appium-pytest-kit-doctor --env-file .env.staging
appium-pytest-kit-doctor --json

Automated PyPI release

The repository includes a GitHub Actions release workflow with Trusted Publishing:

  • workflow file: .github/workflows/release.yml
  • trigger: push a tag like v0.1.8
  • behavior: run tests, build sdist + wheel, verify tag/version match, publish to PyPI via OIDC
  • publish guard: publish job runs only for refs/tags/v*
  • optional signed-tag enforcement via REQUIRE_SIGNED_TAG=true repo variable
  • auto-generated GitHub release notes after publish

Changelog automation is also available via .github/workflows/changelog.yml (workflow_dispatch) to generate a release section in CHANGELOG.md and open a PR. CI in .github/workflows/ci.yml also validates conventional commit subjects and runs a dedicated quarantined-test lane (@pytest.mark.quarantine).


Quickstart: test an app in 5 minutes

1 — Scaffold the project

pip install appium-pytest-kit
appium-pytest-kit-init --framework --root my-project
cd my-project

2 — Edit .env with your device and app

APP_PLATFORM=android
APP_APPIUM_URL=http://127.0.0.1:4723
APP_APP_PACKAGE=com.example.myapp
APP_APP_ACTIVITY=.MainActivity
APP_DEVICE_NAME=emulator-5554
APP_PLATFORM_VERSION=14

3 — Start Appium and your emulator, then run

appium &
pytest tests/android/test_smoke.py -v

4 — Write a real test

# tests/android/test_login.py
import pytest
from appium.webdriver.common.appiumby import AppiumBy

USERNAME = (AppiumBy.ID, "com.example.app:id/username")
PASSWORD = (AppiumBy.ID, "com.example.app:id/password")
LOGIN_BTN = (AppiumBy.ACCESSIBILITY_ID, "login_button")
WELCOME = (AppiumBy.ID, "com.example.app:id/welcome_text")


@pytest.mark.integration
def test_login(actions):
    actions.type_text(USERNAME, "testuser")
    actions.type_text(PASSWORD, "secret")
    actions.tap(LOGIN_BTN)
    assert actions.text(WELCOME) == "Welcome, testuser"
Run the marked tests:

pytest -m integration -v

API endpoint tests

Use the built-in HTTP helper to validate backend endpoints from the same suite:

from appium_pytest_kit import ApiClient

def test_health_endpoint():
    api = ApiClient("http://127.0.0.1:8000")
    response = api.get("/health", expected_status=200)
    assert response.json()["ok"] is True

When you scaffold with --framework, the generated project includes:

  • api/client.py (shared get_api_client() factory)
  • tests/api/test_health.py (starter API test)
  • api_client fixture in conftest.py

See docs/api-testing.md for full usage and hybrid API + mobile patterns.


Built-in fixtures

Fixture Scope Description
settings session Resolved AppiumPytestKitSettings
device_info session Resolved device (name, UDID, version)
appium_server session Server URL, optional lifecycle management
driver function Live appium.webdriver.Remote, auto-quit
waiter function Explicit waits with WaitTimeoutError
actions function High-level UI helpers
page_factory function Factory for page objects: page_factory(LoginPage)

Page objects with page_factory

# pages/login_page.py
from appium.webdriver.common.appiumby import AppiumBy
from appium_pytest_kit import Locator
from pages.base_page import BasePage

class LoginPage(BasePage):
    _USERNAME: Locator = (AppiumBy.ID, "com.example.app:id/username")
    _LOGIN_BTN: Locator = (AppiumBy.ACCESSIBILITY_ID, "login_button")

    def log_in(self, username: str, password: str) -> None:
        self._actions.type_text(self._USERNAME, username)
        self._actions.tap(self._LOGIN_BTN)

    def is_loaded(self) -> bool:
        return self._actions.is_displayed(self._USERNAME)
# tests/test_login.py
def test_login_success(page_factory):
    login = page_factory(LoginPage)
    login.wait_until_loaded()
    login.log_in("testuser", "secret")
    # ...

See docs/page-objects.md for the full guide.


Session modes

APP_SESSION_MODE=clean          # fresh driver per test (default)
APP_SESSION_MODE=clean-session  # one shared driver for the whole run (faster)
APP_SESSION_MODE=debug          # shared + no restart on failure (local debugging)

Retry support

Install the extra, then use @pytest.mark.flaky(...) and/or the --retries CLI flag:

pip install "appium-pytest-kit[retry]"
# Retry this test up to 2 extra times (3 total attempts)
@pytest.mark.flaky(retries=2)
def test_flaky_animation(actions):
    actions.tap(START_BTN)
    actions.assert_displayed(RESULT_SCREEN)
# Retry every failed test up to 2 extra times, stop if something is truly broken
pytest --retries 2 --retry-delay 1 --app-fail-fast

How it works: during retries the same Appium session is reused — no restart between attempts. Once the test passes or all retries are exhausted, the session is quit and the next test starts fresh.

See docs/cli-reference.md for the full retry flag reference.


Parallel with xdist

Install xdist support:

pip install "appium-pytest-kit[xdist]"

Run tests in parallel workers:

pytest -n 4

When running with xdist:

  • Android workers get default systemPort values (8200 + worker_index) unless explicitly set.
  • iOS workers get default wdaLocalPort (8100 + worker_index) and webkitDebugProxyPort (27753 + worker_index) unless explicitly set.
  • With APP_MANAGE_APPIUM_SERVER=true, each worker starts Appium on APP_APPIUM_PORT + worker_index.
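The offset rules above are plain arithmetic on the worker index. A small illustrative sketch (the real defaults live inside the plugin):

```python
def default_ports(worker_index: int, appium_base: int = 4723) -> dict[str, int]:
    """Sketch of the per-worker port offsets described above."""
    return {
        "systemPort": 8200 + worker_index,             # Android UiAutomator2
        "wdaLocalPort": 8100 + worker_index,           # iOS WebDriverAgent
        "webkitDebugProxyPort": 27753 + worker_index,  # iOS webview debugging
        "appiumPort": appium_base + worker_index,      # when APP_MANAGE_APPIUM_SERVER=true
    }


print(default_ports(2))
# {'systemPort': 8202, 'wdaLocalPort': 8102, 'webkitDebugProxyPort': 27755, 'appiumPort': 4725}
```

Because each worker gets a disjoint port set, two emulators can run UiAutomator2 sessions side by side without port clashes.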

Device resolution (3-tier)

  1. Explicit: APP_DEVICE_NAME / APP_UDID in .env or CLI
  2. Profile: APP_DEVICE_PROFILE=pixel7 from data/devices.yaml
  3. Auto-detect: adb devices (Android) or xcrun simctl / xctrace (iOS)

pytest --app-device-profile pixel7
pytest --app-udid emulator-5554
pytest   # auto-detect if nothing set
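The fall-through order can be sketched as a simple chain (hypothetical helper names — the real resolver lives inside the plugin):

```python
from typing import Callable, Optional


def resolve_device(
    explicit: Optional[str],
    profile: Optional[str],
    profiles: dict[str, str],
    autodetect: Callable[[], Optional[str]],
) -> str:
    """Sketch of 3-tier resolution: explicit -> named profile -> auto-detect."""
    if explicit:                          # tier 1: APP_DEVICE_NAME / APP_UDID
        return explicit
    if profile and profile in profiles:   # tier 2: profile from data/devices.yaml
        return profiles[profile]
    detected = autodetect()               # tier 3: adb devices / xcrun simctl
    if detected:
        return detected
    raise RuntimeError("No device found: set APP_DEVICE_NAME or APP_DEVICE_PROFILE")


profiles = {"pixel7": "Pixel 7"}
print(resolve_device(None, "pixel7", profiles, lambda: None))  # Pixel 7
```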

Failure diagnostics

On test failure the framework automatically captures:

  • Screenshot → artifacts/screenshots/<test_id>.png
  • Page source → artifacts/pagesource/<test_id>.xml
  • Device logs → artifacts/device_logs/<test_id>.log
  • Session log → artifacts/session_logs/<test_id>.log (from the available Appium log types)
  • Video (if configured) → artifacts/videos/<test_id>.mp4

APP_VIDEO_POLICY=failed   # record and save only on failure
APP_VIDEO_POLICY=always   # record every test

Allure attachments are added automatically when allure-pytest is installed.


Configuration

Settings load from .env → env vars → CLI flags (highest wins).

pytest --app-platform ios
pytest --app-device-name "Pixel 7" --app-platform-version 14
pytest --app-appium-url http://192.168.1.10:4723
pytest --app-session-mode clean-session
pytest --app-device-profile pixel7
pytest --app-video-policy failed
pytest --app-override APP_EXPLICIT_WAIT_TIMEOUT=15
pytest --app-capabilities-json '{"autoGrantPermissions": true}'
pytest --app-strict-config
pytest --app-manage-appium-server
pytest --app-reporting-enabled

# Retry support (requires appium-pytest-kit[retry])
pytest --retries 2 --retry-delay 1          # retry all tests up to 2 extra times
pytest --retries 2 --app-fail-fast          # stop suite after retries are exhausted
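The ".env → env vars → CLI flags (highest wins)" layering can be sketched as successive dict merges, later layers overriding earlier ones (illustrative only — the library builds on pydantic-settings, not this helper):

```python
def merge_settings(defaults: dict, dotenv: dict, env: dict, cli: dict) -> dict:
    """Later layers override earlier ones: defaults < .env < env vars < CLI."""
    merged = dict(defaults)
    for layer in (dotenv, env, cli):
        # Only set keys override; a flag left unset never masks a lower layer.
        merged.update({k: v for k, v in layer.items() if v is not None})
    return merged


settings = merge_settings(
    defaults={"platform": "android", "appium_url": "http://127.0.0.1:4723"},
    dotenv={"platform": "ios"},
    env={},
    cli={"appium_url": "http://192.168.1.10:4723"},
)
print(settings)  # platform comes from .env, appium_url from the CLI flag
```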

See docs/configuration.md for all settings.


Extension hooks

# conftest.py

def pytest_appium_pytest_kit_capabilities(capabilities, settings):
    """Add extra capabilities before each driver session."""
    if settings.platform == "android":
        return {"autoGrantPermissions": True, "language": "en"}

def pytest_appium_pytest_kit_configure_settings(settings):
    """Replace settings at session start."""
    return settings.model_copy(update={"explicit_wait_timeout": 20.0})

def pytest_appium_pytest_kit_driver_created(driver, settings):
    """Run setup immediately after each driver is created."""
    driver.orientation = "PORTRAIT"

Expanded waits

waiter.for_clickable(locator)
waiter.for_invisibility(locator)
waiter.for_text_contains(locator, "partial text")
waiter.for_text_equals(locator, "exact text")
waiter.for_all_visible([loc1, loc2, loc3])   # single timeout for the whole group
waiter.for_all_gone([loc1, loc2])
waiter.for_any_visible([loc1, loc2])
waiter.for_context_contains("WEBVIEW")
waiter.for_android_activity("MainActivity")
waiter.for_android_toast("Saved")
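Under the hood, an explicit wait is a poll loop that raises with context on timeout. A minimal sketch of the idea (hypothetical names — the library's Waiter wraps real WebDriver waits, and its WaitTimeoutError similarly carries .locator and .timeout):

```python
import time
from typing import Callable


class WaitTimeout(Exception):
    """Timeout error carrying structured locator and timeout context."""

    def __init__(self, locator: tuple[str, str], timeout: float) -> None:
        super().__init__(f"Timed out after {timeout}s waiting for {locator}")
        self.locator = locator
        self.timeout = timeout


def wait_until(condition: Callable[[], bool], locator: tuple[str, str],
               timeout: float = 10.0, poll: float = 0.5) -> None:
    """Poll the condition until it holds or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(poll)
    raise WaitTimeout(locator, timeout)
```

The structured attributes let a failure hook log exactly which locator timed out, instead of parsing the message string.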

Expanded actions

# Tap
actions.tap_if_present(locator)
actions.tap_if_present_first_available([l1, l2])
actions.click_by_attribute_value((By.ID, "toggle"), "checked", "true")
actions.tap_by_coordinates(x, y)
actions.double_tap(locator)
actions.long_press(locator, duration_seconds=2)

# Text
actions.type_if_present(locator, "text")
actions.type_text_slowly(locator, "otp", delay_per_char=0.15)
actions.clear(locator)

# Visibility assertions
actions.is_displayed(locator)
actions.assert_displayed(locator)
actions.is_not_displayed(locator)
actions.assert_not_displayed(locator)
actions.assert_displayed_first_available([l1, l2])
actions.assert_not_displayed_first_available([l1, l2])

# Text assertions
actions.assert_text(locator, "exact text")
actions.assert_text_contains(locator, "partial")
actions.assert_text_not_empty(locator)

# Attribute assertion
actions.assert_attribute(locator, "checked", "true")

# Enabled/disabled state
actions.is_enabled(locator)
actions.assert_enabled(locator)
actions.assert_not_enabled(locator)

# Checked/selected state (checkboxes, toggles)
actions.is_checked(locator)
actions.assert_checked(locator)
actions.assert_not_checked(locator)

# Element count
actions.count(locator)           # → int
actions.assert_count(locator, 3)

# Scroll
actions.scroll_down()
actions.scroll_to_element(locator)

# Keyboard
actions.hide_keyboard()
actions.press_keycode(66)  # ENTER

# App lifecycle
actions.activate_app("com.example.myapp")
actions.terminate_app("com.example.myapp")
actions.background_app(2)
actions.clear_app_data("com.example.myapp")        # Android only
actions.reset_app_permissions()                     # Android only
actions.reinstall_app(app_path="/tmp/build.apk")
actions.open_deep_link("myapp://profile", app_id="com.example.myapp")

# Hybrid
actions.switch_to_webview()
actions.get_webview_context_name()
actions.switch_to_frame("iframe.checkout")
actions.switch_to_default_frame()
actions.switch_to_native()

Data-driven tests

Load test cases from YAML or JSON files and run them as parametrized tests:

# data/login_cases.yaml
- name: valid login
  username: user@example.com
  password: Test1234!
  expected: home

- name: invalid password
  username: user@example.com
  password: wrong
  expected: error
from appium_pytest_kit import from_file

@from_file("data/login_cases.yaml")
def test_login(case, login_page):
    login_page.login(case["username"], case["password"])
    assert login_page.current_screen() == case["expected"]

Run the same test on both platforms:

from appium_pytest_kit import cross_platform

@cross_platform()
def test_login_works(platform, login_page):
    login_page.login("user", "pass")
    assert login_page.is_on_home()

See docs/data-driven-testing.md for platform-filtered data, nested sections, and JSON support.


Visual regression

Compare screenshots against baselines to detect unintended UI changes:

from appium_pytest_kit import assert_screenshot_match

def test_home_screen_looks_correct(driver, request):
    assert_screenshot_match(
        driver,
        test_id=request.node.nodeid,
        baselines_dir="baselines",
        artifacts_dir="artifacts",
        platform="android",
        threshold=0.01,  # 1% pixel tolerance
    )

On the first run, the helper saves a baseline image. On subsequent runs, it compares the current screenshot against that baseline and raises VisualRegressionError if the diff exceeds the threshold. Diff images highlight changed regions in red.

pip install "appium-pytest-kit[visual]"
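The threshold check boils down to a differing-pixel ratio. A pure-Python sketch of that comparison (illustrative only — the library uses Pillow via the [visual] extra; this assumes two same-sized grids of pixel values):

```python
def diff_ratio(baseline: list[list[int]], current: list[list[int]]) -> float:
    """Fraction of pixels that differ between two same-sized pixel grids."""
    total = sum(len(row) for row in baseline)
    changed = sum(
        1
        for row_a, row_b in zip(baseline, current)
        for px_a, px_b in zip(row_a, row_b)
        if px_a != px_b
    )
    return changed / total


def screenshots_match(baseline, current, threshold: float = 0.01) -> bool:
    """True when the changed-pixel ratio stays within the tolerance."""
    return diff_ratio(baseline, current) <= threshold


base = [[0, 0, 0, 0], [0, 0, 0, 0]]
new = [[0, 0, 0, 0], [0, 0, 0, 255]]   # 1 of 8 pixels changed -> 12.5%
print(screenshots_match(base, new, threshold=0.01))  # False: exceeds 1% tolerance
```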

See docs/visual-regression.md for baseline management, threshold tuning, and update workflows.


Public API

from appium_pytest_kit import (
    AppiumPytestKitSettings,
    AppiumPytestKitError,
    ConfigurationError, DeviceResolutionError, LaunchValidationError,
    WaitTimeoutError, ActionError, DriverCreationError, ApiRequestError,
    VisualRegressionError, SoftAssertionError,
    DeviceInfo, DriverConfig, MobileActions, Waiter,
    ApiClient, ApiResponse,
    BaselineManager, ScreenshotDiff,
    CloudConfig, build_cloud_config, apply_cloud_config,
    SoftAssert, AssertionFailure, soft_assertions,
    LocatorChain, HealingResult, HealingRegistry, chain,
    DataFactory,
    Locator,           # type alias: tuple[str, str]
    build_driver_config, create_driver, load_settings, apply_cli_overrides,
    load_test_data, from_file, cross_platform,
    compare_screenshots, assert_screenshot_match,
)
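The soft-assertion idea behind soft_assertions — collect failures, raise once at the end — fits in a few lines. A hypothetical sketch, not the library's implementation:

```python
from contextlib import contextmanager


class SoftCheck:
    """Records failed checks instead of raising immediately."""

    def __init__(self) -> None:
        self.failures: list[str] = []

    def check(self, condition: bool, message: str) -> None:
        if not condition:
            self.failures.append(message)  # record, don't raise yet


@contextmanager
def soft_checks():
    """Yield a collector; raise one combined AssertionError on exit."""
    soft = SoftCheck()
    yield soft
    if soft.failures:
        raise AssertionError(
            "Soft assertion failures:\n" + "\n".join(soft.failures)
        )
```

A form-validation test can then report every failing field in a single run instead of stopping at the first broken one.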

Fixture lifecycle

flowchart TD
    A["pytest start"] --> B["load defaults + .env + env vars"]
    B --> C["apply --app-* CLI overrides"]
    C --> D["settings fixture (session)"]
    D --> E{"APP_MANAGE_APPIUM_SERVER"}
    E -->|"true"| F["start local Appium server"]
    E -->|"false"| G["use APP_APPIUM_URL"]
    F --> H["appium_server fixture (session)"]
    G --> H
    H --> I{"session_mode"}
    I -->|"clean-session / debug"| J["_driver_shared (session)"]
    I -->|"clean"| K["driver per test"]
    J --> K
    K --> L["waiter / actions / page_factory"]
    K --> M["test runs"]
    M --> N{"failed?"}
    N -->|"yes"| O["capture screenshot + page source"]
    N --> P["stop video (per policy)"]
    O --> P
    P --> Q["driver.quit() (clean mode)"]
    Q --> R["report summary flush"]
    R --> S["server stop (if managed)"]

Debug logs

appium-pytest-kit logs every action, wait, and session lifecycle event using Python's standard logging module. Enable them with a single pytest flag:

pytest --log-cli-level=INFO    # session lifecycle + artifacts
pytest --log-cli-level=DEBUG   # full trace (every tap, wait, scroll)

Or persist in pyproject.toml:

[tool.pytest.ini_options]
log_cli       = true
log_cli_level = "INFO"

See docs/troubleshooting.md for a full table of log messages.


Local development

pip install -e ".[dev]"
python -m ruff check .
python -m pytest -q
python -m pytest --collect-only examples/basic/tests -q

Documentation

Topic File
Installation + dependencies docs/installation.md
Project structure + scaffold docs/project-structure.md
Configuration (all settings) docs/configuration.md
CLI reference (all flags) docs/cli-reference.md
Built-in fixtures docs/fixtures.md
Page objects guide docs/page-objects.md
conftest.py guide docs/conftest-guide.md
API testing tutorial (step by step) docs/api-testing.md
Waits reference docs/waits.md
Actions reference docs/actions.md
Data-driven testing docs/data-driven-testing.md
Visual regression docs/visual-regression.md
Soft assertions docs/soft-assertions.md
Cloud providers docs/cloud-providers.md
Locator healing docs/locator-healing.md
Test data factories docs/test-data-factories.md
Session modes docs/session-modes.md
Device resolution docs/device-resolution.md
Failure diagnostics + video docs/diagnostics.md
Performance checks docs/performance.md
Error reference docs/errors.md
Troubleshooting docs/troubleshooting.md

Project details

Latest release: 0.2.0 (sdist + wheel on PyPI, published via Trusted Publishing from release.yml on gianlucasoare/appium-pytest-kit).