
End-to-end test framework for OpenVoiceOS skills



OvoScope

End-to-end testing for OVOS skills. OvoScope runs a full OVOS Core pipeline in-process using a FakeBus — no server, no audio stack, no network. Load real skill plugins, emit a test utterance, and assert on every bus message that comes back: type, data, routing context, session state, and message ordering.

Like a microscope for your OVOS skills.
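The in-process idea can be pictured with a toy bus. This is an illustrative sketch only, not ovoscope's actual FakeBus: handlers run synchronously in the same process, and every emitted message is captured so a test can assert on the full sequence afterwards.

```python
# Illustrative sketch only: a toy in-process message bus, NOT ovoscope's
# actual FakeBus. Handlers run synchronously (no server, no sockets), and
# every emitted message is captured for later ordered assertions.
from collections import defaultdict


class ToyBus:
    def __init__(self):
        self.handlers = defaultdict(list)  # msg_type -> list of callbacks
        self.captured = []                 # every message emitted, in order

    def on(self, msg_type, handler):
        self.handlers[msg_type].append(handler)

    def emit(self, msg_type, data=None):
        message = {"type": msg_type, "data": data or {}}
        self.captured.append(message)
        for handler in self.handlers[msg_type]:
            handler(message)               # synchronous dispatch, in-process


bus = ToyBus()
# A stand-in "skill" that answers an utterance with a speak message:
bus.on("recognizer_loop:utterance",
       lambda m: bus.emit("speak", {"utterance": "Hello world"}))
bus.emit("recognizer_loop:utterance", {"utterances": ["hello world"]})
# bus.captured now holds both messages, in emission order
```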


Features

  • Full pipeline: runs real intent pipeline plugins (Adapt, Padatious, Fallback, Converse, Common Query)
  • Isolated: config isolation strips user preferences; the deterministic DEFAULT_TEST_PIPELINE excludes AI/persona/OCP stages
  • Ordered assertions: assert message type, data keys, routing context, and session state in sequence
  • Recording mode: capture a live message sequence and save it as a JSON fixture — no manual construction needed
  • Multi-turn: pass a list of utterances to test full conversational flows
  • pytest fixture: the class-scoped minicroft fixture is auto-discovered via the pytest11 entry point
  • Inject skills: extra_skills={id: SkillClass} loads inline test skills without a PyPI entry point
  • Inject messages: MiniCroft.inject_message() triggers non-utterance handlers (GUI events, timers, API calls)
  • Typed models: the optional ovoscope[pydantic] extra bridges to ovos-pydantic-models for schema-validated messages

Installation

pip install ovoscope

With optional typed message model support:

pip install ovoscope[pydantic]

Quick Start

import unittest
from ovos_bus_client.message import Message
from ovos_bus_client.session import Session
from ovoscope import End2EndTest
SKILL_ID = "ovos-skill-hello-world.openvoiceos"
session = Session("test-session")
utterance = Message(
    "recognizer_loop:utterance",
    {"utterances": ["hello world"], "lang": "en-US"},
    {"session": session.serialize(), "source": "A", "destination": "B"},
)
class TestHelloWorld(unittest.TestCase):
    def test_intent_match(self):
        End2EndTest(
            skill_ids=[SKILL_ID],
            source_message=utterance,
            expected_messages=[
                utterance,
                Message(f"{SKILL_ID}.activate", context={"skill_id": SKILL_ID}),
                Message(f"{SKILL_ID}:HelloWorldIntent",
                        data={"utterance": "hello world"}, context={"skill_id": SKILL_ID}),
                Message("mycroft.skill.handler.start", context={"skill_id": SKILL_ID}),
                Message("speak", data={"lang": "en-US"}, context={"skill_id": SKILL_ID}),
                Message("mycroft.skill.handler.complete", context={"skill_id": SKILL_ID}),
                Message("ovos.utterance.handled", context={"skill_id": SKILL_ID}),
            ],
        ).execute(timeout=10)

Only keys you specify in expected.data and expected.context are checked — extra keys in the received message are ignored.
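That subset-matching rule can be sketched in plain Python. This is an illustration of the rule as stated above, not ovoscope's internal matcher:

```python
# Illustrative sketch of the subset-matching rule, not ovoscope's internals:
# an expected payload matches when every key it specifies is present in the
# received payload with an equal value; extra received keys are ignored.
def payload_matches(expected: dict, received: dict) -> bool:
    return all(k in received and received[k] == v for k, v in expected.items())


received = {"utterance": "hello world", "lang": "en-US", "confidence": 0.95}
assert payload_matches({"lang": "en-US"}, received)      # specified key matches
assert payload_matches({}, received)                     # nothing specified: passes
assert not payload_matches({"lang": "en-GB"}, received)  # specified key differs
```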

Recording Mode

Don't know the exact message sequence yet? Record it from a live run:

from ovoscope import End2EndTest
test = End2EndTest.from_message(
    message=utterance,
    skill_ids=[SKILL_ID],
    timeout=20,
)
test.save("tests/fixtures/hello_world.json")  # anonymises location data by default

Replay in CI:

End2EndTest.from_path("tests/fixtures/hello_world.json").execute(timeout=10)
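The anonymisation step can be pictured as a recursive scrub of location fields before the fixture is written. This is a sketch of the idea only — the real save() behaviour and the exact key names it scrubs may differ; the "location" key and placeholder below are assumptions:

```python
# Sketch only: one way a recorded fixture could scrub location data before
# being committed to the repo. The real save() behaviour may differ; the
# "location" key name and placeholder value are assumptions for illustration.
def scrub_location(obj, placeholder="REDACTED"):
    if isinstance(obj, dict):
        return {k: placeholder if k == "location" else scrub_location(v, placeholder)
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [scrub_location(v, placeholder) for v in obj]
    return obj


fixture = {"context": {"session": {"location": {"city": "Berlin"}, "lang": "en-US"}}}
scrubbed = scrub_location(fixture)
# Location payload is replaced; everything else survives untouched
```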

pytest Fixture

The minicroft class-scoped fixture is auto-registered when ovoscope is installed. No setUp/tearDown boilerplate needed:

class TestMySkill:
    skill_ids = ["my-skill.author"]
    def test_something(self, minicroft):
        End2EndTest(
            minicroft=minicroft,
            skill_ids=self.skill_ids,
            source_message=utterance,
            expected_messages=[...],
        ).execute(timeout=10)
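The auto-registration relies on pytest's standard pytest11 entry-point mechanism: any installed distribution that declares one is loaded as a pytest plugin, fixtures included. In packaging metadata that looks roughly like the fragment below (the module path is an assumption for illustration, not ovoscope's actual layout):

```toml
# pyproject.toml (sketch): a pytest11 entry point makes pytest load the
# plugin, and its fixtures, automatically once the package is installed.
# "ovoscope.pytest_plugin" is an assumed module path for illustration.
[project.entry-points.pytest11]
ovoscope = "ovoscope.pytest_plugin"
```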

Pipeline Control

OvoScope exposes composable pipeline stage lists so tests are deterministic regardless of which AI plugins are installed on the host:

from ovoscope import (ADAPT_PIPELINE, DEFAULT_TEST_PIPELINE, FALLBACK_PIPELINE,
                      PADATIOUS_PIPELINE, PERSONA_PIPELINE, get_minicroft)
# Adapt only — fastest
mc = get_minicroft([SKILL_ID], default_pipeline=ADAPT_PIPELINE)
# Full intent chain
mc = get_minicroft([SKILL_ID],
                   default_pipeline=ADAPT_PIPELINE + PADATIOUS_PIPELINE + FALLBACK_PIPELINE)
# Opt in to persona for AI testing
mc = get_minicroft([SKILL_ID], default_pipeline=DEFAULT_TEST_PIPELINE + PERSONA_PIPELINE)

DEFAULT_TEST_PIPELINE (the default when isolate_config=True) includes all standard built-in stages and deliberately excludes persona, Ollama, OCP, and m2v plugins.
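Since the pipeline constants are composed with plain + above, the mechanics reduce to list concatenation. A toy illustration — the stage names here are placeholders, not the actual values of ovoscope's pipeline constants:

```python
# Toy illustration of composable pipeline stage lists. The stage names are
# placeholders, NOT the real values of ovoscope's pipeline constants.
ADAPT = ["adapt_high", "adapt_medium", "adapt_low"]
FALLBACK = ["fallback_high", "fallback_medium", "fallback_low"]
PERSONA = ["persona_high"]

deterministic = ADAPT + FALLBACK   # AI stages excluded by construction
with_ai = deterministic + PERSONA  # explicitly opting a persona stage back in

# Composition preserves stage order and never sneaks in unlisted stages,
# which is what makes test runs deterministic across host installs.
```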

Documentation

  • docs/usage-guide.md: start here — 8 test patterns with full worked examples
  • docs/ci-integration.md: wiring ovoscope into GitHub Actions
  • docs/minicroft.md: MiniCroft and get_minicroft() reference
  • docs/capture-session.md: CaptureSession internals
  • docs/end2end-test.md: End2EndTest full parameter reference
  • docs/pydantic-integration.md: typed message models with ovos-pydantic-models
  • FAQ.md: common questions and gotchas

License

Apache 2.0

Contributing

PRs are welcome! See CONTRIBUTING.md for guidelines.

AI Disclosure

Parts of this project are developed with the assistance of AI tools. In the interest of transparency, two files are maintained as a public record of AI involvement:

  • FAQ.md — Frequently asked questions that emerged from real development sessions, including design rationale, gotchas, and usage patterns. Many entries were authored or refined with AI assistance during the process of building and testing this framework.
  • MAINTENANCE_REPORT.md — A chronological log of changes made to this repository. Each entry records what was changed, why, which AI model was involved, what actions it took, and what human oversight was applied. This log is updated after every significant AI-assisted session.

These files are intentionally published so that contributors and users can understand how the project evolves and where AI assistance has been applied.

