OvoScope

End-to-end testing for OVOS skills. OvoScope runs a full OVOS Core pipeline in-process using a FakeBus — no server, no audio stack, no network. Load real skill plugins, emit a test utterance, and assert on every bus message that comes back: type, data, routing context, session state, and message ordering.

Like a microscope for your OVOS skills.


Features

  • Full pipeline: runs real intent pipeline plugins (Adapt, Padatious, Fallback, Converse, Common Query)
  • Isolated: config isolation strips user preferences; the deterministic DEFAULT_TEST_PIPELINE excludes AI/persona/OCP stages
  • Ordered assertions: assert message type, data keys, routing context, and session state in sequence
  • Recording mode: capture a live message sequence and save it as a JSON fixture, with no manual construction needed
  • Multi-turn: pass a list of utterances to test full conversational flows
  • pytest fixture: class-scoped minicroft fixture, auto-discovered via the pytest11 entry point
  • Inject skills: extra_skills={id: SkillClass} loads inline test skills without a PyPI entry point
  • Inject messages: MiniCroft.inject_message() triggers non-utterance handlers (GUI events, timers, API calls)
  • Typed models: optional ovoscope[pydantic] bridge to ovos-pydantic-models for schema-validated messages

Installation

pip install ovoscope

With optional typed message model support:

pip install ovoscope[pydantic]

Quick Start

import unittest
from ovos_bus_client.message import Message
from ovos_bus_client.session import Session
from ovoscope import End2EndTest
SKILL_ID = "ovos-skill-hello-world.openvoiceos"
session = Session("test-session")
utterance = Message(
    "recognizer_loop:utterance",
    {"utterances": ["hello world"], "lang": "en-US"},
    {"session": session.serialize(), "source": "A", "destination": "B"},
)
class TestHelloWorld(unittest.TestCase):
    def test_intent_match(self):
        End2EndTest(
            skill_ids=[SKILL_ID],
            source_message=utterance,
            expected_messages=[
                utterance,
                Message(f"{SKILL_ID}.activate", context={"skill_id": SKILL_ID}),
                Message(f"{SKILL_ID}:HelloWorldIntent",
                        data={"utterance": "hello world"}, context={"skill_id": SKILL_ID}),
                Message("mycroft.skill.handler.start", context={"skill_id": SKILL_ID}),
                Message("speak", data={"lang": "en-US"}, context={"skill_id": SKILL_ID}),
                Message("mycroft.skill.handler.complete", context={"skill_id": SKILL_ID}),
                Message("ovos.utterance.handled", context={"skill_id": SKILL_ID}),
            ],
        ).execute(timeout=10)

Only keys you specify in expected.data and expected.context are checked — extra keys in the received message are ignored.
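This partial-match rule can be sketched in plain Python. The snippet below is illustrative only, not OvoScope's actual implementation, which also compares routing context and session state:

```python
# Simplified sketch of the partial-match rule: an expected message matches a
# received one if the types agree and every expected data key is present with
# the same value. Extra keys on the received side are ignored.

def is_subset(expected: dict, received: dict) -> bool:
    """True if every key in `expected` appears in `received` with the same value."""
    return all(received.get(k) == v for k, v in expected.items())

def matches(expected_type: str, expected_data: dict,
            received_type: str, received_data: dict) -> bool:
    return expected_type == received_type and is_subset(expected_data, received_data)

# Extra keys in the received message do not cause a failure:
assert matches("speak", {"lang": "en-US"},
               "speak", {"lang": "en-US", "utterance": "Hello world"})

# A mismatching value does:
assert not matches("speak", {"lang": "en-US"},
                   "speak", {"lang": "de-DE", "utterance": "Hallo Welt"})
```

This is why the Quick Start example can assert on speak with only a lang key: the actual spoken utterance is free to vary.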

Recording Mode

Don't know the exact message sequence yet? Record it from a live run:

from ovoscope import End2EndTest
test = End2EndTest.from_message(
    message=utterance,
    skill_ids=[SKILL_ID],
    timeout=20,
)
test.save("tests/fixtures/hello_world.json")  # anonymises location data by default

Replay in CI:

End2EndTest.from_path("tests/fixtures/hello_world.json").execute(timeout=10)
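A fixture is plain JSON you can inspect and hand-edit before committing. The key names below are hypothetical (check a file produced by save() for the real schema), but the core idea is that each entry is a serialized bus message with its type, data, and context:

```json
{
  "source_message": {
    "type": "recognizer_loop:utterance",
    "data": {"utterances": ["hello world"], "lang": "en-US"},
    "context": {"session": {"...": "..."}}
  },
  "expected_messages": [
    {
      "type": "speak",
      "data": {"lang": "en-US"},
      "context": {"skill_id": "ovos-skill-hello-world.openvoiceos"}
    }
  ]
}
```

Because replay uses the same partial-match rule, you can prune volatile keys (timestamps, full utterance text) from a recorded fixture to make it less brittle.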

pytest Fixture

The minicroft class-scoped fixture is auto-registered when ovoscope is installed. No setUp/tearDown boilerplate needed:

class TestMySkill:
    skill_ids = ["my-skill.author"]
    def test_something(self, minicroft):
        End2EndTest(
            minicroft=minicroft,
            skill_ids=self.skill_ids,
            source_message=utterance,
            expected_messages=[...],
        ).execute(timeout=10)

Pipeline Control

OvoScope exposes composable pipeline stage lists so tests are deterministic regardless of which AI plugins are installed on the host:

from ovoscope import (ADAPT_PIPELINE, PADATIOUS_PIPELINE, FALLBACK_PIPELINE,
                      PERSONA_PIPELINE, DEFAULT_TEST_PIPELINE, get_minicroft)
# Adapt only — fastest
mc = get_minicroft([SKILL_ID], default_pipeline=ADAPT_PIPELINE)
# Full intent chain
mc = get_minicroft([SKILL_ID],
                   default_pipeline=ADAPT_PIPELINE + PADATIOUS_PIPELINE + FALLBACK_PIPELINE)
# Opt in to persona for AI testing
mc = get_minicroft([SKILL_ID], default_pipeline=DEFAULT_TEST_PIPELINE + PERSONA_PIPELINE)

DEFAULT_TEST_PIPELINE (the default when isolate_config=True) includes all standard built-in stages and deliberately excludes persona, Ollama, OCP, and m2v plugins.
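Since the pipeline constants are ordinary Python lists of stage identifiers, composition is plain list concatenation and order is significant: earlier stages get first chance at the utterance. A minimal sketch, with hypothetical stage names standing in for the real constants:

```python
# Hypothetical stage identifiers; the real lists ship with ovoscope.
ADAPT_PIPELINE = ["adapt_high", "adapt_medium", "adapt_low"]
FALLBACK_PIPELINE = ["fallback_high", "fallback_medium", "fallback_low"]

# Concatenation order is match order: every Adapt stage is tried
# before any fallback stage sees the utterance.
pipeline = ADAPT_PIPELINE + FALLBACK_PIPELINE
assert pipeline.index("adapt_low") < pipeline.index("fallback_high")
```

The same mechanism is what makes opting in to extra stages a one-liner, e.g. DEFAULT_TEST_PIPELINE + PERSONA_PIPELINE above.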

Documentation

  • docs/usage-guide.md: start here; 8 test patterns with full worked examples
  • docs/ci-integration.md: wiring ovoscope into GitHub Actions
  • docs/minicroft.md: MiniCroft and get_minicroft() reference
  • docs/capture-session.md: CaptureSession internals
  • docs/end2end-test.md: End2EndTest full parameter reference
  • docs/pydantic-integration.md: typed message models with ovos-pydantic-models
  • FAQ.md: common questions and gotchas

License

Apache 2.0

Contributing

PRs are welcome! See CONTRIBUTING.md for guidelines.

AI Disclosure

Parts of this project are developed with the assistance of AI tools. In the interest of transparency, two files are maintained as a public record of AI involvement:

  • FAQ.md — Frequently asked questions that emerged from real development sessions, including design rationale, gotchas, and usage patterns. Many entries were authored or refined with AI assistance during the process of building and testing this framework.
  • MAINTENANCE_REPORT.md — A chronological log of changes made to this repository. Each entry records what was changed, why, which AI model was involved, what actions it took, and what human oversight was applied. This log is updated after every significant AI-assisted session.

These files are intentionally published so that contributors and users can understand how the project evolves and where AI assistance has been applied.
