
pytest-skill-engineering


Test-Driven Skill Engineering for GitHub Copilot

Test MCP servers, CLI tools, Agent Skills, and custom agents using the real GitHub Copilot coding agent. Write tests as prompts, run them against actual Copilot sessions, and get AI-powered insights on what to fix.

Why?

Your MCP server passes all unit tests. Then a user tries it in GitHub Copilot and:

  • Copilot picks the wrong tool
  • Passes garbage parameters
  • Can't recover from errors
  • Ignores your skill's instructions

Why? Because you tested the code, not the AI interface.

For LLMs, your API isn't functions and types — it's tool descriptions, Agent Skills, custom agent instructions, and schemas. These are what GitHub Copilot actually sees. Traditional tests can't validate them.

The key insight: your test is a prompt. You write what a user would say ("What's my checking balance?"), and Copilot figures out how to use your tools. If it can't, your AI interface needs work.

What This Tests

pytest-skill-engineering validates the full skill engineering stack that ships with your MCP server:

  • MCP Server Tools — Can Copilot discover and call your tools correctly?
  • Agent Skills (agentskills.io spec-compliant) — Does domain knowledge improve performance?
  • Custom Agents (.agent.md files) — Do your specialist instructions trigger proper subagent dispatch?
  • MCP Prompt Templates — Do server-side templates produce the right behavior?
  • CLI Tools — Can Copilot use command-line interfaces effectively?

Plus A/B testing, multi-turn sessions, clarification detection, and AI-powered reports that tell you exactly what to fix.

How It Works

Write tests as prompts. Run them with the real GitHub Copilot coding agent. Assert on what happened:

from pytest_skill_engineering.copilot import CopilotEval

async def test_balance_query(copilot_eval):
    # Configure a Copilot session that loads the banking-advisor skill
    agent = CopilotEval(
        skill_directories=["skills/banking-advisor"],
        max_turns=10,
    )
    # Run the prompt against a real Copilot coding-agent session
    result = await copilot_eval(agent, "What's my checking balance?")

    # Assert on what actually happened, not on implementation details
    assert result.success
    assert result.tool_was_called("get_balance")

The workflow:

  1. Write a test — a prompt that describes what a user would say
  2. Run it — GitHub Copilot tries to use your tools
  3. Fix the interface — improve tool descriptions, skills, or agent instructions until it passes
  4. AI analysis tells you what to optimize — cost, redundant calls, better prompts

If a test fails, your AI interface needs work, not your code.
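The A/B testing mentioned earlier fits the same pattern: parametrize one prompt over two skill variants and compare outcomes. A sketch — the directory names and the second variant are assumptions; only `CopilotEval` and the `copilot_eval` fixture come from the example above:

```python
import pytest
from pytest_skill_engineering.copilot import CopilotEval

# Hypothetical skill variants to compare; adjust paths to your repo layout.
SKILL_VARIANTS = ["skills/banking-advisor", "skills/banking-advisor-v2"]

@pytest.mark.parametrize("skill_dir", SKILL_VARIANTS)
async def test_balance_query_ab(copilot_eval, skill_dir):
    agent = CopilotEval(skill_directories=[skill_dir], max_turns=10)
    result = await copilot_eval(agent, "What's my checking balance?")

    # Both variants must pass; the eval leaderboard then ranks them
    # by pass rate and cost.
    assert result.success
    assert result.tool_was_called("get_balance")
```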

Agent Skills — First-Class Support

pytest-skill-engineering provides full Agent Skills spec compliance:

  • Compatibility field — Mark required tools, models, or platforms
  • Metadata — Title, description, version, attribution
  • Allowed-tools — Restrict which tools the agent can use
  • Scripts & Assets — Package Python scripts, prompts, and resources
  • Eval Bridge — Import evals from evals/evals.json, export grading results

Agent Skills are loaded natively when testing with CopilotEval — exactly as users experience them.
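For orientation, a skill directory might be laid out like this — a sketch based on the fields listed above; check exact frontmatter keys against the agentskills.io spec:

```
skills/banking-advisor/
├── SKILL.md          # instructions + YAML frontmatter
│                     #   (name, description, version, compatibility,
│                     #    allowed-tools, attribution)
├── scripts/          # packaged Python scripts
├── assets/           # prompts and other resources
└── evals/
    └── evals.json    # evals imported via the eval bridge
```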

AI-Powered Reports

AI analyzes your results and tells you what to fix: which configuration to deploy, how to improve tool descriptions, where to cut costs. See a sample report →

AI Analysis — winner recommendation, metrics, and comparative analysis

Quick Start

# Install
uv add pytest-skill-engineering

# Authenticate (one-time)
gh auth login

# Run tests
pytest tests/

Configure AI Analysis (optional but recommended)

The AI-powered report needs a model to generate insights. Configure it in pyproject.toml:

[tool.pytest.ini_options]
addopts = "--aitest-summary-model=copilot/gpt-5-mini"

You can also use Azure OpenAI or other providers if you prefer — see Configuration.

Features

  • MCP Server Testing — Test tools, prompt templates, and bundled skills with real Copilot sessions
  • Agent Skills — Full agentskills.io spec compliance (compatibility, metadata, allowed-tools, evals bridge)
  • Custom Agents — Test .agent.md files and validate subagent dispatch
  • CLI Tool Testing — Verify Copilot can use command-line interfaces
  • Plugin Testing — Load complete plugin directories (plugin.json, .github/, .claude/ layouts) with auto-discovery
  • A/B Testing — Compare instructions, skills, custom agent versions, or tool configurations
  • Eval Leaderboard — Auto-ranked by pass rate and cost
  • Multi-Turn Sessions — Test conversations that build on context
  • Clarification Detection — Catch agents that ask questions instead of acting
  • LLM Assertions — Semantic checks with llm_assert, multi-dimension scoring with llm_score, image evaluation with llm_assert_image
  • AI-Powered Reports — Actionable feedback on tool descriptions, prompts, and costs
  • Cost Tracking — Copilot premium request tracking + USD estimation via pricing.toml

Who This Is For

  • MCP server authors — Validate that GitHub Copilot can actually use your tools
  • Agent Skills authors — Test skills exactly as users experience them in Copilot
  • Custom agent builders — Validate .agent.md instructions and subagent dispatch
  • Plugin developers — Test complete GitHub Copilot CLI plugins end-to-end
  • Teams shipping Copilot integrations — Catch skill stack regressions in CI/CD

Documentation

📚 Full Documentation

Requirements

  • Python 3.11+
  • pytest 9.0+
  • GitHub Copilot subscription (required)

Acknowledgments

Inspired by agent-benchmark.

License

MIT
