
Spec & Test Generator Skill

CI | License: MIT | Python 3.10+

A tool that generates requirements and tests from PRDs, using stable IDs that persist across iterations, enabling traceable and auditable specifications.


Why This Exists

  • Requirements decay into chaos — Every PRD iteration means re-numbering, broken references, and "which REQ-0042 are we talking about?"
  • Test coverage is guesswork — Without traceability, you can't prove which tests cover which requirements
  • Manual ID management is error-prone — Engineers waste time maintaining spreadsheets of REQ/TEST mappings
  • Regeneration breaks everything — Edit a PRD and all your test case IDs shift, invalidating bug reports and test runs

What It Is

  • A PRD parser that extracts functional and non-functional requirements
  • A stable ID generator using content fingerprints (IDs survive edits)
  • A test case generator with pragmatic test pyramid strategy
  • A traceability matrix builder for REQ ↔ TEST bidirectional mapping
  • Audit-ready — generates artifacts suitable for compliance and reviews

What It Is NOT

  • Not an AI that writes your tests (it structures them, you implement them)
  • Not a test runner or execution framework
  • Not a requirements management system (it generates snapshots, not a database)
  • Not a replacement for thinking — garbage PRD in, garbage specs out

How It Works

┌──────────────────────────────────────────────────────────────────┐
│                 Spec & Test Generation Pipeline                  │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  ┌──────────┐    ┌──────────┐    ┌────────────────────────────┐  │
│  │   PRD    │───▶│  Parser  │───▶│   Extracted Requirements   │  │
│  │ Markdown │    │          │    │ (goals, FRs, NFRs, scope)  │  │
│  └──────────┘    └──────────┘    └────────────────────────────┘  │
│                                                │                 │
│                        ┌───────────────────────┘                 │
│                        ▼                                         │
│  ┌──────────┐    ┌─────────────┐    ┌─────────────────────────┐  │
│  │ .idmap   │◀──▶│ ID Manager  │───▶│  Stable IDs (REQ-xxxx)  │  │
│  │  .json   │    │(fingerprint)│    │  Persist across edits   │  │
│  └──────────┘    └─────────────┘    └─────────────────────────┘  │
│                                                │                 │
│                        ┌───────────────────────┘                 │
│                        ▼                                         │
│                ┌──────────────┐    ┌──────────────────────────┐  │
│                │  Generator   │───▶│  Test Cases (TEST-xxxx)  │  │
│                │              │    │ with preconditions/steps │  │
│                └──────────────┘    └──────────────────────────┘  │
│                                                │                 │
│                        ┌───────────────────────┘                 │
│                        ▼                                         │
│            ┌──────────────────────────────────────────────────┐  │
│            │                Output Artifacts                  │  │
│            │  • REQUIREMENTS.md  (structured requirements)    │  │
│            │  • TEST_PLAN.md     (test pyramid strategy)      │  │
│            │  • TEST_CASES.md    (detailed test cases)        │  │
│            │  • TRACEABILITY.csv (REQ ↔ TEST mapping)         │  │
│            └──────────────────────────────────────────────────┘  │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘

Quick Start

30-Second Hello World

pip install spec-test-generator

# Generate specs from a PRD
spec-test-generator prd.md

2-Minute Realistic Example

# Generate with strict policy (regulated environments)
spec-test-generator prd.md --strict --output ./specs

# Output files created:
# specs/REQUIREMENTS.md   - 12 requirements (REQ-0001 to REQ-0012)
# specs/TEST_PLAN.md      - Test strategy with pyramid breakdown
# specs/TEST_CASES.md     - 24 test cases (TEST-0001 to TEST-0024)
# specs/TRACEABILITY.csv  - Full coverage matrix
# specs/.idmap.json       - ID persistence (commit this!)

# Edit prd.md, regenerate - IDs stay stable!
spec-test-generator prd.md --strict --output ./specs
# REQ-0001 still refers to the same requirement

Python API

from spec_test_generator import SpecTestGenerator

generator = SpecTestGenerator(prd_path="prd.md")

result = generator.generate()
print(f"Requirements: {len(result['requirements'])}")
print(f"Test Cases: {len(result['test_cases'])}")

# Write artifacts to disk
artifacts = generator.write_artifacts(output_dir="./specs")

Docker

docker build -t spec-test-generator .
docker run --rm -v $(pwd):/prds spec-test-generator /prds/prd.md

Real-World Use Cases

| Use Case | Command | Outcome |
| --- | --- | --- |
| Sprint planning | spec-test-generator epic.md | Break epics into testable requirements |
| Compliance audit | spec-test-generator prd.md --strict | Generate audit-ready traceability |
| Test coverage analysis | Check TRACEABILITY.csv | Identify untested requirements |
| Change impact | Regenerate after PRD edit | See which tests need updating |
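The coverage-analysis row above needs no special tooling; a minimal sketch using only the standard library, assuming TRACEABILITY.csv has the REQ_ID and TEST_ID columns listed under Output Artifacts:

```python
import csv

def untested_requirements(path="specs/TRACEABILITY.csv"):
    """Return requirement IDs that never appear with a TEST_ID.

    Assumes the REQ_ID/TEST_ID header described in the Output
    Artifacts section; adjust column names if your matrix differs.
    """
    seen, covered = set(), set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            seen.add(row["REQ_ID"])
            if row.get("TEST_ID"):  # empty cell means no covering test
                covered.add(row["REQ_ID"])
    return sorted(seen - covered)
```

For larger projects the built-in CoverageAnalyzer (described below) gives a richer report, but a quick script like this is often enough for a CI gate.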

Comparison with Alternatives

| Tool | Focus | Stable IDs | Traceability | Regeneration |
| --- | --- | --- | --- | --- |
| spec-test-generator | PRD → Specs + Tests | Yes (fingerprint) | Full matrix | Safe |
| Jira/Linear | Issue tracking | Manual | Manual links | N/A |
| TestRail | Test management | Sequential | Manual | Breaks refs |
| AI assistants | Ad-hoc generation | No | No | Full rewrite |

Key differentiator: Fingerprint-based IDs that survive edits — your bug reports and test runs reference stable identifiers.


Stable ID System

IDs use content fingerprints to survive regeneration:

| Edit Type | ID Behavior |
| --- | --- |
| Minor wording change | Same ID retained |
| Requirement split | Original ID on closest match |
| Major rewrite | New ID allocated |
| Requirement deleted | ID retired (never reused) |

How It Works

// .idmap.json (commit this file!)
{
  "requirements": {
    "a1b2c3d4": "REQ-0001",  // fingerprint → stable ID
    "e5f6g7h8": "REQ-0002"
  },
  "tests": {
    "x9y0z1a2": "TEST-0001"
  },
  "next_req_id": 3,
  "next_test_id": 2
}
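The tool's actual fingerprinting is internal, but the mechanism implied by the table and file above can be sketched as follows. The fingerprint and stable_id helpers here are hypothetical, not the library's API, and a real implementation would also need similarity matching so IDs survive larger rewrites:

```python
import hashlib
import json
import os

def fingerprint(text):
    # Collapse whitespace and case so trivial wording edits
    # still hash to the same fingerprint.
    norm = " ".join(text.lower().split())
    return hashlib.sha256(norm.encode()).hexdigest()[:8]

def stable_id(text, idmap_path=".idmap.json"):
    # Load the persisted map if it exists, otherwise start fresh.
    idmap = {"requirements": {}, "next_req_id": 1}
    if os.path.exists(idmap_path):
        with open(idmap_path) as f:
            idmap = json.load(f)
    fp = fingerprint(text)
    reqs = idmap["requirements"]
    if fp not in reqs:
        # Unknown fingerprint: allocate the next REQ number.
        reqs[fp] = f"REQ-{idmap['next_req_id']:04d}"
        idmap["next_req_id"] += 1
        with open(idmap_path, "w") as f:
            json.dump(idmap, f, indent=2)
    return reqs[fp]
```

Because allocated IDs are only ever appended to the map, a retired fingerprint's number is never handed out again, which is what makes references in bug reports durable.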

Policies

| Policy | Use Case | Min Tests/Req | Negative Tests |
| --- | --- | --- | --- |
| default.internal.yaml | Internal/agile | 1 | Optional |
| preset.strict.yaml | Regulated/compliance | 2 | Required |

Policy Differences

| Feature | Internal | Strict |
| --- | --- | --- |
| Minimum tests per requirement | 1 | 2 |
| Negative test cases | Optional | Required |
| Edge cases per requirement | 2+ | 4+ |
| Acceptance criteria format | GWT or bullets | GWT only |
| Traceability completeness | Optional gaps | 100% coverage |

PRD Input Format

# PRD: Feature Name

## Goal
What this feature accomplishes (1-2 sentences)

## Functional Requirements
1) User can do X when Y
2) System shall Z under condition W

## Non-Functional Requirements
- Performance: p95 < 300ms
- Security: All endpoints require authentication

## Non-Goals
- Things explicitly out of scope
- Features we're not building

## Notes
- Additional context for implementers
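Extracting the numbered functional requirements from a PRD in this format is mostly section-scoped pattern matching; a rough sketch (functional_requirements is a hypothetical helper, not the package's parser):

```python
import re

def functional_requirements(prd_text):
    """Pull the numbered items out of '## Functional Requirements'.

    Hypothetical sketch: captures everything between that heading
    and the next '## ' heading (or end of file), then keeps lines
    shaped like '1) ...'.
    """
    m = re.search(
        r"## Functional Requirements\n(.*?)(?=\n## |\Z)",
        prd_text,
        re.S,
    )
    if not m:
        return []
    return [
        line.split(")", 1)[1].strip()
        for line in m.group(1).splitlines()
        if re.match(r"\s*\d+\)", line)
    ]
```

The real parser also handles the Goal, NFR, and Non-Goals sections, but the same heading-delimited approach applies to each.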

Output Artifacts

specs/
├── REQUIREMENTS.md   # REQ-0001, REQ-0002, ... with acceptance criteria
├── TEST_PLAN.md      # Test pyramid strategy (unit/integration/e2e)
├── TEST_CASES.md     # TEST-0001, TEST-0002, ... with steps
├── TRACEABILITY.csv  # REQ_ID,TEST_ID,Coverage mapping
└── .idmap.json       # ID persistence store (commit this!)

Additional Features

Gherkin/BDD Output

Generate .feature files from requirements:

from spec_test_generator import GherkinGenerator

generator = GherkinGenerator(result, output_dir)
artifacts = generator.generate()
# Creates features/authentication.feature, features/payment.feature, etc.

Import from Jira/Linear

Import requirements from existing issue trackers:

from spec_test_generator import JiraImporter, LinearImporter, IDManager

id_manager = IDManager(output_dir)

# From Jira JSON export
jira = JiraImporter(id_manager)
requirements = jira.import_from_file("jira_export.json")

# From Linear JSON export
linear = LinearImporter(id_manager)
requirements = linear.import_from_file("linear_export.json")

Test Coverage Gap Analysis

Identify requirements without adequate test coverage:

from spec_test_generator import CoverageAnalyzer

analyzer = CoverageAnalyzer(result)
report = analyzer.analyze()

print(f"Coverage: {report.coverage_percentage}%")
for gap in report.gaps:
    print(f"  {gap.req_id}: {gap.gap_type} - {gap.description}")

# Write report
analyzer.write_report(output_dir)  # Creates COVERAGE_REPORT.md

Change Impact Reports

Compare PRD versions and analyze impact:

from spec_test_generator import ImpactAnalyzer

analyzer = ImpactAnalyzer(output_dir)
report = analyzer.compare("prd_v1.md", "prd_v2.md", existing_tests)

print(f"Risk Level: {report.risk_level}")
print(f"Changes: {len(report.changes)}")
print(f"Affected Tests: {report.affected_tests}")

# Write report
analyzer.write_report("prd_v1.md", "prd_v2.md")  # Creates IMPACT_REPORT.md

Roadmap

Now ✅

  • PRD markdown parsing
  • Stable ID generation with fingerprinting
  • Requirements, test plan, test case generation
  • CSV traceability matrix
  • Gherkin/BDD output format
  • Import from Jira/Linear
  • Test coverage gap analysis
  • Change impact reports

Next

  • IDE plugins (VS Code, IntelliJ)
  • Integration with TestRail/Zephyr
  • AI-assisted test case expansion

Later

  • Real-time PRD collaboration
  • Multi-language support

Development

git clone https://github.com/akz4ol/spec-test-generator-skill.git
cd spec-test-generator-skill
pip install -e ".[dev]"

make test    # Run tests
make lint    # Run linters
make format  # Format code
make all     # All checks

Contributing

See CONTRIBUTING.md for guidelines.

Good first issues:

  • Add new output formats (e.g., XML)
  • Improve PRD parsing edge cases
  • Add test case templates


License

MIT License - see LICENSE for details.
