Karate Graph (MCP-Powered)

A Model Context Protocol (MCP) tool for AI agents to analyze Karate Framework projects, build dependency graphs, and generate interactive reports.

Why use this with AI?

  • Impact analysis with dependency paths and source lines
  • Multi-project graph exploration
  • Search APIs, workflows, pages, Java/JS usages
  • Failure hotspot and flaky-risk prioritization
  • Reusable helper discovery before writing new code

Quick Start

  1. Install
pip install karate-graph

Install from TestPyPI (pre-release validation):

pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ karate-graph==0.1.0

Verify installation:

karate-graph-mcp --help
python -c "import karate_graph_analyzer; print('ok')"

For local development:

git clone <repository-url>
cd karate-graph
pip install -r requirements-dev.txt
  2. Configure MCP client
{
  "mcpServers": {
    "karate-graph": {
      "command": "karate-graph-mcp",
      "args": [],
      "env": {
        "PYTHONPATH": "C:/path/to/repo/src",
        "PYTHONIOENCODING": "utf-8"
      }
    }
  }
}
  3. Ask your AI to:
  • Register and analyze project
  • Show impact if a component changes
  • Find reusable helpers before creating new ones

Available AI Tools

  • register_project
  • analyze_project
  • bulk_analyze
  • impact_analysis
  • search_api
  • search_workflow
  • search_test_case
  • search_java_usage
  • search_js_usage
  • search_error_pattern
  • search_reusable_function
  • change_impact_preview
  • test_selection_suggestion
  • feature_intent_index
  • variable_data_flow_trace
  • assertion_map
  • call_read_deep_context
  • ai_feature_context_pack
  • feature_behavior_map
  • scenario_similarity_map
  • feature_reuse_advisor
  • db_query_index
  • search_db_usage
  • db_data_flow_trace
  • db_assertion_map
  • db_impact_preview
  • top_hotspots
  • prioritize_fix_queue
  • flaky_risk

Reuse And Smart Retest

1) Reuse search

search_reusable_function(project_name, query, language, limit) now returns:

  • tags
  • aliases
  • usage_examples
  • stability_score

These signals help the AI choose existing helpers with greater confidence.
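As a rough illustration, a single result might look like the dict below. Only the field names (tags, aliases, usage_examples, stability_score) come from this README; the helper name, values, and any extra keys are invented for the example.

```python
# Hypothetical shape of one search_reusable_function result.
# Field names are from the README; values are illustrative only.
result = {
    "name": "retryUntilSuccess",          # assumed helper name
    "language": "js",
    "tags": ["retry", "polling"],
    "aliases": ["retryCall"],
    "usage_examples": [
        "* def ok = retryUntilSuccess(fn, 5)",
    ],
    "stability_score": 0.92,              # higher = safer to reuse
}

# An agent might filter by stability before suggesting reuse:
reusable = [r for r in [result] if r["stability_score"] >= 0.8]
print(len(reusable))
```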

2) Change impact preview

change_impact_preview(project_name, changed_paths, limit) maps changed files/components to impacted test cases via dependency paths.
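The mapping idea can be sketched as a walk over a reverse dependency graph. This is a minimal illustration of the concept, not the tool's actual implementation; the file paths and graph shape are invented.

```python
from collections import deque

# Illustrative dependency graph: edges point from a file/component
# to the things that depend on it.
dependents = {
    "utils/auth.feature": ["workflows/login.feature"],
    "workflows/login.feature": ["tests/checkout.feature", "tests/profile.feature"],
}

def impacted_tests(changed_path, graph):
    """Walk dependents breadth-first, collecting reachable test cases."""
    seen, queue, hits = set(), deque([changed_path]), []
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
                if dep.startswith("tests/"):
                    hits.append(dep)
    return hits

print(impacted_tests("utils/auth.feature", dependents))
```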

3) Test selection suggestion

test_selection_suggestion(project_name, changed_paths, limit) suggests a compact high-signal rerun set.

Current strategy:

priority = trigger_count * 10 - min_depth
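The formula rewards tests reached by many changed paths and penalizes distant ones. A minimal sketch of that ranking, with invented test names and counts:

```python
# priority = trigger_count * 10 - min_depth (the documented strategy).
# trigger_count: how many changed paths reach this test;
# min_depth: shortest dependency path from any changed file to the test.
candidates = [
    {"test": "checkout.feature", "trigger_count": 3, "min_depth": 1},
    {"test": "profile.feature",  "trigger_count": 1, "min_depth": 4},
    {"test": "login.feature",    "trigger_count": 2, "min_depth": 2},
]

for c in candidates:
    c["priority"] = c["trigger_count"] * 10 - c["min_depth"]

ranked = sorted(candidates, key=lambda c: c["priority"], reverse=True)
print([c["test"] for c in ranked])
```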

Feature Understanding For AI

These tools help AI understand Karate .feature files before editing or debugging:

  • feature_intent_index(project_name, query, limit)
    • summarizes scenario intent, step roles, API signals, data files, assertions, and call/read usage
  • variable_data_flow_trace(project_name, feature_path, scenario_tag, scenario_name, node_id, limit)
    • traces def/set variables from source expressions to usage lines
  • assertion_map(project_name, query, limit)
    • indexes status, match, and assert checks across feature files
  • call_read_deep_context(project_name, feature_path, scenario_tag, scenario_name, node_id, max_depth, limit)
    • expands nested call read(...) chains, including target feature/scenario context
  • ai_feature_context_pack(project_name, feature_path, scenario_tag, scenario_name, node_id, max_call_depth, limit)
    • returns an AI-ready pack with intent, variable flow, assertions, call/read chain, and graph context
  • feature_behavior_map(project_name, feature_path, scenario_tag, scenario_name, node_id, limit)
    • groups scenario behavior into preconditions, actions, expectations, plus data inputs and status expectations
  • scenario_similarity_map(project_name, query, limit, top_k)
    • finds similar scenarios by intent-keyword overlap to improve reuse and AI suggestion quality
  • feature_reuse_advisor(project_name, min_group_size, min_flow_length, limit, include_low_signal)
    • finds duplicate steps and repeated flows, indexes their locations, and returns AI-safe refactor plans

DB Understanding For AI

These tools help AI understand query usage, variable flow, and DB-related impact:

  • db_query_index(project_name, query, limit, include_components, link_status)
    • indexes DB query nodes and DB executor/components with operation/table/database/host/dialect/provider/risk/usage/link status
  • search_db_usage(project_name, query, limit, link_status)
    • searches DB usage by table, operation, host, file path, or query keywords
  • db_data_flow_trace(project_name, feature_path, scenario_tag, scenario_name, node_id, limit)
    • traces DB-related variables, DB call steps, and DB assertions inside selected scenarios
  • db_assertion_map(project_name, query, limit)
    • indexes DB-related assertion steps and links them to DB variables/query signatures
  • db_impact_preview(project_name, changed_entities, limit)
    • previews impacted test cases from changed DB entities (tables/schemas/hosts/DB feature paths)

Dialect and DB-type detection is included in DB outputs:

  • SQL dialects: PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, SQLite, DB2, H2, Redshift, Snowflake, ClickHouse, and generic SQL
  • NoSQL/cache/search/graph stores: MongoDB, Redis, DynamoDB, Cassandra/CQL, Elasticsearch/OpenSearch, Neo4j/Cypher
  • Returned fields include db_type, dialect, provider, dialect_confidence, dialect_signals, entity_type, and entity_name
  • Returned DB index rows also include link_status: linked, orphan, component, or demo
  • Use link_status="default" to focus on impact-first rows (linked + orphan) while keeping component/demo context searchable when needed
  • Detection uses the Strategy, Registry, and Value Object patterns, so new DB providers can be added as isolated strategies.
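The Strategy + Registry idea can be sketched as follows: each dialect is an isolated strategy function, and a registry tries them in turn. The signal patterns and names here are illustrative, not the tool's real detection rules.

```python
import re

# Registry of dialect-detection strategies, tried in registration order.
REGISTRY = []

def register(strategy):
    REGISTRY.append(strategy)
    return strategy

@register
def detect_postgresql(query):
    # Postgres-flavored signals: ::casts and RETURNING clauses (illustrative).
    if re.search(r"::\w+|RETURNING\b", query, re.IGNORECASE):
        return {"db_type": "sql", "dialect": "PostgreSQL"}

@register
def detect_mongodb(query):
    # Mongo shell call shapes (illustrative).
    if re.search(r"\bdb\.\w+\.(find|insertOne|aggregate)\b", query):
        return {"db_type": "nosql", "dialect": "MongoDB"}

def detect(query):
    """Return the first strategy hit, or a generic-SQL fallback."""
    for strategy in REGISTRY:
        result = strategy(query)
        if result:
            return result
    return {"db_type": "sql", "dialect": "generic"}

print(detect("INSERT INTO users (id) VALUES (1) RETURNING id"))
print(detect("db.users.find({ active: true })"))
```

Adding a new provider then means registering one more strategy, without touching the dispatcher.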

Visual Reports

Interactive HTML graph output is generated under each project's output/ folder.

  • Test cases are shown as @TEST-ID - Scenario name when Jira/test-case tags exist.
  • Dashboard search supports TEST-ID, @TEST-ID, scenario name, component name, and feature path.

Publishing

Build and validate locally first:

python -m pip install build twine
python -m build
python -m twine check dist/*

Publish to TestPyPI first, then to PyPI:

python -m twine upload --repository testpypi dist/*
python -m twine upload dist/*

Use an API token for upload. The package ships report assets (style.css, script.js) via package data.

Project Structure

karate-graph/
|-- src/karate_graph_analyzer/
|   |-- mcp_server.py
|   |-- mcp_interface/
|   |-- parser/
|   |-- graph/
|   `-- visualization/
|-- output/
`-- tests/

License

MIT
