

Toolscore

pytest for LLM agents - catch regressions before deployment

Test tool-calling accuracy for OpenAI, Anthropic, and Gemini



Stop shipping broken LLM agents. Toolscore automatically tests tool-calling behavior by comparing actual agent traces against expected behavior, catching regressions before they reach production. Works with OpenAI, Anthropic, Gemini, LangChain, and custom agents.

๐Ÿ“ What is Toolscore?

Toolscore evaluates LLM tool usage - it doesn't call LLM APIs directly. Think of it as a testing framework for function-calling agents:

  • ✅ Evaluates existing tool usage traces from OpenAI, Anthropic, or custom sources
  • ✅ Compares actual behavior against expected gold standards
  • ✅ Reports detailed metrics on accuracy, efficiency, and correctness
  • ❌ Does NOT call LLM APIs or execute tools (you capture traces separately)

Use Toolscore to:

  • Benchmark different LLM models on tool usage tasks
  • Validate that your agent calls the right tools with the right arguments
  • Track improvements in function calling accuracy over time
  • Compare agent performance across different prompting strategies

Features

  • Self-Explaining Metrics: Know exactly WHY your agent failed with detailed explanations, similar name detection, and actionable tips
  • Regression Testing: toolscore regression command catches performance degradation with baseline comparison
  • GitHub Action: One-click CI/CD setup with yotambraun/toolscore@v1
  • Trace vs. Spec Comparison: Load agent tool-use traces (OpenAI, Anthropic, Gemini, MCP, LangChain, or custom) and compare against gold standard specifications
  • Comprehensive Metrics Suite:
    • Tool Invocation Accuracy
    • Tool Selection Accuracy
    • Tool Correctness (were all expected tools called?)
    • Tool Call Sequence Edit Distance
    • Trajectory Accuracy (did agent take the correct reasoning path?)
    • Argument Match F1 Score
    • Parameter Schema Validation (types, ranges, patterns)
    • Redundant Call Rate
    • Side-Effect Success Rate (with content validation)
    • Cost Tracking & Estimation (token usage, pricing for OpenAI/Anthropic/Gemini)
    • Integrated LLM-as-a-judge semantic evaluation
  • Multiple Trace Adapters: Built-in support for OpenAI, Anthropic, Google Gemini, MCP (Anthropic), LangChain, and custom JSON formats
  • Production Trace Capture: Decorator to capture real agent executions and convert them to test cases
  • CLI and API: Command-line interface and Python API for programmatic use
  • Beautiful Console Output: Color-coded metrics, tables, and progress indicators with Rich
  • Rich Output Reports: Interactive HTML, JSON, CSV (Excel/Sheets), Markdown (GitHub/docs) formats
  • Pytest Integration: Seamless test integration with pytest plugin and assertion helpers
  • Interactive Tutorials: Jupyter notebooks for hands-on learning
  • Example Datasets: 5 realistic gold standards for common agent types (weather, ecommerce, code, RAG, multi-tool)
  • Enhanced Validators: Validate side-effects with content checking (file content, database rows, HTTP responses)
  • CI/CD Ready: GitHub Actions workflow template included
  • Automated Releases: Semantic versioning with conventional commits

🆚 Why Toolscore?

Feature Toolscore LangSmith OpenAI Evals Weights & Biases Manual Testing
Self-Explaining Metrics ✅ WHY it failed + tips ❌ ❌ ❌ ❌
Regression Testing ✅ Baseline comparison ⚠️ Manual ❌ ⚠️ Custom ❌
GitHub Action ✅ One-click CI ⚠️ Custom setup ❌ ⚠️ Custom ❌
Multi-Provider Support ✅ OpenAI, Anthropic, Gemini, MCP ⚠️ LangChain-focused ⚠️ OpenAI-focused ✅ Yes ❌
Trajectory Evaluation ✅ Multi-step path analysis ✅ Yes ❌ ⚠️ Custom ❌
Production Trace Capture ✅ Decorator + auto-save ✅ Yes ❌ ✅ Yes ❌
Open Source & Free ✅ Apache 2.0 ❌ Paid (limited free tier) ✅ MIT ❌ Paid ✅ Free
Pytest Integration ✅ Native plugin ⚠️ Custom ❌ ⚠️ Custom ⚠️ Manual
Comprehensive Metrics ✅ 12+ specialized metrics ⚠️ General metrics ⚠️ Basic scoring ✅ General ML metrics ❌
Content Validation ✅ File/DB content checks ❌ ❌ ❌ ❌
Schema Validation ✅ Types, ranges, patterns ❌ ❌ ❌ ❌
Tool Correctness Check ✅ Deterministic coverage ❌ ❌ ❌ ❌
LLM-as-a-Judge ✅ Built-in ✅ Yes ⚠️ External ✅ Yes ❌
Example Datasets ✅ 5 realistic templates ⚠️ Few examples ⚠️ Limited ❌ ❌
Beautiful HTML Reports ✅ Interactive ✅ Dashboard ⚠️ Basic ✅ Advanced ❌
Side-effect Validation ✅ HTTP, FS, DB ❌ ❌ ❌ ❌
Zero-Config Setup ✅ toolscore init ⚠️ Requires setup ⚠️ Requires setup ⚠️ Complex setup ✅
CI/CD Templates ✅ GitHub Actions ready ✅ Yes ⚠️ Manual ✅ Yes ❌
Local-First ✅ No cloud required ❌ Cloud-based ✅ Local ❌ Cloud-based ✅
Type Safety ✅ Fully typed ⚠️ Partial ⚠️ Partial ⚠️ Partial ❌

Perfect for: Teams that want open-source, multi-provider evaluation with pytest integration and no cloud dependencies.

🔌 Integrations

Toolscore works seamlessly with your existing stack:

Category Supported
LLM Providers OpenAI, Anthropic, Google Gemini, MCP (Model Context Protocol), Custom APIs
Frameworks LangChain, AutoGPT, CrewAI, Semantic Kernel, raw API calls
Testing Pytest (native plugin), unittest, CI/CD pipelines (GitHub Actions, GitLab CI)
Input Formats JSON, OpenAI format, Anthropic format, Gemini format, MCP (JSON-RPC 2.0), LangChain format, custom adapters
Output Formats HTML reports, JSON, CSV, Markdown, Terminal (Rich), Prometheus metrics
Development VS Code, PyCharm, Jupyter notebooks, Google Colab

Coming Soon: DataDog integration, Weights & Biases export, Slack notifications

GitHub Action

Add LLM agent evaluation to your CI in seconds:

- uses: yotambraun/toolscore@v1
  with:
    gold-file: tests/gold_standard.json
    trace-file: tests/agent_trace.json
    threshold: '0.90'

With regression testing:

- uses: yotambraun/toolscore@v1
  with:
    gold-file: tests/gold_standard.json
    trace-file: tests/agent_trace.json
    baseline-file: tests/baseline.json
    regression-threshold: '0.05'

See action.yml for all options.

Regression Testing

Catch performance degradation automatically:

# Step 1: Create a baseline from your best evaluation
toolscore eval gold.json trace.json --save-baseline baseline.json

# Step 2: Run regression checks in CI (fails if accuracy drops >5%)
toolscore regression baseline.json new_trace.json --gold-file gold.json

# With custom threshold (10% allowed regression)
toolscore regression baseline.json trace.json -g gold.json -t 0.10

Exit codes:

  • 0: PASS - No regression detected
  • 1: FAIL - Regression detected (accuracy dropped)
  • 2: ERROR - Invalid files or other errors
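Conceptually, the regression check is a threshold comparison between a saved baseline metric and the new run's metric. A minimal sketch of that logic (hypothetical helper, not Toolscore's actual internals):

```python
def regression_verdict(baseline_accuracy: float, new_accuracy: float,
                       threshold: float = 0.05) -> int:
    """Return a CLI-style exit code: 0 = PASS, 1 = FAIL (regression).

    Hypothetical helper mirroring the documented behavior: fail when
    accuracy drops by more than `threshold`.
    """
    drop = baseline_accuracy - new_accuracy
    return 1 if drop > threshold else 0

print(regression_verdict(0.95, 0.92))  # drop of 0.03 <= 0.05 -> 0 (PASS)
print(regression_verdict(0.95, 0.85))  # drop of 0.10 >  0.05 -> 1 (FAIL)
```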

👥 Who Uses Toolscore?

Toolscore is trusted by ML engineers and teams building production LLM applications:

  • Startups building agent-first products
  • Research teams benchmarking LLM capabilities
  • Enterprise teams ensuring agent reliability in production
  • Independent developers optimizing prompt engineering

"Toolscore cut our agent testing time by 80% and caught 3 critical regressions before deployment" - ML Engineer

Using Toolscore? Share your story →

📦 Installation

# Install from PyPI
pip install tool-scorer

# Or install from source
git clone https://github.com/yotambraun/toolscore.git
cd toolscore
pip install -e .

Optional Dependencies

# Install with HTTP validation support
pip install tool-scorer[http]

# Install with LLM-as-a-judge metrics (requires OpenAI API key)
pip install tool-scorer[llm]

# Install with LangChain support
pip install tool-scorer[langchain]

# Install all optional features
pip install tool-scorer[all]

Development Installation

# Install with development dependencies
pip install -e ".[dev]"

# Install with dev + docs dependencies
pip install -e ".[dev,docs]"

# Or using uv (faster)
uv pip install -e ".[dev]"

What's New in v1.4.0

Toolscore v1.4.0 introduces three high-impact features based on real user needs:

Self-Explaining Metrics

Know exactly WHY your agent failed - not just that it failed. Get detailed explanations after each evaluation.

toolscore eval gold.json trace.json --verbose

# Output:
# What Went Wrong:
#   MISSING: Expected tool 'search_web' was never called
#   MISMATCH: Position 2: Expected 'summarize' but got 'summary' (similar names detected)
#   WRONG_ARGS: Argument 'limit' expected 10, got 100
#
# Tips:
#   Use --llm-judge to catch semantic equivalence (search vs web_search)

Regression Testing (toolscore regression)

Catch performance degradation automatically in CI/CD. By some estimates, 58% of prompt+model combinations degrade across provider API updates - now you'll know immediately.

# Create baseline from your best run
toolscore eval gold.json trace.json --save-baseline baseline.json

# Run regression checks (fails if accuracy drops >5%)
toolscore regression baseline.json new_trace.json --gold-file gold.json

# Exit codes: 0=PASS, 1=FAIL (regression), 2=ERROR

GitHub Action

One-click CI setup. Add agent quality gates to any repository in 30 seconds:

- uses: yotambraun/toolscore@v1
  with:
    gold-file: tests/gold_standard.json
    trace-file: tests/agent_trace.json
    threshold: '0.90'
    fail-on-regression: 'true'

See examples/github_actions/ for complete workflow examples.


Also in Toolscore

  • Zero-Friction Onboarding: toolscore init - interactive project setup in 30 seconds
  • Synthetic Test Generator: toolscore generate - create test cases from OpenAI schemas
  • Quick Compare: toolscore compare - compare multiple models side-by-side
  • Interactive Debug Mode: --debug flag for step-by-step failure analysis
  • LLM-as-a-Judge: --llm-judge flag for semantic tool name matching
  • Schema Validation: Validate argument types, ranges, patterns
  • Example Datasets: 5 realistic gold standards (weather, ecommerce, code, RAG, multi-tool)

🚀 Quick Start

🚀 30-Second Start

The fastest way to start evaluating:

# Install
pip install tool-scorer

# Initialize project (interactive)
toolscore init

# Evaluate (included templates)
toolscore eval gold_calls.json example_trace.json

Done! You now have evaluation results with detailed metrics.

5-Minute Complete Workflow

  1. Install Toolscore:

    pip install tool-scorer
    
  2. Initialize a project (choose from 5 agent types):

    toolscore init
    # Select agent type → Get templates + examples
    
  3. Generate test cases (if you have OpenAI function schemas):

    toolscore generate --from-openai functions.json --count 20
    
  4. Run evaluation with your agent's trace:

    # Basic evaluation
    toolscore eval gold_calls.json my_trace.json --html report.html
    
    # With semantic matching (catches similar tool names)
    toolscore eval gold_calls.json my_trace.json --llm-judge
    
    # With interactive debugging
    toolscore eval gold_calls.json my_trace.json --debug
    
  5. Compare multiple models:

    toolscore compare gold.json gpt4.json claude.json \
      -n gpt-4 -n claude-3
    
  6. View results:

    • Console shows color-coded metrics
    • Open report.html for interactive analysis
    • Check toolscore.json for machine-readable results

Want to test with your own LLM? See the Complete Tutorial for step-by-step instructions on capturing traces from OpenAI/Anthropic APIs.

Command Line Usage

# ===== GETTING STARTED =====

# Initialize new project (interactive)
toolscore init

# Generate test cases from OpenAI function schemas
toolscore generate --from-openai functions.json --count 20 -o gold.json

# Validate trace file format
toolscore validate trace.json

# ===== EVALUATION =====

# Basic evaluation
toolscore eval gold_calls.json trace.json

# With HTML report
toolscore eval gold_calls.json trace.json --html report.html

# With semantic matching (LLM-as-a-judge)
toolscore eval gold_calls.json trace.json --llm-judge

# With interactive debugging
toolscore eval gold_calls.json trace.json --debug

# Verbose output (shows missing/extra tools)
toolscore eval gold_calls.json trace.json --verbose

# Specify trace format explicitly
toolscore eval gold_calls.json trace.json --format openai

# Use realistic example dataset
toolscore eval examples/datasets/ecommerce_agent.json trace.json

# ===== MULTI-MODEL COMPARISON =====

# Compare multiple models side-by-side
toolscore compare gold.json gpt4.json claude.json gemini.json

# With custom model names
toolscore compare gold.json model1.json model2.json \
  -n "GPT-4" -n "Claude-3-Opus"

# Save comparison report
toolscore compare gold.json *.json -o comparison.json

Python API

from toolscore import evaluate_trace

# Run evaluation
result = evaluate_trace(
    gold_file="gold_calls.json",
    trace_file="trace.json",
    format="auto"  # auto-detect format
)

# Access metrics
print(f"Invocation Accuracy: {result.metrics['invocation_accuracy']:.2%}")
print(f"Selection Accuracy: {result.metrics['selection_accuracy']:.2%}")

sequence = result.metrics['sequence_metrics']
print(f"Sequence Accuracy: {sequence['sequence_accuracy']:.2%}")

arguments = result.metrics['argument_metrics']
print(f"Argument F1: {arguments['f1']:.2%}")

Pytest Integration

Toolscore includes a pytest plugin for seamless test integration:

# test_my_agent.py
def test_agent_accuracy(toolscore_eval, toolscore_assertions):
    """Test that agent achieves high accuracy."""
    result = toolscore_eval("gold_calls.json", "trace.json")

    # Use built-in assertions
    toolscore_assertions.assert_invocation_accuracy(result, min_accuracy=0.9)
    toolscore_assertions.assert_selection_accuracy(result, min_accuracy=0.9)
    toolscore_assertions.assert_argument_f1(result, min_f1=0.8)

The plugin is automatically loaded when you install Toolscore. See the examples for more patterns.

Interactive Tutorials

Try Toolscore in your browser with our Jupyter notebooks - open them in Google Colab for instant experimentation.

📋 Gold Standard Format

Create a gold_calls.json file defining the expected tool calls:

[
  {
    "tool": "make_file",
    "args": {
      "filename": "poem.txt",
      "lines_of_text": ["Roses are red,", "Violets are blue."]
    },
    "side_effects": {
      "file_exists": "poem.txt"
    },
    "description": "Create a file with a poem"
  }
]

🔄 Trace Formats

Toolscore supports multiple trace formats:

OpenAI Format

[
  {
    "role": "assistant",
    "function_call": {
      "name": "get_weather",
      "arguments": "{\"location\": \"Boston\"}"
    }
  }
]

Anthropic Format

[
  {
    "role": "assistant",
    "content": [
      {
        "type": "tool_use",
        "id": "toolu_123",
        "name": "search",
        "input": {"query": "Python"}
      }
    ]
  }
]

LangChain Format

[
  {
    "tool": "search",
    "tool_input": {"query": "Python tutorials"},
    "log": "Invoking search..."
  }
]

Or modern format:

[
  {
    "name": "search",
    "args": {"query": "Python"},
    "id": "call_123"
  }
]

Custom Format

{
  "calls": [
    {
      "tool": "read_file",
      "args": {"path": "data.txt"},
      "result": "file contents"
    }
  ]
}
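Each format is handled by an adapter that normalizes provider-specific JSON into a common list of tool calls. As a hedged illustration (not Toolscore's actual adapter interface), here is roughly what normalizing the OpenAI format above amounts to:

```python
import json

def normalize_openai(messages: list[dict]) -> list[dict]:
    """Flatten OpenAI-style assistant messages into {'tool', 'args'} records.

    Illustrative only: mirrors the OpenAI trace example above, not
    Toolscore's real adapter code.
    """
    calls = []
    for msg in messages:
        fc = msg.get("function_call")
        if fc:  # OpenAI delivers arguments as a JSON string; parse it
            calls.append({"tool": fc["name"], "args": json.loads(fc["arguments"])})
    return calls

trace = [{"role": "assistant",
          "function_call": {"name": "get_weather",
                            "arguments": "{\"location\": \"Boston\"}"}}]
print(normalize_openai(trace))
```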

📊 Metrics Explained

Tool Invocation Accuracy

Measures whether the agent invoked tools when needed and refrained when not needed.

Tool Selection Accuracy

Proportion of tool calls that match expected tool names.

Tool Correctness (NEW)

Checks if all expected tools were called at least once - complements selection accuracy by measuring coverage rather than per-call matching.
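As a sketch of the idea (illustrative, not Toolscore's implementation), coverage can be computed as:

```python
def tool_correctness(expected: list[str], actual: list[str]) -> float:
    """Fraction of expected tools called at least once (coverage).

    Illustrative computation of the metric described above.
    """
    expected_set = set(expected)
    if not expected_set:
        return 1.0  # nothing was required
    covered = expected_set & set(actual)
    return len(covered) / len(expected_set)

# 'summarize' was expected but never called -> 1 of 2 tools covered
print(tool_correctness(["search", "summarize"], ["search", "search"]))  # 0.5
```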

Sequence Edit Distance

Levenshtein distance between expected and actual tool call sequences.
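For concreteness, here is the standard dynamic-programming Levenshtein computation over tool-name sequences (illustrative; Toolscore's exact normalization may differ):

```python
def edit_distance(expected: list[str], actual: list[str]) -> int:
    """Levenshtein distance between two tool-name sequences.

    Minimum number of insertions, deletions, and substitutions needed
    to turn `actual` into `expected`. Textbook DP, shown for clarity.
    """
    m, n = len(expected), len(actual)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if expected[i - 1] == actual[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

# One substitution: 'summary' was called instead of 'summarize'
print(edit_distance(["search", "summarize"], ["search", "summary"]))  # 1
```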

Argument Match F1

Precision and recall of argument correctness across all tool calls.
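A minimal single-call sketch of the idea (illustrative; Toolscore aggregates across all calls):

```python
def argument_f1(expected: dict, actual: dict) -> float:
    """F1 over the (key, value) pairs of one call's arguments.

    Illustrative single-call version of the metric described above.
    """
    exp, act = set(expected.items()), set(actual.items())
    if not exp and not act:
        return 1.0
    tp = len(exp & act)
    precision = tp / len(act) if act else 0.0
    recall = tp / len(exp) if exp else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 'query' matches, 'limit' does not -> precision = recall = F1 = 0.5
print(argument_f1({"query": "test", "limit": 10},
                  {"query": "test", "limit": 100}))  # 0.5
```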

Schema Validation (NEW)

Validates argument types, numeric ranges, string patterns, enums, and required fields. Define schemas in your gold standard:

{
  "tool": "search",
  "args": {"query": "test", "limit": 10},
  "metadata": {
    "schema": {
      "query": {"type": "string", "minLength": 1},
      "limit": {"type": "integer", "minimum": 1, "maximum": 100}
    }
  }
}

Redundant Call Rate

Percentage of unnecessary or duplicate tool calls.
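A simple way to picture this metric (illustrative; the real metric may also flag unnecessary calls that are not literal duplicates):

```python
def redundant_rate(calls: list[tuple]) -> float:
    """Share of calls that exactly repeat an earlier (tool, args) pair.

    Illustrative duplicate-only version of the metric described above.
    """
    seen, redundant = set(), 0
    for call in calls:
        if call in seen:
            redundant += 1
        seen.add(call)
    return redundant / len(calls) if calls else 0.0

calls = [("search", "python"), ("search", "python"), ("open", "doc1")]
print(redundant_rate(calls))  # one duplicate out of three calls
```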

Side-Effect Success Rate

Proportion of validated side-effects (HTTP, filesystem, database) that succeeded.

LLM-as-a-judge Semantic Evaluation (Integrated)

Now built into core evaluation! Use --llm-judge flag to evaluate semantic equivalence beyond exact string matching. Perfect for catching cases where tool names differ but intentions match (e.g., search_web vs web_search).

# CLI usage - easiest way
toolscore eval gold.json trace.json --llm-judge

# Python API
result = evaluate_trace("gold.json", "trace.json", use_llm_judge=True)
print(f"Semantic Score: {result.metrics['semantic_metrics']['semantic_score']:.2%}")

๐Ÿ—‚๏ธ Project Structure

toolscore/
├── adapters/          # Trace format adapters
│   ├── openai.py
│   ├── anthropic.py
│   └── custom.py
├── metrics/           # Metric calculators
│   ├── accuracy.py
│   ├── sequence.py
│   ├── arguments.py
│   └── ...
├── validators/        # Side-effect validators
│   ├── http.py
│   ├── filesystem.py
│   └── database.py
├── reports/           # Report generators
├── cli.py             # CLI interface
└── core.py            # Core evaluation logic

Development

# Install dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run tests with coverage
pytest --cov=toolscore

# Type checking
mypy toolscore

# Linting and formatting
ruff check toolscore
ruff format toolscore

🎯 Real-World Use Cases

1. Model Evaluation & Selection

Compare GPT-4 vs Claude vs Gemini on your specific tool-calling tasks:

models = ["gpt-4", "claude-3-5-sonnet", "gemini-pro"]
results = {}

for model in models:
    # capture_trace is a placeholder for your own helper: run the model on
    # the task and save its tool-call trace to a JSON file.
    trace = capture_trace(model, task="customer_support")
    result = evaluate_trace("gold_standard.json", trace)
    results[model] = result.metrics['selection_accuracy']

best_model = max(results, key=results.get)
print(f"Best model: {best_model} ({results[best_model]:.1%} accuracy)")

2. CI/CD Integration

Catch regressions in agent behavior before deployment:

# test_agent_quality.py
def test_agent_meets_sla(toolscore_eval, toolscore_assertions):
    """Ensure agent meets 95% accuracy SLA."""
    result = toolscore_eval("gold_standard.json", "production_trace.json")
    toolscore_assertions.assert_selection_accuracy(result, min_accuracy=0.95)
    toolscore_assertions.assert_redundancy_rate(result, max_rate=0.1)

3. Prompt Engineering Optimization

A/B test different prompts and measure impact:

prompts = ["prompt_v1.txt", "prompt_v2.txt", "prompt_v3.txt"]

for prompt_file in prompts:
    # run_agent_with_prompt is a placeholder for your own harness that runs
    # the agent with the given prompt and saves its trace.
    trace = run_agent_with_prompt(prompt_file)
    result = evaluate_trace("gold_standard.json", trace)

    print(f"{prompt_file}:")
    print(f"  Selection: {result.metrics['selection_accuracy']:.1%}")
    print(f"  Arguments: {result.metrics['argument_metrics']['f1']:.1%}")
    print(f"  Efficiency: {result.metrics['efficiency_metrics']['redundant_rate']:.1%}")

4. Production Monitoring

Track agent performance over time in production:

# Run daily; collect_production_traces, send_alert, and log_to_datadog are
# placeholders for your own monitoring hooks.
today_traces = collect_production_traces(date=today)
result = evaluate_trace("gold_standard.json", today_traces)

# Alert if degradation
if result.metrics['selection_accuracy'] < 0.90:
    send_alert("Agent performance degraded!")

# Log metrics to dashboard
log_to_datadog({
    "accuracy": result.metrics['selection_accuracy'],
    "redundancy": result.metrics['efficiency_metrics']['redundant_rate'],
})

📚 Documentation

What's New

v1.4.0 (Latest - January 2026)

Self-Explaining Metrics:

  • Know exactly WHY your agent failed with detailed explanations
  • Automatic detection of tool name mismatches and similar names
  • Actionable tips like "use --llm-judge to catch semantic equivalence"
  • Per-metric breakdowns showing missing, extra, and mismatched items

Regression Testing:

  • New toolscore regression command for CI/CD integration
  • Save baselines with --save-baseline flag
  • Automatic PASS/FAIL with configurable thresholds
  • Detailed delta reports showing improvements and regressions

GitHub Action:

  • Official action on GitHub Marketplace
  • One-click CI setup for any repository
  • Supports both threshold and regression testing modes
  • Automatic report artifacts and job summaries

v1.1.0 (October 2025)

Major Product Improvements:

  • Integrated LLM-as-a-Judge with --llm-judge flag
  • Tool Correctness Metric for complete tool coverage
  • Parameter Schema Validation for types, ranges, patterns
  • Example Datasets: 5 realistic gold standards
  • Enhanced Console Output with Rich tables

v1.0.x

  • LLM-as-a-judge metrics: Semantic correctness evaluation using OpenAI API
  • LangChain adapter: Support for LangChain agent traces (legacy and modern formats)
  • Beautiful console output: Color-coded metrics with Rich library
  • Pytest plugin: Seamless test integration with fixtures and assertions
  • Interactive tutorials: Jupyter notebooks for hands-on learning
  • Comprehensive documentation: Sphinx docs on ReadTheDocs
  • Test coverage: Increased to 80%+ with 123 passing tests
  • Automated releases: Semantic versioning with conventional commits
  • Enhanced PyPI presence: 16 searchable keywords, Beta status, comprehensive classifiers

See CHANGELOG.md for full release history.

๐Ÿค Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

📄 License

Apache License 2.0 - see LICENSE for details.

📖 Citation

If you use Toolscore in your research, please cite:

@software{toolscore,
  title = {Toolscore: LLM Tool Usage Evaluation Package},
  author = {Yotam Braun},
  year = {2025},
  url = {https://github.com/yotambraun/toolscore}
}
