
Python package manifest validation (pyproject.toml, setup.py, requirements.txt) with policy enforcement and SBOM generation

Project description


ManifestGuard for Python

  • Version: tag-based (setuptools_scm; see Git tags)
  • Author: Nejat Philip Eryigit
  • Website: https://www.ready-4-it.com
  • Repository: https://github.com/timejunky/r4it_manifest_guard_py


🎯 What is ManifestGuard?

ManifestGuard is a Python package manifest validation tool that ensures consistency between your pyproject.toml, setup.py, requirements.txt, and source code. It provides policy enforcement, dependency analysis, and SBOM generation capabilities.

Key Features

  • Manifest Validation: Check pyproject.toml, setup.py, requirements.txt for errors
  • Entry Point Consistency: Validate that scripts point to existing modules
  • Dependency Analysis: Detect conflicts and missing dependencies
  • Policy Enforcement: Whitelist/Blacklist packages, enforce version constraints
  • SBOM Generation: Generate Software Bill of Materials (SPDX, CycloneDX)
  • CI/CD Integration: GitHub Actions, pre-commit hooks, CLI tool

🚀 Quick Start

ManifestGuard offers two integration paths:

Option A: Direct CLI (Recommended for Development)

Best for: Local development, IDE integration, granular control

# Install ManifestGuard user-wide (requires Python 3.12)
# Windows (Python Launcher — user-wide, no venv):
py -3.12 -m pip install manifestguard

# Linux / macOS:
python3.12 -m pip install manifestguard

# Generate configuration
manifestguard init-config --merge

# Run checks
# Windows:
py -3.12 -m manifestguard check --extended
# Linux / macOS:
python3.12 -m manifestguard check --extended

# Export metrics (for dashboards)
manifestguard export-metrics --output metrics.json

Windows note: Always use py -3.12 -m pip install (never bare pip install) to ensure ManifestGuard ends up in the user-wide Python 3.12 installation. Internal subprocesses started by ManifestGuard reuse that same interpreter by default. The VS Code extension (MGVS) resolves the interpreter in this order:

  1. manifestguard.python.interpreterPath VS Code setting (explicit override)
  2. Active interpreter from the VS Code Python extension
  3. python.defaultInterpreterPath VS Code setting
  4. py -3.12 (Windows Launcher, user-wide) → resolved to the concrete python.exe
  5. py launcher fallback
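The resolution order above is a first-match scan over the candidate sources. A minimal sketch (the lambdas below stand in for MGVS's real lookups, which are not shown here):

```python
from typing import Callable, Optional

def resolve_interpreter(sources: list[Callable[[], Optional[str]]]) -> Optional[str]:
    """Return the first interpreter path any source yields, else None."""
    for source in sources:
        path = source()
        if path:
            return path
    return None

# Illustrative order matching the documented chain; the values are made up.
sources = [
    lambda: None,                     # 1. manifestguard.python.interpreterPath (unset here)
    lambda: r"C:\Py312\python.exe",   # 2. active interpreter from the Python extension
    lambda: None,                     # 3. python.defaultInterpreterPath (unset here)
    lambda: r"C:\Py312\python.exe",   # 4./5. py launcher resolution
]
print(resolve_interpreter(sources))  # the first non-empty source wins
```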

Offline installation is also supported with pip if you already have the wheel and dependency files locally. See the bundled package guide manifestguard/doc/INSTALLATION.md for the offline workflow.

When to use:

  • ✅ ManifestGuard already installed
  • ✅ IDE integration (VS Code, PyCharm)
  • ✅ Programmatic usage (Python API)
  • ✅ Custom workflows

Option B: Bootstrap Script (Recommended for CI/CD)

Best for: Fresh CI/CD environments, first-time setup, zero-config integration

# 1. Download the bootstrap script
curl -O https://raw.githubusercontent.com/timejunky/r4it_manifest_guard_py/main/examples/run_manifestguard.py

# 2. Run (auto-installs and configures)
python run_manifestguard.py --report report.json

Windows note:

  • Don’t run .py files by typing the path directly if your .py file association is misconfigured (it can accidentally invoke Node.js).
  • Prefer explicit Python invocation: py -3 run_manifestguard.py or python run_manifestguard.py.

CI single-shot (recommended):

python run_manifestguard.py --ci --report .manifestguard/manifestguard-report.json

Optional local fix-loop (bounded retries):

python run_manifestguard.py --fix-loop --max-iterations 5 --fix-command "python -m ruff check --fix" --report .manifestguard/manifestguard-report.json

Customer CI templates (draft, copy/paste + adapt):

  • See ci/integration-examples/ (GitHub Actions + Azure Pipelines).
  • These are templates for downstream repos; this repo itself does not rely on them.

Recommended .gitignore entries:

.manifestguard/
coverage.xml
coverage.json
.coverage
.venv/
venv/
env/
venvs/

What it does:

  • ✅ Auto-installs ManifestGuard from PyPI
  • ✅ Creates manifestguard.json if missing
  • ✅ Runs tests + checks in one command
  • ✅ Writes unified JSON report

Note: In restricted environments you can also pre-stage wheels locally and install with python -m pip install --no-index --find-links <dir> <wheel-or-package>.

When to use:

  • ✅ GitHub Actions / GitLab CI / Azure Pipelines
  • ✅ Fresh environments (no ManifestGuard installed)
  • ✅ First-time project setup
  • ✅ Want one command for everything

Integration Comparison

Feature Direct CLI Bootstrap Script
Installation Manual (pip install) ✅ Automatic
Config creation Manual (init-config) ✅ Automatic
Setup steps 3-4 commands 1 command
Test + Check Separate commands Combined
MGVS Integration ✅ Yes ❌ No
CI/CD one-liner ❌ No ✅ Yes
Best for Development CI/CD

Note: The VS Code Extension (MGVS) uses Direct CLI (assumes ManifestGuard is installed).


Basic Usage


🔐 Licensing (Offline, OS Independent)

ManifestGuard's runtime license verification is designed to work offline. Activation material (token or activation JSON) can be delivered out-of-band (e.g. email/support), and the client verifies and persists it locally without contacting an activation service.

# Print the stable device hash used for device-bound activations
manifestguard license device-hash

# Activate from a signed offline token (R4IT.<payload>.<signature>)
manifestguard license activate "R4IT...."

# Or activate from a JSON file containing an activation object
manifestguard license activate path/to/activation.json

# Inspect local state
manifestguard license status

Useful follow-up commands:

# Print the localized activation how-to
manifestguard license guide

# Inspect local state as JSON for scripts or CI
manifestguard license status --json
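For intuition, offline verification of a `R4IT.<payload>.<signature>` token can be sketched as below. HMAC-SHA256 is a stand-in here; ManifestGuard's actual signature scheme and payload layout are not documented on this page.

```python
# Illustrative offline-token check; not ManifestGuard's real implementation.
import base64
import hashlib
import hmac
import json

def verify_token(token: str, key: bytes) -> dict:
    """Verify a R4IT.<payload>.<signature> token fully offline."""
    prefix, payload_b64, sig_b64 = token.split(".")
    if prefix != "R4IT":
        raise ValueError("unexpected token prefix")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        raise ValueError("signature mismatch")
    return json.loads(payload)

# Build a demo token and check it round-trips.
key = b"demo-key"
payload = json.dumps({"tier": "pro"}).encode()
sig = hmac.new(key, payload, hashlib.sha256).digest()
token = ("R4IT." + base64.urlsafe_b64encode(payload).decode()
         + "." + base64.urlsafe_b64encode(sig).decode())
print(verify_token(token, key))
```

The point is that the client needs only the token and a verification key; no activation service is contacted.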

Optional: Radon complexity engine (fallback)

ManifestGuard’s default complexity engine is the built-in AST-based analyzer (mgpy-analyzer). If you want an external fallback engine, you can opt into Radon:

pip install "manifestguard[radon]"

Then select it in manifestguard.json:

{
    "checks": {
        "complexity": {
            "enabled": true,
            "threshold": 10,
            "includeTests": false,
            "engine": "radon"
        }
    }
}

Note: the selected engine is written into .manifestguard-history.json so trends stay interpretable when switching engines.
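The default engine is described as AST-based; a toy version of cyclomatic complexity counting shows the idea (the real mgpy-analyzer may weight nodes differently):

```python
# Toy AST-based cyclomatic complexity; illustrative only.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp, ast.comprehension)

def complexity(source: str) -> dict[str, int]:
    """Map each function name to 1 + number of branch points."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            scores[node.name] = score
    return scores

sample = """
def f(x):
    if x > 0:
        return 1
    for i in range(x):
        if i % 2:
            return i
    return 0
"""
print(complexity(sample))  # f has three branch points, so score 4
```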

That's it! manifestguard.json controls all tests and thresholds.

Additional Commands

# Validate specific file
python -m manifestguard check pyproject.toml

# Force CLI language for this run
python -m manifestguard -l de check

# Persist a preferred CLI language (per-user settings)
python -m manifestguard set-language de
python -m manifestguard set-language --unset

# Show current language + source (env/user/system/default)
python -m manifestguard languages

# Auto-fix issues
python -m manifestguard fix --auto

# Generate SBOM
python -m manifestguard sbom --format spdx --output sbom.json

# View metrics history
python -m manifestguard baseline --list

# Generate OpenAPI schema
manifestguard schema --output openapi.json

# Detect hardcoded GUI strings (scans src/gui/**/*.py)
python -m manifestguard check-hardcoded-strings --format text

# JSON output; exits with code 3 if findings exist (use --no-fail to force exit 0)
python -m manifestguard check-hardcoded-strings --format json --max-findings 200

# Detect hardcoded CLI strings (click.echo/print/help/prompt)
python -m manifestguard check-hardcoded-cli-strings --format text

# JSON output; exits with code 3 if findings exist (use --no-fail to force exit 0)
python -m manifestguard check-hardcoded-cli-strings --format json --max-findings 200
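The idea behind these checks can be sketched with a small AST scan; the real rules (keyword arguments like help=/prompt=, recognizing translated calls) are richer than this:

```python
# Sketch: flag string literals passed directly to click.echo()/print().
import ast

def find_hardcoded(source: str) -> list[tuple[int, str]]:
    """Return (line, text) for each string literal in an echo/print call."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = getattr(func, "attr", getattr(func, "id", ""))
            if name in {"echo", "print"}:
                for arg in node.args:
                    if isinstance(arg, ast.Constant) and isinstance(arg.value, str):
                        findings.append((node.lineno, arg.value))
    return findings

sample = 'import click\nclick.echo("Hello")\nprint(t("msg.key"))\n'
print(find_hardcoded(sample))  # only the raw literal is flagged, not t(...)
```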

# Refactoring rules: print ordered plan (presence-based)
# - Looks for .mgpy/mg.rules.refactoring.json first, then .mgvs/...
# - Missing file => exit 0 + short note
# - Invalid file => config error
python -m manifestguard refactor-plan --format text

# Deterministic path (useful for CI/tests)
python -m manifestguard refactor-plan --format json --path .mgpy/mg.rules.refactoring.json

# Optional: refresh the dedicated refactor guidance report
python -m manifestguard check --refactor --report .manifestguard/manifestguard-report-refactor.json

Refactoring workflow note:

  • Use python run_manifestguard.py only as a bootstrap/install helper in downstream repos or CI environments.
  • Use python -m manifestguard refactor-plan to compute the ordered plan from .mgpy/mg.rules.refactoring.json.
  • Use python -m manifestguard check --refactor ... when you want a refreshed refactor guidance report.
  • Both entry modes operate on the same project files; the runner script is convenience/orchestration, the direct CLI commands are the actual refactor loop.
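The documented rules-file resolution can be sketched in a few lines (file names match the docs; the returned plan structure is assumed, not specified here):

```python
# Sketch of the documented behaviour: .mgpy/ first, then .mgvs/;
# missing file => None ("exit 0 + short note"), invalid file => config error.
import json
from pathlib import Path

CANDIDATES = [
    Path(".mgpy/mg.rules.refactoring.json"),
    Path(".mgvs/mg.rules.refactoring.json"),
]

def load_rules(candidates=CANDIDATES):
    """Return the parsed rules from the first existing candidate."""
    for path in candidates:
        if path.exists():
            try:
                return json.loads(path.read_text(encoding="utf-8"))
            except json.JSONDecodeError as exc:
                raise SystemExit(f"config error: {path}: {exc}")
    print("no refactoring rules file found; nothing to plan")
    return None
```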

Language / i18n (CLI)

By default, ManifestGuard chooses the CLI language in this order:

  1. MANIFESTGUARD_LANG (environment override)
  2. Persisted per-user preference (from user.json)
  3. System locale (e.g. Windows language)
  4. LANG
  5. LC_ALL
  6. Fallback: en
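A minimal sketch of that precedence (the normalization of locale strings is an assumption, not the documented behaviour):

```python
def resolve_lang(env: dict, user_pref=None, system_locale=None) -> str:
    """First non-empty candidate in the documented order wins."""
    for candidate in (
        env.get("MANIFESTGUARD_LANG"),   # 1. environment override
        user_pref,                       # 2. persisted user.json preference
        system_locale,                   # 3. system locale
        env.get("LANG"),                 # 4.
        env.get("LC_ALL"),               # 5.
    ):
        if candidate:
            # e.g. "de_DE.UTF-8" -> "de" (assumed normalization)
            return candidate.split(".")[0].split("_")[0].lower()
    return "en"                          # 6. fallback
```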

You can override the language per run via -l/--lang, or persist it with set-language.

Note: Always use python -m manifestguard for cross-platform compatibility (Windows, Linux, macOS).


✅ Suppressions / Ignores (Exception Mechanism)

ManifestGuard supports inline ignore markers and (where applicable) config-based suppressions.

  • Use suppressions for known false positives, generated/test-only files, or time-boxed transitional refactors.
  • Do not use suppressions to avoid real fixes (especially for Security, i18n, Licensing, or Dependency issues).
  • Keep suppressions narrow (single line, specific code). Avoid * / all except for short-lived debugging.
  • Always document why (reason) and prefer a time-box (expiry) + tracking ticket.

AI note: An AI assistant must not “silence” findings via suppressions to make reports pass.


📚 Documentation

Comprehensive documentation is available:

Additional guides in this repository:

  • Inline ignores: see docs/howto.inline-markers.en.md
  • Extended Mode guide: see EXTENDED-TEST-MODE.md
  • Refactoring support (App/Lib/GUI): see docs/howto.refactoring.en.md
  • ADR: Runner self-detection and policy – see docs/adr/ADR-2025-12-23-runner-self-detection.md

QuickFix Backup Policy

  • VCS-first: If Git (or another version control system) is available, QuickFix primarily uses VCS backups (stash/commit/branch) for clean diffing and simple reverts.
  • Snapshot fallback: If no VCS is available or commits are not permitted, QuickFix creates a local snapshot under .manifestguard-backups/ before every change, with hash/diff metadata for lossless restoration.
  • Customer guidance: In customer environments without a VCS, enabling version management is recommended. Otherwise, the snapshot fallback applies automatically.

Configuration in manifestguard.json (example):

{
    "quickfix": {
        "backup": {
            "mode": "git",
            "dir": ".manifestguard-backups/",
            "requireCleanGit": true,
            "autoCommit": false,
            "maxBackups": 100,
            "dryRun": false
        }
    }
}

Allowed backup modes: git | snapshot | both.

Pre-refactor complexity scan (standalone):

python -m manifestguard.extended.complexity --threshold 12 --engine mgpy-analyzer --path .

Acceptance criteria:

  • Reversible (Git or snapshot), atomic (temp→fsync→replace), traceable (audit log per fix), vendor-agnostic (usable without Git), safe (respects ignorePatterns).
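The atomic temp→fsync→replace guarantee corresponds to a well-known pattern; this sketch is illustrative, not QuickFix's actual code:

```python
# Write data to a temp file in the same directory, force it to disk,
# then atomically swap it into place, so readers never see a partial file.
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, prefix=".mg-tmp-")
    try:
        with os.fdopen(fd, "wb") as fh:
            fh.write(data)
            fh.flush()
            os.fsync(fh.fileno())   # data hits disk before the swap
        os.replace(tmp, path)       # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp)              # clean up the temp file on failure
        raise
```

Writing to a sibling temp file (same directory, hence same filesystem) is what makes the final os.replace atomic.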

Documentation Highlights

  • 98.6% Docstring Coverage - All classes and public functions documented with Google-style docstrings
  • Interactive API Docs - OpenAPI 3.0.3 schema with Swagger UI support
  • Code Examples - Comprehensive examples for all features
  • Migration Guides - Upgrade paths between versions

📦 Status: Active Development

ManifestGuard is under active development with a quality-first approach.

Project Statistics

  • 📊 13,195 lines of source code
  • 482 automated tests (474 passed, 2 skipped)
  • 📝 38 test files with 7,139 lines
  • 🎯 Quality Score: 100/100
  • 🔧 Complexity: All violations resolved
  • 🌐 5 Languages: en, de, fr, lb, tr

Roadmap

  • v1.5.0 (Q1 2026): Enhanced SBOM generation, improved policy engine
  • v2.0.0 (Q2 2026): VS Code extension integration, dashboard
  • v3.0.0 (Q3 2026): Enterprise features, self-hosted licensing

🛠️ Development Setup

Quick Start Template

Copy the template script and i18n folder to your project root:

# Download template + i18n resources (must be kept together)
curl -O https://raw.githubusercontent.com/timejunky/r4it_manifest_guard_py/main/examples/run_manifestguard.py
# Also copy examples/i18n/run_manifestguard/ next to the script

# Run (auto-installs from PyPI, then shows AI workflow instruction)
python run_manifestguard.py

The template script handles:

  • ✅ Install/Update ManifestGuard (default: PyPI)
  • ✅ Prints AI workflow instruction for the next development steps

Manual Installation

Python version requirement: ManifestGuard wheels are built with PyArmor and contain a native runtime extension (pyarmor_runtime_000000). The current release targets Python 3.12 only. Python 3.13 and newer are not yet supported (PyArmor upstream does not ship a 3.13 extension as of April 2026).

Windows — multiple Python versions installed

If Python 3.13 is your system default (py --list shows * 3.13), use the py launcher to target 3.12 explicitly. You do not need to change your system default.

From the local find-links cache — works after every invoke build-release because the task copies wheels there automatically (the cache path is read from pip's global find-links config):

# Install / upgrade user-wide for Python 3.12
py -3.12 -m pip install --user manifestguard==1.6.32

# Verify
py -3.12 -m manifestguard --version

From a specific local wheel file (one-off):

py -3.12 -m pip install --user --force-reinstall --no-deps `
    dist\manifestguard-1.6.32-cp312-cp312-win_amd64.whl

py -3.12 -m manifestguard --version

Optional — make manifestguard work without py -3.12 -m: Add the Python 3.12 user Scripts folder to the front of your PATH:

%APPDATA%\Python\Python312\Scripts

macOS / Linux — multiple Python versions installed

python3.12 -m pip install --user manifestguard==1.6.32
python3.12 -m manifestguard --version

From local source (editable, 3.12 venv)

git clone https://github.com/timejunky/r4it_manifest_guard_py.git
cd r4it_manifest_guard_py
py -3.12 -m venv .venv           # Windows
# python3.12 -m venv .venv       # macOS/Linux
.venv\Scripts\activate           # Windows
# source .venv/bin/activate      # macOS/Linux
pip install -e .

# Verify
manifestguard --version

Run Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=src/manifestguard --cov-report=html

# Run linting
ruff check src/
black --check src/
mypy src/

Test Configuration Best Practice

CRITICAL: Always set test timeouts to prevent hanging tests

# pytest.ini or pyproject.toml
[tool.pytest.ini_options]
timeout = 120              # 2 minutes max per test (prevents hanging)
timeout_method = "thread"  # or "signal" on Linux/macOS

Individual test timeout override (Python):

import pytest

@pytest.mark.timeout(30)  # 30 seconds for this test
def test_fast_operation():
    # test code
    pass

Why this matters:

  • ❌ Without timeout: Single hanging test can block CI/CD for hours
  • ✅ With timeout: Failed tests fail fast (2 min max)
  • 🔍 Helps identify tests with infinite loops, network hangs, deadlocks
  • 📊 Recommended: 2 minutes global, 5-30 seconds for most tests

Installation:

pip install pytest-timeout

License-Dependent Tests

Some tests require an active license. The test suite provides a session-wide autouse fixture that activates a temporary test license and restores any production license after the run.

  • Session tier selection: pytest --license-tier=pro|team|enterprise
  • Run only licensed tests: pytest -m licensed --license-tier=team
  • Tier-specific markers: pro, team, enterprise (use with -m)

Notes:

  • The session fixture stores a temporary token for the chosen tier and removes it on teardown.
  • Individual tests may still activate their own tier token; the session token is a safe default.

Build Package

# Using invoke tasks
invoke build

# Manual build
python -m build

🔌 API Reference

ManifestGuard provides a type-safe REST API with Pydantic v2 models for all requests and responses.

API Models

All API endpoints use Pydantic models for automatic validation and OpenAPI documentation:

from manifestguard.models import (
    # Environment management
    EnvironmentCreate, EnvironmentUpdate, EnvironmentResponse,

    # Validation
    ValidationRequest, ValidationResponse, ValidationResult,

    # Metrics
    MetricsReport, CoverageMetrics, DependencyMetrics,

    # Configuration
    ConfigTemplate, ConfigRequest, ConfigResponse,
)

Example: Create Environment

from manifestguard.models import EnvironmentCreate, EnvironmentResponse
import asyncio
import httpx

# Create environment
payload = EnvironmentCreate(
    name="production",
    description="Production environment for deployment"
)

async def main() -> None:
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "http://localhost:8000/environment",
            json=payload.model_dump()
        )
        env = EnvironmentResponse(**response.json())
        print(f"Created: {env.name} (ID: {env.id})")

asyncio.run(main())

OpenAPI Schema Generation

Generate OpenAPI 3.0.3 schema from Pydantic models:

# Generate JSON schema
manifestguard schema --output openapi.json --format json

# Generate YAML schema
manifestguard schema --output openapi.yaml --format yaml

# Custom API info
manifestguard schema --title "My API" --api-version 2.0.0

Interactive API Documentation

Start the FastAPI server for interactive Swagger UI:

# Start API server
uvicorn manifestguard.api.app:app --reload

# Open in browser
# - Swagger UI: http://localhost:8000/docs
# - ReDoc: http://localhost:8000/redoc
# - OpenAPI schema: http://localhost:8000/openapi.json

API Features

  • Automatic validation via Pydantic models
  • Type-safe requests and responses
  • OpenAPI 3.0.3 schema generation
  • Interactive docs (Swagger UI, ReDoc)
  • Consistent error format (ErrorResponse)
  • Multi-language support (en, fr, de, lb, tr)

For detailed API documentation, see OPENAPI.md.



🤝 Contributing

Contributions are welcome! Please read our Contributing Guide first.

Development Workflow

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'feat: add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Third-Party Licenses

This project uses the following open-source libraries:

  • tomli (MIT License) - TOML parser for Python < 3.11
  • tomlkit (MIT License) - Style-preserving TOML library
  • click (BSD-3-Clause License) - CLI framework
  • pydantic (MIT License) - Data validation
  • packaging (Apache 2.0 / BSD-2-Clause) - Version handling

See THIRD-PARTY-LICENSES.md for full license texts.


👤 Author

Nejat Philip Eryigit


🌟 Support


🌍 Multi-Language Support

ManifestGuard supports 5 languages out of the box:

  • 🇬🇧 EN (English) - Default
  • 🇩🇪 DE (Deutsch)
  • 🇫🇷 FR (Français)
  • 🇱🇺 LB (Lëtzebuergesch)
  • 🇹🇷 TR (Türkçe)

Set Your Language

# Auto-detect from system (LANG environment variable)
python -m manifestguard check

# Set language explicitly
export MANIFESTGUARD_LANG=de  # German
python -m manifestguard check

export MANIFESTGUARD_LANG=fr  # French
python -m manifestguard check

export MANIFESTGUARD_LANG=tr  # Turkish
python -m manifestguard check

Programmatic Usage

from manifestguard.i18n import t, set_language

# Auto-detect from environment
print(t("validation.success"))  # "Validation successful"

# Set language explicitly
set_language("de")
print(t("validation.success"))  # "Validierung erfolgreich"

set_language("fr")
print(t("validation.success"))  # "Validation réussie"

🔄 CI/CD Integration Examples

GitHub Actions (Bootstrap Script)

Recommended for fresh environments - one command does everything:

name: ManifestGuard Quality Checks

on: [push, pull_request]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      
      - name: Run ManifestGuard (Bootstrap Install)
        run: python run_manifestguard.py
      
      - name: Run ManifestGuard checks
        run: manifestguard check --report report.json
      
      - name: Upload Report
        uses: actions/upload-artifact@v4
        with:
          name: manifestguard-report
          path: report.json

GitHub Actions (Direct CLI)

When you want granular control over each step:

name: ManifestGuard Quality Checks

on: [push, pull_request]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      
      - name: Install ManifestGuard
        run: pip install manifestguard
      
      - name: Generate Config
        run: manifestguard init-config --merge
      
      - name: Run Tests
        run: pytest --cov=src --cov-report=xml
      
      - name: Run Quality Checks
        run: manifestguard check --extended --report report.json
      
      - name: Export Metrics
        run: manifestguard export-metrics --output metrics.json
      
      - name: Upload Artifacts
        uses: actions/upload-artifact@v4
        with:
          name: quality-reports
          path: |
            report.json
            metrics.json
            coverage.xml

GitLab CI (Bootstrap Script)

manifestguard:
  stage: test
  image: python:3.12
  script:
    - python run_manifestguard.py
    - manifestguard check --report report.json
  artifacts:
    paths:
      - report.json

GitLab CI (Direct CLI)

manifestguard:
  stage: test
  image: python:3.12
  before_script:
    - pip install manifestguard
  script:
    - manifestguard init-config --merge
    - pytest --junitxml=pytest-report.xml
    - manifestguard check --extended --report manifestguard-report.json
  artifacts:
    reports:
      junit: pytest-report.xml
    paths:
      - manifestguard-report.json

Pre-commit Hook

Add to .pre-commit-config.yaml:

repos:
  - repo: https://github.com/timejunky/r4it_manifest_guard_py
    rev: v1.5.31
    hooks:
      - id: manifestguard-check
        name: ManifestGuard
        entry: manifestguard check
        language: python
        types: [python]
        pass_filenames: false

📊 Dashboard Integration

VS Code Extension (MGVS)

The ManifestGuard VS Code Extension provides real-time quality metrics:

  1. Install extension: Search "ManifestGuard" in VS Code
  2. Open Python project
  3. Click "▶️ Run All Checks" in Python Control Panel
  4. View metrics in Python Reports webview

Integration: MGVS uses Direct CLI (assumes ManifestGuard installed in venv).

Web Dashboard (Coming Soon)

Export metrics for external dashboards:

# Export comprehensive telemetry
manifestguard export-metrics --output metrics.json

# Use in dashboard (Grafana, custom web UI)
curl -X POST https://dashboard.example.com/api/metrics \
  -H "Content-Type: application/json" \
  -d @metrics.json

Schema: Uses Telemetry v1.0.0 (Pydantic-validated, JSON export).


📄 License & Usage

ManifestGuard is proprietary software with all rights reserved.

  • License Type: Proprietary / Commercial
  • Copyright: © 2025 Nejat Philip Eryigit
  • Usage: Requires valid license (Community/Pro/Team/Enterprise)

License Tiers

  • Community (Free): Basic validation, limited features
  • Pro: Extended analysis, SARIF export, complexity metrics
  • Team: Multi-user, dashboard, API access
  • Enterprise: Self-hosted, custom rules, SLA support

Activate License:

manifestguard license activate <token>
manifestguard license activate path\to\activation.json
manifestguard license status

Activation storage is per-user, not per-venv:

  • ManifestGuard prefers the system keyring.
  • It also keeps a per-user fallback under ~/.manifestguard/ so a global install, a user-scoped install, and multiple venvs see the same activation state.
  • When an activation object is present, it becomes the runtime source of truth for entitlement fields such as activation_key, updateEntitlementUntil, maxActivations, lastEligibleVersion, and features.

That avoids the common Python problem where a license works in one venv but seems to disappear in another.
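The keyring-first, per-user-file-fallback idea can be sketched as below; the service name, file path, and schema are illustrative, and keyring_get/keyring_set stand in for real keyring library calls:

```python
# Sketch: prefer the system keyring, always keep a per-user fallback file
# so global installs, user installs, and venvs share activation state.
import json
from pathlib import Path

FALLBACK = Path.home() / ".manifestguard" / "activation.json"  # illustrative path

def save_activation(data: dict, path: Path = FALLBACK, keyring_set=None) -> None:
    if keyring_set is not None:
        keyring_set("manifestguard", "activation", json.dumps(data))
    path.parent.mkdir(parents=True, exist_ok=True)       # per-user fallback copy
    path.write_text(json.dumps(data), encoding="utf-8")

def load_activation(path: Path = FALLBACK, keyring_get=None):
    if keyring_get is not None:                          # keyring wins if present
        stored = keyring_get("manifestguard", "activation")
        if stored:
            return json.loads(stored)
    if path.exists():                                    # otherwise the file fallback
        return json.loads(path.read_text(encoding="utf-8"))
    return None
```

Because the fallback lives under the user's home directory rather than inside any one environment, every interpreter on the machine resolves the same activation state.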

For CI or short-lived shells, you can still inject a token without writing local state:

set MANIFESTGUARD_ACTIVATION_KEY_PY=R4IT....
manifestguard license status

For licensing inquiries: https://www.ready-4-it.com


🔗 Related Projects


Built with ❤️ by ready-4-it



Download files

Download the file for your platform.

Source Distribution

manifestguard-1.6.44.tar.gz (1.1 MB)


Built Distribution


manifestguard-1.6.44-py3-none-any.whl (1.1 MB)


File details

Details for the file manifestguard-1.6.44.tar.gz.

File metadata

  • Download URL: manifestguard-1.6.44.tar.gz
  • Size: 1.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes

Hashes for manifestguard-1.6.44.tar.gz
Algorithm Hash digest
SHA256 5af8b31597a44bcac9678b2fc79bf44cedafdc3c0a88fe5ea1e6b62015c82697
MD5 91aad6e92429797551e912f312ed4364
BLAKE2b-256 e15e7de31b42925b9a25ca5f803d6d0c69e9c1e18a540a6558c37a37772b3684


File details

Details for the file manifestguard-1.6.44-py3-none-any.whl.

File metadata

  • Download URL: manifestguard-1.6.44-py3-none-any.whl
  • Size: 1.1 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes

Hashes for manifestguard-1.6.44-py3-none-any.whl
Algorithm Hash digest
SHA256 577b0ecc3265effdc8385782de0aba0982313f308501d8023dd618027c1153dd
MD5 f659e74e8eb341de399187fbfdc8f28b
BLAKE2b-256 193b9a2239c5e4efd88416ad2eccf0d73ac4a1d35275c463d05ee46d8c344e64

