Multifactor ADK Backend

AI-powered CLI for analyzing hardware engineering documents

An AI-powered engineering document processing pipeline that intelligently analyzes hardware engineering documents including schematics, datasheets, BOMs, and netlists. Built with Google's Gemini models, the system extracts structured data, generates documentation, and enables semantic search across processed documents.

🚀 Features

  • CLI-Based Pipeline: Process entire directories of engineering documents with a single command
  • Intelligent Document Analysis: Automated classification, text extraction, and schema mapping
  • BOM Generation: Extract components from schematics and generate CSV Bill of Materials
  • Datasheet Enrichment: Automatically download component datasheets from BOMs
  • Cheat Sheet Generation: AI-generated documentation for MCU datasheets, errata, debug setup, and functional blocks
  • RAG-Powered Queries: Query processed documents using ChromaDB-backed Retrieval-Augmented Generation
  • Web UI: Interactive interface for document processing and agent interaction (optional)
  • File Type Support: PDF, EDIF, PADS, KiCad netlists, CSV BOMs, and more

๐Ÿ—๏ธ Architecture

The system uses a streamlined architecture with a single controller agent that orchestrates a sequential processing pipeline:

Controller Agent
├── Tools:
│   ├── run_pipeline (Sequential pipeline execution)
│   └── query_knowledgebase (RAG-based document queries)
│
└── Pipeline Stages:
    ├── Pre-processing
    │   ├── File classification & validation
    │   ├── Text extraction
    │   ├── Document sub-type detection (uses LLM when needed)
    │   ├── Schema mapping (for structured documents)
    │   ├── Data parsing
    │   └── Data enrichment
    │
    └── Analysis & Generation
        ├── Netlist-to-BOM mapping
        └── File generation:
            ├── BOM CSV files
            └── JSON cheat sheets (MCU, errata, debug setup, functional blocks)

The pipeline processes files sequentially, making LLM calls only when necessary for tasks like document sub-classification and schema mapping, rather than using a hierarchy of sub-agents.
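
The sequential design reduces to a loop over per-file stages. A minimal sketch of that pattern, assuming a simplified `Document` record and toy stage functions (illustrative only, not the package's actual API):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Document:
    """Simplified per-file state threaded through the pipeline."""
    path: str
    results: dict = field(default_factory=dict)

def classify(doc: Document) -> None:
    # Toy classifier: use the file extension as the type.
    doc.results["type"] = doc.path.rsplit(".", 1)[-1].lower()

def extract(doc: Document) -> None:
    # A real stage would parse the file (and call an LLM only when needed).
    doc.results["text"] = f"<extracted text of {doc.path}>"

# Stages run strictly in order; adding a stage means appending a callable.
STAGES: list[Callable[[Document], None]] = [classify, extract]

def run_pipeline(paths: list[str]) -> list[Document]:
    docs = [Document(p) for p in paths]
    for doc in docs:
        for stage in STAGES:
            stage(doc)
    return docs
```

Because each stage is an ordinary callable, expensive LLM-backed steps can decide internally whether a model call is actually required.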

📦 Prerequisites

  • Python 3.12+
  • Required API Keys:
    • Google API Key (for Gemini models)
    • OpenAI API Key (for embeddings)
    • LlamaParse Cloud API Key (for document parsing)
    • DigiKey API credentials (client ID & secret, for datasheet downloads)
    • AWS credentials (for S3 storage, optional)
  • Database: SQLite (automatically managed)

🔧 Installation

Quick Install (Recommended)

The easiest way to install mfcli is using our automated installation script with pipx, which provides isolated dependency management while making the CLI globally available.

Windows (PowerShell):

iwr -useb https://raw.githubusercontent.com/MultifactorAI/multifactor-adk-backend/main/install.ps1 | iex

Linux/macOS:

curl -fsSL https://raw.githubusercontent.com/MultifactorAI/multifactor-adk-backend/main/install.sh | bash

The script will:

  • ✅ Check Python 3.12 installation
  • ✅ Install pipx if needed
  • ✅ Install mfcli with isolated dependencies
  • ✅ Set up configuration directory
  • ✅ Make mfcli and mfcli-mcp commands globally available

Manual Installation

If you prefer manual installation or the script doesn't work:

Using pipx (Recommended)

# Install pipx if not already installed
python -m pip install --user pipx
python -m pipx ensurepath

# Install mfcli from GitHub
pipx install git+https://github.com/MultifactorAI/multifactor-adk-backend.git

# Or install from PyPI (once published)
pipx install mfcli

Why pipx?

  • ✅ Isolated dependencies - no conflicts with other Python packages
  • ✅ Global CLI access - available in any terminal
  • ✅ No virtual environment activation needed
  • ✅ MCP server compatible - works with external tools like Cline
  • ✅ Easy updates: pipx upgrade mfcli

Using pip (For Development)

# Clone the repository
git clone https://github.com/MultifactorAI/multifactor-adk-backend.git
cd multifactor-adk-backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# Windows:
venv\Scripts\activate
# macOS/Linux:
source venv/bin/activate

# Install in development mode
pip install -e .

Note: If you plan to use the MCP server with Cline/Claude Code, install with pipx instead to ensure global availability.

Verify Installation

After installation, verify everything is working:

# Check mfcli is installed
mfcli --help

# Run system health check
mfcli doctor

⚙️ Configuration

Interactive Configuration Wizard (Recommended)

The easiest way to configure mfcli is using the interactive wizard:

mfcli configure

This will guide you through setting up all required API keys with:

  • 🔗 Direct links to get each API key
  • ✅ Automatic validation of API keys
  • 📝 Smart defaults for vectorization settings
  • 💾 Automatic saving to the correct location

Manual Configuration

Alternatively, create a .env file at:

Windows: C:\Users\<username>\Multifactor\.env
macOS/Linux: ~/Multifactor/.env

# API Keys (Required)
google_api_key=your_google_api_key
openai_api_key=your_openai_api_key
llama_cloud_api_key=your_llamaparse_api_key
digikey_client_id=your_digikey_client_id
digikey_client_secret=your_digikey_client_secret

# Vector Database Configuration
chunk_size=1000
chunk_overlap=200
embedding_model=text-embedding-3-small
embedding_dimensions=1536
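
chunk_size and chunk_overlap control how extracted text is split before embedding: chunks are at most chunk_size characters, and consecutive chunks share chunk_overlap characters so context isn't lost at boundaries. A sketch of that splitting (illustrative; the package's actual chunker may split on tokens or sentences instead):

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Split text into overlapping fixed-size chunks."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # each chunk starts this far after the last
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Larger overlap improves recall for queries that straddle chunk boundaries, at the cost of more embeddings to store.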

Check Configuration

To verify your configuration at any time:

# Check configuration status
mfcli configure --check

# Run comprehensive system check
mfcli doctor

Required API Keys & How to Get Them

Tip: The mfcli configure wizard provides these links interactively and validates your keys!

🚀 Usage

Command-Line Interface

Getting Started with a Project

To analyze hardware design files, follow these steps:

1. Navigate to your hardware design files directory:

cd C:\Projects\hardware\board_v1

2. Initialize the project:

mfcli init

You'll be prompted to enter a project name (3-45 characters, alphanumeric with underscores/hyphens allowed). This creates a .multifactor folder in your current directory containing:

  • config.json - Project configuration with your project name
  • file_docket.json - File tracking and metadata

3. Run the pipeline:

mfcli run_pipeline

This will:

  • Process all supported files in the directory
  • Skip files that have already been processed (matching MD5 checksum)
  • Prompt for confirmation if a file has been modified (different MD5)
  • Generate BOM CSV files (if schematics are found)
  • Download datasheets for BOM components
  • Generate cheat sheets for MCU datasheets, errata, and schematics
  • Store vector embeddings for RAG queries

File Change Detection

The pipeline tracks files using MD5 checksums stored in .multifactor/file_docket.json. When you run the pipeline:

  • New files: Automatically processed

  • Unchanged files: Skipped (MD5 matches previous run)

  • Modified files: You'll be prompted:

    ======================================================================
    File has been modified: schematic.pdf
    Path: C:\Projects\hardware\board_v1\schematic.pdf
    Old MD5: abc123...
    New MD5: def456...
    ======================================================================
    Do you want to delete the old file data and process the new version? (yes/no):
    
    • Answer yes to remove old data from the knowledge base and reprocess
    • Answer no to skip the file and keep the old version

This ensures efficient processing by only analyzing new or changed files, while maintaining data consistency in the knowledge base.
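
The checksum comparison above can be sketched with the standard library (the docket is shown here as a plain name-to-MD5 mapping; the real file_docket.json stores more metadata):

```python
import hashlib
from pathlib import Path

def md5_of(path: Path) -> str:
    """Stream the file in blocks so large PDFs aren't loaded into memory at once."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            digest.update(block)
    return digest.hexdigest()

def file_status(path: Path, docket: dict[str, str]) -> str:
    """Classify a file as 'new', 'unchanged', or 'modified' against the docket."""
    recorded = docket.get(path.name)
    if recorded is None:
        return "new"
    return "unchanged" if recorded == md5_of(path) else "modified"
```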

CLI Commands Reference

The mfcli tool provides the following commands:

  • mfcli init - Initialize a new project in the current directory
  • mfcli run_pipeline - Run the analysis pipeline on the current directory
  • mfcli web [--port PORT] - Start the web UI (default port: 9999)
  • mfcli addfile FILE [--purpose PURPOSE] - Add a file to ChromaDB knowledge base

Start Web UI

Launch the interactive web interface:

mfcli web

With custom port:

mfcli web --port 8080

The web UI will be available at http://localhost:9999/dev-ui/ (or your specified port).

Web UI Usage

The web interface allows you to:

  • Upload and process individual files
  • Run the pipeline on directories
  • Query processed documents using natural language
  • View processing status and results

Example queries in web UI:

  • "What components are in the processed schematic?"
  • "Tell me about the voltage ratings in the last datasheet"
  • "What are the errata for this MCU?"

🔌 MCP Server

This package includes a Model Context Protocol (MCP) server that exposes tools for AI assistants and development environments like Cline/Claude to interact with your engineering documentation knowledge base.

What is MCP?

The Model Context Protocol (MCP) is a standard that allows AI assistants to securely access external tools and data sources. The mfcli MCP server provides AI-powered access to your processed engineering documents through the local ChromaDB vector database.

Available Tools

The MCP server exposes the following tool:

query_local_rag

Query the local hardware knowledge base of processed engineering documents using natural language.

Parameters:

  • query (required): Your search query (e.g., "MSPM0L130x", "power management", "IEC 61000-4-2")
  • project_name (optional): The name of the project to query. If not provided, uses the last known project name from previous queries.
  • n_results (optional): Number of results to return (1-20, default: 8)

Returns:

  • Document chunks matching your query
  • Metadata (file names, document types)
  • Similarity scores (lower distance = more relevant)
  • ChromaDB database path
  • Project name that was used for the query

Note: The function automatically remembers the last project name used, so you only need to specify project_name for the first query or when switching between projects.

Example queries:

  • "MSPM0L130x specifications"
  • "What are the voltage requirements?"
  • "MCU pin configurations"
  • "Component datasheets for capacitors"

Configuration for Cline/Claude

To use the MCP server with Cline (or other MCP-compatible clients), add the following configuration to your MCP settings file:

Configuration Location:

  • VS Code (Cline): %APPDATA%\Code\User\globalStorage\saoudrizwan.claude-dev\settings\cline_mcp_settings.json
  • Cline standalone: ~/.cline/mcp_settings.json

Configuration:

{
  "mcpServers": {
    "mfcli-mcp": {
      "disabled": false,
      "timeout": 60,
      "type": "stdio",
      "command": "python",
      "args": ["-m", "mfcli.mcp.server"]
    }
  }
}

Setup Instructions

  1. Install mfcli system-wide (see Installation section above):

    pip install .
    
  2. Process your engineering documents to populate the knowledge base:

    cd /path/to/hardware/files
    mfcli init
    mfcli run_pipeline
    
  3. Add the MCP configuration to your Cline/Claude settings file (see Configuration above)

  4. Restart Cline/Claude to load the MCP server

  5. Use the tool in your AI assistant:

    • Ask questions like: "Query the local RAG for MSPM0L130x in project test"
    • The assistant will use the query_local_rag tool to search your documents

Troubleshooting MCP Server

Error: Module 'mfcli' not found

  • Solution: Ensure mfcli is installed system-wide (not just in a virtual environment)
    deactivate  # Exit any virtual environment
    pip install .
    

Error: ChromaDB directory not found

  • Solution: Run the pipeline at least once to create the vector database:
    mfcli init
    mfcli run_pipeline
    

Error: MCP server timeout

  • Solution: Increase the timeout value in your MCP settings (default: 60 seconds)

Server not connecting:

  • Verify the MCP server configuration in your settings file
  • Check that Python is in your system PATH
  • Restart your IDE/editor after updating MCP settings

MCP Server Architecture

The MCP server is implemented in mfcli/mcp/ with the following structure:

mfcli/mcp/
├── server.py              # MCP server entry point
├── mcp_instance.py        # MCP server instance and tool definitions
└── tools/
    └── query_knowledgebase.py  # RAG query implementation

The server connects to your local ChromaDB instance (located in your system's application data directory) and provides semantic search capabilities over all processed engineering documents.

🔄 Pipeline

The pipeline processes engineering documents in two main phases:

Phase 1: Pre-processing

For each file in the input directory:

  1. Classification & Validation (classifier.py)

    • Determines file type (PDF, EDIF, CSV, etc.)
    • Validates file integrity and MIME type
    • Checks file size limits
  2. Gemini File Upload (PDFs only)

    • Uploads PDF files to Gemini's Files API for vision-based processing
  3. Text Extraction (extractor.py)

    • Extracts text content from documents
    • Handles various formats (PDF, netlist formats, CSV)
  4. Sub-type Classification (sub_classifier.py)

    • Determines document sub-type (e.g., schematic, BOM, datasheet, MCU datasheet, errata)
    • Uses LLM analysis when necessary
  5. Schema Mapping (schema_mapper.py)

    • Maps document structure to database schemas
    • Skipped for schemaless files like schematics
  6. Data Parsing (parser.py)

    • Parses structured data from documents
    • Stores in SQLite database
  7. Data Enrichment (data_enricher.py)

    • Enriches parsed data with additional information
    • Downloads component datasheets for BOM entries

Phase 2: Analysis & Generation

After all files are pre-processed:

  1. Netlist-to-BOM Mapping (bom_netlist_mapper.py)

    • Maps netlist components to BOM entries
    • Correlates design files with component lists
  2. File Generation (generator.py)

    • BOM CSV: Extracts components from schematics, generates CSV with reference, value, quantity, manufacturer, MPN, description
    • Cheat Sheets: Generates JSON cheat sheets for:
      • MCU datasheets (register maps, peripherals, specs)
      • MCU errata (known issues, workarounds)
      • Debug setup (pin configurations, debugging instructions)
      • Functional blocks (system architecture, block diagrams)

Supported File Types

  • PDF: Schematics, datasheets, MCU documentation, errata sheets
  • EDIF: Electronic Design Interchange Format netlists
  • PADS: PADS ASCII netlist format
  • KiCad: Legacy netlist (.net) and SPICE circuit (.cir) formats
  • CSV: Bill of Materials files
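
Classification by extension amounts to a lookup table. A sketch assuming a plausible extension-to-type mapping (the extensions and type names here are guesses; the actual classifier in classifier.py also validates MIME type and file integrity):

```python
from pathlib import Path

# Illustrative mapping of extensions to document types (not the package's real table).
SUPPORTED_EXTENSIONS = {
    ".pdf": "pdf_document",
    ".edf": "edif_netlist",
    ".asc": "pads_netlist",
    ".net": "kicad_netlist",
    ".cir": "spice_circuit",
    ".csv": "bom_csv",
}

def classify_file(path: str) -> str:
    ext = Path(path).suffix.lower()
    if ext not in SUPPORTED_EXTENSIONS:
        raise ValueError(f"File extension is not supported: {ext or path}")
    return SUPPORTED_EXTENSIONS[ext]
```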

📂 Output Directories

The pipeline creates directories in three locations:

1. Project Metadata Directory (.multifactor)

Created in your hardware design files directory when you run mfcli init:

your_hardware_files_directory/
└── .multifactor/
    ├── config.json         # Project configuration (project name, etc.)
    └── file_docket.json    # File tracking and processing metadata

This folder stores project-specific configuration and tracks which files have been processed.

2. User Application Data

Platform-specific storage for global application data:

Windows:

C:\Users\<username>\AppData\Local\Multifactor\
└── chromadb/              # Vector embeddings database

macOS:

/Users/<username>/Library/Application Support/Multifactor/
└── chromadb/              # Vector embeddings database

Linux:

~/.local/share/Multifactor/
└── chromadb/              # Vector embeddings database

Contents:

  • chromadb/ - Vector embeddings of processed documents for RAG queries
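
The three platform-specific locations can be resolved with a small stdlib helper (an illustrative sketch; the package may use a library such as platformdirs instead):

```python
import os
import sys
from pathlib import Path

def app_data_dir(app_name: str = "Multifactor") -> Path:
    """Return the per-platform application data directory described above."""
    if sys.platform == "win32":
        base = Path(os.environ.get("LOCALAPPDATA",
                                   Path.home() / "AppData" / "Local"))
    elif sys.platform == "darwin":
        base = Path.home() / "Library" / "Application Support"
    else:  # Linux and other POSIX: honor the XDG base directory convention
        base = Path(os.environ.get("XDG_DATA_HOME",
                                   Path.home() / ".local" / "share"))
    return base / app_name
```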

3. Project Output Directories

Created in the parent directory of your hardware files directory:

<parent_directory>/
├── generated_files/         # BOM CSV files generated from schematics
├── hw_cheat_sheets/         # JSON cheat sheets (MCU, errata, debug, functional blocks)
├── data_sheets/             # Downloaded component datasheets (from BOM processing)
├── agent_instructions/      # (Reserved for future use)
└── requirements/            # (Reserved for future use)

Example: If your hardware files are in C:\Projects\hardware\board_v1\, outputs will be created in C:\Projects\hardware\:

  • C:\Projects\hardware\board_v1\.multifactor\ - Project metadata
  • C:\Projects\hardware\generated_files\bom.csv - Generated BOM
  • C:\Projects\hardware\hw_cheat_sheets\schematic_cheat_sheet.json - Cheat sheets
  • C:\Projects\hardware\data_sheets\STM32F4_datasheet.pdf - Downloaded datasheets

💾 Data Storage

SQLite Database

Located at sessions.db in the project root (or path specified in .env).

Stores:

  • Pipeline run metadata and status
  • File metadata and processing results
  • Parsed component data (BOMs, netlists, datasheets)
  • MCU information and errata
  • ADK session data and conversation history

ChromaDB Vector Store

Located in the user application data directory.

Stores:

  • Vector embeddings of processed documents
  • Enables semantic search and RAG queries
  • Uses OpenAI embeddings (text-embedding-3-small)

S3 Storage (Optional)

If AWS credentials are configured:

  • Uploaded files
  • Generated outputs
  • Long-term document storage

๐Ÿ“ Project Structure

multifactor-adk-backend/
├── app/
│   ├── agents/
│   │   ├── controller/              # Main controller agent
│   │   │   ├── agent.py            # Agent definition
│   │   │   ├── config.yaml         # Agent configuration
│   │   │   └── tools.py            # Agent tools
│   │   └── tools/
│   │       └── general.py          # Shared tools
│   ├── alembic/                    # Database migrations
│   ├── cli/
│   │   └── main.py                 # CLI entry point (mfcli)
│   ├── client/                     # External service clients
│   │   ├── chroma_db.py           # ChromaDB vector store
│   │   ├── gemini.py              # Gemini API client
│   │   ├── llama_parse.py         # LlamaParse client
│   │   └── vector_db.py           # Vector DB interface
│   ├── constants/                  # Enums and constants
│   ├── crud/                       # Database operations
│   ├── digikey/                    # DigiKey API integration
│   ├── models/                     # SQLAlchemy models
│   ├── pipeline/
│   │   ├── pipeline.py            # Main pipeline orchestration
│   │   ├── classifier.py          # File classification
│   │   ├── extractor.py           # Text extraction
│   │   ├── sub_classifier.py      # Document sub-typing
│   │   ├── schema_mapper.py       # Schema mapping
│   │   ├── parser.py              # Data parsing
│   │   ├── data_enricher.py       # Data enrichment
│   │   ├── analysis/
│   │   │   ├── bom_netlist_mapper.py
│   │   │   └── generators/        # Output generators
│   │   │       ├── generator.py   # Main generator
│   │   │       ├── bom/          # BOM generation
│   │   │       ├── debug_setup/  # Debug setup cheat sheets
│   │   │       ├── functional_blocks/  # Functional block diagrams
│   │   │       ├── mcu/          # MCU documentation
│   │   │       └── mcu_errata/   # Errata cheat sheets
│   │   ├── extractors/           # Format-specific extractors
│   │   └── parsers/
│   │       └── netlist/          # Netlist parsers (EDIF, KiCad, PADS)
│   ├── tests/                     # Unit tests
│   └── utils/                     # Utility functions
│       ├── config.py
│       ├── directory_manager.py   # Output directory management
│       ├── logger.py
│       └── ...
├── .env                           # Environment configuration (create this)
├── .gitignore
├── alembic.ini                    # Alembic configuration
├── pyproject.toml                 # Package configuration
├── requirements.txt               # Python dependencies
└── README.md                      # This file

🛠️ Development

Running the Pipeline

First, navigate to your hardware files directory and initialize:

cd /path/to/hardware/files
mfcli init
mfcli run_pipeline

Starting the Web UI

mfcli web --port 9999

Development Status

⚠️ This project is currently in development. Features and APIs may change.

Database Migrations

This project uses Alembic for database schema management.

Create a New Migration

alembic revision -m "description of changes"

Apply Migrations

alembic upgrade head

Rollback Migration

alembic downgrade -1

Logging

Logs are configured in utils/logger.py. View logs for debugging:

from mfcli.utils.logger import get_logger
logger = get_logger(__name__)
logger.info("Message")
logger.error("Error message")

Running Tests

pytest app/tests/

๐Ÿ› Troubleshooting

Common Issues

1. Installation Issues

Error: ModuleNotFoundError: No module named 'google'

  • Solution: Reinstall the package: pip install .

Error: Command 'mfcli' not found

  • Solution: Ensure the virtual environment is activated and the package is installed: pip install .

2. API Key Errors

Error: google.api_core.exceptions.Unauthenticated: 401 API key not valid

  • Solution: Check google_api_key in .env and confirm the key is valid

Error: OpenAI API error

  • Solution: Check openai_api_key in .env
  • Ensure the API key has billing enabled

3. ChromaDB Issues

Error: ChromaDB directory not found

  • Solution: ChromaDB is created automatically on the first pipeline run in the user application data directory

Error: Embedding dimension mismatch

  • Solution: Delete ChromaDB directory and restart to rebuild with correct dimensions

4. Pipeline Processing Failures

Error: Could not find metadata file. Please initialize this repo with "mfcli init"

  • Solution: You need to run mfcli init in your hardware files directory before running mfcli run_pipeline

Error: File not found

  • Solution: Make sure you're running mfcli run_pipeline from within your hardware files directory (where you ran mfcli init)

Error: File extension is not supported

  • Solution: Check that your files are in supported formats (PDF, EDIF, CSV, .net, .cir, .asc)

Error: No components extracted from schematic

  • Solution:
    • Ensure schematic PDF is clear and readable
    • Check that component designators are visible
    • Verify file is an actual schematic (not layout or other document type)

5. Database Issues

Error: Database connection failed

  • Solution: Check that SQLite database path in .env is valid
  • Run migrations: alembic upgrade head

Verify Configuration

Run this Python snippet to verify your configuration:

from mfcli.utils.config import get_config
config = get_config()
print(f"Google API Key set: {'Yes' if config.google_api_key else 'No'}")
print(f"OpenAI API Key set: {'Yes' if config.openai_api_key else 'No'}")
print(f"Database path: {config.sqlite_db_path}")

Debug Mode

For detailed debugging, check the console output when running mfcli run_pipeline or mfcli web. The application uses structured logging to help diagnose issues.

Getting Help

  • Issues: GitHub Issues
  • Documentation: Check this README and inline code documentation

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🤝 Contributing

This project is in active development. Contributions, issues, and feature requests are welcome!


Built with Google Gemini and Google Agent Development Kit (ADK)
