Multi-layer Intelligent Evaluation for Smart Contracts - Open-source framework that brings enterprise-grade security analysis to every developer
MIESC
50 tool adapters. 9 defense layers. One command.
Enterprise-grade smart contract security, free and open to everyone.
Quick Start • Why MIESC • GitHub Action • Docker • Español • Docs
Quick Start
pip install miesc
miesc scan MyContract.sol
That's it. MIESC runs Slither + Aderyn + Solhint, deduplicates findings, and gives you a unified report with confidence scores in seconds.
Full pipeline: detect → fix → verify → comply
miesc scan contract.sol -o results.json # Detect + intelligence
miesc fix results.json -c contract.sol -o fixed.sol # Auto-patch vulnerabilities
miesc remediate results.json -c contract.sol --compile --rescan # Patch + evidence bundle
miesc verify fixed.sol --tool smtchecker # Prove fix works
miesc compliance results.json --standard mica # Map to MiCA/DORA/ISO 27001
miesc report results.json -t premium -f pdf # Professional audit report
Current research pipeline: Intelligence Engine
miesc scan contract.sol --verbose # Per-finding confidence + fix
miesc scan contracts/ --recursive # Directory scanning
miesc scan . --diff origin/main # PR-level: only changed files
The intelligence engine automatically:
- Deduplicates cross-tool findings (Slither + Aderyn report same bug → 1 finding)
- Scores confidence via Bayesian multi-tool agreement (2 tools = 85%, 3 = 95%)
- Generates fix code — copy-pasteable Solidity patches for 10 vulnerability categories
- Suppresses false positives — context-aware (onlyOwner, Solidity 0.8+, OpenZeppelin guards)
- Calibrates severity across tools (Aderyn LOW → Medium when warranted)
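The agreement-based deduplication and confidence idea can be sketched in a few lines. Note that the grouping key (vulnerability category + line) and the single-tool baseline of 60% are assumptions for illustration, not MIESC internals:

```python
from collections import defaultdict

# Hypothetical raw findings as (tool, vulnerability, line) tuples.
findings = [
    ("slither", "reentrancy", 42),
    ("aderyn", "reentrancy", 42),
    ("mythril", "reentrancy", 42),
    ("solhint", "naming", 7),
]

# Group cross-tool reports of the same issue into one finding.
groups = defaultdict(set)
for tool, vuln, line in findings:
    groups[(vuln, line)].add(tool)

# Map tool agreement to the confidence levels quoted above;
# the 1-tool value is an assumed baseline for this sketch.
CONFIDENCE = {1: 0.60, 2: 0.85, 3: 0.95}

for (vuln, line), tools in groups.items():
    conf = CONFIDENCE.get(len(tools), 0.95)
    print(f"{vuln}@{line}: {len(tools)} tool(s), confidence {conf:.0%}")
```

Three tools reporting the same reentrancy at the same line collapse into a single high-confidence finding, while the lone style warning stays at the baseline.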
Want the full 9-layer analysis with AI correlation?
miesc audit full MyContract.sol -o results.json
miesc report results.json -t premium -f pdf --llm-interpret
See example output
=== Layer 1: Static Analysis ===
OK slither: 5 findings in 1.7s
OK aderyn: 5 findings in 3.0s
OK solhint: 0 findings in 0.7s
=== Layer 2: Dynamic Testing ===
OK echidna: 0 findings in 2.0s
OK foundry: 0 findings in 9.0s
=== Layer 3: Symbolic Execution ===
OK mythril: 2 findings in 298.0s
=== Layer 5: AI Analysis ===
OK smartllm: 4 findings in 198.9s
OK gptscan: 4 findings in 49.7s
Full Audit Summary
+----------+-------+
| Severity | Count |
+----------+-------+
| CRITICAL |     1 |
| HIGH     |    11 |
| MEDIUM   |     1 |
| LOW      |     9 |
| TOTAL    |    22 |
+----------+-------+
Tools executed: 12/29
Report saved to results.json
Why MIESC
The problem: Professional smart contract audits cost $50K-$200K and take weeks. Meanwhile, $1.5B+ is lost to exploits every year. Most projects ship without any audit at all. Running Slither alone catches ~70% of vulnerabilities with 15-20% false positives. Every tool has blind spots. Auditors manually run 5-10 tools, normalize outputs, and cross-reference findings. This takes hours.
MIESC makes that workflow accessible to everyone. One command orchestrates multiple security tools across 9 complementary analysis techniques, deduplicates findings, and generates professional reports. Free, open-source, runs locally — your code never leaves your machine.
Benchmark Results
SmartBugs-curated (143 contracts, 207 ground-truth vulnerabilities):
| Metric | Slither alone | Mythril alone | MIESC Paper 1 reproducible profile | Evidence scope |
|---|---|---|---|---|
| Recall | 43.2% | 27.4% | 93.7% | Full SmartBugs-curated corpus |
| Precision | 8.3% | 6.1% | 19.1% | Full SmartBugs-curated corpus |
| F1-Score | 13.9% | 10.0% | 31.7% | Full SmartBugs-curated corpus |
The full-corpus SmartBugs result is the reproducible Paper 1 profile. The 9-layer run is reported separately as an end-to-end integration smoke run, not as a corpus-wide claim.
Real-world exploits (11 confirmed DeFi exploits, $3.3B total losses):
| Vulnerability | Exploits | Detected | Recall | Examples |
|---|---|---|---|---|
| Reentrancy | 3 | 3 | 100% | Euler $197M, Rari $80M, Platypus $8.5M |
| Access Control | 3 | 3 | 100% | Parity $280M, Ronin $624M |
| Flash Loan | 2 | 2 | 100% | bZx $8.1M, Compound $80M |
| Overall | 11 | 9 | 81.8% | Cohen's Kappa: 0.77 |
81.8% recall on real-world exploits — MIESC would have flagged 9 of 11 multi-million dollar exploits before deployment. Paper 1 reproducibility | Exploit evaluation
Why recall matters more than precision for pre-audit triage: high recall means fewer missed vulnerabilities. False positives can be filtered out during triage; a missed vulnerability becomes an exploit in production.
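The F1 figures in the SmartBugs table follow directly from the precision/recall pairs (F1 is the harmonic mean of the two); a quick consistency check:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values from the SmartBugs-curated table above.
print(f"Slither: {f1(0.083, 0.432):.1%}")  # ~13.9%
print(f"Mythril: {f1(0.061, 0.274):.1%}")  # ~10.0%
print(f"MIESC:   {f1(0.191, 0.937):.1%}")  # ~31.7%
```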
Research Papers and Reproducible Claims
MIESC has two linked research tracks. Paper 1 evaluates detection and multi-layer security assessment. Paper 2 extends that evidence into automatic remediation artifacts and independent verification steps. Paper 2 does not replace or invalidate Paper 1; it starts from the same detection pipeline and measures what happens after a finding is converted into a patch candidate.
| Paper | Focus | Main reproducible evidence | Artifacts |
|---|---|---|---|
| Paper 1 | Multi-layer smart contract security evaluation | SmartBugs: 93.7% recall on 143 contracts; DeFi exploits: 81.8% recall on 11 incidents; EVMBench ensemble: 111/120 high-severity findings, 92.5% recall | Reproducibility, claims matrix |
| Paper 2 | Verifiable remediation artifacts | 141/143 fixes applied; 90/141 standalone patched contracts compile; 93/141 eliminate the original finding by re-scan; 91/141 pass bounded no-regression | Reproducibility, claims matrix, experiment audit |
For research citation and review, the canonical current claims are the two paper PDFs, their reproducibility notes, and the benchmarks/results/paper*_claims_matrix.json files. The platform alignment plan maps these paper results into CLI, API, MCP, RAG, and remediation workflow requirements: Paper learnings and platform alignment. RAG source selection and weighting are governed by the RAG source policy. Older release notes, thesis drafts, and roadmap documents are kept for project history and may contain previous benchmark runs or version-specific metrics.
Current technical-debt cleanup and remaining platform work are tracked in the technical debt remediation plan.
The 9 Defense Layers
Layer 1 Static Analysis Slither, Aderyn, Solhint, Semgrep
Layer 2 Dynamic Testing Echidna, Foundry, Medusa
Layer 3 Symbolic Execution Mythril, Halmos, Manticore
Layer 4 Formal Verification SMTChecker, Scribble, Certora*
Layer 5 AI/LLM Analysis SmartLLM, GPTScan, LLMSmartAudit (Ollama)
Layer 6 Pattern Detection Gas Analyzer, Clone Detector, Threat Model
Layer 7 DeFi Security MEV Detector, Flash Loan Analyzer, Oracle Checker
Layer 8 Exploit Validation PoC Synthesizer (Foundry), Vulnerability Verifier
Layer 9 Consensus & Reporting Bayesian Consensus, RAG Enrichment, PDF Reports
*Certora requires API key. All other tools are fully open-source.
What MIESC integrates
| Category | Count | Examples |
|---|---|---|
| External security tools | 13 | Slither, Mythril, Echidna, Foundry, Halmos, Aderyn, Semgrep |
| LLM analysis modules | 6 | SmartLLM, GPTScan, PropertyGPT (via local Ollama) |
| Internal analyzers | 16 | MEV detector, gas analyzer, threat model, clone detector |
| Total analysis modules | 35 | Across 9 complementary techniques |
vs. SmartBugs 2.0 (closest competitor)
| Feature | MIESC | SmartBugs 2.0 |
|---|---|---|
| External tools | 13 | 19 |
| AI/LLM analysis | Yes (Ollama local) | No |
| Internal analyzers | 16 | No |
| False-positive filter | Yes (RAG + ML) | No |
| Professional PDF reports | Yes | No |
| Plugin system | Yes (PyPI) | No |
| GitHub Action | Yes | No |
| SARIF output | Yes | Yes |
GitHub Action
Add smart contract security to any CI/CD pipeline in 30 seconds:
# .github/workflows/security.yml
name: Security
on: [push, pull_request]

permissions:
  security-events: write
  pull-requests: write

jobs:
  miesc:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: fboiero/MIESC@v5
        with:
          path: contracts/
          mode: scan
          fail-on: high
          upload-sarif: true
          comment-on-pr: true
Results appear in GitHub's Security tab and as a PR comment. See example workflow for advanced configurations.
Available Modes
| Mode | Tools | Time | Use Case |
|---|---|---|---|
| `scan` | Slither, Aderyn, Solhint | ~30s | Every push |
| `audit-quick` | 4 core tools | ~2min | PR checks |
| `audit-full` | All 9 layers | ~10min | Pre-release |
| `audit-profile` | Configurable | Varies | DeFi, tokens, etc. |
Installation
# Minimal CLI
pip install miesc
# With PDF reports
pip install miesc[pdf]
# Everything (PDF, LLM, RAG, web UI)
pip install miesc[full]
# Development
git clone https://github.com/fboiero/MIESC.git
cd MIESC && pip install -e .[dev]
Requirements: Python 3.12+. Slither installs automatically. Other tools are optional — MIESC uses whatever is available.
For a full researcher workstation with isolated Mythril, Manticore, Certora CLI, Wake, Semgrep, and formal-verification tooling:
git clone https://github.com/fboiero/MIESC.git
cd MIESC
./scripts/bootstrap_researcher_tools.sh
make researcher-smoke
See Researcher Packaging Guide for the recommended PyPI + Docker + local bootstrap distribution model.
Multi-Chain Support
miesc analyze Token.sol # Auto-detects EVM (Solidity/Vyper)
miesc analyze Vault.cairo # Starknet/Cairo (13 vuln types, zkLend-informed)
miesc analyze Program.rs # Solana/Anchor (22 vuln types)
miesc analyze Module.move # Move/Sui/Aptos (19 vuln types)
77 vulnerability types across 4 ecosystems, informed by real 2024-2026 exploits (zkLend $9.6M, Braavos, Wormhole $326M, Ronin $624M).
Bridge vulnerability detection:
miesc scan Bridge.sol # Detects 7 bridge exploit patterns
All Commands
| Command | Description |
|---|---|
| `miesc scan` | Quick scan (3 tools + intelligence engine) |
| `miesc scan --diff HEAD~1` | NEW PR-level: only changed .sol files |
| `miesc scan contracts/` | NEW Directory scan (+ `--recursive`) |
| `miesc audit quick\|full` | Multi-layer audit (3 quick tools or 50 configured adapters) |
| `miesc fix results.json` | Auto-generate patched .sol files |
| `miesc remediate results.json` | NEW Generate patched files plus compile/re-scan evidence |
| `miesc verify contract.sol` | Run Certora/Halmos/SMTChecker provers |
| `miesc compliance results.json` | NEW Map to ISO/NIST/MiCA/DORA |
| `miesc report results.json` | Professional PDF/HTML/Markdown report |
| `miesc specs results.json` | Generate Certora CVL / Scribble specs |
| `miesc export results.json` | SARIF (GitHub), CSV, HTML export |
| `miesc analyze contract.cairo` | Multi-chain analysis |
| `miesc doctor` | Check tool availability |
| `miesc watch contracts/` | Live file watching + auto-scan |
Docker
# Standard image (~3GB, multi-arch including Apple Silicon)
docker pull ghcr.io/fboiero/miesc:latest
docker run --rm -v $(pwd):/contracts ghcr.io/fboiero/miesc:latest scan /contracts/MyContract.sol
# Full image (~8GB, all 50 tools, amd64)
docker pull ghcr.io/fboiero/miesc:full
docker run --rm -v $(pwd):/contracts ghcr.io/fboiero/miesc:full audit full /contracts/MyContract.sol
# Check what tools are available
docker run --rm ghcr.io/fboiero/miesc:latest doctor
Docker + Host Ollama (recommended for GPU acceleration)
MIESC's LLM layers (Layer 5) use Ollama for local AI analysis. The recommended setup runs Ollama on your host machine (with GPU access) and connects the Docker container via network:
# 1. Install and start Ollama on your HOST (not inside Docker)
# Download from https://ollama.com
ollama serve &
ollama pull qwen2.5-coder:14b # Best model for code analysis
ollama pull deepseek-coder:6.7b # Lighter alternative
# 2. Scan with host Ollama (macOS)
docker run --rm \
-e OLLAMA_HOST=http://host.docker.internal:11434 \
-v $(pwd)/contracts:/contracts \
ghcr.io/fboiero/miesc:latest \
scan /contracts/MyContract.sol -o /contracts/results.json
# 3. Scan with host Ollama (Linux)
docker run --rm --network=host \
-e OLLAMA_HOST=http://localhost:11434 \
-v $(pwd)/contracts:/contracts \
ghcr.io/fboiero/miesc:latest \
scan /contracts/MyContract.sol
# 4. Generate AI-powered premium PDF report
docker run --rm \
-e OLLAMA_HOST=http://host.docker.internal:11434 \
-v $(pwd):/work \
ghcr.io/fboiero/miesc:full \
report /work/results.json -t premium -f pdf \
--llm-interpret -o /work/audit_report.pdf
Why host Ollama? Docker containers don't have GPU access by default. Running Ollama on the host lets it use your GPU (Apple Silicon, NVIDIA CUDA) for 10-50x faster LLM inference.
Docker Compose (full stack with bundled Ollama)
For environments without a host Ollama, use docker-compose to run everything together:
# Start MIESC + Ollama + model initialization
docker compose -f docker/docker-compose.yml --profile llm up -d
# Run analysis
docker compose -f docker/docker-compose.yml exec miesc miesc scan /app/contracts/MyContract.sol
# Stop
docker compose -f docker/docker-compose.yml down
The compose stack automatically pulls deepseek-coder:6.7b and configures inter-service networking.
ARM / Apple Silicon notes
The standard image runs natively on ARM. The registry full image is intended as amd64 for maximum tool parity. Native ARM full builds skip Echidna, Medusa, Mythril, Manticore, Halmos, and Semgrep by default because upstream releases are amd64-only, require long Z3 source builds, or ship ARM wheels that are not reliable in Docker. Build natively with:
./scripts/build-images.sh full
For full ARM workstation parity, prefer the local bootstrap:
./scripts/bootstrap_researcher_tools.sh
To force native ARM Mythril, Manticore, Halmos, or Semgrep builds inside Docker:
MIESC_BUILD_MYTHRIL=true ./scripts/build-images.sh full
MIESC_BUILD_MANTICORE=true ./scripts/build-images.sh full
MIESC_BUILD_HALMOS=true ./scripts/build-images.sh full
MIESC_BUILD_SEMGREP=true ./scripts/build-images.sh full
CLI Reference
miesc scan contract.sol # Quick scan (Slither + Aderyn + Solhint)
miesc scan contract.sol --ci # CI mode: exit 1 on critical/high
miesc audit quick contract.sol # 4-tool audit
miesc audit full contract.sol # Full 9-layer audit
miesc audit layer 3 contract.sol # Specific layer (e.g., symbolic execution)
miesc audit profile defi contract.sol # Named profile (defi, token, security, etc.)
miesc report results.json -t premium -f pdf --llm-interpret # AI-powered PDF report
miesc export results.json -f sarif # Export as SARIF
miesc doctor # Check tool availability
miesc watch ./contracts # Auto-scan on file changes
miesc benchmark ./contracts --save # Track security posture over time
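Because `miesc export -f sarif` emits the standard SARIF format, any generic tooling can consume the output. A minimal reader that uses only spec-level fields (`runs`, `results`, `ruleId` from the SARIF 2.1.0 specification, nothing MIESC-specific):

```python
import json

def summarize_sarif(path):
    """Count results per rule in a SARIF 2.1.0 log file."""
    with open(path) as f:
        log = json.load(f)
    counts = {}
    for run in log.get("runs", []):
        for result in run.get("results", []):
            rule = result.get("ruleId", "unknown")
            counts[rule] = counts.get(rule, 0) + 1
    return counts
```

This is handy for quick dashboards or diffing two scans without pulling in a SARIF library.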
Analysis Profiles
| Profile | Layers | Best For |
|---|---|---|
| `fast` | 1 | Quick feedback during development |
| `balanced` | 1, 3 | Pre-commit checks |
| `ci` | 1 | CI/CD pipelines |
| `security` | 1, 3, 4, 5 | High-value contracts |
| `defi` | 1, 2, 3, 5, 8 | DeFi protocols |
| `token` | 1, 3, 5 | ERC20/721/1155 tokens |
| `thorough` | 1-9 | Pre-release / comprehensive audit |
Extend MIESC
Custom Detectors
from miesc.detectors import BaseDetector, Finding, Severity

class DangerousDelegatecall(BaseDetector):
    name = "dangerous-delegatecall"
    description = "Detects unprotected delegatecall patterns"

    def analyze(self, source_code, file_path=None):
        findings = []
        # Your detection logic here
        return findings
Register via pyproject.toml:
[project.entry-points."miesc.detectors"]
dangerous-delegatecall = "my_package:DangerousDelegatecall"
Plugin System
miesc plugins install miesc-defi-detectors # Install from PyPI
miesc plugins create my-detector # Scaffold a new plugin
miesc plugins list # List installed plugins
miesc detectors list # List all available detectors
Integrations
# Pre-commit hook (.pre-commit-config.yaml)
repos:
  - repo: https://github.com/fboiero/MIESC
    rev: v5.4.2
    hooks:
      - id: miesc-quick
        args: ['--ci']

# Foundry (foundry.toml)
[profile.default]
post_build_hook = "miesc audit quick ./src --ci"

// Hardhat (hardhat.config.js)
require("hardhat-miesc");
module.exports = {
  miesc: { enabled: true, runOnCompile: true, failOn: "high" }
};
MCP Server (Claude Desktop / Cursor / Claude Code)
MIESC exposes its 9-layer security analysis as MCP tools via Model Context Protocol:
pip install 'miesc[mcp]' # Install with MCP support
miesc-mcp # Start stdio MCP server
Add to your Claude Desktop config.json:
{
  "mcpServers": {
    "miesc": {
      "command": "miesc-mcp",
      "env": {"OLLAMA_HOST": "http://localhost:11434"}
    }
  }
}
Available MCP tools: miesc_quick_scan, miesc_deep_scan, miesc_deep_audit, miesc_run_tool, miesc_run_layer, miesc_analyze_defi, miesc_apply_fix, miesc_validate_remediation, miesc_remediation_evidence_bundle, miesc_list_tools, miesc_doctor
REST API Remediation
The API exposes the same remediation evidence schema as the CLI:
| Endpoint | Purpose |
|---|---|
| `POST /api/v1/remediate/` | Apply fix candidates and return an evidence bundle |
| `POST /api/v1/validate-remediation/` | Apply fixes with compile/re-scan validation enabled by default |
Examples: Remediation API and MCP guide.
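A minimal client sketch for the remediation endpoint using only the standard library. The request body field names (`findings`, `contract_source`) and the default base URL are assumptions for illustration; check the Remediation API guide for the actual schema:

```python
import json
from urllib import request

def build_payload(findings, contract_source):
    # Field names here are hypothetical; consult the Remediation API
    # guide for the real request schema.
    return {"findings": findings, "contract_source": contract_source}

def remediate(findings, contract_source, base="http://localhost:8000"):
    """POST a remediation request and return the decoded evidence bundle."""
    data = json.dumps(build_payload(findings, contract_source)).encode()
    req = request.Request(
        f"{base}/api/v1/remediate/",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```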
Python API
from miesc.api import run_tool, run_full_audit
results = run_tool("slither", "contract.sol")
report = run_full_audit("contract.sol")
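`run_tool` and `run_full_audit` return structured results. Assuming each finding is a dict carrying a `severity` key (an assumption for this sketch; see the API Reference for the actual shape), a release-gate filter might look like:

```python
def high_or_critical(findings):
    """Keep only the severities that should block a release."""
    return [f for f in findings
            if f.get("severity", "").upper() in ("HIGH", "CRITICAL")]

# Hypothetical findings, shaped like the summary table earlier in this page.
sample = [
    {"severity": "CRITICAL", "title": "reentrancy in withdraw()"},
    {"severity": "LOW", "title": "missing NatSpec"},
]
print(high_or_critical(sample))  # only the reentrancy finding survives
```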
Web UI
pip install miesc[web]
make webapp # Opens at http://localhost:8501
Interactive Streamlit dashboard with upload, analysis, results visualization, and report export.
Multi-Chain Support
| Chain | Status | Languages |
|---|---|---|
| EVM (Ethereum, Polygon, BSC, Arbitrum, etc.) | Production | Solidity, Vyper |
| Solana | Alpha | Rust/Anchor |
| NEAR | Alpha | Rust |
| Move (Sui, Aptos) | Alpha | Move |
| Stellar/Soroban | Alpha | Rust |
| Algorand | Alpha | TEAL, PyTeal |
| Cardano | Alpha | Plutus, Aiken |
miesc scan program.rs --chain solana
miesc scan module.move --chain sui
Non-EVM support is experimental. EVM analysis (50 tools, 9 layers) is production-ready.
Compliance Mapping
MIESC maps every finding to international security standards:
ISO/IEC 27001:2022 | NIST CSF | OWASP Smart Contract Top 10 | CWE | SWC Registry | MITRE ATT&CK | PCI-DSS | SOC 2
Architecture
Contract.sol
|
v
CLI / API / MCP / GitHub Action
|
v
Orchestrator --> 9 Layers --> 50 Tool Adapters
|
v
Finding Aggregator --> ML Pipeline --> RAG Context --> FP Filter
|
v
Report Generator --> JSON / SARIF / PDF / HTML / Markdown
MIESC uses a dual-package architecture:
- miesc/ - Public API (stable, pip-installable)
- src/ - Internal implementation (35 analysis modules, ML pipeline, RAG, report generation)
See ARCHITECTURE.md for the full technical design with Mermaid diagrams.
Academic Validation
MIESC was developed as a Master's thesis in Cyberdefense at UNDEF-IUA (Argentina). The current research claims are consolidated in Paper 1, Paper 2, and their reproducibility artifacts:
- 143 contracts, 207 ground-truth vulnerabilities, 10 categories
- 93.7% recall on the latest full-corpus reproducible local SmartBugs profile; Slither alone baseline: 43.2%
- Paper 1 now treats EVMBench as the primary business-logic benchmark: static-only reaches 22/120 (18.3%), while the reproducible four-provider ensemble reaches 111/120 (92.5%)
- Paper 2 evaluates remediation artifacts: 141/143 fixes applied, 90/141 patched contracts compile standalone, 93/141 eliminate the original finding by re-scan, and 91/141 pass bounded no-regression
- Reproducible SmartBugs profile runs in 273.4s total (~1.91 sec/contract)
- Canonical results: Paper 1 reproducibility, Paper 2 reproducibility, Paper 1 claims matrix, Paper 2 claims matrix
If you use MIESC in research, please cite:
@mastersthesis{boiero2025miesc,
title={Integrated Security Assessment Framework for Smart Contracts:
A Defense-in-Depth Approach to Cyberdefense},
author={Boiero, Fernando},
year={2025},
school={Universidad de la Defensa Nacional (UNDEF) - IUA C{\'o}rdoba},
type={Master's Thesis in Cyberdefense}
}
Digital Public Good
MIESC is a candidate for Digital Public Goods Alliance certification (Application ID: GID0092948), aligned with the UN Sustainable Development Goals:
- SDG 9 (Industry & Infrastructure): Strengthening blockchain infrastructure security
- SDG 16 (Peace & Strong Institutions): Protecting digital assets, reducing fraud
- SDG 17 (Partnerships): Open-source collaboration for global security standards
The project is fully compliant with all 9 DPGA indicators: open license (AGPL-3.0), clear ownership, platform independence, comprehensive documentation, data portability (SARIF/CSV/JSON), privacy-preserving (local-first), standards-based (12 international standards), and responsible use policies.
| Policy | Description |
|---|---|
| DPG Compliance | Full 9-indicator compliance statement |
| SDG Relevance | Sustainable Development Goals mapping |
| Privacy Policy | Data handling and privacy statement |
| Responsible Use | Ethical use guidelines |
| Do No Harm | Risk assessment and mitigations |
Documentation
| Resource | Description |
|---|---|
| Installation Guide | Complete setup instructions |
| Quick Start | Get running in 5 minutes |
| Architecture | Technical design and layer details |
| Tool Reference | All 35 analysis modules and their capabilities |
| Report Guide | Report templates and customization |
| Custom Detectors | Build your own detectors |
| Multi-Chain | Non-EVM chain analysis |
| API Reference | Auto-generated from docstrings |
| Contributing | Development guidelines |
| Roadmap | What's coming next |
Troubleshooting
Common Issues
miesc: command not found
After pip install miesc, the entry point may not be on your PATH. Use:
python3 -m miesc.cli.main --help
Or add ~/.local/bin (or your venv's bin) to your PATH.
Tool 'mythril' not installed
Optional tools must be installed separately. Either:
pip install miesc[full] # All Python-based tools
brew install mythril # System install
docker run ghcr.io/fboiero/miesc:full # Or use the full Docker image
Ollama API error: HTTP 404 or LLM analysis returns empty
The required model isn't pulled. Run:
ollama pull qwen2.5-coder:14b # ~9 GB, recommended
ollama pull qwen2.5-coder:32b # ~20 GB, more accurate (needs 24+GB RAM)
Verify Ollama is running: curl http://localhost:11434/api/tags
Docker container can't reach Ollama on host (macOS)
docker run -e OLLAMA_HOST=http://host.docker.internal:11434 ...
On Linux, use --network=host and OLLAMA_HOST=http://localhost:11434.
Slither errors with solc not found or version mismatch
Install solc-select:
pip install solc-select
solc-select install 0.4.26 0.5.17 0.8.20
PDF report generation fails (weasyprint errors)
WeasyPrint needs system libraries:
brew install pango cairo gdk-pixbuf libffi # macOS
sudo apt install libpango-1.0-0 libpangoft2-1.0-0 # Linux
miesc audit full runs slowly (>10 min per contract)
This usually means a heavy 32B LLM model is running on small contracts. Switch to qwen2.5-coder:14b in ~/.miesc/config.yaml, or run the quicker pass: miesc audit quick.
Contributing
git clone https://github.com/fboiero/MIESC.git
cd MIESC && pip install -e .[dev]
pytest tests/ --no-cov # Fast local run
pytest tests/ --cov=src --cov-report=term-missing # Coverage run
See CONTRIBUTING.md for guidelines. CONTRIBUTING_ES.md is also available in Spanish.
License
AGPL-3.0 - Free for any use. Security should not be a privilege reserved for well-funded projects. If you build a service on top of MIESC, contribute your changes back.
Author
Fernando Boiero - Master's Thesis in Cyberdefense, UNDEF-IUA Argentina
Built on the shoulders of: Slither, Mythril, Echidna, Foundry, Aderyn, Halmos, and the Ethereum security community.
File details
Details for the file miesc-5.4.2.tar.gz.
File metadata
- Download URL: miesc-5.4.2.tar.gz
- Upload date:
- Size: 1.2 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 642e78b38aa6fc4d4316534659bbb8fa0fd7f30ec7bdcc5cefcc95a31c12138d |
| MD5 | 78112bae92408906e05c9045fc4ece0b |
| BLAKE2b-256 | 9930144c9e8bd25f3c5018ae3f8536aebbd7c1da322630a60392379e9180e1cc |
File details
Details for the file miesc-5.4.2-py3-none-any.whl.
File metadata
- Download URL: miesc-5.4.2-py3-none-any.whl
- Upload date:
- Size: 1.4 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a64ecea3d0e347d831a0c91d22ce8c8e00010bbef16b999de55a0cf97c6ee21e |
| MD5 | 6bce621dcbadfb100af51f3b0975b913 |
| BLAKE2b-256 | 0c9f0d1109752c46910e2930cd3d802f9d50addbec6e832117757e2747d6bb2d |