Tropiflo

A tool for agentic recursive model improvement

Beat the competition.


Why is everyone talking about Tropiflo?

  • Idea Explosion — Launches a swarm of models, feature recipes & hyper-parameters you never knew existed.
  • Full-Map Exploration — Charts the entire optimization galaxy so you can stop guessing and start winning.
  • Hands-Free Mode — Hit run and the search party works through the night.
  • KPI Fanatic — Every evolutionary step is focused on improving your target metric.
  • Data Stays Home — Your training and testing data never leaves your server; everything runs locally.

Quickstart — 2 Minutes

Prerequisites: Docker must be installed (Get Docker)

1. Install

pip install tropiflo

2. Create Your Code

Mark the code you want to evolve with # CO_DATASCIENTIST_BLOCK_START and # CO_DATASCIENTIST_BLOCK_END:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import numpy as np

X = np.array([[0,0],[0,1],[1,0],[1,1]])
y = np.array([0,1,1,0])

# CO_DATASCIENTIST_BLOCK_START
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(random_state=0))
])
pipe.fit(X, y)
acc = accuracy_score(y, pipe.predict(X))
# CO_DATASCIENTIST_BLOCK_END

print(f"KPI: {acc:.4f}")  # Print your metric like this!

3. Create config.yaml

# Required
mode: "local"
entry_command: "python xor.py"

# Optional - for AI-powered evolution (contact us for API key)
api_key: "sk_your_token_here"

# Optional - AI engine selection (default: EVOLVE_HYPOTHESIS)
# Currently only EVOLVE_HYPOTHESIS is supported
# engine: "EVOLVE_HYPOTHESIS"

# Optional - run multiple versions in parallel
parallel: 2

# Optional - mount external data directory
data_volume: "/path/to/your/data"

# Optional - GPU/CPU/Memory resource limits per container
# enable_gpu: true        # Auto-detected by default
# gpus_per_task: 1        # Number of GPUs per container
# cpus_per_task: 4.0      # CPU cores per container
# memory_per_task: "8g"   # Memory limit per container

That's it! No Dockerfile, no requirements.txt - everything auto-detected from your environment.

4. Run

tropiflo run --config config.yaml

What happens:

  • Auto-detects Python version and packages
  • Builds Docker container automatically
  • Runs baseline
  • Evolves code to improve KPI (if api_key provided)
  • Saves all results to results/runs/{memorable_run_name}/

Without API key: Runs baseline locally (useful for testing Docker setup)
With API key: Full AI-powered evolution to optimize your code


Using a Private/Self-Hosted Backend

If you run the backend on your own host (VPC, on-prem), point the CLI at it via config or env:

  • In config.yaml:
    • backend_url: "https://your-private-backend.example.com"
    • Optionally backend_url_dev: "http://localhost:8000" for dev mode
  • Or with environment variables:
    • export CO_DATASCIENTIST_CO_DATASCIENTIST_BACKEND_URL="https://your-private-backend.example.com"
    • export CO_DATASCIENTIST_CO_DATASCIENTIST_BACKEND_URL_DEV="http://localhost:8000"
    • export CO_DATASCIENTIST_DEV_MODE=true to force the dev URL slot

If neither the YAML keys nor the environment variables are set, the client defaults to https://co-datascientist.io.


Air-Gapped / Offline Deployment

Need to run Tropiflo in an environment without internet access? We've got you covered!

Quick Setup (One-Time, Requires Internet)

# Run this once while connected to internet
tropiflo setup-airgap

# That's it! Now you can disconnect and work offline

What It Does

  1. Pulls base Python Docker image (one-time download)
  2. Builds complete image with all your dependencies pre-installed
  3. Updates your config.yaml to use the pre-built image
  4. Everything runs locally - no internet required after setup

After Setup

# Disconnect from internet (or work in isolated environment)
tropiflo run --config config.yaml  # Works offline!

Perfect for:

  • Air-gapped production environments
  • Isolated VPC deployments
  • High-security environments
  • Offline development

See full guide: docs/AIR_GAP_DEPLOYMENT.md


Using Your Own Data

After the dummy example works, here's how to use YOUR data:

Method 1: Hardcoded Paths (Simplest)

Just put the full path in your code:

import pandas as pd

X = pd.read_csv("/full/path/to/your/data.csv")
# ... rest of your code

Method 2: Docker Volume Mounting (Recommended)

For data that lives outside your project:

Step 1: Update config.yaml

mode: "local"
parallel: 3
data_volume: "/home/user/my_datasets"  # Where your data lives on your machine

Step 2: Update your code

import os
import pandas as pd

# Tropiflo automatically sets INPUT_URI to /data inside Docker
DATA_DIR = os.environ.get("INPUT_URI", "/data")
X = pd.read_csv(os.path.join(DATA_DIR, "train.csv"))
y = pd.read_csv(os.path.join(DATA_DIR, "labels.csv"))

# CO_DATASCIENTIST_BLOCK_START
# Your model code here
# CO_DATASCIENTIST_BLOCK_END

print(f"KPI: {score}")

What happens: Tropiflo mounts /home/user/my_datasets to /data inside the Docker container, so your code can access files like train.csv.

Complete Example:

Your machine:
  /home/user/my_datasets/train.csv
  /home/user/my_datasets/test.csv

Inside Docker (automatic):
  /data/train.csv
  /data/test.csv

🖥️ Resource Allocation (GPU, CPU, Memory)

Control how much hardware each Docker container gets. Perfect for multi-GPU servers or resource-constrained environments.

GPU Configuration

Auto-detection (default):

# No configuration needed - GPUs auto-detected!
# If available: containers get GPU access
# If not available: containers run on CPU automatically

Manual control:

enable_gpu: false       # Force CPU-only (even if GPU available)
enable_gpu: true        # Force GPU (fails if not available)
gpus_per_task: 1        # Each container gets 1 GPU
gpus_per_task: 2        # Each container gets 2 GPUs

CPU & Memory Limits

cpus_per_task: 4.0      # Each container gets 4 CPU cores
memory_per_task: "8g"   # Each container gets 8GB RAM

Common Scenarios

Example 1: Single GPU Workstation

entry_command: "python train.py"
parallel: 2             # Run 2 experiments at once

gpus_per_task: 1        # Each container sees 1 GPU (the single GPU is shared)
cpus_per_task: 4.0      # Each gets 4 cores (total: 8 cores)
memory_per_task: "8g"   # Each gets 8GB (total: 16GB)

Example 2: Multi-GPU Server

entry_command: "python train.py"
parallel: 8             # Run 8 experiments at once

gpus_per_task: 1        # Each gets 1 GPU (total: 8 GPUs)
cpus_per_task: 2.0      # Each gets 2 cores (total: 16 cores)
memory_per_task: "4g"   # Each gets 4GB (total: 32GB)

Example 3: CPU-Only Machine

entry_command: "python train.py"
parallel: 4             # Run 4 experiments at once

enable_gpu: false       # Force CPU mode
cpus_per_task: 2.0      # Each gets 2 cores (total: 8 cores)
memory_per_task: "2g"   # Each gets 2GB (total: 8GB)

How It Works

  • GPU auto-detection: Checks for NVIDIA GPU + Docker GPU support on startup
  • CPU fallback: If no GPU is found, it automatically sets CUDA_VISIBLE_DEVICES="" so PyTorch/TensorFlow fall back to CPU (see the check below)
  • Resource limits: Docker enforces the limits you set per container
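
If you want to confirm which device a container actually sees, a minimal check like this works (PyTorch is shown purely as an example; any framework with a device query will do):

import os
import torch  # example dependency; swap in your framework of choice

# When no GPU is detected, Tropiflo sets CUDA_VISIBLE_DEVICES="",
# so torch.cuda.is_available() reports False and training falls back to CPU.
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES", "<unset>"))
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Training on: {device}")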

Pro tip: Start with no limits, then add them if you need to control resource usage across parallel runs.


🛑 Stopping a Run

Press Ctrl+C anytime to stop. Docker images are cleaned up automatically:

^C
⚠️  Received SIGINT - cleaning up...
🧹 Cleaning up Docker images...
✅ Docker cleanup complete
👋 Workflow interrupted by user

No manual cleanup needed! Automatic cleanup happens on:

  • Normal workflow completion
  • User interruption (Ctrl+C)
  • Process termination

⚠️ Important: Block Placement Rules

Block markers MUST be at top level (no indentation):

# ✅ CORRECT - No indentation before the comment
# CO_DATASCIENTIST_BLOCK_START
def my_model():
    return LinearRegression()
# CO_DATASCIENTIST_BLOCK_END

# ❌ WRONG - Inside a function (has tabs/spaces before comment)
def train():
    # CO_DATASCIENTIST_BLOCK_START  ← This will NOT be detected!
    model = train_model()
    # CO_DATASCIENTIST_BLOCK_END

Rule: Block markers must start at column 0 (no tabs or spaces before #).


📁 Results Structure

Tropiflo saves all results in a clean, organized structure:

your_project/
└── results/
    └── runs/
        ├── happy_panda_20260120_143025/    ← Memorable run name
        │   ├── timeline/                    ← All hypotheses (chronological)
        │   │   ├── 0001_kpi_0.9530_baseline_phishing/
        │   │   ├── 0002_kpi_0.9612_hypothesis_ensemble/
        │   │   └── 0003_kpi_0.9703_hypothesis_stacking/
        │   ├── by_performance/              ← Auto-sorted by KPI
        │   └── best -> timeline/0003...     ← Best checkpoint
        └── brave_tiger_20260121_091532/

Key Features:

  • Each workflow run gets a unique memorable name (e.g., happy_panda_20260120)
  • timeline/ shows every hypothesis tested in order
  • by_performance/ automatically sorts runs by KPI for easy comparison
  • best symlink always points to your best-performing version
  • Results are automatically excluded from Docker builds

How It Works

  1. Mark code blocks with # CO_DATASCIENTIST_BLOCK_START and # CO_DATASCIENTIST_BLOCK_END (at top level only!)
  2. Print your KPI: print(f"KPI: {score}")
  3. Run: tropiflo run --config config.yaml
  4. Find best result: results/runs/{run_name}/best/
  5. Deploy: tropiflo deploy results/runs/{run_name}/best/

Important Notes

  • Avoid input() or interactive prompts - Tropiflo needs to run your code automatically (see the sketch after this list)
  • Mark the parts you want to evolve with # CO_DATASCIENTIST_BLOCK_START and # CO_DATASCIENTIST_BLOCK_END
  • Add comments with context about your problem - Tropiflo understands your domain!
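
If your script currently prompts for parameters, a minimal sketch of the usual fix is to read them from environment variables (or CLI flags) instead; THRESHOLD here is a hypothetical parameter name:

import os

# Instead of: threshold = float(input("Threshold? "))
threshold = float(os.environ.get("THRESHOLD", "0.5"))  # hypothetical parameter with a default
print(f"Using threshold={threshold}")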

Project Structure

Co-DataScientist supports both single-file scripts and multi-file projects:

  • Single File: tropiflo run python my_script.py
  • Multi-File: Auto-detects run.sh, main.py, or run.py in your project root
  • Custom Entry Point: Just wrap your command: tropiflo run bash custom_script.sh

The system automatically detects which files contain CO_DATASCIENTIST_BLOCK markers and evolves them intelligently.
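
For example, a multi-file project might look like this (file names are illustrative):

my_project/
├── run.sh            ← auto-detected entry point
├── main.py
├── model.py          ← contains CO_DATASCIENTIST_BLOCK markers
└── data_loader.py    ← contains CO_DATASCIENTIST_BLOCK markers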


Add Domain-Specific Notes for Best Results

After your code, add comments with any extra context, known issues, or ideas you have about your problem. This helps Co-DataScientist understand your goals and constraints. It understands your problem - it's not just doing a blind search!
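
For example, a hypothetical imbalanced-classification script might end like this:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Tiny synthetic stand-in for your real data (~2% positive class).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] > 2.0).astype(int)

# CO_DATASCIENTIST_BLOCK_START
model = LogisticRegression(class_weight="balanced")
model.fit(X, y)
score = roc_auc_score(y, model.predict_proba(X)[:, 1])
# CO_DATASCIENTIST_BLOCK_END

print(f"KPI: {score:.4f}")

# Context for Co-DataScientist:
# - Positives are ~2% of rows, so accuracy is misleading; we optimize AUC.
# - Inference latency matters in production, so prefer compact models.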


Multi-File Evolution

When you run Co-DataScientist on a multi-file project:

  1. Scanning: It scans all .py files in your project for CO_DATASCIENTIST_BLOCK markers
  2. Selection: Each generation, it randomly picks ONE file to evolve
  3. Evolution: The AI generates hypotheses and modifies the selected block
  4. Stitching: Modified code is integrated back into your full project
  5. Testing: Your entire project runs with the new code using your run.sh or custom command
  6. Checkpointing: Best results are saved as complete directories with all files

This means you can have complex multi-file ML pipelines where each file evolves independently but is tested as a complete system. Your project structure and dependencies are preserved.

Example Evolution Flow:

Generation 1: Evolve model.py → Test full project → KPI: 30.0
Generation 2: Evolve data_loader.py → Test full project → KPI: 45.0
Generation 3: Evolve main.py → Test full project → KPI: 60.0

Other helpful stuff

Skip Q&A on Repeat Runs

For faster iterations, use cached answers from your previous run:

tropiflo run --use-cached-qa python xor.py

This skips the interactive questions and uses your previous answers, jumping straight to the optimization process.

Deploy Checkpoints to Production

The deploy command makes it easy to take your best checkpoint and create a production-ready project:

# Deploy best checkpoint from latest run
tropiflo deploy results/runs/happy_panda_20260120/best/

# Deploy specific version
tropiflo deploy results/runs/happy_panda_20260120/timeline/0003_kpi_0.9703_stacking/

# Specify original project path manually
tropiflo deploy results/runs/{run_name}/best/ --original-path /path/to/my_project

# Use custom output directory name
tropiflo deploy results/runs/{run_name}/best/ --output-dir my_optimized_v2

What it does:

  1. Copies your entire original project (including data, configs, assets)
  2. Integrates the evolved code from the checkpoint
  3. Excludes Co-DataScientist artifacts (checkpoints, cache, etc.)
  4. Creates a deployment_info.json with checkpoint metadata

The result is a complete, standalone project ready to deploy to production!


Before vs After Example

Before (KPI ≈ 0.50):
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score
import numpy as np

# XOR data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('clf', RandomForestClassifier(n_estimators=10, random_state=0))
])

pipeline.fit(X, y)
preds = pipeline.predict(X)
accuracy = accuracy_score(y, preds)
print(f'Accuracy: {accuracy:.2f}')
print(f'KPI: {accuracy:.4f}')
After (KPI = 1.00):

import numpy as np
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.metrics import accuracy_score
from tqdm import tqdm

class ChebyshevPolyExpansion(BaseEstimator, TransformerMixin):
    def __init__(self, degree=3):
        self.degree = degree
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        X = np.asarray(X)
        X_scaled = 2 * X - 1
        n_samples, n_features = X_scaled.shape
        features = []
        for f in tqdm(range(n_features), desc='Chebyshev features'):
            x = X_scaled[:, f]
            T = np.empty((self.degree + 1, n_samples))
            T[0] = 1
            if self.degree >= 1:
                T[1] = x
            for d in range(2, self.degree + 1):
                T[d] = 2 * x * T[d - 1] - T[d - 2]
            features.append(T.T)
        return np.hstack(features)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

pipeline = Pipeline([
    ('cheb', ChebyshevPolyExpansion(degree=3)),
    ('scaler', StandardScaler()),
    ('clf', RandomForestClassifier(n_estimators=10, random_state=0))
])

pipeline.fit(X, y)
preds = pipeline.predict(X)
accuracy = accuracy_score(y, preds)
print(f'Accuracy: {accuracy:.2f}')
print(f'KPI: {accuracy:.4f}')

We now support Databricks


Databricks setup

  1. Download the databricks CLI package
curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sudo sh
  2. Get a Databricks token and verify that the CLI works (tokens are generated from your Databricks workspace settings)
  3. Prepare a config file with all of your compute/environment requirements in databricks_config.yaml (example below):
# Enable and configure the Databricks integration (XOR demo)
databricks:
  cli: "databricks"  # databricks CLI command (optional, defaults to "databricks")
  volume_uri: "dbfs:/Volumes/workspace/default/volume"  # DBFS volume URI for file uploads
  code_path: "dbfs:/Volumes/workspace/default/volume/xor.py"  # Specific code path (optional, overrides volume_uri + temp filename)
  timeout: "30m"  # Job timeout duration
  
  job:
    name: "run-<script-stem>-<timestamp>"  # Job name template (supports <script-stem> and <timestamp>)
    tasks:
      - task_key: "t"
        spark_python_task:
          python_file: "<remote_path>"  # Will be automatically replaced with actual remote path
        environment_key: "default"
    environments:
      - environment_key: "default"
        spec:
          client: "1"
          dependencies:
            - "scikit-learn>=1.0.0"
            - "numpy>=1.20.0"

Then run Co-DataScientist with:

tropiflo run --config databricks_config.yaml

Your optimized model results will be saved to the Databricks volume at the configured path.

Local Docker Execution with Volume Mounting

Run your code in Docker containers locally with automatic data volume mounting. Perfect for reproducible environments and large datasets.

Setup

  1. Create a config file (e.g., config.yaml):
mode: "local"
data_volume: "/absolute/path/to/your/data"  # Host directory with your data files
parallel: 1  # Number of parallel executions
  2. Update your code to use environment variables:
import os
import pandas as pd

# Co-DataScientist automatically sets INPUT_URI to /data in the container
INPUT_URI = os.environ.get("INPUT_URI", "/data")  # fall back to /data if unset
df = pd.read_csv(os.path.join(INPUT_URI, "train.csv"))
  3. Add a Dockerfile to your project:
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "your_script.py"]
  4. Run Co-DataScientist:
tropiflo run --working-directory . --config config.yaml

What Happens

Co-DataScientist will:

  • Build a Docker image from your project
  • Mount your data_volume directory to /data inside the container
  • Set the INPUT_URI=/data environment variable automatically
  • Execute your code in the container with access to your data
  • Extract KPIs and manage the evolution process

Benefits

  • Reproducible: Same environment every time
  • Isolated: Dependencies don't conflict with your system
  • Scalable: Easy to move to cloud later with minimal changes
  • Clean: No need to copy large datasets into Docker images

See complete demo: /demos/docker_demo/

Google Cloud Run Jobs Integration

Execute your code at scale on Google Cloud infrastructure.

Prerequisites

One-time GCP setup (5 minutes):

  1. Install & authenticate gcloud CLI:
# Install gcloud CLI (if not installed)
# See: https://cloud.google.com/sdk/docs/install

# Authenticate
gcloud auth login
gcloud auth application-default login

# Set your project
gcloud config set project YOUR_PROJECT_ID
  2. Enable required APIs:
gcloud services enable artifactregistry.googleapis.com
gcloud services enable run.googleapis.com
  3. Create Artifact Registry repository:
gcloud artifacts repositories create co-datascientist-repo \
  --repository-format=docker \
  --location=us-central1 \
  --description="Docker images for Co-DataScientist"

Configuration

Minimal config.yaml for GCloud:

# Required
mode: "gcloud"
entry_command: "python your_script.py"
project_id: "your-gcp-project-id"

# Optional
region: "us-central1"                    # Default: us-central1
repo: "co-datascientist-repo"            # Default: co-datascientist-repo
job_name: "co-datascientist-job"         # Default: co-datascientist-job
parallel: 2                               # Parallel execution
data_volume: "gs://your-bucket"           # GCS bucket for data
api_key: "sk_your_token"                  # For AI evolution

What Happens

When you run tropiflo run --config config.yaml:

  1. Builds your Docker image locally
  2. Pushes to GCP Artifact Registry
  3. Creates & executes Cloud Run Job
  4. Retrieves results and KPIs
  5. Cleans up resources automatically

Cost efficient: Cleans up jobs and images automatically (configurable with cleanup_job and cleanup_remote_image)
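
For example, to keep these artifacts around for debugging, the flags might be set like this (a hedged sketch; the boolean values shown are illustrative):

cleanup_job: false           # keep the Cloud Run Job after execution
cleanup_remote_image: false  # keep the pushed image in Artifact Registry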

Using Data from GCS

Add data_volume to mount a GCS bucket:

mode: "gcloud"
project_id: "my-project"
entry_command: "python train.py"
data_volume: "gs://my-data-bucket"

Your code accesses data at /data:

import os
DATA_DIR = os.environ.get("INPUT_URI", "/data")
df = pd.read_csv(os.path.join(DATA_DIR, "train.csv"))

Note: Your Cloud Run service account needs storage.objectViewer permission on the bucket.
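
One common way to grant that role (bucket and service-account names here are placeholders):

gcloud storage buckets add-iam-policy-binding gs://my-data-bucket \
  --member="serviceAccount:YOUR_SA@YOUR_PROJECT.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"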

AWS ECS Fargate Integration

Execute and optimize your Python code at scale using AWS ECS Fargate.

Setup

  1. Prerequisites:

    • AWS account with ECS Fargate enabled
    • Authenticated AWS CLI: aws configure
    • An ECS cluster and task definition configured for your needs
  2. Create a config file (e.g., aws_config.yaml):

aws:
  enabled: true
  script_path: "/path/to/your/script.py"
  cluster: "my-cluster"
  task_definition: "my-job-taskdef"
  launch_type: "FARGATE"
  region: "us-east-1"
  network_configuration:
    subnets: ["subnet-abc123", "subnet-def456"]
    security_groups: ["sg-123456"]
    assign_public_ip: "ENABLED"
  timeout: 1800  # seconds
  3. Run Co-DataScientist:
tropiflo run --config aws_config.yaml

Your code will be executed in AWS ECS Fargate containers, with results and KPIs retrieved automatically. Perfect for serverless compute scaling!


Analysis and Visualization Tools

Co-DataScientist includes built-in visualization tools to help you analyze your optimization results and compare different versions of your code.

Plot KPI Progression

Visualize how your KPI improves over iterations from checkpoint JSON files:

# Basic usage - plot KPI progression from run directory
tropiflo plot-kpi --checkpoints-dir results/runs/happy_panda_20260120/

# Advanced usage with custom options
tropiflo plot-kpi \
  --checkpoints-dir results/runs/happy_panda_20260120/ \
  --max-iteration 350 \
  --title "AUC Training Progress" \
  --kpi-label "AUC" \
  --output my_kpi_plot.png

Options:

  • --checkpoints-dir, -c: Directory containing checkpoint JSON files (required)
  • --max-iteration, -m: Maximum iteration to include in plot
  • --title, -t: Custom title for the plot
  • --output, -o: Output file path (auto-generated if not specified)
  • --kpi-label, -k: Label for the KPI metric (default: "RMSE")

Generate PDF Code Diffs

Create beautiful PDF reports comparing two versions of your Python code:

# Basic usage - compare two Python files
tropiflo diff-pdf baseline.py improved.py

# Advanced usage with custom options
tropiflo diff-pdf \
  baseline.py \
  optimized.py \
  --output "optimization_report.pdf" \
  --title "XOR Problem Optimization Results"

Options:

  • file1: Path to the baseline/original file (required)
  • file2: Path to the modified/new file (required)
  • --output, -o: Output PDF file path (auto-generated if not specified)
  • --title, -t: Custom title for the diff report

Example workflow:

# 1. Run optimization
tropiflo run --parallel 3 python xor.py

# 2. Plot the KPI progression (shows run name like "happy_panda_20260120")
tropiflo plot-kpi --checkpoints-dir results/runs/happy_panda_20260120/ --title "XOR Optimization"

# 3. Compare best result with baseline
tropiflo diff-pdf \
  results/runs/happy_panda_20260120/timeline/0001_kpi_0.5000_baseline/xor.py \
  results/runs/happy_panda_20260120/best/xor.py \
  --title "XOR Improvements"

These tools help you understand your optimization journey and create professional reports showing the improvements Co-DataScientist achieved.

Need help?

We’d love to chat: oz.kilim@tropiflo.io


All set? Run your pipelines and track the results.

Disclaimer: Co-DataScientist executes your scripts on your own machine. Make sure you trust the code you feed it!

Made by the Tropiflo team


AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page