# mcp-server-meridian

MCP server exposing Google Meridian Marketing Mix Model (MMM) trace data as structured tools.
A Model Context Protocol (MCP) server that exposes Google Meridian Marketing Mix Model (MMM) trace data as structured, queryable tools. Point it at any pickled Meridian model and get instant access to ROI analysis, saturation diagnostics, budget simulation, and more — all through Claude or any MCP-compatible client.
## Why
Marketing Mix Models produce rich posterior distributions, but the outputs are locked inside ArviZ traces that require statistical expertise to interpret. This server bridges the gap: it loads a fitted Meridian model and exposes 12 tools that compute MMM insights on demand, making Bayesian marketing analytics conversational.
## Use Cases & Example Prompts

### For Marketing Managers & Media Buyers

**Weekly performance check-in**

> "Show me channel contributions for the last 4 weeks vs the prior 4 weeks"

Uses `get_weekly_contributions(last_n_weeks=8)` — compare the two halves to see if a recent campaign change moved the needle.

**Is my spend efficient?**

> "Which channels have the best ROI and which are wasting money?"

Uses `get_channel_roi` + `get_saturation_curves`. A channel can have high ROI (historically efficient) but high saturation (no room to grow). The combination tells you where you're getting value vs. where you're overspending.

**Budget planning for next quarter**

> "I have $50k/week. How should I split it across channels?"

Uses `get_marginal_roi` to rank channels by next-dollar efficiency, then `simulate_budget_reallocation` to test the proposed split. Iterate — "What if I move $5k from Facebook to Apple Search?" — and simulate again.

**Justifying a channel to leadership**

> "Should we keep spending on Roku? It's only 3% of contribution"

Uses `get_contribution_breakdown` + `get_channel_roi` + `get_adstock_parameters`. Maybe Roku has low contribution because it gets a small budget, but its ROI is solid and it has long carry-over — it punches above its weight relative to spend.

**Flighting strategy**

> "Can we pause Vibe ads for 2 weeks without losing much?"

Uses `get_adstock_parameters`. If the half-life is 2.5 weeks, the effect lingers — you can pulse. If Apple Search Ads has a 0.3-week half-life, pausing it kills performance immediately.
### For Data Scientists & Analysts

**Model sanity check**

> "Does this model make sense before I present it?"

Uses `get_model_summary` (convergence, divergences) → `get_channel_roi` (are ROIs plausible?) → `get_saturation_curves` (all channels at 90%+ saturation? suspicious) → `get_adstock_parameters` (absurdly wide CIs? the model may be underidentified).

**Seasonality investigation**

> "Did our Q4 holiday spend actually drive more installs?"

Uses `get_weekly_contributions(start_week="2024-11-01", end_week="2025-01-15")` to see if contributions spiked during holiday weeks or if the extra spend just hit saturation.

**Comparing model versions**

> "We refit the model with 6 more months of data. What changed?"

Uses `list_models` to see available models in S3, then `load_model` to switch between them mid-conversation. Call `get_channel_roi` + `get_saturation_curves` on each. Compare: did Facebook's ROI change? Did TikTok become more saturated?

**Sensitivity analysis**

> "How confident is the model in Google's ROI vs Snapchat's?"

Uses `get_channel_roi(credible_interval=0.9)`. If Google's CI is [0.29, 1.18] and Snapchat's is [0.28, 1.19], the model can't distinguish them — important to flag before making allocation decisions.

**Diminishing returns mapping**

> "At what spend level does each channel hit the wall?"

Uses `get_saturation_curves` + `get_marginal_roi`. Saturation tells you where you are on the curve; mROI tells you what the next dollar returns. Together they answer: "How much more can I spend on Apple Search before it stops being worth it?"
### For Executives & Strategy

**"What if" scenario planning**

> "What happens if we cut total budget by 20%?"

Uses `simulate_budget_reallocation` with all channels reduced proportionally. Then try a smarter cut: reduce saturated channels more, protect high-mROI ones. Compare the two scenarios.

**Budget-neutral optimization**

> "Same budget, better results — is it possible?"

Uses `get_marginal_roi` to find the gap between over-saturated and under-invested channels, then `simulate_budget_reallocation` to shift dollars from low-mROI to high-mROI channels. Even a budget-neutral reallocation can improve outcomes.
How It Works
┌──────────────┐ stdio ┌─────────────────────┐
│ Claude Code │ ◄──────────────► │ mcp-server-meridian │
│ Claude Desktop│ │ │
│ Any MCP client│ │ ┌───────────────┐ │
└──────────────┘ │ │ Meridian .pkl │ │
│ │ (ArviZ trace) │ │
│ └───────────────┘ │
└─────────────────────┘
At startup, the server loads a pickled meridian.model.model.Meridian object via meridian_model.load_mmm(). It extracts the ArviZ InferenceData trace and exposes tools that operate directly on the posterior samples — computing medians, credible intervals, and derived metrics (marginal ROI, saturation curves, budget projections) on the fly.
The server is model-agnostic: it dynamically reads channel names, time ranges, geographies, and parameter dimensions from the trace. Different clients' models with different channel sets, time windows, and geo granularity all work without configuration changes.
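The per-tool reduction over posterior samples looks roughly like this — a minimal sketch, assuming the draws have already been pulled out of the ArviZ trace as a NumPy array; `summarize_posterior` is an illustrative name, not the server's actual internal API:

```python
import numpy as np

def summarize_posterior(draws: np.ndarray, ci: float = 0.9) -> dict:
    """Collapse (chain, draw) samples into median, credible interval, etc."""
    flat = draws.reshape(-1)                 # pool all chains and draws
    lo, hi = (1 - ci) / 2, 1 - (1 - ci) / 2  # e.g. 0.05 / 0.95 for a 90% CI
    return {
        "median": float(np.median(flat)),
        "ci_lower": float(np.quantile(flat, lo)),
        "ci_upper": float(np.quantile(flat, hi)),
        "prob_positive": float((flat > 0).mean()),
    }
```

Fed the ROI draws for one channel, a reduction like this yields the fields shown in `get_channel_roi`'s example output below.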
## Tools
### get_model_summary

Returns metadata about the loaded model — channels, time range, sample count, convergence diagnostics, and model configuration (adstock type, saturation type, prior settings). Call this first to understand what model is loaded.

**Parameters:** None

Example output:

```json
{
  "model_name": "meridian_demo_installs_app_installs_20260303",
  "n_chains": 7,
  "n_draws": 2000,
  "media_channels": ["facebook_ads", "google_ads", "tiktok_ads", "..."],
  "time_range": { "start": "2023-11-06", "end": "2025-10-27", "n_weeks": 104 },
  "adstock_type": "geometric",
  "max_lag": 4,
  "saturation_type": "hill",
  "convergence": { "has_divergences": false, "n_divergences": 0 }
}
```
### get_channel_roi

Returns the posterior ROI distribution for each channel — median, mean, credible intervals, and probability of positive ROI. Core metric for channel efficiency.

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `channels` | list[str] | All | Filter to specific channels |
| `credible_interval` | float | 0.9 | Width of credible interval |

Example output:

```json
{
  "channels": [
    {
      "name": "facebook_ads",
      "roi_median": 0.376,
      "roi_mean": 0.399,
      "roi_ci_lower": 0.201,
      "roi_ci_upper": 0.676,
      "roi_std": 0.149,
      "prob_positive": 1.0
    }
  ],
  "n_samples": 14000
}
```
### get_marginal_roi

Returns the marginal ROI (mROI) — the expected return on the next dollar spent. Computed from the Hill curve derivative at current spend levels. Channels with mROI > average ROI have headroom; channels with mROI << average ROI are deep in saturation.

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `channels` | list[str] | All | Filter to specific channels |
| `credible_interval` | float | 0.9 | Width of credible interval |

Example output:

```json
{
  "channels": [
    {
      "name": "facebook_ads",
      "mroi_median": 0.079,
      "mroi_mean": 0.097,
      "mroi_ci_lower": 0.009,
      "mroi_ci_upper": 0.247,
      "headroom_signal": "low",
      "vs_average_roi": -0.298
    }
  ]
}
```
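The description above defines mROI via the Hill curve's derivative at current spend. A sketch of that computation — illustrative only, with spend `x` assumed to be already in the model's scaled units and the conversion back to dollars omitted:

```python
def hill(x: float, ec: float, s: float) -> float:
    """Hill saturation: fraction of the maximum effect reached at scaled spend x."""
    return x**s / (ec**s + x**s)

def hill_slope_at(x: float, ec: float, s: float) -> float:
    """Derivative of the Hill curve: the marginal response per extra unit of x."""
    return s * ec**s * x**(s - 1) / (ec**s + x**s) ** 2
```

As `x` grows past `ec` the derivative collapses toward zero, which is why heavily saturated channels show mROI far below their average ROI.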
### get_contribution_breakdown

Returns the modeled KPI contribution by channel — how much of the total outcome each channel is responsible for. Computed from Hill saturation curves and beta coefficients at actual spend levels.

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `channels` | list[str] | All | Filter to specific channels |
| `as_percentage` | bool | true | Include percentage of total |

Example output:

```json
{
  "channels": [
    {
      "name": "facebook_ads",
      "contribution_median": 18594.31,
      "contribution_ci_lower": 9295.11,
      "contribution_ci_upper": 35836.91,
      "contribution_pct": 16.47
    }
  ],
  "total_modeled_contribution_median": 114132.89
}
```
### get_saturation_curves

Returns Hill saturation curve parameters (EC, slope) for each channel and computes the current saturation percentage. Answers: how much runway does each channel have before diminishing returns dominate?

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `channels` | list[str] | All | Filter to specific channels |

Saturation labels:

| Label | Range | Meaning |
|---|---|---|
| `low` | < 40% | Significant headroom |
| `moderate` | 40–65% | Healthy operating range |
| `high` | 65–85% | Diminishing returns |
| `very_high` | > 85% | Near-ceiling, reallocate |

Example output:

```json
{
  "channels": [
    {
      "name": "tiktok_ads",
      "ec_median": 0.506,
      "slope_median": 1.0,
      "current_saturation_pct": 89.0,
      "saturation_label": "very_high",
      "interpretation": "At 89.0% saturation — incremental spend yields ~11.0% of peak efficiency."
    }
  ]
}
```
### get_adstock_parameters

Returns the geometric adstock (carry-over) decay rates for each channel — how long each channel's advertising effect persists after spend stops.

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `channels` | list[str] | All | Filter to specific channels |

Example output:

```json
{
  "channels": [
    {
      "name": "tvscientific_ads",
      "decay_rate_median": 0.758,
      "decay_rate_ci_lower": 0.147,
      "decay_rate_ci_upper": 0.981,
      "half_life_weeks": 2.5,
      "effect_duration_weeks": 8.31,
      "interpretation": "Moderate carry-over — effect lingers for ~8 weeks. Can tolerate some gaps in spend."
    }
  ],
  "adstock_type": "geometric"
}
```
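For geometric adstock the half-life follows directly from the decay rate: the lag-`t` carry-over weight is `decay_rate**t`, so the half-life solves `decay_rate**t = 0.5`. A sketch — note the 10% cutoff for `effect_duration_weeks` is inferred from the example numbers above, not documented behavior:

```python
import math

def half_life_weeks(decay_rate: float) -> float:
    """Weeks until the carried-over effect halves (weights decay_rate**lag)."""
    return math.log(0.5) / math.log(decay_rate)

def effect_duration_weeks(decay_rate: float, threshold: float = 0.1) -> float:
    """Weeks until the carried-over effect falls below `threshold` of its peak."""
    return math.log(threshold) / math.log(decay_rate)
```

With the example's `decay_rate_median` of 0.758 this gives a half-life of ~2.5 weeks and a ~8.3-week effect duration, matching the output above.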
### get_channel_priors

Returns the prior distributions used for each channel — ROI, adstock decay, half-saturation, Hill slope, and beta effectiveness priors. Use this to verify modeling assumptions, check whether informative priors were used, and answer client questions about what the model assumed before seeing the data.

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `channels` | list[str] | All | Filter to specific channels |

Example output (global priors — same for all channels):

```json
{
  "media_prior_type": "roi",
  "media_effects_dist": "log_normal",
  "priors_are_per_channel": false,
  "global_priors": {
    "roi_m": {
      "distribution": "LogNormal",
      "loc": 0.2,
      "scale": 0.9,
      "description": "ROI prior (used when media_prior_type='roi')"
    },
    "alpha_m": {
      "distribution": "Uniform",
      "low": 0.0,
      "high": 1.0,
      "description": "Adstock decay rate prior"
    },
    "ec_m": {
      "distribution": "TruncatedNormal",
      "loc": 0.8,
      "scale": 0.8,
      "low": 0.1,
      "high": 10.0,
      "description": "Half-saturation (EC) prior"
    },
    "slope_m": {
      "distribution": "Deterministic",
      "loc": 1.0,
      "description": "Hill slope prior"
    },
    "beta_m": {
      "distribution": "HalfNormal",
      "scale": 5.0,
      "description": "Media effectiveness coefficient prior"
    }
  },
  "applies_to_channels": ["meta", "google", "snapchat", "tiktok"]
}
```

If the model was fitted with per-channel priors, the output will instead include a `channels` array with individual prior parameters per channel.
### get_prior_posterior_comparison

Compares prior assumptions to what the model learned from data for each channel. Shows the overlap coefficient, a data influence score, and flags channels where the posterior is still dominated by the prior — meaning the estimate reflects assumptions more than data.

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `channels` | list[str] | All | Filter to specific channels |
| `credible_interval` | float | 0.9 | Width of credible interval |

Key metrics:

| Metric | Range | Meaning |
|---|---|---|
| `overlap_coefficient` | 0–1 | High (>0.7) = prior and posterior are similar; data didn't move the estimate |
| `data_influence` | 0–1 | Low (<0.3) = prior-dominated; high (>0.7) = data-driven |
| `concentration` | 0–1 | How much narrower the posterior is vs the prior |

Influence labels:

| Label | Meaning |
|---|---|
| `prior_dominated` | Posterior ≈ prior. Estimate reflects assumptions, not data. |
| `weakly_informed` | Some learning, but the prior still has strong influence. |
| `moderately_informed` | Data moved the estimate meaningfully from the prior. |
| `data_driven` | Posterior is substantially narrower/shifted. Data is informative. |

Example output:

```json
{
  "channels": [
    {
      "name": "meta",
      "parameters": [
        {
          "parameter": "roi_m",
          "label": "ROI",
          "prior": { "distribution": "LogNormal", "median": 1.28, "ci_lower": 0.19, "ci_upper": 7.81 },
          "posterior": { "median": 0.38, "ci_lower": 0.20, "ci_upper": 0.68 },
          "overlap_coefficient": 0.31,
          "prior_posterior_shift": 0.42,
          "concentration": 0.94,
          "data_influence": 0.68,
          "influence_label": "moderately_informed"
        }
      ],
      "flags": null
    }
  ],
  "interpretation": { "..." : "..." }
}
```
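An overlap coefficient can be estimated directly from prior and posterior draws. This histogram-based estimator is one common choice — a sketch, not necessarily the server's exact method:

```python
import numpy as np

def overlap_coefficient(prior: np.ndarray, posterior: np.ndarray, bins: int = 100) -> float:
    """Shared area of the two empirical densities: 1.0 = identical, 0.0 = disjoint."""
    edges = np.linspace(
        min(prior.min(), posterior.min()),
        max(prior.max(), posterior.max()),
        bins + 1,
    )
    p, _ = np.histogram(prior, bins=edges, density=True)
    q, _ = np.histogram(posterior, bins=edges, density=True)
    return float(np.minimum(p, q).sum() * (edges[1] - edges[0]))
```

A value near 1 means the data barely moved the estimate — the `prior_dominated` case described above.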
### simulate_budget_reallocation

Takes a proposed budget reallocation and projects the impact on KPI using the posterior's Hill + adstock curves. Channels not included in the proposal keep their current spend. This is the tool that powers closed-loop budget optimization.

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `proposed_budgets` | dict[str, float] | Required | Channel name → new weekly spend |
| `n_weeks` | int | 4 | Projection horizon in weeks |
| `credible_interval` | float | 0.9 | Width of credible interval |

Example input:

```json
{
  "proposed_budgets": {
    "facebook_ads": 10000,
    "apple_search_ads": 35000,
    "tiktok_ads": 15000
  },
  "n_weeks": 4
}
```

Example output:

```json
{
  "current_total_weekly_spend": 65000,
  "proposed_total_weekly_spend": 67000,
  "spend_change_pct": 3.1,
  "projected_revenue": {
    "current_median": 114000,
    "proposed_median": 118000,
    "change_median": 4000,
    "change_pct": 3.5,
    "proposed_ci_lower": 95000,
    "proposed_ci_upper": 145000
  },
  "channel_impacts": [
    {
      "name": "facebook_ads",
      "current_spend": 15000,
      "proposed_spend": 10000,
      "spend_change_pct": -33.3,
      "current_contribution_median": 18594,
      "projected_contribution_median": 17200,
      "contribution_change": -1394
    }
  ],
  "optimization_notes": [
    "Reducing facebook_ads spend by 33% loses only 7% contribution — was in saturation zone."
  ]
}
```
### list_models

Lists available Meridian model files (`.pkl`) from the configured S3 bucket. Returns the model name, size, and last-modified date for each file found, and indicates which model is currently loaded. Requires `MERIDIAN_S3_BUCKET` to be set.

**Parameters:** None

Example output:

```json
{
  "bucket": "my-meridian-models",
  "prefix": "models/",
  "models": [
    {
      "name": "client_20260301.pkl",
      "s3_key": "models/client_20260301.pkl",
      "size_mb": 50.0,
      "last_modified": "2026-03-01T00:00:00+00:00",
      "is_loaded": true
    },
    {
      "name": "client_20260201.pkl",
      "s3_key": "models/client_20260201.pkl",
      "size_mb": 48.0,
      "last_modified": "2026-02-01T00:00:00+00:00",
      "is_loaded": false
    }
  ],
  "count": 2,
  "currently_loaded": "client_20260301.pkl"
}
```
### load_model

Loads a Meridian model by name. If `MERIDIAN_S3_BUCKET` is configured, downloads from S3 to a local cache (skipping the download if already cached), then loads it. If S3 is not configured, treats the name as a local file path. Replaces the currently loaded model and returns the model summary.

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `name` | str | Required | Model filename (e.g. `"client_20260301.pkl"`) or local path |

Example output: same as `get_model_summary`.
### get_weekly_contributions

Returns a time series of per-channel contributions week by week. Use this to see recent performance, trends over time, or to compare specific time windows.

**Parameters:**

| Param | Type | Default | Description |
|---|---|---|---|
| `channels` | list[str] | All | Filter to specific channels |
| `last_n_weeks` | int | All | Return only the most recent N weeks (overrides start/end) |
| `start_week` | str | Earliest | Start date inclusive (ISO format, e.g. `"2025-06-01"`) |
| `end_week` | str | Latest | End date inclusive (ISO format) |

Example output (`last_n_weeks=4`):

```json
{
  "channels": [
    {
      "name": "facebook_ads",
      "weeks": ["2025-09-29", "2025-10-06", "2025-10-13", "2025-10-20"],
      "contribution_median": [185.3, 192.1, 178.9, 190.5],
      "contribution_ci_lower": [92.5, 96.0, 89.4, 95.2],
      "contribution_ci_upper": [356.8, 370.1, 345.2, 367.3],
      "total_median": 746.8
    }
  ],
  "n_weeks": 4,
  "time_range": { "start": "2025-09-29", "end": "2025-10-20" }
}
```
## Quick Start

The fastest way to get started — no cloning, no Docker, no dependency management. Just install and point at your model:

```bash
# Install from PyPI
pip install mcp-server-meridian

# Run it
MERIDIAN_MODEL_PATH=/path/to/your/model.pkl mcp-server-meridian
```

Or run without installing using uvx:

```bash
MERIDIAN_MODEL_PATH=/path/to/your/model.pkl uvx mcp-server-meridian
```

Then add it to your AI client — see Setup below.

### What you need

- **A fitted Meridian model** — a `.pkl` file produced by `meridian.model.model.Meridian` after calling `.fit()`
- **uv** — handles Python automatically, no separate Python install needed:

  ```bash
  # Mac / Linux
  curl -LsSf https://astral.sh/uv/install.sh | sh

  # Windows (PowerShell)
  powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
  ```

- **An MCP-compatible client** — Claude Desktop, Claude Code, or any other MCP client
## Setup

### Option 1: PyPI (recommended)

Install once and configure your client. No repo checkout needed.

```bash
pip install mcp-server-meridian
```

**Claude Desktop** — add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "meridian": {
      "command": "mcp-server-meridian",
      "env": {
        "MERIDIAN_MODEL_PATH": "/path/to/your/model.pkl"
      }
    }
  }
}
```

**Claude Code** — run this in your terminal:

```bash
claude mcp add meridian -- env MERIDIAN_MODEL_PATH=/path/to/your/model.pkl mcp-server-meridian
```

Or add to `.claude/settings.json`:

```json
{
  "mcpServers": {
    "meridian": {
      "command": "uvx",
      "args": ["mcp-server-meridian"],
      "env": {
        "MERIDIAN_MODEL_PATH": "/path/to/your/model.pkl"
      }
    }
  }
}
```
### Option 2: Docker

No Python environment required — just Docker.

```bash
docker build -t mcp-server-meridian servers/meridian
docker run -i --rm \
  -v /path/to/your/model.pkl:/models/model.pkl:ro \
  -e MERIDIAN_MODEL_PATH=/models/model.pkl \
  mcp-server-meridian
```

Client config:

```json
{
  "mcpServers": {
    "meridian": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-v", "/path/to/your/model.pkl:/models/model.pkl:ro",
        "-e", "MERIDIAN_MODEL_PATH=/models/model.pkl",
        "mcp-server-meridian"
      ]
    }
  }
}
```
### Option 3: From source (uv)

For development, or if you want to modify the server.

```bash
cd servers/meridian
uv sync
MERIDIAN_MODEL_PATH=/path/to/model.pkl uv run mcp-server-meridian
```

Client config:

```json
{
  "mcpServers": {
    "meridian": {
      "command": "uv",
      "args": ["--directory", "/path/to/servers/meridian", "run", "mcp-server-meridian"],
      "env": {
        "MERIDIAN_MODEL_PATH": "/path/to/meridian_model.pkl"
      }
    }
  }
}
```
### Option 4: S3 Multi-Model Setup

Store multiple model pickles in an S3 bucket and switch between them mid-conversation using `list_models` and `load_model`. Downloaded models are cached locally to avoid repeated downloads.

Environment variables:

| Variable | Required | Description |
|---|---|---|
| `MERIDIAN_S3_BUCKET` | Yes | S3 bucket name containing `.pkl` files |
| `MERIDIAN_S3_PREFIX` | No | Prefix/folder within the bucket (default: `""`) |
| `MERIDIAN_CACHE_DIR` | No | Local cache directory (default: `~/.cache/mcp-server-meridian`) |
| `AWS_ACCESS_KEY_ID` | Yes | Standard AWS credentials |
| `AWS_SECRET_ACCESS_KEY` | Yes | Standard AWS credentials |
| `AWS_REGION` | Yes | AWS region |

You can combine S3 with `MERIDIAN_MODEL_PATH` — the local model loads at startup, and you can switch to S3 models on demand.

```json
{
  "mcpServers": {
    "bluealpha-meridian": {
      "command": "uv",
      "args": ["--directory", "/path/to/servers/meridian", "run", "mcp-server-meridian"],
      "env": {
        "MERIDIAN_MODEL_PATH": "/path/to/default_model.pkl",
        "MERIDIAN_S3_BUCKET": "my-meridian-models",
        "MERIDIAN_S3_PREFIX": "models/",
        "AWS_ACCESS_KEY_ID": "...",
        "AWS_SECRET_ACCESS_KEY": "...",
        "AWS_REGION": "us-east-1"
      }
    }
  }
}
```

To use local-only mode, omit the S3 variables. To switch models locally, change `MERIDIAN_MODEL_PATH` and restart, or use `load_model` with a local file path.
## Troubleshooting

**"No model loaded"** — Make sure `MERIDIAN_MODEL_PATH` points to a valid `.pkl` file and the file is readable. The path must be absolute.

**Import errors on install** — Google Meridian requires specific versions of JAX and TensorFlow. If you hit dependency conflicts, try a fresh virtual environment:

```bash
python -m venv .venv && source .venv/bin/activate
pip install mcp-server-meridian
```

**S3 models won't download** — Verify your AWS credentials have `s3:GetObject` and `s3:ListBucket` permissions on the bucket. Check that `AWS_REGION` matches the bucket's region.

**Server starts but client can't connect** — The server communicates over stdio. Make sure your client config uses `"command"` (not `"url"`) and that nothing else is reading from the process's stdin/stdout.
## Technical Details

### Model Pipeline

The server faithfully reproduces Meridian's internal computation pipeline:

1. **Scale** — Raw spend is normalized using the model's media transformer scale factors
2. **Adstock** — Geometric decay is applied (steady-state approximation: `x / (1 - α)`)
3. **Saturation** — Hill function: `x^s / (ec^s + x^s)`, where `ec` is the half-saturation point and `s` is the slope
4. **Beta scaling** — Response is multiplied by the channel's effectiveness coefficient
5. **KPI transform** — Results are scaled from normalized space to real units using the KPI transformer's standard deviation

This order is dictated by the model's `hill_before_adstock = False` setting, which is read dynamically from the model spec.
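Under stated assumptions (steady-state spend, one channel, one posterior draw; the function and parameter names are illustrative, not the server's API), the five steps compose like this:

```python
def project_contribution(
    spend: float,        # raw weekly spend in dollars
    media_scale: float,  # media transformer scale factor
    alpha: float,        # geometric adstock decay rate
    ec: float,           # Hill half-saturation point
    slope: float,        # Hill slope
    beta: float,         # channel effectiveness coefficient
    kpi_stdev: float,    # KPI transformer scaled stdev
) -> float:
    x = spend / media_scale                # 1. scale
    x = x / (1.0 - alpha)                  # 2. adstock (steady state)
    x = x**slope / (ec**slope + x**slope)  # 3. Hill saturation
    return beta * x * kpi_stdev            # 4-5. beta scaling + KPI transform
```

Repeating this across all posterior draws and summing over channels is the kind of computation `get_contribution_breakdown` and `simulate_budget_reallocation` build on.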
### Posterior Sampling
All metrics are computed across the full posterior (all chains × all draws), producing proper Bayesian uncertainty estimates. Medians are used as point estimates; credible intervals are computed via percentiles.
### Key Posterior Variables

| Variable | Dims | Description |
|---|---|---|
| `roi_m` | (chain, draw, media_channel) | ROI per channel |
| `alpha_m` | (chain, draw, media_channel) | Adstock decay rate |
| `beta_gm` | (chain, draw, geo, media_channel) | Media effectiveness coefficient |
| `ec_m` | (chain, draw, media_channel) | Hill half-saturation point |
| `slope_m` | (chain, draw, media_channel) | Hill curve slope |
### Known Limitations

This server was built and tested against national-level, single-KPI Meridian models with geometric adstock. The following assumptions are baked into the current implementation and may not hold for all Meridian models:

| Assumption | Impact | When it breaks |
|---|---|---|
| Single geo | `beta_gm` is sliced with `.isel(geo=0)` — only the first geo is used | Multi-geo models (regional, DMA-level) need aggregation across geos |
| Geometric adstock only | Steady-state formula `x / (1 - α)` assumes geometric decay | Models fitted with Weibull adstock use a different decay function |
| `hill_before_adstock = False` | Pipeline always runs adstock → Hill | Models with `hill_before_adstock = True` would need the reverse order; the flag is read in `get_model_summary` but not used to branch computation |
| KPI scaling uses stdev only | Inverse transform multiplies by `kpi_transformer._population_scaled_stdev` but ignores `_population_scaled_mean` | Models where the KPI mean offset is non-trivial will have biased contribution/simulation estimates |
| `revenue_per_kpi` not applied | The model's `revenue_per_kpi` tensor is never used | If your KPI is installs/leads and you want revenue-denominated outputs, this multiplier is missing |
| Adstock type label hardcoded | `get_model_summary` always reports `"adstock_type": "geometric"` | Cosmetic — doesn't affect computation, but would be misleading for Weibull models |

Contributions to address any of these are welcome — see Development below.
## Development

```bash
uv sync --dev
uv run ruff check .   # lint
uv run pyright        # type check
uv run pytest         # tests
mcp dev src/mcp_server_meridian/server.py
```
## License

MIT