
DeepBullwhip


Multi-tier supply chain bullwhip effect simulator with modular demand models, ordering policies, and cost functions.

Maintained by the AI Verification & Validation (AI V&V) Lab at King Fahd University of Petroleum & Minerals (KFUPM).

[Figure: DeepBullwhip summary dashboard]


Overview

DeepBullwhip provides a configurable simulation framework for studying the bullwhip effect in serial supply chains. It is designed for researchers and practitioners who need to:

  • Simulate multi-echelon supply chains under different demand patterns
  • Model arbitrary DAG supply chain topologies (serial, tree, convergent/divergent)
  • Compare ordering policies (Order-Up-To, custom policies) and cost structures
  • Quantify bullwhip amplification, fill rates, and total supply chain costs
  • Optimize inventory levels and policy parameters using mathematical programming
  • Generate publication-grade diagnostic visualizations (matplotlib + Graphviz)
  • Run Monte Carlo experiments to study forecast-accuracy vs. robustness tradeoffs
  • Integrate with the Python ecosystem: NetworkX, Graphviz, Pyomo

The package is extracted from a computational study on the accuracy–robustness tradeoff in ML-driven semiconductor supply chains (see simulation.ipynb).

Features

Component Description
Demand generators Pluggable via DemandGenerator ABC. Built-in: AR(1) semiconductor, Beer Game step, ARMA(p,q), Replay from data
Ordering policies Pluggable via OrderingPolicy ABC. Built-in: OUT, Proportional OUT, Smoothing OUT, Constant Order
Cost functions Pluggable via CostFunction ABC. Built-in: Newsvendor (h+b), Perishable (h+b+obsolescence)
Forecasters Pluggable via Forecaster ABC. Built-in: Naive, Moving Average, Exponential Smoothing
Benchmarking BenchmarkRunner for standardized policy/forecaster comparison with LaTeX/CSV export
Datasets Built-in datasets: Beer Game, WSTS semiconductor, synthetic AR(1)/ARMA, M5 Walmart
Registry Decorator-based @register system for easy extensibility and model discovery
Supply chain SerialSupplyChain supporting arbitrary K-echelon serial topologies via EchelonConfig
Network topologies SupplyChainGraph + NetworkSupplyChain for arbitrary DAG supply chains (trees, convergent/divergent)
NetworkX integration Bidirectional graph conversion, critical path analysis, centrality, topological ordering
Graphviz visualization Publication-quality SVG/PDF network rendering with metrics overlay
Pyomo optimization Inventory optimization, policy parameter tuning, network design (MIP)
Diagnostics 10 publication-grade plot functions + network diagram + geographic map visualization
Metrics BWR, NSAmp, Fill Rate, Total Cost, Chen lower bound (standalone module + backward-compat diagnostics)
Vectorized engine VectorizedSupplyChain — matrix-based (N, K, T) simulation for Monte Carlo batching. ~100x speedup over serial for N=1000 paths
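The bullwhip ratio (BWR) listed under Metrics is conventionally the variance of orders divided by the variance of demand; a minimal standalone sketch of that computation (the package's own metrics module may use a different estimator or normalization):

```python
import numpy as np

# Hedged sketch: BWR = Var(orders) / Var(demand).
# A ratio above 1 indicates order-variance amplification upstream.
def bullwhip_ratio(orders: np.ndarray, demand: np.ndarray) -> float:
    return float(np.var(orders, ddof=1) / np.var(demand, ddof=1))

rng = np.random.default_rng(0)
demand = rng.normal(10.0, 1.0, 156)
orders = demand + rng.normal(0.0, 1.0, 156)  # extra ordering noise amplifies variance
print(f"BWR = {bullwhip_ratio(orders, demand):.2f}")  # > 1 means amplification
```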

Installation

# Clone the repository
git clone https://github.com/ai-vnv/deepbullwhip.git
cd deepbullwhip

# Create virtual environment and install
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"

Dependencies

  • Core: numpy, scipy, pandas, matplotlib
  • Dev: pytest, pytest-cov
  • Optional (Network): networkx (pip install deepbullwhip[network])
  • Optional (Viz): graphviz (pip install deepbullwhip[viz])
  • Optional (Optimize): pyomo (pip install deepbullwhip[optimize])
  • Optional (ML): scikit-learn, torch
  • Optional (Benchmark): kaggle, tabulate
  • All optional: pip install deepbullwhip[all]

Quick Start

import numpy as np
from deepbullwhip import (
    SemiconductorDemandGenerator,
    SerialSupplyChain,
)

# 1. Generate demand (156 weeks, with shock at week 104)
gen = SemiconductorDemandGenerator()
demand = gen.generate(T=156, seed=42)

# 2. Simulate the default 4-echelon semiconductor supply chain
chain = SerialSupplyChain()
forecasts_mean = np.full_like(demand, demand.mean())
forecasts_std = np.full_like(demand, demand.std())
result = chain.simulate(demand, forecasts_mean, forecasts_std)

# 3. Inspect results
for k, er in enumerate(result.echelon_results):
    print(f"E{k+1}: {er.name:12s}  BW={er.bullwhip_ratio:.2f}  "
          f"FR={er.fill_rate:.0%}  Cost={er.total_cost:,.0f}")

Benchmarking (v0.2.0)

Compare ordering policies and forecasting methods in a single call:

from deepbullwhip.benchmark import BenchmarkRunner

runner = BenchmarkRunner(
    chain_config="semiconductor_4tier",  # or "beer_game", "consumer_2tier"
    demand="semiconductor_ar1",          # or "beer_game", "arma"
    T=156, N=100, seed=42,
)

# Compare policies
results = runner.run(
    policies=[
        "order_up_to",
        ("proportional_out", {"alpha": 0.3}),
        ("constant_order", {"order_quantity": 11.6}),
    ],
    forecasters=["naive", ("moving_average", {"window": 10})],
    metrics=["BWR", "FILL_RATE", "TC"],
)

# View results
print(results.pivot_table(index=["policy","echelon"], columns="metric", values="value"))

# Export
runner.export_csv(results, "benchmark_results.csv")
runner.export_latex(results, "benchmark_table.tex", caption="Policy Comparison")

Adding Custom Models

Extend the framework in three steps: subclass the ABC, register it with @register, then reference it by name:

from deepbullwhip.policy.base import OrderingPolicy
from deepbullwhip.registry import register

@register("policy", "my_policy")
class MyPolicy(OrderingPolicy):
    def __init__(self, lead_time: int, service_level: float = 0.95):
        self.lead_time = lead_time
        self.service_level = service_level

    def compute_order(self, inventory_position, forecast_mean, forecast_std):
        return max(0.0, forecast_mean * 1.5 - inventory_position)

# Now use it in benchmarks:
results = runner.run(policies=["order_up_to", "my_policy"])

See Notebook 03: Custom Policies for a full walkthrough.

Real-World Dataset Benchmarks

Run benchmarks on well-known demand datasets out of the box:

from deepbullwhip.benchmark import BenchmarkRunner
from deepbullwhip.datasets.loader import load_dataset
from deepbullwhip.demand.replay import ReplayDemandGenerator

# Load M5 Walmart, Australian PBS, WSTS, or Beer Game
demand = load_dataset("m5", store="CA_1", dept="FOODS_1", freq="weekly")

runner = BenchmarkRunner(
    chain_config="consumer_2tier",
    demand=ReplayDemandGenerator(data=demand),
    T=200, N=10, seed=42,
)
results = runner.run(policies=["order_up_to", ("proportional_out", {"alpha": 0.3})])
Dataset             Source                   Frequency  Periods
M5 Walmart          Kaggle M5 Competition    Weekly     277
Australian PBS      tidyverts/tsibbledata    Monthly    197
WSTS Semiconductor  Bundled sample           Monthly    60
Beer Game           Built-in                 Weekly     52

Download scripts for each dataset are in data/raw/*/download.sh. See notebooks/08_benchmark_real_datasets.ipynb for a cross-dataset comparison.

Network Topologies (v0.3.0)

Model arbitrary DAG supply chains beyond serial chains:

from deepbullwhip import SupplyChainGraph, EdgeConfig, NetworkSupplyChain, EchelonConfig
import numpy as np

# Define a distribution tree: Factory -> Warehouse -> {Retail_A, Retail_B}
graph = SupplyChainGraph(
    nodes={
        "Factory": EchelonConfig("Factory", lead_time=4, holding_cost=0.10, backorder_cost=0.40),
        "Warehouse": EchelonConfig("Warehouse", lead_time=2, holding_cost=0.15, backorder_cost=0.50),
        "Retail_A": EchelonConfig("Retail_A", lead_time=1, holding_cost=0.20, backorder_cost=0.60),
        "Retail_B": EchelonConfig("Retail_B", lead_time=1, holding_cost=0.20, backorder_cost=0.60),
    },
    edges={
        ("Factory", "Warehouse"): EdgeConfig(lead_time=3),
        ("Warehouse", "Retail_A"): EdgeConfig(lead_time=1),
        ("Warehouse", "Retail_B"): EdgeConfig(lead_time=1),
    },
)

# Simulate
chain = NetworkSupplyChain(graph)
T = 52
result = chain.simulate(
    demand={"Retail_A": np.full(T, 5.0), "Retail_B": np.full(T, 3.0)},
    forecasts_mean={"Retail_A": np.full(T, 5.0), "Retail_B": np.full(T, 3.0)},
    forecasts_std={"Retail_A": np.full(T, 1.0), "Retail_B": np.full(T, 1.0)},
)

for name, er in result.node_results.items():
    print(f"{name:12s}  BW={er.bullwhip_ratio:.2f}  FR={er.fill_rate:.0%}")

NetworkX Integration

from deepbullwhip import to_networkx, from_networkx
from deepbullwhip.network import find_critical_path, echelon_centrality

# Convert to NetworkX for graph analysis
G = to_networkx(graph)
print("Critical path:", find_critical_path(G))
print("Centrality:", echelon_centrality(G))

# Build from NetworkX
import networkx as nx
G = nx.DiGraph()
G.add_node("Supplier", lead_time=4, holding_cost=0.1, backorder_cost=0.4)
G.add_node("Store", lead_time=1, holding_cost=0.2, backorder_cost=0.6)
G.add_edge("Supplier", "Store", lead_time=2)
chain = NetworkSupplyChain.from_networkx(G)

Graphviz Visualization

from deepbullwhip import render_network, save_figure

# Render network diagram (with optional simulation overlay)
source = render_network(graph, sim_result=result, engine="dot", title="Distribution Tree")
save_figure(source, "network.pdf")

Pyomo Optimization

import numpy as np
from deepbullwhip.optimize import tune_service_levels, tune_smoothing_factors

# Find optimal service levels via simulation-optimization
scenarios = np.random.default_rng(42).normal(10, 2, (50, 52))
scenarios = np.maximum(scenarios, 0)

result = tune_service_levels(graph, scenarios, objective="total_cost")
print("Optimal service levels:", result.parameters)
print("Expected cost:", result.objective_value)

# Find optimal smoothing factors
result = tune_smoothing_factors(graph, scenarios)
print("Optimal alpha_s:", result.parameters)

Standardized Schema + Multi-Backend Rendering (v0.3.0)

Define supply chains in a standard JSON format and render identically across matplotlib, Graphviz, and TikZ:

JSON Schema

{
  "version": "1.0",
  "metadata": {"name": "Consumer 2-Tier", "tags": ["serial", "2-echelon"]},
  "nodes": [
    {"id": "Manufacturer", "config": {"lead_time": 4, "holding_cost": 0.10, "backorder_cost": 0.40},
     "layout": {"tier": 0, "role": "manufacturer"}},
    {"id": "Retailer", "config": {"lead_time": 1, "holding_cost": 0.20, "backorder_cost": 0.80},
     "layout": {"tier": 1, "role": "retailer"}}
  ],
  "edges": [{"source": "Manufacturer", "target": "Retailer", "config": {"lead_time": 3}}]
}
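Because the schema is plain JSON, it can be sanity-checked with the stdlib json module alone; a sketch that verifies every edge references a declared node (the package's own I/O helpers are save_json/load_json):

```python
import json

# Stdlib sketch: parse the schema above and check edge/node consistency.
raw = """
{
  "version": "1.0",
  "metadata": {"name": "Consumer 2-Tier", "tags": ["serial", "2-echelon"]},
  "nodes": [
    {"id": "Manufacturer",
     "config": {"lead_time": 4, "holding_cost": 0.10, "backorder_cost": 0.40},
     "layout": {"tier": 0, "role": "manufacturer"}},
    {"id": "Retailer",
     "config": {"lead_time": 1, "holding_cost": 0.20, "backorder_cost": 0.80},
     "layout": {"tier": 1, "role": "retailer"}}
  ],
  "edges": [{"source": "Manufacturer", "target": "Retailer", "config": {"lead_time": 3}}]
}
"""
doc = json.loads(raw)
node_ids = {n["id"] for n in doc["nodes"]}
assert all(e["source"] in node_ids and e["target"] in node_ids for e in doc["edges"])
print(doc["metadata"]["name"])
```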

Multi-Backend Rendering

from deepbullwhip import render_graph, render_from_json, from_serial, to_json, save_json, load_json
from deepbullwhip.chain.config import beer_game_config

graph = from_serial(beer_game_config())

# Save to standard JSON
save_json(graph, "beer_game.json", metadata={"name": "Beer Game"})

# Render with matplotlib (default) — 4 built-in themes
fig = render_graph(graph, theme="kfupm")            # KFUPM green/gold (default)
fig = render_graph(graph, theme="ieee")              # IEEE grayscale, 3.5" width
fig = render_graph(graph, theme="presentation")      # Large fonts for slides
fig = render_graph(graph, theme="minimal")           # Clean black & white

# Render as TikZ for LaTeX papers
tex = render_graph(graph, backend="tikz", theme="ieee", title="Beer Game")
with open("beer_game.tex", "w") as f:
    f.write(tex)

# Render with Graphviz (requires pip install deepbullwhip[viz])
source = render_graph(graph, backend="graphviz", engine="dot")

# One-liner: load JSON and render
fig = render_from_json("beer_game.json", theme="kfupm")

Supply Chain Examples (Different Tier Counts)

2-Tier (Manufacturer → Retailer):

from deepbullwhip import render_graph, from_serial
from deepbullwhip.chain.config import consumer_2tier_config
fig = render_graph(from_serial(consumer_2tier_config()), theme="minimal")

4-Tier Beer Game (Factory → Distributor → Wholesaler → Retailer):

from deepbullwhip import render_graph, from_serial
from deepbullwhip.chain.config import beer_game_config
fig = render_graph(from_serial(beer_game_config()), theme="kfupm", title="MIT Beer Game")

Distribution Tree (Factory → Warehouse → {Store A, Store B}):

from deepbullwhip import SupplyChainGraph, EdgeConfig, EchelonConfig, render_graph

tree = SupplyChainGraph(
    nodes={
        "Factory": EchelonConfig("Factory", 4, 0.10, 0.40),
        "Warehouse": EchelonConfig("Warehouse", 2, 0.15, 0.50),
        "Store_A": EchelonConfig("Store_A", 1, 0.20, 0.60),
        "Store_B": EchelonConfig("Store_B", 1, 0.20, 0.60),
    },
    edges={
        ("Factory", "Warehouse"): EdgeConfig(lead_time=3),
        ("Warehouse", "Store_A"): EdgeConfig(lead_time=1),
        ("Warehouse", "Store_B"): EdgeConfig(lead_time=1),
    },
)
fig = render_graph(tree, theme="presentation", title="Distribution Network")
tex = render_graph(tree, backend="tikz", theme="ieee")  # For LaTeX papers

Default Supply Chain Configuration

Echelon  Role                       Lead Time  h (holding)  b (backorder)
E1       Distributor / OEM          2 weeks    0.15         0.60
E2       Assembly & Test (OSAT)     4 weeks    0.12         0.50
E3       Foundry / Fab              12 weeks   0.08         0.40
E4       Wafer / Material Supplier  8 weeks    0.05         0.30
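For reference when overriding echelons, the table above can be mirrored as a self-contained structure (a sketch using a plain dataclass; field names follow the EchelonConfig usage shown in the Customization section below):

```python
from dataclasses import dataclass

# Stdlib-only mirror of the default 4-echelon configuration table.
@dataclass
class Echelon:
    name: str
    lead_time: int         # weeks
    holding_cost: float    # h, per unit per week
    backorder_cost: float  # b, per unit per week

DEFAULT_CHAIN = [
    Echelon("Distributor / OEM", 2, 0.15, 0.60),
    Echelon("Assembly & Test (OSAT)", 4, 0.12, 0.50),
    Echelon("Foundry / Fab", 12, 0.08, 0.40),
    Echelon("Wafer / Material Supplier", 8, 0.05, 0.30),
]
total_lead = sum(e.lead_time for e in DEFAULT_CHAIN)
print(total_lead)  # 26 weeks end-to-end
```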

Vectorized Monte Carlo Simulation

For large-scale experiments, use the matrix-based engine that processes N demand paths simultaneously via NumPy broadcasting:

import numpy as np
from deepbullwhip import SemiconductorDemandGenerator, VectorizedSupplyChain

gen = SemiconductorDemandGenerator()
demand = gen.generate_batch(T=156, n_paths=1000, seed=42)  # (1000, 156)

vchain = VectorizedSupplyChain()
fm = np.full_like(demand, demand.mean())
fs = np.full_like(demand, demand.std())
result = vchain.simulate(demand, fm, fs)

# Average metrics across all 1000 paths
print(result.mean_metrics())

# Extract a single path as standard SimulationResult
sr = result.to_simulation_result(path_index=0)

Benchmark (N=1000, T=156, K=4):

Engine                              Time    Speedup
Serial (SerialSupplyChain)          3.9 s   1x
Vectorized (VectorizedSupplyChain)  0.04 s  ~100x

The vectorized engine uses:

  • Pre-allocated (N, K, T) order/inventory/cost matrices
  • Circular buffer pipeline with O(1) indexing (vs O(L) list.pop)
  • Fully vectorized OUT policy and newsvendor cost across N paths and K echelons per time step
  • Batch demand generation via generate_batch() with (N, T) noise matrix
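The circular-buffer pipeline above can be sketched in a few lines (illustrative only, not the package's internal code): for N paths and lead time L, a fixed (N, L) buffer plus a rotating head index replaces an O(L) list.pop(0), because the slot read this period is immediately reused for this period's orders.

```python
import numpy as np

N, L, T = 4, 3, 10
pipeline = np.zeros((N, L))  # in-transit quantities, one column per lead-time slot
head = 0
rng = np.random.default_rng(42)
received = np.zeros(N)
placed = np.zeros(N)
for t in range(T):
    received += pipeline[:, head]   # orders placed L periods ago arrive now
    orders = rng.uniform(1.0, 2.0, N)
    pipeline[:, head] = orders      # reuse the freed slot for this period's orders
    placed += orders
    head = (head + 1) % L           # advance head: O(1), no element shifting

# Conservation check: everything placed is either received or still in transit
assert np.allclose(received + pipeline.sum(axis=1), placed)
```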

Customization

Custom echelon configuration

from deepbullwhip import EchelonConfig, SerialSupplyChain

configs = [
    EchelonConfig("Retailer", lead_time=1, holding_cost=0.20, backorder_cost=0.80),
    EchelonConfig("Manufacturer", lead_time=6, holding_cost=0.10, backorder_cost=0.40),
]
chain = SerialSupplyChain.from_config(configs)

Custom ordering policy

from deepbullwhip.policy.base import OrderingPolicy

class MyPolicy(OrderingPolicy):
    def compute_order(self, inventory_position, forecast_mean, forecast_std):
        # Your logic here
        return max(0.0, forecast_mean - inventory_position)

Custom cost function

from deepbullwhip.cost.base import CostFunction

class MyCost(CostFunction):
    def compute(self, inventory):
        # Your logic here
        return abs(inventory) * 0.1

Visualization

Diagnostic plots

All plot functions return matplotlib.figure.Figure objects and support width="single" (3.5") or width="double" (7.0") for journal formatting. Colors use the KFUPM AI V&V Lab palette.

from deepbullwhip.diagnostics.plots import (
    plot_demand_trajectory,
    plot_order_quantities,
    plot_inventory_levels,
    plot_inventory_position,
    plot_order_streams,
    plot_cost_timeseries,
    plot_cost_decomposition,
    plot_bullwhip_amplification,
    plot_summary_dashboard,
    plot_echelon_detail,
)

fig = plot_summary_dashboard(demand, result)
fig.savefig("dashboard.pdf", dpi=300)

Network and geographic visualization

from deepbullwhip.diagnostics.network import (
    kfupm_petrochemical_network,
    plot_network_diagram,
    plot_supply_chain_map,
)

network = kfupm_petrochemical_network()
fig = plot_network_diagram(network, sim_result=result)
fig = plot_supply_chain_map(network, sim_result=result)

Batch figure generation

python scripts/visualize.py --save --outdir figures --dpi 600

Project Structure

deepbullwhip/
├── __init__.py                 # Public API re-exports
├── _types.py                   # TimeSeries, EchelonResult, SimulationResult
├── registry.py                 # Decorator-based @register system
├── sensitivity.py              # Forecast sensitivity (lambda_f)
├── demand/
│   ├── base.py                 # DemandGenerator ABC
│   ├── semiconductor.py        # AR(1) + seasonal + shock
│   ├── beer_game.py            # Classic Beer Game step demand
│   ├── arma.py                 # General ARMA(p,q) process
│   └── replay.py              # Replay from historical data
├── policy/
│   ├── base.py                 # OrderingPolicy ABC
│   ├── order_up_to.py          # Order-Up-To (OUT) policy
│   ├── proportional_out.py     # Proportional OUT (POUT)
│   ├── constant_order.py       # Constant order (BWR=0)
│   └── smoothing_out.py        # Smoothing OUT
├── cost/
│   ├── base.py                 # CostFunction ABC
│   ├── newsvendor.py           # Newsvendor h/b cost
│   └── perishable.py           # Perishable (h+b+obsolescence)
├── forecast/
│   ├── base.py                 # Forecaster ABC
│   ├── naive.py                # Naive (sample mean/std)
│   ├── moving_average.py       # Rolling window MA
│   └── exponential_smoothing.py # Single exponential smoothing
├── metrics/
│   ├── bullwhip.py             # BWR, CumulativeBWR
│   ├── inventory.py            # NSAmp, FillRate
│   ├── cost.py                 # TotalCost
│   └── bounds.py               # ChenLowerBound
├── benchmark/
│   ├── runner.py               # BenchmarkRunner
│   ├── configs.py              # Predefined chain configs
│   └── report.py               # LaTeX, Markdown, CSV export
├── datasets/
│   ├── beer_game.py            # Beer Game step demand
│   ├── synthetic.py            # AR(1), ARMA generators
│   ├── m5.py                   # M5 Walmart data loader
│   └── wsts.py                 # WSTS semiconductor data
├── chain/
│   ├── config.py               # EchelonConfig + defaults
│   ├── echelon.py              # SupplyChainEchelon
│   ├── serial.py               # SerialSupplyChain
│   ├── vectorized.py           # VectorizedSupplyChain (N,K,T) matrix engine
│   ├── graph.py                # SupplyChainGraph, EdgeConfig (v0.3.0)
│   └── network_sim.py          # NetworkSupplyChain (v0.3.0)
├── network/                    # NetworkX integration (v0.3.0)
│   ├── convert.py              # to_networkx, from_networkx
│   └── analysis.py             # critical path, centrality, etc.
├── optimize/                   # Pyomo optimization (v0.3.0)
│   ├── inventory.py            # Multi-echelon inventory optimization
│   ├── policy_tuning.py        # Service level / smoothing tuning
│   └── network_design.py       # Facility location MIP (experimental)
├── schema/                     # JSON schema (v0.3.0)
│   ├── definition.py           # NodeLayoutHint, LayoutDefaults, NetworkMetadata
│   └── io.py                   # to_json, from_json, save/load
├── render/                     # Multi-backend renderer (v0.3.0)
│   ├── theme.py                # 4 built-in themes + registry
│   ├── layout.py               # Auto-layout from topology
│   ├── _matplotlib.py          # Matplotlib backend
│   ├── _graphviz.py            # Graphviz backend
│   ├── _tikz.py                # TikZ/LaTeX backend
│   └── api.py                  # Unified render_graph() entry point
└── diagnostics/
    ├── metrics.py              # Bullwhip ratio, fill rate, etc.
    ├── plots.py                # 10 publication-grade plot functions
    ├── network.py              # Network diagram + geographic map
    └── graphviz_viz.py         # Graphviz rendering (v0.3.0)

tests/                          # 385 unit tests, 98%+ coverage
notebooks/                      # All notebooks run on Google Colab
├── 01_supply_chain_cost.ipynb      # Costs, inventory, service level tradeoffs
├── 02_bullwhip_effect.ipynb        # Bullwhip amplification & Monte Carlo
├── 03_custom_policies.ipynb        # Custom policies, smoothing, @register
├── 04_network_viz_tutorial.ipynb   # DAG topologies, JSON schema, themes, NetworkX
├── 05_pyomo_optimization.ipynb     # Policy tuning, inventory opt, network design
├── 06_benchmark_policies.ipynb      # Systematic policy comparison
├── 07_benchmark_forecasters.ipynb   # Forecaster comparison
└── 08_benchmark_real_datasets.ipynb # M5, WSTS, Beer Game benchmarks

Testing

# Run all tests
python -m pytest tests/ -v

# With coverage
python -m pytest tests/ --cov=deepbullwhip --cov-report=term-missing

Current: 385 tests across all modules, 98%+ coverage.

Tutorials

All notebooks include Google Colab setup cells and run standalone.

Notebook Topic
01 Supply Chain Cost Newsvendor costs, holding vs backorder, service level tradeoffs
02 Bullwhip Effect Bullwhip amplification, Monte Carlo validation, Chen lower bound
03 Custom Policies Implementing & registering custom ordering policies
04 Network & Viz DAG topologies, JSON schema, NetworkX integration, multi-backend rendering
05 Pyomo Optimization Service level tuning, inventory optimization, network design
06 Benchmark Policies Systematic policy comparison
07 Benchmark Forecasters Forecaster comparison
08 Real Datasets M5, WSTS, Beer Game benchmarks

Citation

If you use DeepBullwhip in your research, please cite:

@software{deepbullwhip,
  title  = {DeepBullwhip: Multi-Tier Supply Chain Bullwhip Effect Simulator},
  author = {Arief, Mansur M.},
  url    = {https://github.com/ai-vnv/deepbullwhip},
  year   = {2025}
}

Documentation

Full API documentation is available at ai-vnv.github.io/deepbullwhip.

License

MIT License. See LICENSE for details.

Developed and maintained by the AI V&V Lab at KFUPM.
