
Price Contour

High-performance insurance price optimisation via Lagrangian dual decomposition.


Python 3.10+ · Rust · Polars · AGPL-3.0


Price Contour finds optimal price scenario values across a portfolio of insurance risks subject to business constraints. Give it a scored dataset with objective and constraint values at discrete price points, and it returns the scenario value per quote that maximises your objective while respecting every constraint.

The core algorithm is Lagrangian dual decomposition, implemented in Rust for speed and exposed to Python via zero-copy Polars DataFrames. A portfolio of 1M+ risks solves in seconds.


Quick start

uv add price-contour

import polars as pl
import price_contour as pc

# Long-format DataFrame: one row per (quote, price_scenario)
# with pre-computed objective and constraint values
df = pl.read_parquet("scored_quotes.parquet")

optimiser = pc.OnlineOptimiser(
    objective="income",
    constraints={"volume": {"min_pct": 0.90}},  # retain at least 90% of baseline volume
    quote_id="quote_id",
    scenario_index="scenario_index",
    scenario_value="scenario_value",
)

result = optimiser.solve(df)

print(result.converged)        # True
print(result.iterations)       # 23
print(result.lambdas)          # {'volume': 0.147}
print(result.total_objective)  # 1_284_302.5

# Per-quote optimal scenario values as a Polars DataFrame
out = result.dataframe
print(out.head())
# ┌──────────┬──────────────┬─────────────────────────┬────────────────┬────────────────┐
# │ quote_id │ optimal_step │ optimal_scenario_value  │ optimal_income │ optimal_volume │
# ╞══════════╪══════════════╪═════════════════════════╪════════════════╪════════════════╡
# │ Q001     │ 14           │ 1.07                    │ 42.30          │ 0.82           │
# │ Q002     │ 11           │ 0.98                    │ 18.55          │ 0.91           │
# └──────────┴──────────────┴─────────────────────────┴────────────────┴────────────────┘

What it does

Price Contour operates on pre-computed scenario data. It does not fit models or generate demand curves. Upstream, your pricing pipeline scores every quote at a grid of price scenario values (e.g. 0.8, 0.85, 0.9, ..., 1.2) and computes what the expected income, volume, loss ratio, etc. would be at each point. Price Contour then selects the optimal scenario value per quote across the portfolio.

The input is a long-format Polars DataFrame:

quote_id  scenario_index  scenario_value  income  volume  loss_ratio
Q001      0               0.80            85.2    0.95    0.62
Q001      1               0.90            92.1    0.88    0.59
Q001      2               1.00            100.0   0.80    0.60
Q002      0               0.80            42.0    0.97    0.58
...       ...             ...             ...     ...     ...

The output is one optimal scenario value per quote, chosen to maximise portfolio-level income while keeping portfolio-level volume above 90% of baseline (or whatever constraints you set).
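
As an illustration only, the long format can be produced upstream with a cross join against the scenario grid. The column names and the income/volume formulas below are placeholders for your own pricing and demand models, not part of Price Contour:

import polars as pl

# Hypothetical upstream scoring step: one row per (quote, price_scenario).
scenario_values = [round(0.80 + 0.05 * i, 2) for i in range(9)]   # 0.80 .. 1.20
scenarios = pl.DataFrame({
    "scenario_index": list(range(len(scenario_values))),
    "scenario_value": scenario_values,
})

quotes = pl.read_parquet("quotes.parquet")          # one row per quote

df = (
    quotes
    .join(scenarios, how="cross")
    .with_columns(
        # Placeholder models - substitute your own income / demand predictions.
        (pl.col("baseline_premium") * pl.col("scenario_value")).alias("income"),
        (1.0 - 0.5 * (pl.col("scenario_value") - 1.0)).alias("volume"),
    )
)
df.write_parquet("scored_quotes.parquet")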


Three optimisation modes

Online optimisation

Find the optimal scenario value per individual quote. Each quote independently picks its best price point, coordinated by shared Lagrange multipliers that enforce portfolio-level constraints.

optimiser = pc.OnlineOptimiser(
    objective="income",
    constraints={
        "volume": {"min_pct": 0.90},                # sum constraint
        "loss_ratio": {                              # ratio constraint
            "numerator":   "incurred",
            "denominator": "premium",
            "max":         0.65,
        },
    },
)
result = optimiser.solve(df)
print(result.lambdas)            # {'volume': 0.147, 'loss_ratio': 1.21}
print(result.total_constraints)  # {'volume': 5400.0, 'loss_ratio': 0.6498}

Both sum and ratio constraints work in all three optimisation modes (online, ratebook, apply) and in the efficient-frontier sweep.

Ratebook optimisation

Find optimal rating factors. Instead of a scenario value per individual quote, find the best factor value for each level of each rating dimension (e.g. age band, region, vehicle power), applied uniformly to all quotes sharing that level.

optimiser = pc.RatebookOptimiser(
    objective="income",
    constraints={"volume": {"min_pct": 0.90}},
    factor_columns=[["age_band"], ["region"], ["vehicle_power"]],
)

result = optimiser.solve(df, factors=factor_df)

print(result.factor_tables)
# {'age_band': {'18-25': 1.15, '26-35': 1.02, '36-50': 0.95, '51+': 0.98},
#  'region': {'London': 1.08, 'South East': 1.01, 'North': 0.93},
#  'vehicle_power': {'Low': 0.97, 'Medium': 1.0, 'High': 1.06}}

# Save to disk
result.save("parameters/")

# Convert to rating-step DataFrames
tables = result.to_rating_entries()

Live scoring with stored lambdas

Apply pre-computed Lagrange multipliers to new quotes in a single forward pass, with no iteration. Use this in production to score individual quotes using lambdas learned from a batch solve.

# Batch solve (offline)
result = optimiser.solve(df_portfolio)
lambdas = result.lambdas

# Live scoring (per-quote, no iteration)
applier = pc.ApplyOptimiser(
    lambdas=lambdas,
    objective="income",
    constraints={"volume": {"min_pct": 0.90}},
)
applier.save("config/applier.json")

# Later, in production:
applier = pc.ApplyOptimiser.load("config/applier.json")
live_result = applier.apply(df_single_quote)
optimal_scenario_value = live_result.dataframe["optimal_scenario_value"][0]

Efficient frontier

Sweep constraint thresholds to generate the Pareto frontier - the trade-off curve between your objective and constraints. Each point on the frontier is a full portfolio solve at a different constraint target.

frontier = optimiser.frontier(
    df,
    threshold_ranges={"volume": (0.85, 1.0)},
    n_points_per_dim=20,
)

# DataFrame with one row per frontier point
print(frontier.points)
# ┌──────────────────┬─────────────────┬──────────────┬───────────────┬────────────┬───────────┬─────────┬─────────────────┐
# │ threshold_volume │ total_objective │ total_volume │ lambda_volume │ iterations │ converged │ sv_mean │ sv_pct_increase │
# ╞══════════════════╪═════════════════╪══════════════╪═══════════════╪════════════╪═══════════╪═════════╪═════════════════╡
# │ 0.85             │ 1_350_102       │ 0.851        │ 0.089         │ 18         │ true      │ 1.04    │ 0.62            │
# │ 0.86             │ 1_342_891       │ 0.861        │ 0.102         │ 21         │ true      │ 1.03    │ 0.58            │
# │ ...              │ ...             │ ...          │ ...           │ ...        │ ...       │ ...     │ ...             │
# └──────────────────┴─────────────────┴──────────────┴───────────────┴────────────┴───────────┴─────────┴─────────────────┘

Adjacent points are warm-started from each other (nearest-neighbour traversal of the threshold grid), so the full frontier solves much faster than running each point independently. Each point also includes scenario value distribution statistics (sv_mean, sv_std, percentiles, sv_pct_increase/sv_pct_decrease).

Sweeping a ratio target — declare the constraint with None so the constructor doesn't fix it, then supply the range to frontier():

optimiser = pc.OnlineOptimiser(
    objective="income",
    constraints={
        "loss_ratio": {
            "numerator":   "incurred",
            "denominator": "premium",
            "max":         None,       # frontier supplies the target
        },
    },
)
frontier = optimiser.frontier(
    df,
    threshold_ranges={"loss_ratio": (0.55, 0.75)},
    n_points_per_dim=10,
)
# points["threshold_loss_ratio"] = [0.55, 0.572, ..., 0.75]  (user units, verbatim)
# points["total_loss_ratio"]     = actual Σ incurred / Σ premium at each optimum

Mixed sweep — sweep multiple constraints at once via the Cartesian product of their ranges:

frontier = optimiser.frontier(
    df,
    threshold_ranges={
        "volume":     (8000, 12000),    # absolute units
        "loss_ratio": (0.55, 0.75),     # absolute ratio targets
    },
    n_points_per_dim=10,
)
# 10 × 10 = 100 frontier points

Constraints with numeric thresholds may be omitted from threshold_ranges — they are held fixed at the constructor value across the sweep. None thresholds must have a range entry.


Constraint format

Constraints are specified as a dictionary. There are two shapes:

Sum constraints apply to a single column. The dict key is the column name in your DataFrame, the value specifies direction and threshold. Use min / max for absolute thresholds and min_pct / max_pct for thresholds expressed as a fraction of baseline (the portfolio totals at scenario_value = 1.0):

constraints = {
    "volume":  {"min_pct": 0.90},     # portfolio volume >= 90% of baseline
    "premium": {"min": 1_000_000},    # absolute: portfolio premium >= 1M
    "claims":  {"max_pct": 1.05},     # portfolio claims <= 105% of baseline
}
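
To make the _pct semantics concrete, this is roughly how a min_pct threshold resolves to an absolute one (a sketch, assuming df is the long-format input shown earlier):

# min_pct thresholds are fractions of the baseline: the portfolio total at scenario_value == 1.0
baseline_volume = df.filter(pl.col("scenario_value") == 1.0)["volume"].sum()
absolute_threshold = 0.90 * baseline_volume        # "volume": {"min_pct": 0.90}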

Ratio constraints apply to a ratio of two summed columns (e.g. loss ratio = Σ incurred / Σ premium). The dict key is a display label (does NOT need to be a column); numerator and denominator name the columns:

constraints = {
    "loss_ratio": {
        "numerator":   "incurred",
        "denominator": "premium",
        "max":         0.65,           # portfolio loss ratio <= 0.65
    },
    "combined_ratio": {
        "numerator":   "claims_plus_expenses",
        "denominator": "premium",
        "max_pct":     1.10,           # <= 110% of baseline combined ratio
    },
}

Internally, ratio constraints are linearised as Σ (num − L·denom) ≤ 0 and handed to the same Lagrangian solver. If the baseline denominator sum (Σ_baseline denom) is zero, the _pct modes raise ValueError, since the baseline ratio is undefined. If the denominator sum is zero at the chosen optimum, the ratio reported in total_constraints[label] and summary() is NaN: a sentinel for an undefined division, not a silent zero.
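
The equivalence behind the linearisation is plain algebra, valid whenever the denominator sum is positive; you can sanity-check it on any set of rows carrying the incurred and premium columns:

# For any selection of rows with sum(premium) > 0, these two checks agree:
ratio_ok  = df["incurred"].sum() / df["premium"].sum() <= 0.65
linear_ok = (df["incurred"] - 0.65 * df["premium"]).sum() <= 0
assert ratio_ok == linear_ok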

None thresholds mark frontier-only constraints — the threshold is supplied by the sweep range:

constraints = {
    "loss_ratio": {
        "numerator":   "incurred",
        "denominator": "premium",
        "max":         None,           # frontier supplies the target
    },
}

frontier = optimiser.frontier(
    df,
    threshold_ranges={"loss_ratio": (0.55, 0.75)},
    n_points_per_dim=10,
)

solve() rejects None thresholds; frontier() requires a threshold_ranges entry for every None constraint. Numeric-threshold constraints are optional in threshold_ranges — omitted ones are held fixed at their constructor value across the sweep.

points["threshold_<name>"] reports the user-supplied range value verbatim (absolute units for min/max, fractions of baseline for min_pct/max_pct); points["total_<name>"] reports the actual aggregate at the optimum (the actual ratio for ratio constraints).
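
For example, to compare the requested thresholds against what the optimiser actually achieved at each point:

print(
    frontier.points.select(
        ["threshold_loss_ratio", "total_loss_ratio", "total_objective", "converged"]
    )
)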


Direct Parquet loading

For large datasets, build the internal grid directly from a Parquet file without materialising a DataFrame in Python memory:

grid = pc.build_grid_from_parquet(
    "scored_quotes.parquet",
    constraint_columns=["volume", "loss_ratio"],
    objective="income",
)
result = optimiser.solve(grid)

For parquets that exceed available memory in their raw form, use the streaming variant. The IO buffer is bounded by chunk_size; the file is read in row slices via Polars' with_slice pushdown so only the row groups overlapping each slice are deserialised, and column projection means only the four schema columns plus the requested constraint columns are decoded:

grid = pc.build_grid_from_parquet_chunked(
    "huge_scored_quotes.parquet",
    constraint_columns=["volume", "loss_ratio"],
    chunk_size=500_000,         # rows per IO slice; rounded down to a multiple of n_steps
    objective="income",
    # n_steps=20,               # optional: lock upfront if your first slice could be partial
)
result = optimiser.solve(grid)

The final QuoteGrid is still O(n_quotes × n_steps × n_columns × 4 bytes) — that's inherent to the solver's flat data layout — but the parquet decode buffer never exceeds chunk_size rows. Use this when the parquet itself doesn't fit in RAM, not as a way to avoid loading the grid.
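
As a rough illustration of that bound (assuming, for the sake of the arithmetic, an objective plus three further f32 columns):

# Back-of-envelope grid size: 1M quotes × 20 steps × 4 columns × 4 bytes
n_quotes, n_steps, n_columns = 1_000_000, 20, 4
print(n_quotes * n_steps * n_columns * 4 / 1e9)    # ≈ 0.32 GB, even if the raw parquet is far larger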


Incremental grid building

For datasets streamed from upstream pipelines (e.g. when chunks arrive out-of-order or before the full dataset is materialised anywhere), build the grid incrementally:

builder = pc.QuoteGridBuilder(
    ["volume", "loss_ratio"],
    quote_id="quote_id",
    scenario_index="scenario_index",
    scenario_value="scenario_value",
    objective="income",
    # n_steps=20,               # optional: lock upfront for streaming sources
)

for chunk in upstream:          # any iterable of pl.DataFrame
    builder.append(chunk)

grid = builder.build()
result = optimiser.solve(grid)

Per-chunk contract: each chunk's rows must already be grouped by quote_id (each quote occupies n_steps contiguous rows in scenario_index order). Within a chunk this is validated row-by-row (including a scenario_value consistency check against the canonical grid). Across chunks the order is arbitrary — the builder performs an in-place sort by quote_id at build() time using cycle-following permutation, so peak memory does not double during the sort. Duplicate quote_ids across all appended chunks are detected and reported with both append-order indices.

The optional n_steps kwarg lets streaming pipelines that may receive a partial first chunk lock the contract upfront, skipping the auto-detection probe.
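
For example, a chunk that satisfies the per-chunk contract (toy values, three scenario steps):

chunk = pl.DataFrame({
    "quote_id":       ["Q101", "Q101", "Q101", "Q102", "Q102", "Q102"],
    "scenario_index": [0, 1, 2, 0, 1, 2],
    "scenario_value": [0.9, 1.0, 1.1, 0.9, 1.0, 1.1],
    "income":         [88.0, 100.0, 109.0, 40.0, 44.0, 47.5],
    "volume":         [0.92, 0.85, 0.77, 0.95, 0.90, 0.83],
    "loss_ratio":     [0.61, 0.60, 0.62, 0.58, 0.57, 0.59],
})
builder.append(chunk)   # each quote occupies n_steps contiguous rows, scenario_index in order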


Streaming apply to disk

For live scoring on inputs too large to hold in RAM, stream a parquet through apply and write per-quote results to an output parquet, one row group per chunk:

result = pc.apply_lambdas_to_parquet_chunked(
    parquet_in="huge_scored_quotes.parquet",
    parquet_out="scored_results.parquet",
    lambdas={"volume": 0.147, "loss_ratio": 1.21},
    constraints={
        "volume": {"min_pct": 0.90},
        "loss_ratio": {"max_pct": 1.05},
    },
    chunk_size=500_000,
)

# Aggregate totals on the result; per-quote rows are in the output parquet.
print(result.total_objective)         # 1_284_302.5
print(result.total_constraints)       # {'volume': 5400.0, 'loss_ratio': 0.6498}
print(result.output_path)             # 'scored_results.parquet'

# Read back per-quote results lazily.
opt = pl.scan_parquet(result.output_path)

The whole-portfolio optimal_steps array is never materialised — only one chunk's optimal_steps is alive at a time (chunk_size / n_steps entries), and gets dropped along with the chunk's mini-grid after the row group has been written. Aggregate totals accumulate in f64 across chunks. On any error the partial output is best-effort deleted so callers never observe a corrupt artefact, and the input/output paths are checked for equality so the input parquet can't be silently overwritten. Lambda keys not matching any constraint are rejected up front (matching ApplyOptimiser). Ratio constraints are rejected on this path — use ApplyOptimiser.apply(df) on a DataFrame instead, since the per-chunk mini-grid can't carry the raw numerator/denominator columns.
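
Because per-quote rows live only on disk, downstream analysis is naturally done lazily. A small sketch, assuming the output parquet carries the same optimal_scenario_value column as the in-memory results shown earlier:

summary = (
    pl.scan_parquet(result.output_path)
    .select(
        pl.col("optimal_scenario_value").mean().alias("sv_mean"),
        pl.len().alias("n_quotes"),
    )
    .collect()
)
print(summary)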


MLflow integration

Both OnlineOptimiser and RatebookOptimiser produce MLflow-ready summaries:

result = optimiser.solve(df)
summary = optimiser.summary(result)

import mlflow
mlflow.log_params(summary["params"])
mlflow.log_metrics(summary["metrics"])
mlflow.log_dict(summary["artifacts"]["lambdas"], "lambdas.json")
mlflow.log_dict(summary["artifacts"]["config"], "config.json")

How it works

The algorithm

Price Contour solves the constrained optimisation problem:

Maximise    sum_i  objective(quote_i, scenario_value_i)
Subject to  sum_i  constraint_k(quote_i, scenario_value_i) >= threshold_k   for all k
            scenario_value_i in {discrete grid}

This is a combinatorial problem (each quote picks from M discrete scenario values). Lagrangian dual decomposition relaxes the coupling constraints into the objective using dual variables (lambdas), decomposing it into N independent per-quote subproblems:

For fixed lambdas:
    Each quote picks:  argmax_m [ objective(i, m) + sum_k lambda_k * constraint_k(i, m) ]

These are independent and embarrassingly parallel.

The outer loop updates lambdas via the subgradient method with adaptive step sizes, iterating until all constraints are satisfied and lambdas converge.
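
In outline, the loop looks like the sketch below (NumPy for clarity; the real solver is the Rust core, and the adaptive step sizes, lambda averaging and convergence checks described next are omitted here):

import numpy as np

def solve_sketch(objective, constraints, thresholds, n_iter=200, step=0.1):
    """Toy subgradient loop over a dense (n_quotes, n_steps) grid.

    objective:   (n_quotes, n_steps) array
    constraints: dict name -> (n_quotes, n_steps) array
    thresholds:  dict name -> float, for constraints of the form  sum_i c_k >= threshold_k
    """
    lambdas = {k: 0.0 for k in thresholds}
    rows = np.arange(objective.shape[0])
    for _ in range(n_iter):
        # Per-quote argmax of the Lagrangian: independent across quotes, embarrassingly parallel.
        scores = objective + sum(lambdas[k] * constraints[k] for k in lambdas)
        steps = scores.argmax(axis=1)
        for k in lambdas:
            total_k = constraints[k][rows, steps].sum()
            # Subgradient update: raise lambda while the constraint is violated,
            # project back onto lambda >= 0 otherwise.
            lambdas[k] = max(0.0, lambdas[k] + step * (thresholds[k] - total_k))
    return steps, lambdas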

Performance

The Rust core uses:

  • Quote-major memory layout - each quote's M scenario values are contiguous, optimising the per-quote argmax inner loop for cache locality
  • Rayon parallelism - the argmax across quotes is parallelised with a grain size of 4096 quotes
  • Adaptive step scaling - per-constraint scale factors normalise for differing magnitudes, so the algorithm works equally well for constraints ranging from 0.1 to 1,000,000
  • Lambda averaging - smooths the oscillations inherent in discrete Lagrangian relaxation where all quotes can flip simultaneously

Ratebook mode

For ratebook optimisation, coordinate descent iterates over rating factors. For each factor, a grouped Lagrangian solve finds the best discrete factor value per group (e.g. per age band), with each quote's scenario value computed as the product of all its factor values times a per-quote residual. The inner grouped solve uses the same Lagrangian machinery, with remapping to the nearest grid point.


Architecture

price-contour/
├── crates/
│   ├── price-contour-core/        # Pure Rust: algorithms, data structures, solver
│   │   └── src/
│   │       ├── data.rs            # QuoteGrid, SolverConfig, SolveResult, GroupMapping
│   │       ├── solver/
│   │       │   ├── online.rs      # Lagrangian dual decomposition
│   │       │   ├── grouped.rs     # Grouped solve (ratebook inner loop)
│   │       │   ├── argmax.rs      # Per-quote Lagrangian argmax (parallel)
│   │       │   ├── lambda.rs      # Subgradient lambda updates
│   │       │   └── apply.rs       # Fixed-lambda forward pass
│   │       ├── frontier.rs        # Efficient frontier sweeping
│   │       ├── constants.rs       # Solver defaults
│   │       └── error.rs           # Error types
│   └── price-contour/             # PyO3 bindings (thin wrappers)
│       └── src/
│           ├── solver_py.rs       # DataFrame ingestion + solve
│           ├── grouped_py.rs      # Grouped solve bindings
│           ├── apply_py.rs        # Apply bindings
│           ├── frontier_py.rs     # Frontier bindings
│           ├── builder_py.rs      # QuoteGridBuilder bindings
│           ├── grid_py.rs         # QuoteGrid bindings
│           └── parquet_grid_py.rs # Parquet → QuoteGrid loader
├── python/
│   └── price_contour/
│       ├── solver.py              # OnlineOptimiser, ratio linearisation, validation
│       ├── ratebook.py            # RatebookOptimiser + RatebookResult
│       ├── apply.py               # ApplyOptimiser + apply_from_grid
│       ├── frontier.py            # FrontierResult helpers + frontier_summary
│       ├── builder.py             # QuoteGridBuilder wrapper
│       ├── _ratio_results.py      # Shared ratio reporting (actual ratios + column stitching)
│       └── _frontier_helpers.py   # Shared frontier orchestrator (used by online + ratebook)
├── tests/
│   └── python/                    # Integration tests
├── notebooks/                     # Demo notebooks
├── docs/                          # Design documentation
└── scripts/                       # Utility scripts

The pure-Rust core (price-contour-core) has no Python dependencies and can be tested independently with cargo test. The PyO3 crate (price-contour) is a thin binding layer that converts between Polars DataFrames and the internal QuoteGrid representation with zero-copy where possible.


Development

# Clone
git clone https://github.com/PricingFrontier/price-contour.git
cd price-contour

# Install in development mode (compiles Rust, links Python)
uv sync --all-groups
maturin develop

# Run Rust tests
cargo test

# Run Python tests
pytest

# Rebuild after Rust changes
maturin develop

Requirements: Rust toolchain (stable), Python 3.10+, maturin.


API reference

OnlineOptimiser

solve(df_or_grid, *, lambdas=None)
    Run full optimisation. Returns SolveResult. Ratio constraints require a DataFrame (the linearisation needs raw numerator/denominator columns); a pre-built QuoteGrid with ratio constraints raises ValueError.

frontier(df_or_grid, *, threshold_ranges, n_points_per_dim=10, initial_lambdas=None)
    Sweep the efficient frontier. Returns FrontierResult. Numeric thresholds are optional in threshold_ranges (held fixed if omitted); None thresholds require a range.

summary(result)
    Package result into MLflow-ready params, metrics, artifacts dicts.

config_dict()
    Serialisable solver configuration.

RatebookOptimiser

solve(df_or_grid, factors, *, factor_columns=None, lambdas=None)
    Run ratebook optimisation via coordinate descent. Returns RatebookResult.

frontier(df_or_grid, factors, *, threshold_ranges, n_points_per_dim=5, factor_columns=None, initial_lambdas=None)
    Sweep the efficient frontier via coordinate descent at each threshold. Returns FrontierResult.

summary(result)
    Package result into MLflow-ready dicts.

ApplyOptimiser

apply(df)
    Single-pass scoring with fixed lambdas. Returns ApplyResult. For ratio constraints, min_pct/max_pct resolve L = pct × baseline_LR from the apply-time DataFrame (live-scoring contract), not the solve-time baseline.

save(path)
    Save config + lambdas to JSON. Ratio specs round-trip verbatim.

ApplyOptimiser.load(path)
    Load from saved JSON. Rejects unknown keys.

QuoteGridBuilder

QuoteGridBuilder(constraint_columns, *, quote_id, scenario_index, scenario_value, objective, n_steps=None)
    Construct a builder. n_steps may be passed upfront to skip auto-detection from the first chunk — useful for streaming sources where the first chunk may be partial.

append(df)
    Add a chunk of quotes. Rows must be grouped by quote_id with scenario_index running 0..n_steps in order. Per-row validation rejects layout violations and scenario_value drift across chunks.

build()
    Finalise and return a QuoteGrid. Sorts by quote_id in-place via cycle-following permutation (no 2× memory peak). Rejects duplicate quote_ids with both append-order indices in the error.

SolveResult

converged (bool): Whether the solver converged.
iterations (int): Number of iterations taken.
lambdas (dict[str, float]): Final Lagrange multipliers (shadow prices) per constraint.
total_objective (float): Portfolio-level objective at the optimal solution.
total_constraints (dict[str, float]): Portfolio-level constraint totals.
baseline_objective (float): Objective at scenario_value = 1.0.
baseline_constraints (dict[str, float]): Constraints at scenario_value = 1.0.
dataframe (pl.DataFrame): Per-quote results with optimal scenario values.
history (list[dict] | None): Per-iteration convergence records (if record_history=True).
n_quotes (int): Number of quotes in the grid.
n_steps (int): Number of scenario value steps.
scenario_values (list[float]): The scenario value grid.
grid (QuoteGrid): The internal grid (reusable for subsequent solves or apply).

ApplyResult

total_objective (float): Portfolio-level objective.
total_constraints (dict[str, float]): Portfolio-level constraint totals.
baseline_objective (float): Objective at scenario_value = 1.0.
baseline_constraints (dict[str, float]): Constraints at scenario_value = 1.0.
lambdas (dict[str, float]): Applied Lagrange multipliers.
dataframe (pl.DataFrame): Per-quote results with optimal scenario values.

ChunkedApplyResult

Returned by apply_lambdas_to_parquet_chunked. Carries the same aggregate totals as ApplyResult but the per-quote rows live only in the output parquet — only one chunk's optimal_steps (chunk_size / n_steps entries) is alive at any time, then dropped after the row group is written.

total_objective (float): Portfolio-level objective at the optimum (summed across chunks in f64).
total_constraints (dict[str, float]): Portfolio-level constraint totals.
baseline_objective (float): Objective at scenario_value = 1.0.
baseline_constraints (dict[str, float]): Constraints at scenario_value = 1.0.
lambdas (dict[str, float]): Applied Lagrange multipliers.
output_path (str): Path to the streamed-output parquet. Read back via pl.read_parquet or pl.scan_parquet.

FrontierResult

points (pl.DataFrame): One row per frontier point with threshold_*, total_objective, total_*, lambda_*, iterations, converged, and scenario value statistics (sv_mean, sv_std, sv_min, sv_p5 … sv_p95, sv_max, sv_pct_increase, sv_pct_decrease).
n_points (int): Number of frontier points.

RatebookResult

factor_tables (dict[str, dict[str, float]]): Factor name to level-value mapping.
lambdas (dict[str, float]): Final Lagrange multipliers.
total_objective (float): Portfolio-level objective at the optimal solution.
total_constraints (dict[str, float]): Portfolio-level constraint totals.
baseline_objective (float): Objective at scenario_value = 1.0.
baseline_constraints (dict[str, float]): Constraints at scenario_value = 1.0.
converged (bool): Whether coordinate descent converged.
cd_iterations (int): Coordinate descent iterations.
clamp_rate (float): Fraction of remappings that hit a grid boundary.
per_factor_results (list[GroupedSolveResult]): Per-factor inner solve results.
save(path): Save factor tables to a directory (one JSON per factor).
to_rating_entries() -> dict[str, pl.DataFrame]: Convert to rating-step DataFrames.

Utility functions

build_grid_from_parquet(path, constraint_columns, *, ...)
    Build a QuoteGrid directly from a Parquet file. Loads the projected columns whole; column projection prunes everything outside constraint_columns + the four schema columns. Sum constraints only — ratio constraints require a DataFrame.

build_grid_from_parquet_chunked(path, constraint_columns, chunk_size, *, n_steps=None, ...)
    Stream a Parquet file in fixed-size row slices via Polars' with_slice pushdown. Memory peak for the parquet decode buffer is bounded by chunk_size; the final QuoteGrid is still O(total_rows). chunk_size is rounded down to a multiple of n_steps so every slice ends on a quote boundary. Use when the parquet itself doesn't fit in RAM.

apply_lambdas_to_parquet_chunked(parquet_in, parquet_out, lambdas, constraints, chunk_size, *, n_steps=None, ...)
    Stream a parquet through apply and write per-quote results to parquet_out, one row group per chunk. Returns ChunkedApplyResult with aggregate totals; per-quote rows live in the output parquet. The input/output paths are checked for equality (refuses to overwrite the input), and any error best-effort-deletes the partial output.

apply_from_grid(grid, lambdas, constraints)
    Single-pass Lagrangian apply on an existing QuoteGrid. Returns ApplyResult. Sum constraints only; ratio constraints raise ValueError (use ApplyOptimiser.apply(df) on a DataFrame instead — the grid path can't carry numerator/denominator columns for linearisation).

frontier_summary(frontier_result, selected_index)
    Package a frontier result into MLflow-ready params, metrics, artifacts dicts.

License

Price Contour is licensed under the GNU Affero General Public License v3.0.
