
paramflow

ParamFlow is a lightweight library for layered configuration management, tailored for machine learning projects and any application that needs to merge parameters from multiple sources. It merges files, environment variables, and CLI arguments in a defined order, activates named profiles, and returns a read-only, attribute-accessible dictionary.

Requires Python 3.11+

Design philosophy

ParamFlow is intentionally minimalist. You define parameters once in a config file — no schemas, no type annotations, no boilerplate. Types are inferred from the values in the config file and automatically applied when overriding via environment variables or CLI arguments. The goal is to keep configuration code as small as possible: one pf.load() call is all you need.

Features

  • Layered configuration: Merge parameters from files, environment variables, and CLI arguments in a defined order.
  • Profile support: Manage multiple named parameter sets; activate one at runtime.
  • Immutable result: Parameters are returned as a frozen, attribute-accessible dictionary.
  • Schema-free type inference: Types come from the config file values — no annotations required.
  • Auto-generated CLI parser: Every parameter becomes a --flag automatically, with types and defaults inferred from the config.
  • Layered meta-parameters: paramflow configures itself (sources, profile, prefixes) using the same layered approach.
  • Nested configuration: Deep-merges nested dicts and same-length lists across layers.

Installation

pip install paramflow

With .env file support:

pip install "paramflow[dotenv]"

Supported formats

Format   Extension   Notes
TOML     .toml       Recommended; native types
YAML     .yaml       Requires pyyaml
JSON     .json       Native types
INI      .ini        All values are strings; relies on type conversion
dotenv   .env        Requires paramflow[dotenv]; keys filtered by prefix

Basic usage

params.toml

[default]
learning_rate = 0.001
batch_size = 64
debug = true

app.py

import paramflow as pf

params = pf.load('params.toml')
print(params.learning_rate)  # 0.001
print(params.batch_size)     # 64

Run with --help to see all parameters and meta-parameters:

python app.py --help

Parameter layering

Parameters are merged in the order sources are listed. Later sources override earlier ones. By default, env and args are appended automatically:

params.toml  →  env vars  →  CLI args

You can pass multiple files — each layer overrides keys from the previous:

params = pf.load('base.toml', 'overrides.toml')

To control the order explicitly, pass all sources as positional arguments ('env' and 'args' are reserved names for environment variables and CLI arguments respectively):

params = pf.load('params.toml', 'env', 'overrides.env', 'args')

To disable auto-appending of env or args sources, pass None:

params = pf.load('params.toml', env_prefix=None, args_prefix=None)
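The layering behavior can be pictured as a deep merge where later sources win. The sketch below is illustrative only (plain Python, not paramflow's actual implementation): nested dicts are merged key by key, and scalar values from later layers replace earlier ones.

```python
# Illustrative deep-merge sketch: later layers override earlier ones,
# and nested dicts are merged recursively instead of replaced wholesale.
def deep_merge(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"model": {"lr": 0.001, "layers": 4}, "debug": True}
overrides = {"model": {"lr": 0.0001}}
print(deep_merge(base, overrides))
# {'model': {'lr': 0.0001, 'layers': 4}, 'debug': True}
```

Note that only model.lr changes; model.layers and debug survive from the base layer.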

Inline dicts as sources

Plain dicts can be mixed into the source list:

params = pf.load('params.toml', {'debug': False, 'extra_key': 'value'})

Type inference

No type declarations are needed anywhere. The type of each value in the config file is used as the target type when merging from env vars or CLI args. For example, if batch_size = 32 is in the config, then --batch_size 64 from the CLI is automatically converted to int. Booleans, floats, dicts, and lists all work the same way.
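The idea can be sketched in a few lines of plain Python (the convert function here is illustrative, not a paramflow internal): the existing value's type becomes the target type for an incoming string override.

```python
# Sketch of type inference: coerce a raw string override to the type of
# the existing config value. bool must be checked before int, since
# bool is a subclass of int in Python.
def convert(existing, raw):
    if isinstance(existing, bool):
        return raw.lower() in ("1", "true", "yes")
    return type(existing)(raw)

assert convert(32, "64") == 64            # int target
assert convert(0.001, "0.0001") == 0.0001 # float target
assert convert(True, "false") is False    # bool target
```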

Key filtering for env vars and CLI args

Env vars and CLI args only override keys that already exist in the preceding layers. An environment variable such as P_NEW_KEY with no matching key in the config file is silently ignored. This keeps the config file the authoritative schema.
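The filtering step can be sketched as follows (illustrative only, not paramflow's actual code): strip the prefix, lowercase the name, and apply the value only if the resulting key already exists.

```python
# Sketch of prefix-based env-var filtering: only variables whose
# prefix-stripped, lowercased name matches an existing key are applied.
def filter_env(params, env, prefix="P_"):
    updates = {}
    for name, value in env.items():
        if name.startswith(prefix):
            key = name[len(prefix):].lower()
            if key in params:
                updates[key] = value
    return updates

params = {"batch_size": 32, "debug": True}
env = {"P_BATCH_SIZE": "64", "P_NEW_KEY": "ignored", "PATH": "/usr/bin"}
print(filter_env(params, env))  # {'batch_size': '64'}
```

P_NEW_KEY is dropped because no new_key exists in the config, and PATH is dropped because it lacks the prefix.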

Profiles

Profiles let you define named parameter sets that layer on top of [default].

params.toml

[default]
learning_rate = 0.001
batch_size = 32
debug = true

[prod]
debug = false
batch_size = 128

Activate a profile via CLI:

python app.py --profile prod

Or via environment variable:

P_PROFILE=prod python app.py

Or directly in code:

params = pf.load('params.toml', profile='prod')
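Conceptually, activating a profile layers that named section over [default]. A minimal sketch of the merge for the params.toml above (plain dict layering, illustrative rather than paramflow internals):

```python
# Sketch of profile activation: the named profile's keys override
# [default], and untouched defaults pass through.
profiles = {
    "default": {"learning_rate": 0.001, "batch_size": 32, "debug": True},
    "prod": {"debug": False, "batch_size": 128},
}
active = {**profiles["default"], **profiles["prod"]}
print(active)
# {'learning_rate': 0.001, 'batch_size': 128, 'debug': False}
```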

Overriding parameters at runtime

Any parameter can be overridden on the command line:

python app.py --profile prod --learning_rate 0.0001 --batch_size 64

Or via environment variable (default prefix P_, uppercased):

P_LEARNING_RATE=0.0001 python app.py

Meta-parameter layering

Meta-parameters control how pf.load reads its own configuration (which sources to load, which profile to activate, what prefixes to use). They follow the same layering order:

  1. pf.load(...) keyword arguments
  2. Environment variables (default prefix: P_)
  3. CLI arguments

This means you can pass a config file path entirely from the command line without hardcoding it:

python app.py --sources params.toml

Or point to a different config via env:

P_SOURCES=prod_params.toml python app.py
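The precedence among the three meta-parameter layers can be pictured as plain dict layering, later winning (a sketch with illustrative names, not paramflow internals):

```python
# Sketch of meta-parameter precedence: kwargs, then env, then CLI,
# with later layers overriding earlier ones.
kwargs = {"profile": "default", "sources": ["params.toml"]}
env = {"profile": "prod"}
cli = {"sources": ["prod_params.toml"]}
meta = {**kwargs, **env, **cli}
print(meta)  # {'profile': 'prod', 'sources': ['prod_params.toml']}
```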

Metadata keys

Every result includes two metadata keys:

  • __source__: list of all sources that contributed parameters, in merge order
  • __profile__: list of activated profiles, e.g. ['default', 'prod']

params = pf.load('params.toml')
print(params.__source__)   # ['params.toml', 'env', 'args']
print(params.__profile__)  # ['default']

Freezing and unfreezing

pf.load returns a ParamsDict — an immutable, attribute-accessible dict. You can freeze/unfreeze manually when needed (e.g. for serialization):

plain = pf.unfreeze(params)   # convert to plain dict/list tree
frozen = pf.freeze(plain)     # convert back to ParamsDict/ParamsList

Accessing a missing key raises AttributeError with the parameter name:

params.nonexistent  # AttributeError: 'ParamsDict' has no param 'nonexistent'
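A frozen, attribute-accessible dict can be sketched in a few lines; this is only a toy in the spirit of ParamsDict, not its actual implementation:

```python
# Toy sketch of an immutable, attribute-accessible dict:
# reads go through __getattr__, writes are rejected.
class FrozenParams(dict):
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(f"'FrozenParams' has no param '{name}'")

    def __setattr__(self, name, value):
        raise TypeError("FrozenParams is read-only")

p = FrozenParams(learning_rate=0.001)
print(p.learning_rate)  # 0.001
```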

Example: ML hyperparameter profiles

params.toml

[default]
learning_rate = 0.00025
batch_size = 32
optimizer = 'torch.optim.RMSprop'
random_seed = 13

[adam]
learning_rate = 1e-4
optimizer = 'torch.optim.Adam'

Train with the adam profile, overriding its learning rate on the command line:

python train.py --profile adam --learning_rate 0.0002

Example: environment-based deployment config

params.yaml

default:
  debug: true
  database_url: "mysql://localhost:3306/myapp"

dev:
  database_url: "mysql://dev:3306/myapp"

prod:
  debug: false
  database_url: "mysql://prod:3306/myapp"

Activate the prod profile via environment variable:

export P_PROFILE=prod
python app.py
