
A lightweight library for hyperparameter and configuration management


paramflow


One pf.load() call is all you need: the result is a plain Python dict, with environment variables and CLI arguments handled automatically.

ParamFlow is a lightweight library for layered configuration management, tailored for machine learning projects and any application that needs to merge parameters from multiple sources. It merges files, environment variables, and CLI arguments in a defined order, activates named profiles, and returns a read-only, attribute-accessible dict that is fully compatible with the Python dict API.

Requires Python 3.11+

Design philosophy

ParamFlow is intentionally minimalist. You define parameters once in a config file — no schemas, no type annotations, no boilerplate. Types are inferred from the values in the config file and automatically applied when overriding via environment variables or CLI arguments. One pf.load() call is all you need, and the result is a plain Python dict — works anywhere a dict does: json.dumps, **unpacking, serialization libraries, all without conversion.

Features

  • Layered configuration: Merge parameters from files, environment variables, and CLI arguments in a defined order. Config file is optional — pure env/args loading is supported.
  • .env auto-discovery: A .env file in the current directory is picked up automatically when no sources are specified.
  • Profile support: Manage multiple named parameter sets; activate one at runtime.
  • Immutable result: Parameters are returned as a frozen, attribute-accessible dict fully compatible with the Python dict API — works with json.dumps, **unpacking, and any serialization library without conversion.
  • Schema-free type inference: Types come from the config file values — no annotations required.
  • Auto-generated CLI parser: Every parameter becomes a --flag automatically, with types and defaults inferred from the config.
  • Layered meta-parameters: paramflow configures itself (sources, profile, prefixes) using the same layered approach.
  • Nested configuration: Deep-merges nested dicts across layers; individual subkeys overridable via key__subkey syntax in env vars and CLI args.
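The layered deep-merge described above can be sketched with a small recursive function. This is an illustrative simplification of the semantics, not paramflow's actual implementation:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Return a new dict where override's keys win; nested dicts merge recursively."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Merge subkeys instead of replacing the whole nested dict
            merged[key] = deep_merge(merged[key], value)
        else:
            # Later layer overrides earlier
            merged[key] = value
    return merged

layer1 = {'batch_size': 64, 'optimizer': {'lr': 0.001, 'momentum': 0.9}}
layer2 = {'optimizer': {'lr': 0.0001}}
print(deep_merge(layer1, layer2))
# {'batch_size': 64, 'optimizer': {'lr': 0.0001, 'momentum': 0.9}}
```

Note that only the overridden subkey changes; sibling keys like momentum survive the merge.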

Installation

pip install paramflow

With .env file support:

pip install "paramflow[dotenv]"

Supported formats

Format  Extension  Notes
TOML    .toml      Recommended; native types
YAML    .yaml      Requires pyyaml
JSON    .json      —
INI     .ini       Values are type-inferred (int, float, bool, str)
dotenv  .env       Requires paramflow[dotenv]; filtered by prefix

Basic usage

params.toml

[default]
learning_rate = 0.001
batch_size = 64
debug = true

app.py

import paramflow as pf

params = pf.load('params.toml')
print(params.learning_rate)  # 0.001
print(params.batch_size)     # 64

Run with --help to see all parameters and meta-parameters:

python app.py --help

Parameter layering

Parameters are merged in the order sources are listed. Later sources override earlier ones. By default, env and args are appended automatically:

params.toml  →  env vars  →  CLI args

You can pass multiple files — each layer overrides keys from the previous:

params = pf.load('base.toml', 'overrides.toml')

To control the order explicitly, pass all sources as positional arguments ('env' and 'args' are reserved names for environment variables and CLI arguments respectively):

params = pf.load('params.toml', 'env', 'overrides.env', 'args')

To disable auto-appending of env or args sources, pass None as env and args prefixes:

params = pf.load('params.toml', env_prefix=None, args_prefix=None)

File-free loading

No config file is required. You can load purely from environment variables or CLI arguments — useful for containerized workloads where config comes entirely from the environment:

params = pf.load()  # env vars and CLI args only
P_LR=0.001 P_BATCH_SIZE=32 python app.py
# or
python app.py --lr 0.001 --batch_size 32

Without a config file as a schema, all prefixed env vars and all CLI args are accepted. Values are type-inferred (int, float, bool, or str) in both cases.

.env auto-discovery

If pf.load() is called with no sources and a .env file exists in the current directory, it is loaded automatically — no path needed:

params = pf.load()  # picks up .env if present

This only triggers when no sources are explicitly provided. Explicit sources always take precedence.

Inline dicts as sources

Plain dicts can be mixed into the source list:

params = pf.load('params.toml', {'debug': False, 'extra_key': 'value'})

This is useful, for example, for setting default values or mixing in parameters that were loaded into a dict by custom code.

Type inference

No type declarations are needed anywhere. Types are handled automatically in all cases:

  • Config file present (TOML, YAML, JSON): the type of each value in the config is used as the target type when overriding via env vars or CLI args. batch_size = 32 in the config means --batch_size 64 and P_BATCH_SIZE=64 both produce int(64).
  • No config file (pure env/args): values are inferred in order — int, float, bool, then str. P_LR=0.001 produces float(0.001), P_DEBUG=true produces bool(True).
  • INI files: since INI has no native types, infer_type is applied to every value on load, same as the no-schema case.

The result is consistent behavior regardless of source format — you always get the most specific type possible without declaring anything.
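The schema-free inference order (int, then float, then bool, then str) can be sketched as follows. infer_type here is an illustrative stand-in, not necessarily paramflow's internal function:

```python
def infer_type(raw: str):
    """Try int, then float, then bool, falling back to str."""
    try:
        return int(raw)
    except ValueError:
        pass
    try:
        return float(raw)
    except ValueError:
        pass
    if raw.lower() in ('true', 'false'):
        return raw.lower() == 'true'
    return raw

print(infer_type('32'))     # 32 (int)
print(infer_type('0.001'))  # 0.001 (float)
print(infer_type('true'))   # True (bool)
print(infer_type('adam'))   # adam (str)
```

Trying int before float matters: '32' must become int(32), not float(32.0).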

Nested parameters

Nested parameters can be overridden using __ (double underscore) as the separator, both in env vars and CLI args:

params.toml

[default.optimizer]
lr = 0.001
momentum = 0.9

Override a single subkey via CLI:

python app.py --optimizer__lr 0.0001

Or via environment variable:

P_OPTIMIZER__LR=0.0001 python app.py

Any depth is supported:

python app.py --a__b__c 42
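The __ convention amounts to splitting the flat key into a path and walking the nested dict. A minimal sketch of that behavior (set_nested is a hypothetical helper, not a paramflow API):

```python
def set_nested(params: dict, flat_key: str, value, sep: str = '__') -> None:
    """Set params['a']['b']['c'] = value given flat_key 'a__b__c'."""
    *path, last = flat_key.split(sep)
    node = params
    for key in path:
        # Descend, creating intermediate dicts if missing
        node = node.setdefault(key, {})
    node[last] = value

params = {'optimizer': {'lr': 0.001, 'momentum': 0.9}}
set_nested(params, 'optimizer__lr', 0.0001)
print(params)  # {'optimizer': {'lr': 0.0001, 'momentum': 0.9}}
```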

Key filtering for env vars and CLI args

Env vars and CLI args only override keys that already exist in the preceding layers. A P_NEW_KEY with no matching key in the config file is silently ignored. This keeps the config file the authoritative schema.

Profiles

Profiles let you define named parameter sets that layer on top of [default].

params.toml

[default]
learning_rate = 0.001
batch_size = 32
debug = true

[prod]
debug = false
batch_size = 128

Activate a profile via CLI:

python app.py --profile prod

Or via environment variable:

P_PROFILE=prod python app.py

Or directly in code:

params = pf.load('params.toml', profile='prod')
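Conceptually, activating a profile layers its keys over the [default] section. A minimal sketch of that semantics with plain dicts (not paramflow's internals):

```python
def activate(config: dict, profile: str) -> dict:
    """Layer the named profile's keys over the default section."""
    merged = dict(config['default'])
    merged.update(config.get(profile, {}))
    return merged

config = {
    'default': {'learning_rate': 0.001, 'batch_size': 32, 'debug': True},
    'prod': {'debug': False, 'batch_size': 128},
}
print(activate(config, 'prod'))
# {'learning_rate': 0.001, 'batch_size': 128, 'debug': False}
```

Keys the profile does not define (learning_rate here) keep their [default] values.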

Overriding parameters at runtime

Any parameter can be overridden on the command line:

python app.py --profile prod --learning_rate 0.0001 --batch_size 64

Or via environment variable (default prefix P_, uppercased):

P_LEARNING_RATE=0.0001 python app.py
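The prefix convention means each config key maps to its uppercased name under P_. Collecting such overrides can be sketched as follows (an illustration of the mapping, not the library's code):

```python
import os

def env_overrides(prefix: str = 'P_') -> dict:
    """Collect prefixed env vars, mapping e.g. P_LEARNING_RATE -> 'learning_rate'."""
    return {
        name[len(prefix):].lower(): value
        for name, value in os.environ.items()
        if name.startswith(prefix)
    }

os.environ['P_LEARNING_RATE'] = '0.0001'
print(env_overrides())  # includes 'learning_rate': '0.0001' (still a string before type coercion)
```

In paramflow itself, these string values are then coerced using the types inferred from the config file.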

Meta-parameter layering

Meta-parameters control how pf.load reads its own configuration (which sources to load, which profile to activate, what prefixes to use). They follow the same layering order:

  1. pf.load(...) keyword arguments
  2. Environment variables (default prefix: P_)
  3. CLI arguments

This means you can pass a config file path entirely from the command line without hardcoding it:

python app.py --sources params.toml

Or point to a different config via env:

P_SOURCES=prod_params.toml python app.py

Metadata keys

Every result includes two metadata keys:

  • __source__: list of all sources that contributed parameters, in merge order
  • __profile__: list of activated profiles, e.g. ['default', 'prod']

params = pf.load('params.toml')
print(params.__source__)   # ['params.toml', 'env', 'args']
print(params.__profile__)  # ['default']

Freezing and unfreezing

pf.load returns a ParamsDict — an immutable, attribute-accessible dict. You can freeze/unfreeze manually when needed (e.g. for serialization):

plain = pf.unfreeze(params)   # convert to plain dict/list tree
frozen = pf.freeze(plain)     # convert back to ParamsDict/ParamsList

Accessing a missing key raises AttributeError with the parameter name:

params.nonexistent  # AttributeError: 'ParamsDict' has no param 'nonexistent'
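The semantics above can be approximated with a small read-only attribute dict. FrozenParams is a sketch of the described behavior, not paramflow's actual ParamsDict:

```python
import json

class FrozenParams(dict):
    """Read-only dict with attribute access, mimicking the described behavior."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(f"'FrozenParams' has no param '{name}'") from None

    def _readonly(self, *args, **kwargs):
        raise TypeError('FrozenParams is immutable')

    # Block all mutation paths
    __setitem__ = __delitem__ = __setattr__ = _readonly
    update = pop = clear = _readonly

params = FrozenParams({'learning_rate': 0.001, 'debug': True})
print(params.learning_rate)   # 0.001
print(json.dumps(params))     # serializes like any dict
```

Because it subclasses dict, it works directly with json.dumps and ** unpacking, which is the compatibility property the library advertises.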

Example: ML hyperparameter profiles

params.toml

[default]
learning_rate = 0.00025
batch_size = 32
optimizer = 'torch.optim.RMSprop'
random_seed = 13

[adam]
learning_rate = 1e-4
optimizer = 'torch.optim.Adam'

python train.py --profile adam --learning_rate 0.0002

Example: environment-based deployment config

params.yaml

default:
  debug: true
  database_url: "mysql://localhost:3306/myapp"

dev:
  database_url: "mysql://dev:3306/myapp"

prod:
  debug: false
  database_url: "mysql://prod:3306/myapp"

export P_PROFILE=prod
python app.py


