
# tunableX

Function-first tunable parameters for Python apps — with:

  • Ergonomic @tunable decorator (declare per‑function user parameters right where they live).
  • Centralized parameter classes (TunableParameters) for a single source of truth with inheritance-based namespaces.
  • Automatic Pydantic models → JSON & JSON Schema (rich defaults, validation constraints, literals, Paths, etc.).
  • Two composition strategies:
    • By app tags (apps=("train", "serve", ...)) for explicit executable groupings.
    • By static AST call graph of an entrypoint (no tags needed) – generate a config from just a function without executing user code.
  • Runtime auto‑injection (use_config) so decorated functions receive values transparently.
  • CLI flag generation (argparse / jsonargparse) with defaults & help text sourced from Pydantic Field metadata.
  • Deterministic merge order: defaults ← optional config file ← CLI overrides.

NOTE: Previous versions used runtime tracing (schema_by_trace, add_flags_by_trace via tracing). This has been replaced by static AST analysis for zero‑execution safety and reproducibility. The compatibility alias add_flags_by_trace still works but now delegates to static analysis.


Install

pip install tunablex  # (or your project env)

Quick Tour

1. Declare tunables

Option A: Direct decoration (explicit namespaces)

from typing import Literal
from pydantic import Field
from tunablex import tunable

@tunable("hidden_units", "dropout", namespace="model", apps=("train",))
def build_model(hidden_units: int = Field(128, ge=1, description="Hidden units"),
                dropout: float = Field(0.2, ge=0.0, le=1.0, description="Dropout")):
    ...

@tunable("epochs", "batch_size", "optimizer", namespace="train", apps=("train",))
def train(epochs: int = Field(10, ge=1, description="Epochs"),
          batch_size: int = Field(32, ge=1, description="Batch size"),
          optimizer: Literal["adam", "sgd"] = Field("adam", description="Optimizer")):
    ...

Option B: Centralized parameters (inheritance-based namespaces)

from typing import Literal
from pydantic import Field
from tunablex import tunable, TunableParameters

# Define your parameter schema once
class Main(TunableParameters):
    """Root-level parameters (appear as --param in CLI, at JSON root)."""
    debug: bool = Field(False, description="Enable debug mode")

class Model(Main):
    """Model parameters (--model.param, under 'model' in JSON)."""
    hidden_units: int = Field(128, ge=1, description="Hidden units")
    dropout: float = Field(0.2, ge=0.0, le=1.0, description="Dropout")

class Train(Main):
    """Training parameters (--train.param, under 'train' in JSON)."""
    epochs: int = Field(10, ge=1, description="Epochs")
    batch_size: int = Field(32, ge=1, description="Batch size")
    optimizer: Literal["adam", "sgd"] = Field("adam", description="Optimizer")

# Use the centralized parameters in your functions
@tunable("hidden_units", "dropout", apps=("train",))
def build_model(hidden_units=Model.hidden_units, dropout=Model.dropout, debug=Main.debug):
    if debug:
        print(f"Building model: {hidden_units} units, {dropout} dropout")
    ...

@tunable("epochs", "batch_size", "optimizer", apps=("train",))
def train(epochs=Train.epochs, batch_size=Train.batch_size, optimizer=Train.optimizer):
    print(f"Training: {epochs} epochs, batch {batch_size}, optimizer {optimizer}")
    ...

The centralized approach provides a single source of truth for your parameters: define them once, use them everywhere. The class hierarchy determines namespaces automatically: Model(Main) creates the "model" namespace, and parameters defined on Main appear at the root level.
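With the classes above, a config composed for the train app would have roughly this shape (illustrative values, mirroring the Field defaults):

```json
{
  "debug": false,
  "model": { "hidden_units": 128, "dropout": 0.2 },
  "train": { "epochs": 10, "batch_size": 32, "optimizer": "adam" }
}
```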

2. Compose a config model (Explicit App Tags)

from tunablex import schema_for_apps, defaults_for_apps, make_app_config_for

schema = schema_for_apps("train")        # JSON Schema dict
defaults = defaults_for_apps("train")    # Default values dict
AppConfig = make_app_config_for("train") # Pydantic model type

3. Compose a config model (Static Entry Analysis – No Tags)

from tunablex import schema_by_entry_ast, make_app_config_for_entry

# Suppose train_main() calls several @tunable functions (directly or nested)
from mypkg.pipeline import train_main

schema, defaults, namespaces = schema_by_entry_ast(train_main)
AppConfig = make_app_config_for_entry(train_main)

The static analyzer parses the entrypoint’s source and gathers the names of functions it calls directly (a simple, safe heuristic), then selects the registered tunable namespaces that match.
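The heuristic can be sketched with the standard library alone. The helper below is a hypothetical stand-in for the real analyzer (which additionally matches the collected names against the tunable registry): it parses source with ast and collects names of directly called functions.

```python
import ast


def called_names(source: str, entry: str) -> set[str]:
    """Return the names of functions called directly inside the function `entry`."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == entry:
            for sub in ast.walk(node):
                if isinstance(sub, ast.Call):
                    if isinstance(sub.func, ast.Name):        # build_model(...)
                        names.add(sub.func.id)
                    elif isinstance(sub.func, ast.Attribute):  # pipeline.train(...)
                        names.add(sub.func.attr)
    return names


source = """
def train_main():
    model = build_model()
    pipeline.train(model)
"""
print(sorted(called_names(source, "train_main")))  # ['build_model', 'train']
```

Because only the source text is inspected, no user code runs: data loading, GPU setup, and other side effects in the entrypoint never execute during schema generation.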

4. Use a config at runtime

from tunablex import use_config
cfg = AppConfig(**{...})  # or AppConfig.model_validate(loaded_json)
with use_config(cfg):
    train_main()  # All @tunable calls see their section injected
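This style of transparent injection can be built on contextvars. The sketch below is a simplified, hypothetical stand-in for tunableX's mechanism (the names use_config_sketch and injected are invented here), showing how a context manager can supply defaults to decorated functions:

```python
from contextlib import contextmanager
from contextvars import ContextVar

_current_config: ContextVar = ContextVar("config", default=None)


@contextmanager
def use_config_sketch(cfg: dict):
    """Make cfg visible to decorated functions for the duration of the block."""
    token = _current_config.set(cfg)
    try:
        yield
    finally:
        _current_config.reset(token)


def injected(namespace: str):
    """Fill missing keyword arguments from the active config's namespace."""
    def wrap(fn):
        def inner(**kwargs):
            cfg = _current_config.get()
            if cfg is not None:
                for key, value in cfg.get(namespace, {}).items():
                    kwargs.setdefault(key, value)
            return fn(**kwargs)
        return inner
    return wrap


@injected("train")
def train(epochs=10, batch_size=32):
    return epochs, batch_size


with use_config_sketch({"train": {"epochs": 5}}):
    print(train())  # (5, 32): epochs injected from config, batch_size keeps its default
```

Outside the `with` block the function falls back to its declared defaults, which is why decorated functions remain ordinary, directly callable Python functions.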

5. Generate schema & defaults files (entrypoint)

from tunablex import schema_by_entry_ast, write_schema
schema, defaults, _ = schema_by_entry_ast(train_main)
write_schema("train_config", schema, defaults)  # writes train_config.schema.json + train_config.json

CLI Integration

jsonargparse (App Tags)

from jsonargparse import ArgumentParser
from tunablex import add_flags_by_app, build_cfg_from_file_and_args, use_config
import examples.myapp.pipeline as pipeline

parser = ArgumentParser(prog="train_jsonarg_app")
parser.add_argument("--config", help="Optional config JSON")
AppConfig = add_flags_by_app(parser, app="train")
args = parser.parse_args()
cfg_dict = build_cfg_from_file_and_args(AppConfig, args)
cfg = AppConfig.model_validate(cfg_dict)
with use_config(cfg):
    pipeline.train_main()

jsonargparse (Static Entry Analysis)

from jsonargparse import ArgumentParser
from tunablex import add_flags_by_entry, build_cfg_from_file_and_args, use_config
import examples.myapp.pipeline as pipeline

parser = ArgumentParser(prog="train_jsonarg_trace")  # name preserved for backwards compat
parser.add_argument("--config", help="Optional config JSON")
AppConfig = add_flags_by_entry(parser, pipeline.train_main)  # or add_flags_by_trace(...)
args = parser.parse_args()
cfg_dict = build_cfg_from_file_and_args(AppConfig, args)
cfg = AppConfig.model_validate(cfg_dict)
with use_config(cfg):
    pipeline.train_main()

Generated flags look like:

--model.hidden_units --model.dropout --model.preprocess.dropna ... --train.epochs --train.batch_size --train.optimizer

Root-level parameters (from the Main class, or functions with no namespace) appear directly as --param, while nested parameters use dotted notation. Boolean flags support --no-... negation via jsonargparse’s BooleanOptionalAction.
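The same negation behaviour exists in the standard library's argparse (Python 3.9+), which is a close analogue of jsonargparse's action for simple boolean flags:

```python
import argparse

parser = argparse.ArgumentParser()
# Registers both --debug and --no-debug for the same destination.
parser.add_argument("--debug", action=argparse.BooleanOptionalAction, default=False)

print(parser.parse_args(["--debug"]).debug)     # True
print(parser.parse_args(["--no-debug"]).debug)  # False
print(parser.parse_args([]).debug)              # False
```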

argparse (Entry Analysis)

See examples/argparse_trace/train_trace.py for schema generation & loading using static analysis (schema_by_entry_ast, load_config_for_entry).


Config Merge Order

  1. Pydantic defaults (from each @tunable Field / default value)
  2. JSON file loaded via --config (if provided)
  3. CLI overrides (flags explicitly present on the command line)

This precedence is verified by the test suite (tests/test_overrides.py).
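The precedence can be illustrated with plain dicts. This is a simplified sketch of the merge semantics, not tunableX's implementation:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


defaults  = {"train": {"epochs": 10, "batch_size": 32}}   # from Field defaults
from_file = {"train": {"epochs": 20}}                     # --config file
from_cli  = {"train": {"batch_size": 64}}                 # flags present on the command line

cfg = deep_merge(deep_merge(defaults, from_file), from_cli)
print(cfg)  # {'train': {'epochs': 20, 'batch_size': 64}}
```

Note that only flags explicitly present on the command line participate in the last merge step, so an untouched CLI flag never clobbers a value set in the config file.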


API Reference (Exports)

  • Decorator: tunable
  • Centralized parameters: TunableParameters (base class for inheritance-based namespaces)
  • Composition (apps): make_app_config_for, schema_for_apps, defaults_for_apps, load_app_config
  • Composition (entry): make_app_config_for_entry, schema_by_entry_ast, load_config_for_entry
  • Schema output: write_schema
  • Runtime: use_config
  • CLI helpers: add_flags_by_app, add_flags_by_entry, add_flags_by_trace (alias), build_cfg_from_file_and_args

Migration From Tracing

The previous API (schema_by_trace, add_flags_by_trace(entrypoint)) executed user code at schema-generation time to discover call chains. It has been superseded by static AST analysis:

  • Use schema_by_entry_ast(entrypoint) instead of schema_by_trace.
  • Use add_flags_by_entry (alias: add_flags_by_trace) for CLI flag generation.

Benefits:

  • No side‑effects or data loading just to build a config schema.
  • Faster repeated schema generation in CI / docs.
  • Works in restrictive or sandboxed environments.

Examples Directory

  • examples/myapp/pipeline.py – shared tunable functions (traditional approach).
  • examples/myapp/params.py + pipeline_params.py – centralized TunableParameters approach.
  • examples/argparse_app/train_app.py – classic app‑tag flow.
  • examples/jsonargparse_app/train_jsonarg_app.py – jsonargparse + tags.
  • examples/jsonargparse_app/train_jsonarg_params.py – jsonargparse + centralized parameters.
  • examples/argparse_trace/train_trace.py – entrypoint static analysis with schema generation.
  • examples/jsonargparse_trace/train_jsonarg_trace.py – jsonargparse + static analysis.
  • examples/trace_generate_schema.py – write schema + defaults to disk (AST based).

Run tests:

pytest -q

Philosophy

Keep tunable definition close to the logic; avoid giant central configs. Let the decorator accumulate structure automatically while remaining explicit and type‑checked. Provide zero‑execution schema generation so packaging, documentation, and deployment pipelines stay safe and reproducible.


Contributors

Thanks to all the people who have contributed to tunableX:

  • Jacques PAPPER (@jackpap) - Original author and maintainer
  • Vincent Drouet (@vincentdrouet) - Core contributor

AI Development Assistance

  • Claude Sonnet 4 - implementation
  • ChatGPT 5 - implementation

See CONTRIBUTORS.md for more details.

We welcome contributions! Feel free to open issues, submit PRs, or reach out with ideas.


License

MIT
