
Better Regressions

Advanced regression methods with an sklearn-like interface.

Current Features

  • Linear:
    • Configurable regularization: Ridge with given alpha / BayesianRidge / ARD
    • "Better bias" option to properly regularize the intercept term
  • Scaler:
    • Configurable preprocessing: Standard scaling (by second moment) / Quantile transformation with uniform/normal output / Power transformation
    • AutoScaler to automatically select the best scaling method based on validation split
  • Smooth: Boosting-based regression using smooth functions for features
    • SuperSmoother: Adaptive-span smoother for arbitrarily complex functions.
    • Angle: Bagging of piecewise-linear functions; less flexible, but for that reason more robust to overfitting.
  • Soft: Mixture of regressors based on quantile classification
  • Stabilize: Robust scaling & clipping transformation for features/targets
  • AutoClassifier: Classification with automatic model selection (LogisticRegression or XGBoost, with auto depth selection)
  • EDA: Exploratory Data Analysis utilities
    • plot_distribution: Visualize sample distributions with fitted t-distribution parameters
    • plot_trend: Automatically detect and visualize relationships between variables + Pearson/Spearman correlation
      • For discrete features: Shows violin plots with distribution at each value
      • For continuous features: Fits trend lines with variance estimation and confidence intervals
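
The AutoScaler idea above — try several preprocessing methods and keep whichever does best on a validation split — can be illustrated with plain scikit-learn. This is a minimal sketch of the concept, not better-regressions' own implementation; the candidate names and the Ridge baseline are assumptions for the example.

```python
# Pick the best feature scaling for a Ridge model by validation MSE.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer, QuantileTransformer, StandardScaler

X, y = make_regression(n_samples=300, n_features=5, noise=0.5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "standard": StandardScaler(),
    "quantile-normal": QuantileTransformer(output_distribution="normal", n_quantiles=100),
    "power": PowerTransformer(),
}

def val_mse(scaler):
    # Fit the scaler + model on the training split, score on the validation split.
    model = make_pipeline(scaler, Ridge(alpha=1.0)).fit(X_tr, y_tr)
    return mean_squared_error(y_val, model.predict(X_val))

best_name = min(candidates, key=lambda name: val_mse(candidates[name]))
```

The same selection loop works for target transformations as well, as long as the validation error is measured on the original target scale.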

Installation

pip install better-regressions

Basic Usage

from better_regressions import auto_angle, AutoClassifier
from better_regressions.eda import plot_distribution, plot_trend
from sklearn.datasets import make_regression, make_moons

# Regression example
X, y = make_regression(n_samples=100, n_features=5, noise=0.1)
model = auto_angle(n_breakpoints=2)
model.fit(X, y)
y_pred = model.predict(X)
print(repr(model))

# Classification example
Xc, yc = make_moons(n_samples=200, noise=0.3)
clf = AutoClassifier(depth="auto")
clf.fit(Xc, yc)
yc_pred = clf.predict(Xc)

# EDA example
plot_distribution(y, name="Target Distribution")
plot_trend(X[:, 0], y, name="Feature 0 vs Target")
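
The "better bias" option mentioned in the feature list addresses a quirk of standard Ridge: the intercept is usually left unpenalized. The idea can be sketched in plain scikit-learn by folding the intercept into the weight vector as a constant feature, so it receives the same regularization as the other coefficients. This is an illustration of the concept only, not better-regressions' code.

```python
# Regularize the intercept by treating it as a constant feature.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 10.0 + rng.normal(scale=0.1, size=200)

# Append a column of ones; with fit_intercept=False, its coefficient
# plays the role of the bias and is shrunk by the ridge penalty.
X_aug = np.hstack([X, np.ones((len(X), 1))])
model = Ridge(alpha=1.0, fit_intercept=False).fit(X_aug, y)
bias = model.coef_[-1]  # close to the true intercept of 10, slightly shrunk
```

With a modest alpha and enough samples the penalized bias stays near the true intercept; the point is that its shrinkage is now controlled by the same hyperparameter as the rest of the model.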

