Allow SKLearn predictions to run on database systems in pure SQL.


OrbitalML

Convert SKLearn pipelines into SQL queries for execution in a database without the need for a Python environment.

See the examples directory for example pipelines, and the documentation for more details.

Warning:

This is a work in progress.
You might encounter bugs or missing features.

Note:

Not all transformations and models can be represented as SQL queries,
so OrbitalML might not be able to implement the specific pipeline you are using.

Getting Started

Install OrbitalML:

$ pip install orbitalml

Prepare some data:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

COLUMNS = ["sepal.length", "sepal.width", "petal.length", "petal.width"]

iris = load_iris(as_frame=True)
iris_x = iris.data.set_axis(COLUMNS, axis=1)

# SQL and OrbitalML don't like dots in column names, so replace them with underscores
iris_x.columns = COLUMNS = [cname.replace(".", "_") for cname in COLUMNS]

X_train, X_test, y_train, y_test = train_test_split(
    iris_x, iris.target, test_size=0.2, random_state=42
)

Define a Scikit-Learn pipeline and train it:

from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipeline = Pipeline(
    [
        ("preprocess", ColumnTransformer([("scaler", StandardScaler(with_std=False), COLUMNS)],
                                        remainder="passthrough")),
        ("linear_regression", LinearRegression()),
    ]
)
pipeline.fit(X_train, y_train)

Convert the pipeline to OrbitalML:

import orbitalml
import orbitalml.types

orbitalml_pipeline = orbitalml.parse_pipeline(pipeline, features={
    "sepal_length": orbitalml.types.DoubleColumnType(),
    "sepal_width": orbitalml.types.DoubleColumnType(),
    "petal_length": orbitalml.types.DoubleColumnType(),
    "petal_width": orbitalml.types.DoubleColumnType(),
})

You can print the pipeline to see the result:

>>> print(orbitalml_pipeline)

ParsedPipeline(
    features={
        sepal_length: DoubleColumnType()
        sepal_width: DoubleColumnType()
        petal_length: DoubleColumnType()
        petal_width: DoubleColumnType()
    },
    steps=[
        merged_columns=Concat(
            inputs: sepal_length, sepal_width, petal_length, petal_width,
            attributes: 
             axis=1
        )
        variable1=Sub(
            inputs: merged_columns, Su_Subcst=[5.809166666666666, 3.0616666666666665, 3.7266666666666666, 1.18333333...,
            attributes: 
        )
        multiplied=MatMul(
            inputs: variable1, coef=[-0.11633479416518255, -0.05977785171980231, 0.25491374699772246, 0.5475959...,
            attributes: 
        )
        resh=Add(
            inputs: multiplied, intercept=[0.9916666666666668],
            attributes: 
        )
        variable=Reshape(
            inputs: resh, shape_tensor=[-1, 1],
            attributes: 
        )
    ],
)
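The printed steps are simple arithmetic: `Sub` centers each feature on its training mean (the `StandardScaler(with_std=False)` step), `MatMul` takes the dot product with the regression coefficients, and `Add` applies the intercept. A pure-Python sketch of the same arithmetic, using the constants visible in the printed pipeline (the trained values will differ slightly from run to run if you change `random_state`):

```python
# Pure-Python sketch of the parsed pipeline steps: Sub -> MatMul -> Add.
# Constants are the training means, coefficients, and intercept shown above.
MEANS = [5.809166666666666, 3.0616666666666665, 3.7266666666666666, 1.1833333333333333]
COEF = [-0.11633479416518255, -0.05977785171980231, 0.25491374699772246, 0.5475959809777828]
INTERCEPT = 0.9916666666666668

def predict(row):
    """row = [sepal_length, sepal_width, petal_length, petal_width]"""
    centered = [x - m for x, m in zip(row, MEANS)]                  # Sub step
    return sum(c * w for c, w in zip(centered, COEF)) + INTERCEPT   # MatMul + Add steps

# One hypothetical iris measurement
print(predict([5.1, 3.5, 1.4, 0.2]))
```

A row of exactly the training means predicts the bare intercept, since the centered vector is all zeros; this is the same expression the generated SQL spells out column by column.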

Now we can generate the SQL from the pipeline:

sql = orbitalml.export_sql("DATA_TABLE", orbitalml_pipeline, dialect="duckdb")

And check the resulting query:

>>> print(sql)

SELECT ("t0"."sepal_length" - 5.809166666666666) * -0.11633479416518255 + 0.9916666666666668 +  
       ("t0"."sepal_width" - 3.0616666666666665) * -0.05977785171980231 + 
       ("t0"."petal_length" - 3.7266666666666666) * 0.25491374699772246 + 
       ("t0"."petal_width" - 1.1833333333333333) * 0.5475959809777828 
AS "variable" FROM "DATA_TABLE" AS "t0"

Once the SQL is generated, you can use it to run the pipeline on a database. From here on, the SQL can be exported and reused elsewhere:

>>> import duckdb
>>> print("\nPrediction with SQL")
>>> duckdb.register("DATA_TABLE", X_test)
>>> print(duckdb.sql(sql).df()["variable"][:5].to_numpy())

Prediction with SQL
[ 1.23071715 -0.04010441  2.21970287  1.34966889  1.28429336]
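Because the generated SQL is a plain string, persisting it for reuse in another service or a scheduled job is straightforward; a minimal sketch (the file name and the shortened query are illustrative):

```python
from pathlib import Path

# A query string as produced by orbitalml.export_sql (shortened for illustration).
sql = 'SELECT ... AS "variable" FROM "DATA_TABLE" AS "t0"'

# Persist the query so it can run anywhere SQL runs, without Python or OrbitalML.
out = Path("iris_prediction.sql")
out.write_text(sql)

print(out.read_text())
```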

We can verify that the predictions match those of Scikit-Learn by running the original pipeline on the same data:

>>> print("\nPrediction with SciKit-Learn")
>>> print(pipeline.predict(X_test)[:5])

Prediction with SciKit-Learn
[ 1.23071715 -0.04010441  2.21970287  1.34966889  1.28429336]
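The two outputs agree to the printed precision. When checking programmatically, compare with a numeric tolerance rather than string equality, since floating-point arithmetic in the database and in NumPy can differ in the last bits. A small stdlib-only sketch using the values printed above:

```python
import math

# Predictions from the SQL query and from the Scikit-Learn pipeline (values above).
sql_pred = [1.23071715, -0.04010441, 2.21970287, 1.34966889, 1.28429336]
sklearn_pred = [1.23071715, -0.04010441, 2.21970287, 1.34966889, 1.28429336]

matches = all(math.isclose(a, b, rel_tol=1e-7) for a, b in zip(sql_pred, sklearn_pred))
print(matches)  # True
```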

Supported Models

OrbitalML currently supports the following models:

  • Linear Regression
  • Logistic Regression
  • Lasso Regression
  • Elastic Net
  • Decision Tree Regressor
  • Decision Tree Classifier
  • Random Forest Classifier
  • Gradient Boosting Regressor
  • Gradient Boosting Classifier

Testing

Setup testing environment:

$ uv sync --no-dev --extra test

Run Tests:

$ uv run pytest -v

Try Examples:

$ uv run examples/pipeline_lineareg.py

Development

Setup a development environment:

$ uv sync
