Project description

MoBagel Decanter AI SDK

Decanter AI is a powerful AutoML tool that enables everyone to build ML models and make predictions without a data science background. With the Decanter AI SDK, you can integrate Decanter AI into your application more easily with Python.

It supports actions such as data uploading, model training, and prediction, so you can run tasks more efficiently and access results more easily.

To learn more about Decanter AI and how you can benefit from AutoML, visit the MoBagel website and contact us to try it out!

How it works

  • Upload training and testing data as CSV files or pandas DataFrames.
  • Set up different standards and conduct customized experiments on the uploaded data.
  • Use different models to run predictions.
  • Get prediction results as a pandas DataFrame.

Requirements

Usage

Installation

pip install decanter-ai-sdk
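
To verify the installation, you can ask pip for the installed package metadata:

pip show decanter-ai-sdk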

Constructor

To use this SDK, you must first construct a client object.

from decanter_ai_sdk.client import Client

client = Client(
    auth_key="auth_API_key",
    project_id="project_id",
    host="host_url",
)
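
If you prefer not to hardcode credentials, here is a minimal sketch that reads them from environment variables (the variable names are hypothetical, not part of the SDK):

import os

from decanter_ai_sdk.client import Client

client = Client(
    auth_key=os.environ["DECANTER_AUTH_KEY"],      # hypothetical variable names;
    project_id=os.environ["DECANTER_PROJECT_ID"],  # use whatever your deployment defines
    host=os.environ["DECANTER_HOST"],
)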

Upload

Once the client is constructed, you can use it to upload your training and testing data, as either CSV files or pandas DataFrames. The upload function returns the ID of the uploaded data on the Decanter server.

import os

# Build an absolute path to the CSV file next to this script.
current_path = os.path.dirname(os.path.abspath(__file__))
train_file_path = os.path.join(current_path, "ts_train.csv")

# Upload the file in binary mode; the returned ID identifies it on the server.
train_file = open(train_file_path, "rb")
train_id = client.upload(train_file, "train_file")
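
The prediction examples further down assume a test_id as well; uploading the testing file works the same way (the file name here is just an illustration):

# Upload the testing file the same way to get a test_id for prediction.
test_file_path = os.path.join(current_path, "ts_test.csv")
test_file = open(test_file_path, "rb")
test_id = client.upload(test_file, "test_file")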

To upload a pandas DataFrame:

import io
import pandas as pd

df_train = pd.read_csv("./yourpath/train.csv")

# Serialize the DataFrame into an in-memory CSV buffer, then rewind it so
# the upload reads from the beginning.
csv_file = io.BytesIO()
df_train.to_csv(csv_file, index=False)
csv_file.seek(0)
train_table_id = client.upload(csv_file, "train_file")

Experiment

To conduct an experiment, first specify which type of data you are using, i.e., iid or ts. You can then set parameters, following our type hints, to customize your experiment. When the experiment finishes, the function returns an object from which you can read the experiment's attributes.

# Enum imports (module paths here assume the SDK's enums package; check
# your installed version for the exact locations).
from decanter_ai_sdk.enums.data_types import DataType
from decanter_ai_sdk.enums.evaluators import ClassificationMetric, RegressionMetric
from decanter_ai_sdk.enums.time_units import TimeUnit

exp_name = "my_experiment"  # name for this experiment run

# Training iid data
experiment = client.train_iid(
    experiment_name=exp_name,
    experiment_table_id=train_id,
    target="Survived",
    evaluator=ClassificationMetric.AUC,
    custom_column_types={
        "Pclass": DataType.categorical,
        "Parch": DataType.categorical,
    },
)

# Training ts data
experiment = client.train_ts(
    experiment_name=exp_name,
    experiment_table_id=train_id,
    target="Passengers",
    datetime="Month",
    time_groups=[],
    timeunit=TimeUnit.month,
    groupby_method="sum",
    max_model=5,
    evaluator=RegressionMetric.MAPE,
    custom_column_types={"Pclass": DataType.numerical},
)

To read the experiment's attributes, you can either access them directly with dot notation or use its helper methods.

# Experiment object usage
best_model = experiment.get_best_model()
model_list = experiment.get_model_list()
best_auc_model = experiment.get_best_model_by_metric(ClassificationMetric.AUC)

Prediction

Now you can use a trained model to run predictions.

# Predicting iid data
predict = client.predict_iid(
    keep_columns=[],
    non_negative=False,
    test_table_id=test_id,
    model=best_model
)
# Predicting ts data
predict = client.predict_ts(
    keep_columns=[],
    non_negative=False,
    test_table_id=test_id,
    model=best_model
)

To get the prediction result, do:

predict_data = predict.get_predict_df()
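
Since the result is a plain pandas DataFrame, you can inspect or persist it with the usual pandas tools, for example:

# Preview the first rows, then save the predictions to disk.
print(predict_data.head())
predict_data.to_csv("predictions.csv", index=False)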

Development

Installing Poetry

  1. Install Poetry following the official installation guide.
  2. poetry install # Project setup.
  3. poetry shell # Start your project in the Poetry env. (Optional if you use Conda to manage virtual environments.)

Now you can create your own branch to start developing new features.

Testing

To run tests, do:

poe test

To run integration tests, do:

  1. Rename .env.example to .env.

  2. Modify the .env file with the correct configurations.

  3. Run:

    poe test-e2e
    

Lint and format

To lint, do:

poe lint

To reformat, do:

poe format

Releasing

  1. poetry version [new_version]
  2. git commit -m "Bump version"
  3. git push origin main
  4. Create a new release on GitHub off the main branch, auto-generate the notes, and review the release notes.
  5. Publish the release.

Enums

#TODO

License

#TODO

TODO

#TODO

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

decanter_ai_sdk-0.1.18.tar.gz (36.3 kB)

Uploaded Source

Built Distribution

decanter_ai_sdk-0.1.18-py3-none-any.whl (46.3 kB)

Uploaded Python 3

File details

Details for the file decanter_ai_sdk-0.1.18.tar.gz.

File metadata

  • Download URL: decanter_ai_sdk-0.1.18.tar.gz
  • Upload date:
  • Size: 36.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.2 CPython/3.8.18 Linux/6.5.0-1016-azure

File hashes

Hashes for decanter_ai_sdk-0.1.18.tar.gz

  • SHA256: fe2544ffc67226d60a073c0cd6e4d90a33fd7d91f063d3a42ddcba483a08ef6e
  • MD5: 4b96759758eeb119ce242907531fad73
  • BLAKE2b-256: e4471b7862151cde2c30c1fee4e3d07817feb971908a7d0535516d803a6c5444

See more details on using hashes here.

File details

Details for the file decanter_ai_sdk-0.1.18-py3-none-any.whl.

File metadata

  • Download URL: decanter_ai_sdk-0.1.18-py3-none-any.whl
  • Upload date:
  • Size: 46.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.2 CPython/3.8.18 Linux/6.5.0-1016-azure

File hashes

Hashes for decanter_ai_sdk-0.1.18-py3-none-any.whl

  • SHA256: e1fbee1561de0dc106875a33a5563ee2e24070d3fd8aee929e5825e8afa8726c
  • MD5: ba885ac8161c38d1c05154e20e8bb327
  • BLAKE2b-256: 74ca161ef1101d752306cccd04ee6a8844dee5e6363ba498ef56fc95f0a55386

See more details on using hashes here.
