
A Python package for supervised and unsupervised machine learning.

Project description

## pycaret

pycaret is a free, open-source machine learning library for the Python programming language. It is built on top of several popular Python machine learning libraries. Its primary objective is to reduce the cycle time from hypothesis to insights by providing an easy-to-use, high-level, unified API. pycaret's vision is to become the de facto standard for teaching machine learning and data science. Its strength is an easy-to-use unified interface for both supervised and unsupervised machine learning problems. It saves the time and effort that citizen data scientists, students, and researchers spend coding (or learning to code) against different interfaces, so that they can focus on the business problem and value creation.

Current Release

The current release is beta 0.0.13 (as of January 10, 2020). A full release is targeted for January 31, 2020.

Features Currently Available

As of beta 0.0.13, the following modules are generally available:

  • pycaret.datasets
  • pycaret.classification (binary and multiclass)
  • pycaret.regression
  • pycaret.nlp
  • pycaret.arules
  • pycaret.anomaly
  • pycaret.clustering

Future Release

The following features are targeted for future releases (beta 0.0.14 and beta 0.0.15):

  • pycaret.preprocess

Installation

Dependencies

Please read requirements.txt for the list of requirements. They are installed automatically when pycaret is installed using pip.

User Installation

The easiest way to install pycaret is with pip:

```
pip install pycaret
```

Quick Start

As of beta 0.0.13, the classification, regression, nlp, arules, anomaly and clustering modules are available. Future releases will include preprocessing, time series and recommender system modules.

Classification / Regression

  1. Getting data from the pycaret repository

```python
from pycaret.datasets import get_data
juice = get_data('juice') # classification dataset
```
  2. Initializing the pycaret environment setup

```python
from pycaret.classification import * # for classification
from pycaret.regression import * # for regression
exp1 = setup(juice, 'Purchase')
```
  3. Creating a simple logistic regression (includes fitting, CV and metric evaluation)

```python
lr = create_model('lr')
```
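Under the hood, pycaret builds on scikit-learn, so "fitting, CV and metric evaluation" can be sketched with scikit-learn directly. This is an illustrative approximation, not pycaret's exact internals; the synthetic dataset and fold count are our assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in for the data passed to setup()
X, y = make_classification(n_samples=200, n_features=10, random_state=42)

lr = LogisticRegression(max_iter=1000)

# Stratified k-fold cross-validation with a standard metric, in the spirit
# of what create_model('lr') reports
scores = cross_val_score(lr, X, y, cv=StratifiedKFold(n_splits=10),
                         scoring="accuracy")
print(round(scores.mean(), 4))

lr.fit(X, y)  # final fit on the full training data
```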

List of available estimators:

Logistic Regression (lr)
K Nearest Neighbour (knn)
Naive Bayes (nb)
Decision Tree (dt)
Support Vector Machine - Linear (svm)
SVM - Radial Basis Function Kernel (rbfsvm)
Gaussian Process Classifier (gpc)
Multi-Layer Perceptron (mlp)
Ridge Classifier (ridge)
Random Forest (rf)
Quadratic Discriminant Analysis (qda)
AdaBoost (ada)
Gradient Boosting Classifier (gbc)
Linear Discriminant Analysis (lda)
Extra Trees Classifier (et)
Extreme Gradient Boosting - xgboost (xgboost)
Light Gradient Boosting - Microsoft LightGBM (lightgbm)

  4. Comparing all models at once

```python
compare_models()
```
  5. Tuning a model using pre-built search grids

```python
tuned_xgb = tune_model('xgboost')
```
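Conceptually, tuning from a pre-built grid amounts to a cross-validated search over a fixed hyperparameter space. A rough scikit-learn sketch, using a decision tree in place of xgboost (which may not be installed) and an illustrative grid of our own choosing:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

# An illustrative pre-defined grid (not pycaret's actual grid)
param_grid = {"max_depth": [2, 3, 5, 10], "min_samples_leaf": [1, 2, 5]}

# Sample candidate settings and score each with 5-fold cross-validation
search = RandomizedSearchCV(DecisionTreeClassifier(random_state=0),
                            param_grid, n_iter=5, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_)
```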
  6. Ensembling a model

```python
dt = create_model('dt')
dt_bagging = ensemble_model(dt, method='Bagging')
dt_boosting = ensemble_model(dt, method='Boosting')
```
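The two ensembling methods correspond to classic scikit-learn ensembles; a rough sketch on an illustrative synthetic dataset (estimator counts are our assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
base = DecisionTreeClassifier(random_state=0)

# method='Bagging': fit copies of the base estimator on bootstrap
# resamples of the data and aggregate their votes
bagged = BaggingClassifier(base, n_estimators=10, random_state=0).fit(X, y)

# method='Boosting': fit estimators sequentially, upweighting the rows
# the previous round misclassified
boosted = AdaBoostClassifier(n_estimators=10, random_state=0).fit(X, y)
```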
  7. Creating a voting classifier

```python
voting_all = blend_models() # creates voting classifier for entire library

# create voting classifier for specific models
lr = create_model('lr')
svm = create_model('svm')
mlp = create_model('mlp')
xgboost = create_model('xgboost')

voting_clf2 = blend_models( [ lr, svm, mlp, xgboost ] )
```
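A voting classifier combines the predictions of several fitted models by majority vote. The scikit-learn equivalent, with an illustrative trio of estimators:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Hard voting: each estimator casts one vote per row, majority wins
voting = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("nb", GaussianNB()),
    ("dt", DecisionTreeClassifier(random_state=0)),
], voting="hard").fit(X, y)
```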
  8. Stacking models in a single layer

```python
# create individual classifiers
lr = create_model('lr')
svm = create_model('svm')
mlp = create_model('mlp')
xgboost = create_model('xgboost')

stacker = stack_models( [lr, svm, mlp], meta_model = xgboost )
```
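In stacking, the base models' out-of-fold predictions become the input features of a meta-model. A scikit-learn sketch with illustrative base learners and a logistic regression standing in for the meta-model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

# cv=5: each base learner's predictions for a row come from a fold
# that did not train on that row, avoiding leakage into the meta-model
stacker = StackingClassifier(
    estimators=[("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
).fit(X, y)
```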
  9. Stacking models in multiple layers

```python
# create individual classifiers
lr = create_model('lr')
svm = create_model('svm')
mlp = create_model('mlp')
gbc = create_model('gbc')
nb = create_model('nb')
lightgbm = create_model('lightgbm')
knn = create_model('knn')
xgboost = create_model('xgboost')

stacknet = create_stacknet( [ [lr, svm, mlp], [gbc, nb], [lightgbm, knn] ], meta_model = xgboost )
# the meta-model defaults to Logistic Regression
```
  10. Plotting a model

```python
lr = create_model('lr')
plot_model(lr, plot='auc')
```

List of available plots:

Area Under the Curve (auc)
Discrimination Threshold (threshold)
Precision Recall Curve (pr)
Confusion Matrix (confusion_matrix)
Class Prediction Error (error)
Classification Report (class_report)
Decision Boundary (boundary)
Recursive Feature Elimination (rfe)
Learning Curve (learning)
Manifold Learning (manifold)
Calibration Curve (calibration)
Validation Curve (vc)
Dimension Learning (dimension)
Feature Importance (feature)
Model Hyperparameter (parameter)

  11. Evaluating a model

```python
lr = create_model('lr')
evaluate_model(lr) # displays a user interface for interactive plotting
```
  12. Interpreting tree-based models

```python
xgboost = create_model('xgboost')
interpret_model(xgboost)
```
  13. Saving a model for deployment

```python
lr = create_model('lr')
save_model(lr, 'lr_23122019')
```
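Saving a model for deployment boils down to serializing the fitted estimator to disk so a later process can reload it and predict. A minimal sketch of the idea with the standard library's pickle (the filename is illustrative):

```python
import pickle

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Persist the fitted model to disk
with open("lr_model.pkl", "wb") as f:
    pickle.dump(model, f)

# Later (e.g. at deployment time): restore it and predict
with open("lr_model.pkl", "rb") as f:
    restored = pickle.load(f)
```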
  14. Saving the entire experiment pipeline

```python
save_experiment('expname1')
```
  15. Loading a model / experiment

```python
m = load_model('lr_23122019')
e = load_experiment('expname1')
```

AutoML

Every module has an AutoML function built in, and it is easy to run:

```python
from pycaret.datasets import get_data
juice = get_data('juice')

from pycaret.classification import *
exp1 = setup(juice, 'Purchase')

aml1 = automl() # same for regression
```

Getting Started Tutorials

Tutorials are a work in progress and will be uploaded to our git page by January 7, 2020.

Documentation

Documentation is a work in progress. It will be uploaded to our website http://www.pycaret.org as soon as it is available. (Target availability: January 21, 2020)

Contributions

Contributions are most welcome. To contribute, please reach out to moez.ali@queensu.ca.

License

Copyright 2019 pycaret

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Project details


Release history

Download files

Download the file for your platform.

Source Distribution

pycaret-0.0.13.tar.gz (131.8 kB)

Uploaded Source

Built Distribution


pycaret-0.0.13-py3-none-any.whl (137.0 kB)

Uploaded Python 3

File details

Details for the file pycaret-0.0.13.tar.gz.

File metadata

  • Download URL: pycaret-0.0.13.tar.gz
  • Upload date:
  • Size: 131.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.4.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.4

File hashes

Hashes for pycaret-0.0.13.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 5473d6cc4dfffd7dd6a8f1c6e8124d9ad9a1c3b64dc56eb75b799921ef71fd9c |
| MD5 | ee3f678cdad0c8c3a0baa3584119499b |
| BLAKE2b-256 | 2273182d5fa36288bdf1917d2a38efe9fb43caf4e92a4a1ab61dd2dd9a45ce91 |

See more details on using hashes here.
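A downloaded file can be checked against the published SHA256 digest locally. A minimal standard-library sketch (the helper function name is ours):

```python
import hashlib

def sha256_of(path, chunk_size=8192):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archives do not load fully into memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: compare sha256_of("pycaret-0.0.13.tar.gz") with the digest above.
```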

File details

Details for the file pycaret-0.0.13-py3-none-any.whl.

File metadata

  • Download URL: pycaret-0.0.13-py3-none-any.whl
  • Upload date:
  • Size: 137.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.4.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.4

File hashes

Hashes for pycaret-0.0.13-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | bfe6480f9e191dfe48af1a64e54d18d913b13f4d829e663d3b470c38831d9350 |
| MD5 | 94f2dc8a36aec491511b4481352ea678 |
| BLAKE2b-256 | 8db49ec049818ee22e81b384452adf5d20a5859c9a2d224ab4f6c37063ea62cf |

See more details on using hashes here.
