Low-code feature search and enrichment library for machine learning

Upgini: free automated data enrichment library for machine learning -
only the accuracy-improving features, in 2 minutes

Automatically searches through thousands of ready-to-use features from public and community shared data sources and enriches your training dataset with only the relevant features


Quick Start in Colab » | Upgini.com | Sign In | Slack Community


❔ Overview

Upgini is a simple feature search & enrichment library in Python. With Upgini you spend less time on external data search and feature engineering - it is done for you automatically. Just use your labeled dataset to initiate a search through thousands of features and data sources, including public datasets and scraped data shared by the data science community. Only the relevant features that improve the predictive power of your ML model are returned.
Motivation: for most supervised ML models, external data & features boost accuracy significantly better than any hyperparameter tuning. But the lack of automated, time-efficient search tools for external data blocks massive adoption of external features in ML pipelines.
We want to radically simplify feature search and delivery to make external data a standard approach - much like hyperparameter tuning is for machine learning today.
Mission: democratize access to data sources for the data science community

🚀 Awesome features

⭐️ Automatically find only the relevant features that give an accuracy improvement for your ML model - not just features correlated with the target variable, which in 9 out of 10 cases gives zero accuracy improvement in real-world ML applications
⭐️ Calculate the accuracy metrics and uplift you'll get if you enrich your existing ML model with external features
⭐️ Check the stability of the accuracy gain from external data on out-of-time intervals and verification datasets. Mitigate the risks of unstable external data dependencies in your ML pipeline
⭐️ Curated and updated data sources, including public datasets and community-shared data
⭐️ Easy to use - a single request to enrich your training dataset with all of the keys at once:

  • date / datetime
  • phone number
  • postal / ZIP code
  • hashed email / HEM
  • country
  • IP address

⭐️ Scikit-learn compatible interface for quick data integration with existing ML pipelines
⭐️ Support for most common supervised ML tasks on tabular data:

☑️ binary classification
☑️ multiclass classification
☑️ regression
☑️ time series prediction

🌎 Connected data sources and coverage

Two types of data sources with pre-computed features - public data and community-shared data:

  • Public data is available from the public sector, academic institutions, and other sources through open data portals
  • Community-shared data consists of royalty-/license-free datasets or features from the data science community (our users). It includes both public and scraped data.

📊 Data coverage, statistics and updates

Total: 239 countries and up to 41 years of history

| Data source | Countries | History, years | Update | Search keys | API key required |
|---|---|---|---|---|---|
| Historical weather & weather forecast by postal/ZIP code | 68 | 22 | Monthly | date, country, postal/ZIP code | No |
| International holidays & events, workweek calendar | 232 | 22 | Monthly | date, country | No |
| Consumer Confidence index | 44 | 22 | Monthly | date, country | No |
| World economic indicators | 191 | 41 | Monthly | date, country | No |
| Markets data | - | 17 | Monthly | date, datetime | No |
| World mobile network coverage by postal/ZIP code | 167 | - | Monthly | country, postal/ZIP code | No |
| World demographic data by postal/ZIP code | 90 | - | Annual | date, country, postal/ZIP code | No |
| World house prices by postal/ZIP code | 44 | - | Annual | date, country, postal/ZIP code | No |
| Public social media profile data for email & phone | 104 | - | Monthly | email/HEM, phone number | Yes |
| Geolocation profile for phone & IPv4 & email | 239 | - | Monthly | date, email/HEM, phone number, IPv4 | Yes |
| 🔜 Email/WWW domain profile | - | - | - | - | - |

👉 More details on datasets and features here

🏁 Quick start and guides

1. Quick start guide (use as a template)

Search new features for Kaggle Store Item Demand Forecasting Challenge. The goal is to predict future sales of different goods in stores based on a 5-year history of sales. The evaluation metric is SMAPE.
Run quick start guide notebook inside your browser:

Open example in Google Colab   Open in Binder  

The competition dataset was split into train (years 2013-2016) and test (year 2017) parts. FeaturesEnricher was fitted on the train part, and both datasets were enriched with external features. Finally, an ML model was fitted on both the initial and the enriched datasets to compare accuracy, with a solid improvement of the evaluation metric achieved on the enriched dataset.

2. How to boost ML model accuracy for Kaggle TOP1 Leaderboard in 10 minutes

  • The goal is accuracy improvement for TOP1 winning Kaggle solution from new relevant external features & data.
  • Kaggle Competition is a product sales forecasting, evaluation metric is SMAPE.

3. How to do low-code feature engineering for AutoML tools

  • The goal is to save time on feature search and engineering. If there are some ready-to-use external features and data sources, let's use them to maximize overall AutoML accuracy, right out of the box.
  • Kaggle Competition is a product sales forecasting, evaluation metric is SMAPE.
  • Low-code AutoML tools: Upgini and PyCaret

4. How to improve accuracy of Multivariate Time Series forecast from external features & data

  • The goal is accuracy improvement of a multivariate time series prediction from new relevant external features & data. The main challenge here is the data & feature enrichment strategy when a component of the multivariate TS depends not only on its past values but also on other components.
  • Kaggle Competition is a product sales forecasting, evaluation metric is RMSLE.

5. How to speed up feature engineering hypothesis tests with ready-to-use external features

  • The goal is to save time on external data wrangling and feature calculation code for hypothesis tests. The key challenge here is the time-dependent representation of information in the training dataset, which is uncommon for credit default prediction tasks. As a result, a special data enrichment strategy is used.
  • Kaggle Competition is a credit default prediction, evaluation metric is normalized Gini coefficient.

Install

🐍 Install from PyPI

%pip install upgini
🐳 Docker-way
Clone the repo - $ git clone https://github.com/upgini/upgini - or download the upgini git repo locally,
then follow the steps below to build a docker container 👇

1. Build docker image from cloned git repo:
cd upgini
docker build -t upgini .


...or directly from GitHub:

DOCKER_BUILDKIT=0 docker build -t upgini git@github.com:upgini/upgini.git#main

2. Run docker image:
docker run -p 8888:8888 upgini

3. Open http://localhost:8888?token=<your_token_from_console_output> in your browser

💻 How it works?

1. 💡 Use your labeled training dataset for search

You can use your labeled training dataset "as is" to initiate the search. Under the hood, we'll search for relevant data using:

  • search keys from the training dataset, to match records from potential data sources with new features
  • labels from the training dataset, to estimate the relevancy of a feature or dataset for your ML task and to calculate feature importance metrics
  • existing features from the training dataset, to find only the external datasets and features that give an accuracy improvement on top of your data and to estimate the accuracy uplift (optional)

Load the training dataset into a pandas dataframe and separate the feature columns from the label column in the scikit-learn way:

import pandas as pd
# labeled training dataset - customer_churn_prediction_train.csv
train_df = pd.read_csv("customer_churn_prediction_train.csv")
X = train_df.drop(columns="churn_flag")
y = train_df["churn_flag"]

2. 🔦 Choose one or multiple columns as search keys

Search key columns will be used to match records from all potential external data sources / features 👓.
Define one or multiple columns as search keys when initializing the FeaturesEnricher class.

from upgini import FeaturesEnricher, SearchKey
enricher = FeaturesEnricher(
    search_keys={
        "subscription_activation_date": SearchKey.DATE,
        "country": SearchKey.COUNTRY,
        "zip_code": SearchKey.POSTAL_CODE
    })

✨ Search key types we support (more to come!)

| Search key | Description | Allowed pandas dtypes (python types) | Example |
|---|---|---|---|
| SearchKey.EMAIL | e-mail | object(str), string | support@upgini.com |
| SearchKey.HEM | sha256(lowercase(email)) | object(str), string | 0e2dfefcddc929933dcec9a5c7db7b172482814e63c80b8460b36a791384e955 |
| SearchKey.IP | IP address (version 4) | object(str, ipaddress.IPv4Address), string, int64 | 192.168.0.1 |
| SearchKey.PHONE | phone number, E.164 standard | object(str), string, int64, float64 | 443451925138 |
| SearchKey.DATE | date | object(str), string, datetime64[ns], period[D] | 2020-02-12 (ISO-8601 standard), 12.02.2020 (non-standard notation) |
| SearchKey.DATETIME | datetime | object(str), string, datetime64[ns], period[D] | 2020-02-12 12:46:18, 12:46:18 12.02.2020 |
| SearchKey.COUNTRY | country, ISO-3166 code | object(str), string | GB, US, IN |
| SearchKey.POSTAL_CODE | postal code a.k.a. ZIP code; can only be used together with SearchKey.COUNTRY | object(str), string | 21174, 061107, SE-999-99 |
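For instance, a SearchKey.HEM value can be computed from a raw e-mail with Python's standard hashlib - a minimal sketch matching the sha256(lowercase(email)) definition above (the helper name is ours; Upgini also accepts plain e-mails via SearchKey.EMAIL):

```python
import hashlib

def hashed_email(email: str) -> str:
    # HEM = sha256 hex digest of the trimmed, lower-cased e-mail address
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

print(hashed_email("Support@upgini.com"))  # 64-character hex string
```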

For the meaning types SearchKey.DATE/SearchKey.DATETIME with dtypes object or string, you must specify the date/datetime format by passing the date_format parameter to FeaturesEnricher. For example:

from upgini import FeaturesEnricher, SearchKey
enricher = FeaturesEnricher(
    search_keys={"subscription_activation_date": SearchKey.DATE},
    date_format="%Y-%d-%m"
)

⚠️ Requirements for search initialization dataset

We do dataset verification and cleaning under the hood, but there are still some requirements to follow:

  • Pandas dataframe representation
  • Correct label column types: boolean/integers/strings for binary and multiclass labels, floats for regression
  • At least one column defined as a search key
  • Min size after deduplication by search key column and NaNs removal: 100 records
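These requirements can be sanity-checked before starting a search - a minimal sketch with hypothetical column names (the actual verification is done by the library itself):

```python
import pandas as pd

def check_search_dataset(df: pd.DataFrame, search_keys: list, label: str) -> None:
    # requirement: pandas dataframe representation
    assert isinstance(df, pd.DataFrame), "expected a pandas dataframe"
    # requirement: at least one column defined as a search key
    assert all(k in df.columns for k in search_keys), "missing search key column(s)"
    # requirement: >= 100 records after NaN removal and deduplication by search keys
    cleaned = df.dropna(subset=search_keys + [label]).drop_duplicates(subset=search_keys)
    assert len(cleaned) >= 100, f"need at least 100 records, got {len(cleaned)}"

# hypothetical dataset that satisfies the requirements
df = pd.DataFrame({
    "subscription_activation_date": pd.date_range("2020-01-01", periods=120).astype(str),
    "churn_flag": [0, 1] * 60,
})
check_search_dataset(df, ["subscription_activation_date"], "churn_flag")  # passes silently
```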

3. 🔍 Start your first feature search!

The main abstraction you interact with is FeaturesEnricher, a scikit-learn compatible estimator that you can easily add to your existing ML pipelines. First, create an instance of the FeaturesEnricher class. Once it is created, call:

  • fit to search for relevant datasets & features
  • then transform to enrich your dataset with features from the search result

Let's try it out!

import pandas as pd
from upgini import FeaturesEnricher, SearchKey

# load labeled training dataset to initiate search
train_df = pd.read_csv("customer_churn_prediction_train.csv")
X = train_df.drop(columns="churn_flag")
y = train_df["churn_flag"]

# now we're going to create `FeaturesEnricher` class
enricher = FeaturesEnricher(
    search_keys={
        "subscription_activation_date": SearchKey.DATE,
        "country": SearchKey.COUNTRY,
        "zip_code": SearchKey.POSTAL_CODE
    })

# everything is ready to fit! For 200k records, fitting should take around 10 minutes;
# we'll send an email notification on completion - just register on profile.upgini.com
enricher.fit(X, y)

That's all! With a fitted FeaturesEnricher, any pandas dataframe with exactly the same data schema can be enriched with features from the search results. Use the transform method and let the magic do the rest 🪄

# load dataset for enrichment
test_x = pd.read_csv("test.csv")
# enrich it!
enriched_test_features = enricher.transform(test_x)
enriched_test_features.head()

4. 📈 Evaluate feature importances (SHAP values) from the search result

The FeaturesEnricher class has two properties for feature importances, which are filled after fit - feature_names_ and feature_importances_:

  • feature_names_ - feature names from the search result and, if the parameter keep_input=True was used, the initial columns from the search dataset as well
  • feature_importances_ - SHAP values for the features from the search result, in the same order as in feature_names_

It also has a get_features_info() method, which returns a pandas dataframe with the features and full statistics after fit, including SHAP values and match rates:

enricher.get_features_info()
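Since the two properties are aligned by position, they can also be combined manually into a ranked dataframe - a sketch with illustrative values (real ones come from a fitted enricher):

```python
import pandas as pd

def rank_features(feature_names, feature_importances) -> pd.DataFrame:
    # pair the aligned lists and sort by SHAP value, highest first
    return (
        pd.DataFrame({"feature": feature_names, "shap_value": feature_importances})
        .sort_values("shap_value", ascending=False)
        .reset_index(drop=True)
    )

# illustrative stand-ins for enricher.feature_names_ / enricher.feature_importances_
ranking = rank_features(["f_weather", "f_holiday", "f_macro"], [0.12, 0.31, 0.05])
print(ranking["feature"].tolist())  # ['f_holiday', 'f_weather', 'f_macro']
```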

You can get more details about FeaturesEnricher at runtime using docstrings, for example, via help(FeaturesEnricher) or help(FeaturesEnricher.fit).

🧹 Search dataset validation

We validate and clean the search initialization dataset under the hood:
✂️ Check the format of your search key columns
✂️ Check the label column for zero variance
✂️ Check the dataset for full row duplicates. If we find any, we remove the duplicated rows and report the share of duplicates
✂️ Check for inconsistent labels - rows with the same features and keys but different labels are removed, and their share is reported
✂️ Remove columns with zero variance - we treat any non-search-key column in the search dataset as a feature, so zero-variance columns are removed
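The duplicate-handling and zero-variance steps can be approximated in plain pandas - a sketch of the idea, not the library's actual implementation (for simplicity it treats every non-label column as a feature):

```python
import pandas as pd

def clean_search_dataset(df: pd.DataFrame, label: str) -> pd.DataFrame:
    # 1) drop full row duplicates
    df = df.drop_duplicates()
    feature_cols = [c for c in df.columns if c != label]
    # 2) drop inconsistent labels: after step 1, rows that still share all
    #    feature/key values must differ in the label, so drop them all
    df = df.drop_duplicates(subset=feature_cols, keep=False)
    # 3) drop feature columns with zero variance
    zero_var = [c for c in feature_cols if df[c].nunique(dropna=False) <= 1]
    return df.drop(columns=zero_var)

df = pd.DataFrame({"key": [1, 1, 2, 3], "f": [5, 5, 6, 7], "label": [0, 1, 0, 1]})
print(len(clean_search_dataset(df, "label")))  # 2 - the inconsistent pair is removed
```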

❔ Supervised ML tasks detection

We detect the ML task type under the hood, based on the label column values. Currently we support:

  • ModelTaskType.BINARY
  • ModelTaskType.MULTICLASS
  • ModelTaskType.REGRESSION

In most cases, you don't need to do anything, but for certain search datasets this detection might fail.
In that case, you can pass a parameter to FeaturesEnricher with the correct ML task type:

from upgini import ModelTaskType
enricher = FeaturesEnricher(
	search_keys={"subscription_activation_date": SearchKey.DATE},
	model_task_type=ModelTaskType.REGRESSION
)

⏰ Time Series prediction support

Time series prediction is supported as a ModelTaskType.REGRESSION or ModelTaskType.BINARY task with a time-series-specific cross-validation split.

To initiate a feature search for time series prediction, pass the cross-validation type parameter to FeaturesEnricher with a time-series-specific CV type:

from upgini.metadata import CVType
enricher = FeaturesEnricher(
	search_keys={"sales_date": SearchKey.DATE},
	cv=CVType.time_series
)

⚠️ Pre-process the search dataset for time series prediction:
sort the rows according to observation order - in most cases, ascending by date/datetime
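A minimal sketch of that pre-processing step, with hypothetical column names:

```python
import pandas as pd

df = pd.DataFrame({
    "sales_date": ["2021-03-01", "2021-01-01", "2021-02-01"],
    "sales": [30, 10, 20],
})
# time-series cross-validation expects observations in chronological order
df["sales_date"] = pd.to_datetime(df["sales_date"])
df = df.sort_values("sales_date").reset_index(drop=True)
print(df["sales"].tolist())  # [10, 20, 30]
```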

🆙 Accuracy and uplift metrics calculations

FeaturesEnricher automatically calculates model metrics and the uplift from new relevant features, either via the calculate_metrics() method or via the calculate_metrics=True parameter of the fit and fit_transform methods (example below).
You can use any model estimator with a scikit-learn compatible interface.

The evaluation metric should be passed to calculate_metrics() via the scoring parameter; out of the box, Upgini supports 👉

| Metric | Description |
|---|---|
| explained_variance | Explained variance regression score function |
| r2 | R² (coefficient of determination) regression score function |
| max_error | Maximum residual error (negative - greater is better) |
| median_absolute_error | Median absolute error regression loss |
| mean_absolute_error | Mean absolute error regression loss |
| mean_absolute_percentage_error | Mean absolute percentage error regression loss |
| mean_squared_error | Mean squared error regression loss |
| mean_squared_log_error (aliases: msle, MSLE) | Mean squared logarithmic error regression loss |
| root_mean_squared_log_error (aliases: rmsle, RMSLE) | Root mean squared logarithmic error regression loss |
| root_mean_squared_error | Root mean squared error regression loss |
| mean_poisson_deviance | Mean Poisson deviance regression loss |
| mean_gamma_deviance | Mean Gamma deviance regression loss |
| accuracy | Accuracy classification score |
| top_k_accuracy | Top-k accuracy classification score |
| roc_auc | Area under the ROC curve (ROC AUC) from prediction scores |
| roc_auc_ovr | ROC AUC from prediction scores (multi_class="ovr") |
| roc_auc_ovo | ROC AUC from prediction scores (multi_class="ovo") |
| roc_auc_ovr_weighted | ROC AUC from prediction scores (multi_class="ovr", average="weighted") |
| roc_auc_ovo_weighted | ROC AUC from prediction scores (multi_class="ovo", average="weighted") |
| balanced_accuracy | Balanced accuracy classification score |
| average_precision | Average precision (AP) from prediction scores |
| log_loss | Log loss, aka logistic loss or cross-entropy loss |
| brier_score | Brier score loss |

In addition to that list, you can define a custom evaluation metric function using scikit-learn's make_scorer, for example SMAPE.
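A SMAPE scorer might be built like this - a sketch using one common SMAPE variant (note greater_is_better=False, since SMAPE is a loss):

```python
import numpy as np
from sklearn.metrics import make_scorer

def smape(y_true, y_pred) -> float:
    # symmetric mean absolute percentage error, in percent
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2
    # treat 0/0 terms as zero error
    ratio = np.where(denom == 0, 0.0, np.abs(y_pred - y_true) / np.where(denom == 0, 1.0, denom))
    return 100 * float(ratio.mean())

smape_scorer = make_scorer(smape, greater_is_better=False)
# could then be passed as, e.g.: enricher.calculate_metrics(X, y, scoring=smape_scorer)
print(round(smape([100, 200], [110, 180]), 3))  # 10.025
```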

By default, the calculate_metrics() method uses the same cross-validation split as selected for FeaturesEnricher.fit() via the parameter cv = CVType.<cross-validation-split>.
But you can easily define a new split by passing a child of BaseCrossValidator to the cv parameter of calculate_metrics().

Example with more tips-and-tricks:

from lightgbm import LGBMRegressor
from sklearn.model_selection import TimeSeriesSplit
from upgini import FeaturesEnricher, SearchKey

enricher = FeaturesEnricher(search_keys={"registration_date": SearchKey.DATE})

# Fit with the default setup for metrics calculation
# (CatBoost will be used as the estimator)
enricher.fit(X, y, eval_set=eval_set, calculate_metrics=True)

# LightGBM estimator for metrics; X and y - same as for fit
custom_estimator = LGBMRegressor()
enricher.calculate_metrics(X, y, eval_set, estimator=custom_estimator)

# Custom metric function for the scoring param (callable or name)
custom_scoring = "RMSLE"
enricher.calculate_metrics(X, y, eval_set, scoring=custom_scoring)

# Custom cross-validator
custom_cv = TimeSeriesSplit(n_splits=5)
enricher.calculate_metrics(X, y, eval_set, cv=custom_cv)

# All these custom parameters can be combined in fit, fit_transform and calculate_metrics:
enricher.fit(X, y, eval_set, calculate_metrics=True, estimator=custom_estimator, scoring=custom_scoring, cv=custom_cv)

✅ Optional: find features that only give an accuracy gain on top of the existing data in your ML model

If you already have features or other external data sources, you can specifically search for new datasets & features that give an accuracy gain "on top" of them.
Just leave all the existing features in the labeled training dataset, and the Upgini library will automatically use them during the feature search and as a baseline ML model to calculate the accuracy metric uplift. It won't return any features that don't give an accuracy gain on top of the existing feature space.

✅ Optional: check robustness of accuracy improvement from external features

You can validate external features robustness on out-of-time dataset using eval_set parameter. Let's do that:

# load train dataset
train_df = pd.read_csv("train.csv")
train_ids_and_features = train_df.drop(columns="label")
train_label = train_df["label"]

# load out-of-time validation dataset
eval_df = pd.read_csv("validation.csv")
eval_ids_and_features = eval_df.drop(columns="label")
eval_label = eval_df["label"]
# create FeaturesEnricher
enricher = FeaturesEnricher(search_keys={"registration_date": SearchKey.DATE})

# now we fit WITH eval_set parameter to calculate accuracy metrics on Out-of-time dataset.
# the output will contain quality metrics for both the training data set and
# the eval set (validation OOT data set)
enricher.fit(
  train_ids_and_features,
  train_label,
  eval_set = [(eval_ids_and_features, eval_label)]
)

⚠️ Requirements for out-of-time dataset

  • Same data schema as for search initialization dataset
  • Pandas dataframe representation

✅ Optional: return initial dataframe enriched with TOP external features by importance

FeaturesEnricher can be used with fit_transform method and two parameters:

  • importance_threshold: float = 0 - only features with importance >= threshold will be added to the output dataframe
  • max_features: int - only first TOP N features by importance will be returned, where N = max_features

And keep_input=True will keep all initial columns from search dataset X:

enricher = FeaturesEnricher(
    search_keys={"subscription_activation_date": SearchKey.DATE}
)
enriched_dataframe = enricher.fit_transform(X, y, keep_input=True, max_features=2)

✅ Optional: reuse completed search for enrichment

FeaturesEnricher can be initialized with the search id of a completed search:

  • search_id: str - id of a completed fit operation (from enricher.get_search_id()). Search keys and features in X should be the same as on fit.

enricher = FeaturesEnricher(
    search_keys={"date": SearchKey.DATE},
    search_id="abcdef00-0000-0000-0000-999999999999"
)

enricher.transform(X)

🔑 Benefits of becoming a registered user

Register and get a free API key for exclusive data sources and features on phone numbers, hashed emails, and IP addresses:
600 mln+ phone numbers, 350 mln+ emails, 2^32 IP addresses

| Benefit | No sign-up | Registered user |
|---|---|---|
| Enrichment with date/datetime, postal/ZIP code and country keys | Yes | Yes |
| Enrichment with phone number, hashed email/HEM and IP address keys | No | Yes |
| Email notification on search task completion | No | Yes |
| Email notification on new data source activation 🔜 | No | Yes |

👩🏻‍💻 How can I share data/features with the community?

If you have ANY data which you might consider royalty-/license-free (open data) and potentially valuable for ML applications, you may publish it for community usage:

  1. Please Sign Up here
  2. Copy the Upgini API key from your profile and upload your data with the Upgini python library using this key:
import pandas as pd
from upgini import SearchKey
from upgini.ads import upload_user_ads
import os
os.environ["UPGINI_API_KEY"] = "your_long_string_api_key_goes_here"
# you can define a custom search key that might not be supported yet - just use the SearchKey.CUSTOM_KEY type
sample_df = pd.read_csv("path_to_data_sample_file")
upload_user_ads("test", sample_df, {
    "city": SearchKey.CUSTOM_KEY,
    "stats_date": SearchKey.DATE
})
  3. After data verification, search results on community data will be available in the usual way

🛠 Getting Help & Community

Please note that we are still in a beta stage. For requests and support, in preferred order:
Ask for help in Slack · Open a GitHub issue
Please try to create bug reports that are:

  • Reproducible. Include steps to reproduce the problem.
  • Specific. Include as much detail as possible: which Python version, what environment, etc.
  • Unique. Do not duplicate existing opened issues.
  • Scoped to a Single Bug. One bug per report.

🧩 Contributing

We are a very small team and this is a part-time project for us, so most probably we won't be able to:

  • implement smooth integration with the most common low-code ML libraries and platforms (PyCaret, H2O AutoML, etc.)
  • implement all possible data verification and normalization capabilities for different types of search keys (we just started with the current 6 types)

So we need some help from the community! We'll be happy about every pull request you open and every issue you find to make this library more awesome. Please note that it might sometimes take us a while to get back to you. For major changes, please open an issue first to discuss what you would like to change.

Developing

Some convenient ways to start contributing are:
⚙️ Open in Visual Studio Code - you can remotely open this repo in VS Code without cloning, or automatically clone and open it inside a docker container.
⚙️ Gitpod - you can use Gitpod to launch a fully functional development environment right in your browser.

🔗 Useful links

😔 Found a typo or a bug in a code snippet? Our bad! Please report it here.
