personalization
An end-to-end demo machine learning pipeline that trains a model and delivers its artifact to a real-time inference service.
Aim
We want to create machine learning training code that, given data, can train a model and save it as an artifact.
Solution
Our implementation is the package 'personalization'. We chose Polars to read data: it is roughly 2-3 times faster than Pandas and offers a convenient API for aggregations and feature creation. For the model, we chose LightGBM for its speed, small artifact size (up to 50 MB on 300 million rows of search data), and explainability. LightGBM parameters should be chosen carefully; we tested example parameters in notebooks/train.ipynb.
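To make the feature step concrete, here is a minimal Polars sketch. The schema is an assumption for illustration (one impression per row, a binary `purchased` conversion flag), not the package's actual column layout.

```python
# Minimal Polars sketch of per-venue feature creation.
# Assumed schema: sessions.csv has venue_id and a binary "purchased" column,
# one impression per row; these names are illustrative only.
import polars as pl

sessions = pl.read_csv("sessions.csv")

venue_features = sessions.group_by("venue_id").agg(
    pl.col("purchased").mean().alias("conversions_per_impression"),
    pl.len().alias("impressions"),
)
```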
Offline evaluation
The offline evaluation is in notebooks/train.ipynb; it shows a significant increase in NDCG across venues for our model compared with the baseline.
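The notebook holds the actual numbers; the sketch below only illustrates the shape of the comparison, using scikit-learn's `ndcg_score` with made-up relevance labels and scores.

```python
# Illustrative NDCG comparison only; real figures are in notebooks/train.ipynb.
import numpy as np
from sklearn.metrics import ndcg_score

y_true = np.array([[1, 0, 0, 1, 0]])                    # relevance labels for one session
baseline_scores = np.array([[0.2, 0.9, 0.5, 0.1, 0.4]])  # baseline ranking scores
model_scores = np.array([[0.9, 0.1, 0.3, 0.8, 0.2]])     # model ranking scores

print("baseline NDCG@10:", ndcg_score(y_true, baseline_scores, k=10))
print("model NDCG@10:", ndcg_score(y_true, model_scores, k=10))
```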
CICD: code style and PyPI
The code is checked with pre-commit hooks, and tested and published via GitHub Actions; current test coverage is around 80 percent.
The inference service code can be found at https://github.com/ra312/model-server
How to run
- Obtain sessions.csv and venues.csv and move them to the root folder
- Check that python --version reports at least 3.8.1
- Install personalization:
python -m pip install personalization
To train the pipeline and produce the artifact, run the following command in a shell:
python3 -m personalization \
--sessions-bucket-path sessions.csv \
--venues-bucket-path venues.csv \
--objective lambdarank \
--num_leaves 100 \
--min_sum_hessian_in_leaf 10 \
--metric ndcg --ndcg_eval_at 10 20 \
--learning_rate 0.8 \
--force_row_wise True \
--num_iterations 10 \
--trained-model-path trained_model.joblib
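Once the command finishes, the artifact can be loaded back with joblib. This is a hedged sketch assuming the saved object exposes a scikit-learn-style `predict`; the input frame is illustrative and must match the model's training features.

```python
# Hedged sketch: load the artifact produced by the command above and score rows.
# Assumes the saved object exposes a scikit-learn-style predict(); the feature
# columns must match those used during training.
import joblib
import polars as pl

model = joblib.load("trained_model.joblib")
features = pl.read_csv("venues.csv")      # illustrative input only
scores = model.predict(features.to_pandas())
```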
TODO
For demo purposes, we chose to ingest sessions and venues data locally and save the model file locally. Given more time and infrastructure, I would add the following:
- Scalability: add a Flyte workflow (reusing the code here)
- Data: add support to ingest sessions and venues data from a database
- Versioning: add MLflow integration (see the sketch after this list)
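For the versioning item, a hedged sketch of what the MLflow integration could look like; the experiment name, parameters, and artifact path are placeholders, not the project's actual setup.

```python
# Hedged sketch of the MLflow integration mentioned above; experiment name and
# logged parameters are placeholders.
import mlflow

mlflow.set_experiment("personalization")
with mlflow.start_run():
    mlflow.log_params({"objective": "lambdarank", "num_leaves": 100})
    mlflow.log_artifact("trained_model.joblib")  # version the training artifact
```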
model-server
```mermaid
---
title: REST-inference service
---
classDiagram
    note "100 requests per second"
    class VenueRating{
        venue_id: int
        q80_predicted_rank: float
    }
    note for VenueRating "venue_id: the ID of the venue being rated. q80_predicted_rank: the predicted ranking of the venue, the 80th percentile of the predicted rating for the venue across available sessions."
    class TrainingPipeline{
        str pre-trained-model-file: stored with MLflow in a GCS bucket
    }
    class InferenceFeatures{
        venue_id: int
        conversions_per_impression: float
        price_range: int
        rating: float
        popularity: float
        retention_rate: float
        session_id_hashed: int
        position_in_list: int
        is_from_order_again: int
        is_recommended: int
    }
    class FastAPIEndpoint{
        predict_ratings() Callable
    }
    class Model_Instance{
        joblib.load(model_artifact_bucket)
        str model_artifact_bucket - variable
        str rank_column - fixed for the model
        str group_column - fixed for the model
    }
    TrainingPipeline --|> Model_Instance
    InferenceFeatures --|> FastAPIEndpoint
    Model_Instance --|> FastAPIEndpoint
    FastAPIEndpoint --|> VenueRating
```
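The diagram above translates roughly into the following FastAPI sketch. It mirrors the `InferenceFeatures` and `VenueRating` shapes from the diagram, but the endpoint path, module layout, and aggregation details are assumptions, not the actual model-server implementation.

```python
# Hedged sketch of the endpoint shape implied by the class diagram; the real
# implementation lives at https://github.com/ra312/model-server.
from typing import List

import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

class InferenceFeatures(BaseModel):
    venue_id: int
    conversions_per_impression: float
    price_range: int
    rating: float
    popularity: float
    retention_rate: float
    session_id_hashed: int
    position_in_list: int
    is_from_order_again: int
    is_recommended: int

class VenueRating(BaseModel):
    venue_id: int
    q80_predicted_rank: float

app = FastAPI()
model = joblib.load("trained_model.joblib")  # "model_artifact_bucket" in the diagram

@app.post("/predict_ratings", response_model=List[VenueRating])
def predict_ratings(batch: List[InferenceFeatures]) -> List[VenueRating]:
    frame = pd.DataFrame([item.dict() for item in batch])
    frame["score"] = model.predict(frame)
    # 80th percentile of the predicted score per venue across sessions in the batch
    q80 = frame.groupby("venue_id")["score"].quantile(0.8)
    return [
        VenueRating(venue_id=int(venue), q80_predicted_rank=float(score))
        for venue, score in q80.items()
    ]
```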
Documentation: https://ra312.github.io/model-server
Training Source Code: https://github.com/ra312/personalization
Source Code: https://github.com/ra312/model-server
PyPI: https://pypi.org/project/model-server/
A model server for near-real-time inference.
Installation
pip install model-server
Development
- Clone this repository
- Requirements:
- Poetry
- Python 3.8.1+
- Create a virtual environment and install the dependencies
poetry install
- Activate the virtual environment
poetry shell
Testing
pytest
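As an illustration of what such a test might look like (the module path, endpoint, and payload below are assumptions mirroring the sketch above, not the actual test suite):

```python
# Hedged example test; endpoint and payload mirror the earlier FastAPI sketch.
from fastapi.testclient import TestClient

from app import app  # hypothetical module exposing the FastAPI app

client = TestClient(app)

def test_predict_ratings_returns_a_rating_per_venue():
    payload = [{
        "venue_id": 1, "conversions_per_impression": 0.1, "price_range": 2,
        "rating": 4.5, "popularity": 0.7, "retention_rate": 0.3,
        "session_id_hashed": 42, "position_in_list": 1,
        "is_from_order_again": 0, "is_recommended": 1,
    }]
    response = client.post("/predict_ratings", json=payload)
    assert response.status_code == 200
    assert {r["venue_id"] for r in response.json()} == {1}
```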
Documentation
The documentation is automatically generated from the content of the docs directory and from the docstrings of the public signatures of the source code. The documentation is updated and published as a GitHub project page automatically as part of each release.
Releasing
Trigger the Draft release workflow (press Run workflow). This will update the changelog and version and create a GitHub release in draft state.
Find the draft release in the GitHub releases and publish it. Publishing a release triggers the release workflow, which creates a PyPI release and deploys the updated documentation.
Pre-commit
Pre-commit hooks run all the auto-formatters (e.g. black, isort), linters (e.g. mypy, flake8), and other quality checks to make sure the changeset is in good shape before a commit/push happens.
You can install the hooks with (runs for each commit):
pre-commit install
Or if you want them to run only for each push:
pre-commit install -t pre-push
Or if you want to run all checks manually for all files:
pre-commit run --all-files
This project was generated using the wolt-python-package-cookiecutter template.