scikit-rec
A composable, scikit-style recommender systems library.
scikit-rec provides a 3-layer architecture that cleanly separates business logic, scoring strategy, and ML models. Any recommender works with any compatible scorer and estimator, giving you a mix-and-match toolkit for building recommendation systems.
```
Recommender (business logic) --> Scorer (item scoring) --> Estimator (ML model)
```
Why scikit-rec?
Composable by design. Each layer is independently extensible. Swap XGBoost for a Two-Tower model without changing your recommender. Add a new bandit strategy without touching the scorer. The library spans XGBoost, LightGBM, and scikit-learn alongside deep learning models (NCF, Two-Tower, DeepFM, SASRec, HRNN), with GPU optional — a pure-NumPy matrix factorization (ALS/SGD) requires no PyTorch. The composable architecture also accommodates novel research: a Goal-Conditioned Supervised Learning (GCSL) recommender for multi-objective recommendation was implemented as a single Recommender subclass — no new scorer or estimator required. Contributions welcome: implement one abstract class and it works with everything else.
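To make the three-layer contract concrete, here is a minimal, self-contained sketch of the pattern. The class names and method signatures below are illustrative only, not the actual skrec base classes:

```python
from abc import ABC, abstractmethod

class Estimator(ABC):
    """Hypothetical ML-model layer: maps features to predictions."""
    @abstractmethod
    def predict(self, features):
        ...

class Scorer:
    """Hypothetical scoring layer: turns estimator predictions into item scores."""
    def __init__(self, estimator):
        self.estimator = estimator

    def score(self, features):
        return self.estimator.predict(features)

class Recommender:
    """Hypothetical business-logic layer: rank items by score, keep the top-k."""
    def __init__(self, scorer):
        self.scorer = scorer

    def recommend(self, items, features, top_k=3):
        scores = self.scorer.score(features)
        ranked = sorted(zip(items, scores), key=lambda p: p[1], reverse=True)
        return [item for item, _ in ranked[:top_k]]

class DoublingEstimator(Estimator):
    """Stand-in model: scores each item as twice its feature value."""
    def predict(self, features):
        return [f * 2.0 for f in features]

# Any estimator plugs into any compatible scorer, which plugs into any recommender.
rec = Recommender(Scorer(DoublingEstimator()))
print(rec.recommend(["a", "b", "c"], [0.1, 0.9, 0.5], top_k=2))  # ['b', 'c']
```

Swapping the model means replacing only the `Estimator` subclass; the scorer and recommender are untouched, which is the composability the library is built around.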
Beyond ranking. Contextual bandits (epsilon-greedy, static-action) and heterogeneous treatment effect estimation (T/S/X-Learner) are first-class paradigms, not afterthoughts. All share the same evaluation infrastructure, so you can directly compare a ranking policy against a bandit or uplift policy on the same logged data.
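For background, the epsilon-greedy strategy mentioned above can be sketched in a few lines of plain Python. This is a generic illustration of the exploration-exploitation idea, not the `ContextualBanditsRecommender` API:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon explore a random arm; otherwise exploit
    the arm with the highest estimated value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Simulate 1000 decisions: arm 1 has the highest estimated value,
# so with epsilon=0.1 the policy should pick it ~93% of the time.
rng = random.Random(0)
q = [0.2, 0.8, 0.5]
picks = [epsilon_greedy(q, epsilon=0.1, rng=rng) for _ in range(1000)]
print(picks.count(1) / len(picks))
```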
Production-grade evaluation. The most complete offline policy evaluation suite in any recommendation library: IPS, Doubly Robust, SNIPS, Direct Method, Policy-Weighted, and Replay Match, paired with eight ranking and classification metrics (Precision, Recall, MAP, MRR, NDCG, ROC-AUC, PR-AUC, Expected Reward) — enabling counterfactual policy comparison from logged data with a single call.
Production readiness. Config-driven pipeline factory with Optuna HPO, low-latency single-user inference (recommend_online), two-stage retrieval-then-ranking, and batch training.
Learn by example. Ten end-to-end Jupyter notebooks on MovieLens 1M cover ranking, bandits, uplift, sequential recommendations, multi-objective optimization, hyperparameter tuning, two-stage retrieval, and contextual two-tower models. Our SASRec achieves HR@10 = 0.8953 and NDCG@10 = 0.6331 on MovieLens-1M (leave-last-out, 1 positive + 100 negatives). Each notebook downloads data, trains, evaluates, and shows sample recommendations — ready to run.
Installation
```bash
pip install scikit-rec
```
Optional extras (quoted so the brackets survive shells like zsh):
```bash
pip install "scikit-rec[torch]"  # Deep learning models (DeepFM, NCF, SASRec, HRNN, Two-Tower)
pip install "scikit-rec[aws]"    # S3 data loading
```
Quick Start
```python
from skrec.estimator.classification.xgb_classifier import XGBClassifierEstimator
from skrec.scorer.universal import UniversalScorer
from skrec.recommender.ranking.ranking_recommender import RankingRecommender
from skrec.examples.datasets import (
    sample_binary_reward_interactions,
    sample_binary_reward_users,
    sample_binary_reward_items,
)

# Build the pipeline: Estimator -> Scorer -> Recommender
estimator = XGBClassifierEstimator({"learning_rate": 0.1, "max_depth": 5})
scorer = UniversalScorer(estimator)
recommender = RankingRecommender(scorer)

# Train
recommender.train(
    interactions_ds=sample_binary_reward_interactions,
    users_ds=sample_binary_reward_users,
    items_ds=sample_binary_reward_items,
)

# Recommend
interactions_df = sample_binary_reward_interactions.fetch_data()
users_df = sample_binary_reward_users.fetch_data()
recommendations = recommender.recommend(interactions=interactions_df, users=users_df, top_k=5)
```
Components
Recommenders
| Recommender | Description |
|---|---|
| `RankingRecommender` | Rank items by predicted score |
| `ContextualBanditsRecommender` | Exploration-exploitation strategies (epsilon-greedy, static action) |
| `UpliftRecommender` | Uplift modeling (S-Learner, T-Learner, X-Learner) |
| `SequentialRecommender` | Sequence-aware recommendations |
| `HierarchicalSequentialRecommender` | Session-aware hierarchical sequences (HRNN) |
| `GcslRecommender` | Multi-objective goal-conditioned supervised learning |
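For intuition on the uplift entry above, a T-Learner fits one outcome model per treatment arm and takes the difference of their predictions as the estimated treatment effect. The sketch below uses plain-NumPy least squares on synthetic data; it illustrates the method, not the `UpliftRecommender` API:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit with an intercept column."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.column_stack([np.ones(len(X)), X]) @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
t = rng.integers(0, 2, size=2000)            # random treatment assignment
tau = 1.0 + X[:, 0]                          # true effect varies with the first feature
y = X @ np.array([0.5, -0.3, 0.2]) + t * tau + rng.normal(scale=0.1, size=2000)

w_treated = fit_linear(X[t == 1], y[t == 1])  # model for the treated arm
w_control = fit_linear(X[t == 0], y[t == 0])  # model for the control arm
cate = predict_linear(w_treated, X) - predict_linear(w_control, X)
print(cate.mean())  # close to the true average effect of 1.0
```

The S-Learner instead fits a single model with the treatment indicator as a feature, and the X-Learner cross-imputes counterfactual outcomes between arms; both reuse the same per-arm building blocks.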
Scorers
| Scorer | Description |
|---|---|
| `UniversalScorer` | Single global model using item features (auto-dispatches tabular vs. embedding) |
| `IndependentScorer` | Separate model per item |
| `MulticlassScorer` | Items as competing classes |
| `MultioutputScorer` | Multiple outcomes per prediction |
| `SequentialScorer` | For sequential estimators (SASRec) |
| `HierarchicalScorer` | For HRNN estimators |
Estimators
| Type | Models |
|---|---|
| Tabular | XGBoost, LightGBM, Logistic Regression, sklearn classifiers/regressors |
| Embedding | Matrix Factorization, NCF, Two-Tower, DCN, DeepFM |
| Sequential | SASRec, HRNN |
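To illustrate the pure-NumPy matrix factorization listed in the Embedding row, here is a minimal SGD factorizer on a synthetic dense matrix. It is a sketch of the technique, not the library's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 20, 15, 4

# Synthetic rank-k "ratings" matrix to recover
P_true = rng.normal(size=(n_users, k))
Q_true = rng.normal(size=(n_items, k))
R = P_true @ Q_true.T

# Learnable user and item factor matrices
P = rng.normal(scale=0.1, size=(n_users, k))
Q = rng.normal(scale=0.1, size=(n_items, k))
lr, reg = 0.01, 0.01

for epoch in range(200):
    for u in range(n_users):
        for i in range(n_items):
            err = R[u, i] - P[u] @ Q[i]          # prediction error for this cell
            P[u] += lr * (err * Q[i] - reg * P[u])  # gradient step on user factors
            Q[i] += lr * (err * P[u] - reg * Q[i])  # gradient step on item factors

rmse = np.sqrt(np.mean((R - P @ Q.T) ** 2))
print(rmse)  # small reconstruction error after training
```

ALS replaces the per-cell gradient steps with alternating closed-form least-squares solves for `P` and `Q`; both variants need only NumPy.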
Evaluators
| Evaluator | Description |
|---|---|
| `SimpleEvaluator` | Standard offline evaluation on held-out data |
| `IPSEvaluator` | Inverse Propensity Scoring for counterfactual evaluation |
| `DREvaluator` | Doubly Robust: combines direct estimation with IPS |
| `SNIPSEvaluator` | Self-Normalized IPS: reduces variance of IPS |
| `DirectMethodEvaluator` | Uses a reward model to estimate policy value |
| `PolicyWeightedEvaluator` | Weights logged rewards by policy/logging probability ratio |
| `ReplayMatchEvaluator` | Unbiased evaluation using only matching logged actions |
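For intuition, the IPS, SNIPS, and Doubly Robust estimators above can be hand-rolled in NumPy on synthetic logged data. These are the generic textbook formulas, not the library's evaluator API; the reward model `q_hat` is a deliberately imperfect stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
logged_a = rng.integers(0, 2, size=n)       # logging policy: uniform over 2 actions
p_log = np.full(n, 0.5)                     # logging propensities
reward = (logged_a == 1).astype(float)      # action 1 always pays off (true value = 1.0)
target_a = np.ones(n, dtype=int)            # target policy: always pick action 1
w = (logged_a == target_a) / p_log          # importance weights

# IPS: reweight logged rewards by the propensity ratio
ips = np.mean(w * reward)
# SNIPS: normalize by the total weight to reduce variance
snips = np.sum(w * reward) / np.sum(w)
# DR: direct estimate from a reward model, corrected by IPS on its residuals
q_hat = np.where(target_a == 1, 0.9, 0.1)           # model value of the target action
r_hat_logged = np.where(logged_a == 1, 0.9, 0.1)    # model value of the logged action
dr = np.mean(q_hat + w * (reward - r_hat_logged))

print(round(ips, 3), round(snips, 3), round(dr, 3))  # all near the true value 1.0
```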
Metrics
Precision@k, Recall@k, MAP, MRR, NDCG, ROC-AUC, PR-AUC, Expected Reward.
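For reference, binary-relevance Precision@k and NDCG@k follow their standard definitions; the snippet below is a generic implementation, not the library's metric code:

```python
import math

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that are relevant."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k

def ndcg_at_k(recommended, relevant, k):
    """Binary-relevance NDCG@k: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

recs, rel = ["a", "b", "c", "d"], {"a", "c"}
print(precision_at_k(recs, rel, 4))  # 0.5
print(ndcg_at_k(recs, rel, 4))       # < 1.0 because "c" is ranked third, not second
```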
Retrievers
Two-stage retrieval: Popularity, Content-Based, Embedding-Based.
Example Notebooks
| Notebook | What it demonstrates |
|---|---|
| Ranking with XGBoost | Feature-based ranking with demographics and genre features |
| Uplift Modeling | S-Learner, T-Learner, X-Learner treatment effect estimation |
| GCSL Multi-Objective | Goal-conditioned recommendations — steer quality vs. novelty |
| HPO with Optuna | Hyperparameter tuning with TPE, GP, and CMA-ES samplers |
| Two-Stage Retrieval | Popularity, content-based, and embedding retrieval + ranking |
| Two-Tower Models | Three context modes: user_tower, trilinear, scoring_layer |
| SASRec (Positives) | Self-attentive sequential recommendation on positive interactions |
| SASRec (Ratings) | SASRec with explicit ratings as soft labels |
| SASRec (MSE) | SASRec regressor with MSE loss |
| HRNN | Hierarchical RNN for session-aware recommendations |
All notebooks use MovieLens 1M (downloaded automatically) and include training, evaluation, and sample recommendations.
Documentation
Full documentation is available at intuit.github.io/scikit-rec.
Development
```bash
git clone https://github.com/intuit/scikit-rec.git
cd scikit-rec
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest tests/
```
License
File details
Details for the file scikit_rec-0.3.0.tar.gz.
File metadata
- Download URL: scikit_rec-0.3.0.tar.gz
- Size: 48.7 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `949b0ea7b79047b85e49d08466c49837fb74dd1f7d61ce6557f4caf6d680f7fd` |
| MD5 | `b2adc1cba19741f84bc8509af6765432` |
| BLAKE2b-256 | `5914077b13f8fb29d6a7ddc4d7b4d1bdf7cae87578e0a011e7a461d0c0860cf3` |
Provenance
The following attestation bundle was made for scikit_rec-0.3.0.tar.gz:

Publisher: publish.yml on intuit/scikit-rec

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: scikit_rec-0.3.0.tar.gz
- Subject digest: 949b0ea7b79047b85e49d08466c49837fb74dd1f7d61ce6557f4caf6d680f7fd
- Sigstore transparency entry: 1340441516
- Permalink: intuit/scikit-rec@660f8b6d4c27355e1391e824880e9e34ad1a73b6
- Branch / Tag: refs/heads/main
- Owner: https://github.com/intuit
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@660f8b6d4c27355e1391e824880e9e34ad1a73b6
- Trigger Event: workflow_dispatch
File details
Details for the file scikit_rec-0.3.0-py3-none-any.whl.
File metadata
- Download URL: scikit_rec-0.3.0-py3-none-any.whl
- Size: 454.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `bc65fb537bd1be1c6b9eaffec1864aa34dd0d374b428b080598bcb65cac08804` |
| MD5 | `256fd3e392365de5acb7078936e4e7c4` |
| BLAKE2b-256 | `e5cc7c2e8350b61d21e9e77505d7eb3e51e84dba041f941f1686641d0fe4bf19` |
Provenance
The following attestation bundle was made for scikit_rec-0.3.0-py3-none-any.whl:

Publisher: publish.yml on intuit/scikit-rec

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: scikit_rec-0.3.0-py3-none-any.whl
- Subject digest: bc65fb537bd1be1c6b9eaffec1864aa34dd0d374b428b080598bcb65cac08804
- Sigstore transparency entry: 1340441517
- Permalink: intuit/scikit-rec@660f8b6d4c27355e1391e824880e9e34ad1a73b6
- Branch / Tag: refs/heads/main
- Owner: https://github.com/intuit
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@660f8b6d4c27355e1391e824880e9e34ad1a73b6
- Trigger Event: workflow_dispatch