Unifying Academic Rigor and Industrial Scale for Responsible, Reproducible, and Efficient Recommendation
🚀 WarpRec
WarpRec is a flexible and efficient framework designed for building, training, and evaluating recommendation models. It supports a wide range of configurations, customizable pipelines, and powerful optimization tools to enhance model performance and usability.
WarpRec is designed for both beginners and experienced practitioners. For newcomers, it offers a simple and intuitive interface to explore and experiment with state-of-the-art recommendation models. For advanced users, WarpRec provides a modular and extensible architecture that allows rapid prototyping, complex experiment design, and fine-grained control over every step of the recommendation pipeline.
Whether you're learning how recommender systems work or conducting high-performance research and development, WarpRec offers the right tools to match your workflow.
🏗️ Architecture
WarpRec is built on 4 foundational pillars (Scalability, Green AI, Agentic Readiness, and Scientific Rigor) and organized into 5 modular engines that manage the end-to-end recommendation lifecycle:
- Reader – Ingests user-item interactions and metadata from local or cloud storage via a backend-agnostic Narwhals abstraction layer.
- Data Engine – Applies configurable filtering and splitting strategies to produce clean, leak-free train/validation/test sets.
- Recommendation Engine – Trains and optimizes models using PyTorch, with seamless scaling from single-GPU to multi-node Ray clusters.
- Evaluation Engine – Computes 40 GPU-accelerated metrics in a single pass with automated statistical significance testing.
- Writer – Serializes results, checkpoints, and carbon reports to local or cloud storage.
An Application Layer exposes trained models through a REST API (FastAPI) and an MCP server for agentic AI workflows.
📚 Table of Contents
- ✨ Key Features
- ⚙️ Installation
- 🚀 Usage
- 🤝 Contributing
- 📄 License
- 📖 Citation
- 📧 Contact
✨ Key Features
- 55 Built-in Algorithms: WarpRec ships with 55 state-of-the-art recommendation models spanning 6 paradigms (Unpersonalized, Content-Based, Collaborative Filtering (e.g., `LightGCN`, `EASE^R`, `MultiVAE`), Context-Aware (e.g., `DeepFM`, `xDeepFM`), Sequential (e.g., `SASRec`, `BERT4Rec`, `GRU4Rec`), and Hybrid). All models are fully configurable and extend a standardized base class, making it easy to prototype custom architectures within the same pipeline.
- Backend-Agnostic Data Engine: Built on Narwhals, WarpRec operates over Pandas, Polars, and Spark without code changes, enabling a true "write-once, run-anywhere" workflow from laptop to distributed cluster. Data ingestion supports both local filesystems and cloud object storage (Azure Blob Storage).
- Comprehensive Data Processing: The data module provides 13 filtering strategies (filter-by-rating, k-core, cold-start heuristics) and 6 splitting protocols (random/temporal Hold-Out, Leave-k-Out, Fixed Timestamp, k-fold Cross-Validation), for a total of 19 configurable strategies to ensure rigorous and reproducible experimental setups.
- 40 GPU-Accelerated Metrics: The evaluation suite covers 40 metrics across 7 families (Accuracy, Rating, Coverage, Novelty, Diversity, Bias, and Fairness), including multi-objective metrics for simultaneous optimization of competing goals. All metrics are computed with full GPU acceleration for large-scale experiments.
- Statistical Rigor: WarpRec automates hypothesis testing with paired (Student's t-test, Wilcoxon signed-rank) and independent-group (Mann-Whitney U) tests, and applies multiple comparison corrections via Bonferroni and FDR (Benjamini-Hochberg) to prevent p-hacking and ensure statistically robust conclusions.
- Distributed Training & HPO: Seamless vertical and horizontal scaling from single-GPU to multi-node Ray clusters. Hyperparameter optimization supports Grid, Random, Bayesian, HyperOpt, Optuna, and BoHB strategies, with ASHA pruning and model-level early stopping to maximize computational efficiency.
- Green AI & Carbon Tracking: WarpRec is the first recommendation framework with native CodeCarbon integration, automatically quantifying energy consumption and CO₂ emissions for every experiment and persisting carbon footprint reports alongside standard results.
- Agentic AI via MCP: WarpRec natively implements a Model Context Protocol server (`infer-api/mcp_server.py`), exposing trained recommenders as callable tools within LLM and autonomous agent workflows, transforming the framework from a static predictor into an interactive, agent-ready component.
- REST API & Model Serving: Trained models are instantly deployable as RESTful microservices via the built-in FastAPI server (`infer-api/server.py`), decoupling the modeling core from serving infrastructure with zero additional engineering effort.
- Experiment Tracking: Native integrations with `TensorBoard`, `Weights & Biases`, and `MLflow` for real-time monitoring of metrics, training dynamics, and multi-run management.
- Custom Pipelines & Callbacks: Beyond the three standard pipelines (Training, Design, Evaluation), WarpRec exposes an event-driven Callback system for injecting custom logic at any stage, enabling complex experiments without modifying framework internals.
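To make the metric families above concrete, here is a minimal plain-Python sketch of one Accuracy-family metric, NDCG@k with binary relevance. This is the generic textbook formulation for illustration only, not WarpRec's GPU-accelerated implementation:

```python
import math

def ndcg_at_k(ranked_items, relevant_items, k):
    """Normalized Discounted Cumulative Gain at cutoff k (binary relevance)."""
    # DCG: each hit is discounted by the log of its (1-based) rank + 1.
    dcg = sum(
        1.0 / math.log2(rank + 2)  # enumerate() ranks are 0-based
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant_items
    )
    # Ideal DCG: all relevant items placed at the top of the list.
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / math.log2(r + 2) for r in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# A perfect ranking scores 1.0; pushing relevant items down lowers the score.
print(ndcg_at_k(["a", "b", "c"], {"a", "b"}, k=3))  # 1.0
print(ndcg_at_k(["c", "a", "b"], {"a", "b"}, k=3))  # ≈ 0.693
```

The same per-user scores are what a framework then averages over all test users; WarpRec's contribution is computing the 40 metrics in a single GPU pass rather than one loop per metric.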
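The multiple-comparison correction mentioned under Statistical Rigor can likewise be sketched in a few lines of plain Python. This is a generic illustration of the Benjamini-Hochberg FDR step-up procedure, not WarpRec's API:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a list of booleans: True where the null hypothesis is rejected."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-based) with p_(k) <= (k / m) * alpha.
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            max_k = rank
    # Reject the hypotheses with the max_k smallest p-values.
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            rejected[idx] = True
    return rejected

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27]))
# → [True, True, False, False, False]
```

Applied per metric across model comparisons, this keeps the expected proportion of false discoveries below `alpha`, which is what guards against the p-hacking the feature list refers to.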
⚙️ Installation
WarpRec can be installed via pip or via Conda, ensuring that all dependencies and the Python environment are managed consistently. Conda environments are available for both CPU and GPU.
🚀 Quick Install (PyPI)
The easiest way to get started is using pip:
```shell
pip install warprec
```
WarpRec provides extra dependencies for specific use cases:
| extra | usage |
|---|---|
| dashboard | Dashboard functionalities like MLflow and Weights & Biases. |
| remote-io | Remote communication with cloud services like Azure. |
| serving | Optional dependencies to serve your recommendation models. |
| all | All of the above. |
You can install them at any time with:
```shell
pip install "warprec[dashboard, remote-io]"
```
📦 Install via Poetry
If you use Poetry for dependency management, you can easily install WarpRec and its dependencies directly from the source:
- Clone the repository
Open your terminal and clone the WarpRec repository:
```shell
git clone <repository_url>
cd warprec
```
- Install the project
```shell
poetry install
# Or install all extra dependencies
poetry install --extras all
```
🛠️ Development Setup (Conda)
If you want to contribute or need a specific environment (CPU/GPU), we recommend using Conda. The Conda environments already include all extra dependencies:
- Clone the repository
Open your terminal and clone the WarpRec repository:
```shell
git clone <repository_url>
cd warprec
```
- Create the Conda environment
Use the provided environment.gpu.yml (or environment.cpu.yml) file to create the virtual environment. This will install Python 3.12 and the necessary core dependencies.
```shell
# For GPU support
conda env create --file environment.gpu.yml
# Or for CPU only
conda env create --file environment.cpu.yml
```
- Activate the environment:
```shell
conda activate warprec
```
🚀 Usage
🏋️ Training a model
To train a model, use the `train` pipeline. Here's an example:
- Prepare a configuration file (e.g. `config/train_config.yml`) with details about the model, dataset, and training parameters.
- Start a Ray HEAD node:
```shell
ray start --head
```
- Run the following command:
```shell
# Running with pip
warprec -c config/train_config.yml -p train
# Or with a cloned repo
python -m warprec.run -c config/train_config.yml -p train
```
This command starts the training process using the specified configuration file.
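The configuration file drives the whole run. The exact schema depends on your WarpRec version, so treat the following as a purely hypothetical sketch of the kind of content such a file holds; every key name below is illustrative and must be checked against the WarpRec documentation:

```yaml
# HYPOTHETICAL sketch of a training configuration.
# Key names are illustrative, not WarpRec's actual schema.
dataset:
  path: data/interactions.tsv   # illustrative local dataset path
splitting:
  strategy: temporal_holdout    # one of the 6 splitting protocols
models:
  SASRec:                       # one of the 55 built-in models
    epochs: 50
    learning_rate: 0.001
evaluation:
  metrics: [nDCG, Recall]
  top_k: 10
```

The same file format is reused by the `design` and `eval` pipelines described below, differing only in which sections are required.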
✏️ Design a model
To implement a custom model, WarpRec provides a dedicated design interface via the design pipeline. The recommended workflow is as follows:
- Prepare a configuration file (e.g. `config/design_config.yml`) with details about the custom models, dataset, and training parameters.
- Run the following command:
```shell
# Running with pip
warprec -c config/design_config.yml -p design
# Or with a cloned repo
python -m warprec.run -c config/design_config.yml -p design
```
This command initializes a lightweight training pipeline, specifically intended for rapid prototyping and debugging of custom architectures within the framework.
📊 Evaluate a model
To run only evaluation on a model, use the `eval` pipeline. Here's an example:
- Prepare a configuration file (e.g. `config/eval_config.yml`) with details about the model, dataset, and evaluation parameters.
- Run the following command:
```shell
# Running with pip
warprec -c config/eval_config.yml -p eval
# Or with a cloned repo
python -m warprec.run -c config/eval_config.yml -p eval
```
This command starts the evaluation process using the specified configuration file.
🧰 Makefile Commands
The project includes a Makefile to simplify common operations:
- 🧹 Run linting:
```shell
make lint
```
- 🧪 Run tests:
```shell
make test
```
🤝 Contributing
We welcome contributions from the community! Whether you're fixing bugs, improving documentation, or proposing new features, your input is highly valued.
To get started:
- Fork the repository and create a new branch for your feature or fix.
- Follow the existing coding style and conventions.
- Make sure the code passes all checks by running `make lint`.
- Open a pull request with a clear description of your changes.
If you encounter any issues or have questions, feel free to open an issue in the Issues section of the repository.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
📖 Citation
Citation details will be provided in an upcoming release. Stay tuned!
📧 Contact
For questions or suggestions, feel free to contact us at:
- Marco Avolio - marco.avolio@wideverse.com
- Potito Aghilar - potito.aghilar@wideverse.com
- Sabino Roccotelli - sabino.roccotelli@wideverse.com
- Vito Walter Anelli - vitowalter.anelli@poliba.it
- Joseph Trotta - joseph.trotta@ovs.it