
A Comparative Framework for Multimodal Recommender Systems


Cornac

Cornac is a comparative framework for multimodal recommender systems. It focuses on making it convenient to work with models leveraging auxiliary data (e.g., item descriptive text and images, social networks, etc.). Cornac enables fast experiments and straightforward implementations of new models. It is highly compatible with existing machine learning libraries (e.g., TensorFlow, PyTorch).

Cornac is one of the frameworks recommended by ACM RecSys 2023 for the evaluation and reproducibility of recommendation algorithms.

Quick Links

Website | Documentation | Tutorials | Examples | Models | Datasets | Paper | Preferred.AI


Installation

Currently, we support Python 3. There are several ways to install Cornac:

  • From PyPI (recommended):

    pip3 install cornac
    
  • From Anaconda:

    conda install cornac -c conda-forge
    
  • From the GitHub source (for latest updates):

    pip3 install Cython numpy scipy
    pip3 install git+https://github.com/PreferredAI/cornac.git
    
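To quickly check that the installation succeeded, import the package and print its version (a minimal sanity check; cornac.__version__ is the package's version attribute):

import cornac
print(cornac.__version__)  # e.g., 2.2.2
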

Note:

Additional dependencies required by models are listed here.

Some algorithm implementations use OpenMP to support multi-threading. For macOS users, to run those algorithms efficiently, you might need to install gcc from Homebrew to get an OpenMP-capable compiler:

brew install gcc && brew link gcc

Getting started: your first Cornac experiment

(Figure: Flow of an Experiment in Cornac)

import cornac
from cornac.eval_methods import RatioSplit
from cornac.models import MF, PMF, BPR
from cornac.metrics import MAE, RMSE, Precision, Recall, NDCG, AUC, MAP

# load the built-in MovieLens 100K and split the data based on ratio
ml_100k = cornac.datasets.movielens.load_feedback()
rs = RatioSplit(data=ml_100k, test_size=0.2, rating_threshold=4.0, seed=123)

# initialize models, here we are comparing: Biased MF, PMF, and BPR
mf = MF(k=10, max_iter=25, learning_rate=0.01, lambda_reg=0.02, use_bias=True, seed=123)
pmf = PMF(k=10, max_iter=100, learning_rate=0.001, lambda_reg=0.001, seed=123)
bpr = BPR(k=10, max_iter=200, learning_rate=0.001, lambda_reg=0.01, seed=123)
models = [mf, pmf, bpr]

# define metrics to evaluate the models
metrics = [MAE(), RMSE(), Precision(k=10), Recall(k=10), NDCG(k=10), AUC(), MAP()]

# put it together in an experiment, voilà!
cornac.Experiment(eval_method=rs, models=models, metrics=metrics, user_based=True).run()

Output:

    |    MAE |   RMSE |    AUC |    MAP | NDCG@10 | Precision@10 | Recall@10 | Train (s) | Test (s)
--- + ------ + ------ + ------ + ------ + ------- + ------------ + --------- + --------- + --------
MF  | 0.7430 | 0.8998 | 0.7445 | 0.0548 |  0.0761 |       0.0675 |    0.0463 |      0.13 |     1.57
PMF | 0.7534 | 0.9138 | 0.7744 | 0.0671 |  0.0969 |       0.0813 |    0.0639 |      2.18 |     1.64
BPR |    N/A |    N/A | 0.8695 | 0.1042 |  0.1500 |       0.1110 |    0.1195 |      3.74 |     1.49
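
The same models and metrics can be plugged into other evaluation schemes. Below is a hedged sketch that swaps the ratio split for 5-fold cross-validation via cornac.eval_methods.CrossValidation, assuming its constructor arguments mirror those of RatioSplit; results are then averaged across folds.

from cornac.eval_methods import CrossValidation

# 5-fold cross-validation over the same MovieLens 100K feedback
# (constructor arguments assumed analogous to RatioSplit above)
cv = CrossValidation(data=ml_100k, n_folds=5, rating_threshold=4.0, seed=123)

# reuse the models and metrics defined earlier
cornac.Experiment(eval_method=cv, models=models, metrics=metrics, user_based=True).run()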

Model serving

Here, we provide a simple way to serve a Cornac model by launching a standalone web service with Flask. It is very handy for testing or creating a demo application. First, we install the dependency:

$ pip3 install Flask

Suppose we want to serve the trained BPR model from the previous example; we first need to save it:

bpr.save("save_dir", save_trainset=True)

After that, the model can be deployed easily by running the Cornac serving app as follows:

$ FLASK_APP='cornac.serving.app' \
  MODEL_PATH='save_dir/BPR' \
  MODEL_CLASS='cornac.models.BPR' \
  flask run --host localhost --port 8080

# Running on http://localhost:8080

Here we go, our model service is now ready. Let's get top-5 item recommendations for the user "63":

$ curl -X GET "http://localhost:8080/recommend?uid=63&k=5&remove_seen=false"

# Response: {"recommendations": ["50", "181", "100", "258", "286"], "query": {"uid": "63", "k": 5, "remove_seen": false}}

If we want to exclude items already seen during training from the recommendations, we need to provide the TRAIN_SET (saved together with the model earlier) when starting the serving app. We can also leverage a WSGI server for model deployment in production. Please refer to this guide for more details.
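
As an alternative to the HTTP endpoint, the saved model can be loaded back and queried directly in Python. The snippet below is a minimal sketch assuming the standard Recommender save()/load()/recommend() interface and the save directory produced above:

import cornac

# load the BPR model saved earlier with bpr.save("save_dir", save_trainset=True)
bpr = cornac.models.BPR.load("save_dir/BPR")

# top-5 recommendations for user "63", mirroring the HTTP query above
print(bpr.recommend(user_id="63", k=5))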

Efficient retrieval with ANN search

One important aspect of deploying a recommender model is efficient retrieval via Approximate Nearest Neighbor (ANN) search in vector space. Cornac integrates several vector similarity search frameworks for ease of deployment. This example demonstrates how ANN search works seamlessly with any recommender model supporting it (e.g., matrix factorization).

Supported Framework | Cornac Wrapper | Example
------------------- | -------------- | --------------------------------
spotify/annoy       | AnnoyANN       | quick-start, deep-dive
meta/faiss          | FaissANN       | quick-start, deep-dive
nmslib/hnswlib      | HNSWLibANN     | quick-start, hnsw-lib, deep-dive
google/scann        | ScaNNANN       | quick-start, deep-dive
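
As a rough sketch of how these wrappers are typically used: fit a base model, wrap it, build the index, then query it like any other recommender. The HNSW parameters below (M, ef_construction, ef) are illustrative and may not match the wrapper's exact signature; see the linked quick-start and deep-dive examples for canonical usage.

from cornac.models import HNSWLibANN

# wrap the matrix factorization model trained in the experiment above
# (parameter names are illustrative; check the wrapper's documentation)
ann = HNSWLibANN(model=mf, M=16, ef_construction=100, ef=50, seed=123)

# build the approximate index over item vectors, then query like a regular model
ann.build_index()
print(ann.recommend(user_id="63", k=5))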

Models

The table below lists the recommendation models/algorithms featured in Cornac. Examples are provided as quick-start, showcasing an easy-to-run script, or as deep-dive, explaining the math and intuition behind each model. Why don't you join us to lengthen the list?

Year    | Model and Paper | Type | Environment | Example
------- | --------------- | ---- | ----------- | -------
2024    | Hypergraphs with Attention on Reviews (HypAR), docs, paper | Hybrid / Sentiment / Explainable | requirements, CPU / GPU | quick-start
2022    | Disentangled Multimodal Representation Learning for Recommendation (DMRL), docs, paper | Content-Based / Text & Image | requirements, CPU / GPU | quick-start
2021    | Bilateral Variational Autoencoder for Collaborative Filtering (BiVAECF), docs, paper | Collaborative Filtering / Content-Based | requirements, CPU / GPU | quick-start, deep-dive
        | Causal Inference for Visual Debiasing in Visually-Aware Recommendation (CausalRec), docs, paper | Content-Based / Image | requirements, CPU / GPU | quick-start
        | Explainable Recommendation with Comparative Constraints on Product Aspects (ComparER), docs, paper | Explainable | CPU | quick-start
2020    | Adversarial Multimedia Recommendation (AMR), docs, paper | Content-Based / Image | requirements, CPU / GPU | quick-start
        | Hybrid Deep Representation Learning of Ratings and Reviews (HRDR), docs, paper | Content-Based / Text | requirements, CPU / GPU | quick-start
        | LightGCN: Simplifying and Powering Graph Convolution Network, docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start
        | Predicting Temporal Sets with Deep Neural Networks (DNNTSP), docs, paper | Next-Basket | requirements, CPU / GPU | quick-start
        | Recency Aware Collaborative Filtering (UPCF), docs, paper | Next-Basket | requirements, CPU | quick-start
        | Temporal-Item-Frequency-based User-KNN (TIFUKNN), docs, paper | Next-Basket | CPU | quick-start
        | Variational Autoencoder for Top-N Recommendations (RecVAE), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start
2019    | Correlation-Sensitive Next-Basket Recommendation (Beacon), docs, paper | Next-Basket | requirements, CPU / GPU | quick-start
        | Embarrassingly Shallow Autoencoders for Sparse Data (EASEᴿ), docs, paper | Collaborative Filtering | CPU | quick-start
        | Neural Graph Collaborative Filtering (NGCF), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start
2018    | Collaborative Context Poisson Factorization (C2PF), docs, paper | Content-Based / Graph | CPU | quick-start
        | Graph Convolutional Matrix Completion (GCMC), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start
        | Multi-Task Explainable Recommendation (MTER), docs, paper | Explainable | CPU | quick-start, deep-dive
        | Neural Attention Rating Regression with Review-level Explanations (NARRE), docs, paper | Explainable / Content-Based | requirements, CPU / GPU | quick-start
        | Probabilistic Collaborative Representation Learning (PCRL), docs, paper | Content-Based / Graph | requirements, CPU / GPU | quick-start
        | Variational Autoencoder for Collaborative Filtering (VAECF), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start, param-search, deep-dive
2017    | Collaborative Variational Autoencoder (CVAE), docs, paper | Content-Based / Text | requirements, CPU / GPU | quick-start
        | Conditional Variational Autoencoder for Collaborative Filtering (CVAECF), docs, paper | Content-Based / Text | requirements, CPU / GPU | quick-start
        | Generalized Matrix Factorization (GMF), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start, deep-dive
        | Indexable Bayesian Personalized Ranking (IBPR), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start, deep-dive
        | Matrix Co-Factorization (MCF), docs, paper | Content-Based / Graph | CPU | quick-start, cross-modality
        | Multi-Layer Perceptron (MLP), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start, deep-dive
        | Neural Matrix Factorization (NeuMF) / Neural Collaborative Filtering (NCF), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start, deep-dive
        | Online Indexable Bayesian Personalized Ranking (Online IBPR), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start, deep-dive
        | Visual Matrix Factorization (VMF), docs, paper | Content-Based / Image | requirements, CPU / GPU | quick-start
2016    | Collaborative Deep Ranking (CDR), docs, paper | Content-Based / Text | requirements, CPU / GPU | quick-start
        | Collaborative Ordinal Embedding (COE), docs, paper | Collaborative Filtering | requirements, CPU / GPU |
        | Convolutional Matrix Factorization (ConvMF), docs, paper | Content-Based / Text | requirements, CPU / GPU | quick-start, deep-dive
        | Learning to Rank Features for Recommendation over Multiple Categories (LRPPM), docs, paper | Explainable | CPU | quick-start
        | Session-based Recommendations With Recurrent Neural Networks (GRU4Rec), docs, paper | Next-Item | requirements, CPU / GPU | quick-start
        | Spherical K-means (SKM), docs, paper | Collaborative Filtering | CPU | quick-start
        | Visual Bayesian Personalized Ranking (VBPR), docs, paper | Content-Based / Image | requirements, CPU / GPU | quick-start, cross-modality, deep-dive
2015    | Collaborative Deep Learning (CDL), docs, paper | Content-Based / Text | requirements, CPU / GPU | quick-start, deep-dive
        | Hierarchical Poisson Factorization (HPF), docs, paper | Collaborative Filtering | CPU | quick-start
        | TriRank: Review-aware Explainable Recommendation by Modeling Aspects, docs, paper | Explainable | CPU | quick-start
2014    | Explicit Factor Model (EFM), docs, paper | Explainable | CPU | quick-start, deep-dive
        | Social Bayesian Personalized Ranking (SBPR), docs, paper | Content-Based / Social | CPU | quick-start
2013    | Hidden Factors and Hidden Topics (HFT), docs, paper | Content-Based / Text | CPU | quick-start
2012    | Weighted Bayesian Personalized Ranking (WBPR), docs, paper | Collaborative Filtering | CPU | quick-start
2011    | Collaborative Topic Regression (CTR), docs, paper | Content-Based / Text | CPU | quick-start, deep-dive
Earlier | Baseline Only, docs, paper | Baseline | CPU | quick-start
        | Bayesian Personalized Ranking (BPR), docs, paper | Collaborative Filtering | CPU | quick-start, deep-dive
        | Factorization Machines (FM), docs, paper | Collaborative Filtering / Content-Based | Linux, CPU | quick-start, deep-dive
        | Global Average (GlobalAvg), docs, paper | Baseline | CPU | quick-start
        | Global Personalized Top Frequent (GPTop), paper | Next-Basket | CPU | quick-start
        | Item K-Nearest-Neighbors (ItemKNN), docs, paper | Neighborhood-Based | CPU | quick-start, deep-dive
        | Matrix Factorization (MF), docs, paper | Collaborative Filtering | CPU / GPU | quick-start, pre-split-data, deep-dive
        | Maximum Margin Matrix Factorization (MMMF), docs, paper | Collaborative Filtering | CPU | quick-start
        | Most Popular (MostPop), docs, paper | Baseline | CPU | quick-start
        | Non-negative Matrix Factorization (NMF), docs, paper | Collaborative Filtering | CPU | quick-start, deep-dive
        | Probabilistic Matrix Factorization (PMF), docs, paper | Collaborative Filtering | CPU | quick-start
        | Session Popular (SPop), docs, paper | Next-Item / Baseline | CPU | quick-start
        | Singular Value Decomposition (SVD), docs, paper | Collaborative Filtering | CPU | quick-start, deep-dive
        | Social Recommendation using PMF (SoRec), docs, paper | Content-Based / Social | CPU | quick-start, deep-dive
        | User K-Nearest-Neighbors (UserKNN), docs, paper | Neighborhood-Based | CPU | quick-start, deep-dive
        | Weighted Matrix Factorization (WMF), docs, paper | Collaborative Filtering | requirements, CPU / GPU | quick-start, deep-dive

Resources

Contributing

This project welcomes contributions and suggestions. Before contributing, please see our contribution guidelines.

Citation

If you use Cornac in a scientific publication, we would appreciate citations to the following papers:

Cornac: A Comparative Framework for Multimodal Recommender Systems, Salah et al., Journal of Machine Learning Research, 21(95):1–5, 2020.
@article{salah2020cornac,
  title={Cornac: A Comparative Framework for Multimodal Recommender Systems},
  author={Salah, Aghiles and Truong, Quoc-Tuan and Lauw, Hady W},
  journal={Journal of Machine Learning Research},
  volume={21},
  number={95},
  pages={1--5},
  year={2020}
}
Exploring Cross-Modality Utilization in Recommender Systems, Truong et al., IEEE Internet Computing, 25(4):50–57, 2021.
@article{truong2021exploring,
  title={Exploring Cross-Modality Utilization in Recommender Systems},
  author={Truong, Quoc-Tuan and Salah, Aghiles and Tran, Thanh-Binh and Guo, Jingyao and Lauw, Hady W},
  journal={IEEE Internet Computing},
  year={2021},
  publisher={IEEE}
}
Multi-Modal Recommender Systems: Hands-On Exploration, Truong et al., ACM Conference on Recommender Systems, 2021.
@inproceedings{truong2021multi,
  title={Multi-modal recommender systems: Hands-on exploration},
  author={Truong, Quoc-Tuan and Salah, Aghiles and Lauw, Hady},
  booktitle={Fifteenth ACM Conference on Recommender Systems},
  pages={834--837},
  year={2021}
}

License

Apache License 2.0

Download files

Download the file for your platform.

Source Distribution

File | Size | Uploaded for
---- | ---- | ------------
cornac-2.2.2.tar.gz | 5.9 MB | Source

Built Distributions

File | Size | Uploaded for
---- | ---- | ------------
cornac-2.2.2-cp312-cp312-win_amd64.whl | 3.1 MB | CPython 3.12, Windows x86-64
cornac-2.2.2-cp312-cp312-manylinux1_x86_64.whl | 22.4 MB | CPython 3.12, manylinux1 x86-64
cornac-2.2.2-cp312-cp312-macosx_10_9_universal2.whl | 6.4 MB | CPython 3.12, macOS 10.9+ universal2 (ARM64, x86-64)
cornac-2.2.2-cp311-cp311-win_amd64.whl | 3.1 MB | CPython 3.11, Windows x86-64
cornac-2.2.2-cp311-cp311-manylinux1_x86_64.whl | 22.7 MB | CPython 3.11, manylinux1 x86-64
cornac-2.2.2-cp311-cp311-macosx_10_9_universal2.whl | 6.4 MB | CPython 3.11, macOS 10.9+ universal2 (ARM64, x86-64)
cornac-2.2.2-cp310-cp310-win_amd64.whl | 3.1 MB | CPython 3.10, Windows x86-64
cornac-2.2.2-cp310-cp310-manylinux1_x86_64.whl | 21.3 MB | CPython 3.10, manylinux1 x86-64
cornac-2.2.2-cp310-cp310-macosx_12_0_x86_64.whl | 3.5 MB | CPython 3.10, macOS 12.0+ x86-64
cornac-2.2.2-cp310-cp310-macosx_10_9_universal2.whl | 6.4 MB | CPython 3.10, macOS 10.9+ universal2 (ARM64, x86-64)
cornac-2.2.2-cp39-cp39-win_amd64.whl | 3.1 MB | CPython 3.9, Windows x86-64
cornac-2.2.2-cp39-cp39-manylinux1_x86_64.whl | 21.4 MB | CPython 3.9, manylinux1 x86-64
cornac-2.2.2-cp39-cp39-macosx_14_0_arm64.whl | 3.3 MB | CPython 3.9, macOS 14.0+ ARM64
cornac-2.2.2-cp39-cp39-macosx_12_0_x86_64.whl | 3.5 MB | CPython 3.9, macOS 12.0+ x86-64
cornac-2.2.2-cp38-cp38-win_amd64.whl | 3.1 MB | CPython 3.8, Windows x86-64
cornac-2.2.2-cp38-cp38-manylinux1_x86_64.whl | 21.9 MB | CPython 3.8, manylinux1 x86-64
cornac-2.2.2-cp38-cp38-macosx_14_0_arm64.whl | 3.3 MB | CPython 3.8, macOS 14.0+ ARM64
cornac-2.2.2-cp38-cp38-macosx_12_0_x86_64.whl | 3.5 MB | CPython 3.8, macOS 12.0+ x86-64
