
Reinforcement Learning Recommender Systems Framework

Project description

iRec Overview

Like most classical frameworks in the recommender systems (RS) field, iRec is structured around three main components:

Environment Setting: responsible for loading, preprocessing, and splitting the dataset into train and test sets (when required) to create the task environment for the pipeline;

Recommendation Agent: responsible for implementing the required recommendation model as an interactive algorithm that interacts with the environment;

Experimental Evaluation: responsible for defining how the agent will interact with the environment to simulate the interactive scenario and get the logs required for a complete evaluation.
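The way these three components fit together can be pictured with a toy sketch. All class and function names below are illustrative only and are not iRec's actual API:

```python
# Hypothetical sketch of the three iRec components; names are
# illustrative, not the library's real classes.
import random

class Environment:
    """Environment Setting: holds (toy) interaction data."""
    def __init__(self, ratings):
        self.ratings = ratings  # dict: item_id -> relevance (0 or 1)

    def feedback(self, item):
        return self.ratings.get(item, 0)

class RandomAgent:
    """Recommendation Agent: picks an item to recommend."""
    def act(self, candidates):
        return random.choice(sorted(candidates))

def run_experiment(env, agent, candidates, interactions):
    """Experimental Evaluation: the agent/environment interaction
    loop that produces the logs used for evaluation."""
    log = []
    for _ in range(interactions):
        item = agent.act(candidates)
        reward = env.feedback(item)
        log.append((item, reward))
    return log

log = run_experiment(Environment({1: 1, 2: 0}), RandomAgent(), {1, 2, 3}, 5)
print(len(log))  # 5 (item, reward) pairs
```

In iRec itself the evaluation policy plays the role of `run_experiment`, deciding how the agent is queried and which feedback is revealed at each step.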

Introduction

Requires Python >= 3.8.

Interactive Recommender Systems Framework

Main features:

  • Several state-of-the-art reinforcement learning models for the recommendation scenario
  • Novelty, coverage, and many other types of online metrics
  • Integration with the most widely used datasets for evaluating recommender systems
  • Flexible configuration
  • Modular and reusable design
  • Contains multiple evaluation policies currently used in the literature to evaluate reinforcement learning models
  • Online Learning and Reinforcement Learning models
  • Metrics and metric evaluators for assessing recommender systems in different ways

We also provide iRec-cmdline, an application built on the iRec library that can be used to set up an experiment in about 5 minutes, with parallel processes, a log registry, and results views. Its main features are:

  • Powerful application to run any reinforcement learning experiment powered by MLflow
  • Entire pipeline of execution is fully parallelized
  • Log registry
  • Results views
  • Statistical tests
  • Extensible environment

Install

Install with pip:

pip install irec

Examples

iRec-cmdline contains an example application using iRec and MLflow, where different experiments can be run easily with the existing recommender systems.

Here is an example execution using the example application:

dataset=("Netflix 10k" "Good Books" "Yahoo Music 10k")
models=(Random MostPopular UCB ThompsonSampling EGreedy)
metrics=(Hits Precision Recall)
eval_pol=("FixedInteraction")
metric_eval=("Interaction")

cd agents &&
python run_agent_best.py --agents "${models[@]}" --dataset_loaders "${dataset[@]}" --evaluation_policy "${eval_pol[@]}" &&

cd ../evaluation &&
python eval_agent_best.py --agents "${models[@]}" --dataset_loaders "${dataset[@]}" --evaluation_policy "${eval_pol[@]}" --metrics "${metrics[@]}" --metric_evaluator "${metric_eval[@]}" &&

python print_latex_table_results.py --agents "${models[@]}" --dataset_loaders "${dataset[@]}" --evaluation_policy "${eval_pol[@]}" --metric_evaluator "${metric_eval[@]}" --metrics "${metrics[@]}"

Datasets

Our framework can use any dataset, as long as it is suitable for the recommendation domain and formatted correctly. Below we list some datasets tested and used in some of our experiments.

Dataset              Domain    Sparsity  Link
MovieLens 100k       Movies    93.69%    Link
MovieLens 1M         Movies    95.80%    Link
MovieLens 10M        Movies    98.66%    Link
Netflix              Movies    98.69%    Link
Ciao DVD             Movies    99.97%    Link
Yahoo Music          Music     97.63%    Link
LastFM               Music     99.84%    Link
Good Books           Books     98.88%    Link
Good Reads           Books     99.50%    Link
Amazon Kindle Store  Products  99.97%    Link
Clothing Fit         Clothes   99.97%    Link
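The "formatted correctly" requirement above usually means a user-item interaction log, e.g. comma-separated `user_id,item_id,rating,timestamp` rows. The layout below is an assumption for illustration (check each dataset loader for the exact format it expects); the snippet also shows how the sparsity figures in the table are typically computed:

```python
# Toy interaction data in the common user,item,rating,timestamp layout
# (the layout is an assumption, not a documented iRec requirement).
import csv
import io

raw = """1,101,5.0,964982703
1,102,3.0,964981247
2,101,4.0,964982931
"""

rows = list(csv.reader(io.StringIO(raw)))
users = {r[0] for r in rows}
items = {r[1] for r in rows}

# Sparsity = 1 - observed interactions / (|users| * |items|)
sparsity = 1 - len(rows) / (len(users) * len(items))
print(f"{sparsity:.2%}")  # 25.00% for this toy matrix
```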

Models

The recommender models supported by iRec are listed below.

Year Model Paper Description
2002 ε-Greedy Link In general, ε-Greedy models the problem based on an ε diversification parameter to perform random actions.
2013 Linear ε-Greedy Link A linear exploitation of the items' latent factors defined by a PMF formulation that also explores random items with probability ε.
2011 Thompson Sampling Link A basic item-oriented bandit algorithm that follows a Gaussian distribution of items and users to perform the prediction rule based on their samples.
2013 GLM-UCB Link It follows a similar process as Linear UCB based on the PMF formulation, but it also adds a sigmoid form in the exploitation step and makes a time-dependent exploration.
2018 ICTR Link It is an interactive collaborative topic regression model that utilizes the TS bandit algorithm and controls the items dependency by a particle learning strategy.
2015 PTS Link It is a PMF formulation for the original TS based on a Bayesian inference around the items. This method also applies particle filtering to guide the exploration of items over time.
2019 kNN Bandit Link A simple multi-armed bandit elaboration of neighbor-based collaborative filtering. A variant of the nearest-neighbors scheme, but endowed with a controlled stochastic exploration capability of the users’ neighborhood, by a parameter-free application of Thompson sampling.
2017 Linear TS Link An adaptation of the original Thompson Sampling to measure the latent dimensions by a PMF formulation.
2013 Linear UCB Link An adaptation of the original LinUCB (Lihong Li et al. 2010) to measure the latent dimensions by a PMF formulation.
2020 NICF Link It is an interactive method based on a combination of neural networks and collaborative filtering that also performs a meta-learning of the user’s preferences.
2016 COFIBA Link This method relies on upper-confidence-based tradeoffs between exploration and exploitation, combined with adaptive clustering procedures at both the user and the item sides.
2002 UCB Link It is the original UCB that calculates a confidence interval for each item at each iteration and tries to shrink the confidence bounds.
2021 Cluster-Bandit (CB) Link A bandit algorithm based on clusters designed to address the cold-start problem.
2002 Entropy Link The entropy of an item i is calculated using the relative frequency of the possible ratings. In general, since entropy measures the spread of ratings for an item, this strategy tends to promote rarely rated items, which can be considerably informative.
2002 log(pop)*ent Link It combines popularity and entropy to identify potentially relevant items that also have the ability to add more knowledge to the system. As these concepts are not strongly correlated, it is possible to achieve this combination through a linear combination of the popularity ρ of an item i by its entropy ε: score(i) = log(ρi) · εi.
- Random Link This method recommends totally random items.
- Most Popular Link It recommends the items with the highest number of ratings received (most popular) at each iteration.
- Best Rated Link Recommends top-rated items based on their average ratings in each iteration.
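As an illustration of the simplest policy in the table, here is a minimal ε-Greedy selection rule: with probability ε a random item is explored, otherwise the item with the highest estimated mean reward is exploited. This is a sketch of the general technique, not iRec's implementation:

```python
# Minimal epsilon-greedy action selection (illustrative sketch).
import random

def eps_greedy(mean_reward, epsilon, rng=random):
    """mean_reward: dict item -> estimated mean reward."""
    items = list(mean_reward)
    if rng.random() < epsilon:
        return rng.choice(items)             # explore: random item
    return max(items, key=mean_reward.get)   # exploit: best estimate

estimates = {"a": 0.1, "b": 0.9, "c": 0.4}
print(eps_greedy(estimates, epsilon=0.0))  # "b": pure exploitation
```

With ε = 0 the rule is purely greedy; raising ε trades immediate reward for information about under-explored items.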

Metrics

The recommender metrics supported by iRec are listed below.

Metric Reference Description
Hits Link Number of recommendations made successfully.
Precision Link Precision is defined as the percentage of predictions we get right.
Recall Link Represents the probability that a relevant item will be selected.
EPC Link Represents the novelty for each user and it is measured by the expected number of seen relevant recommended items not previously seen.
EPD Link EPD is a distance-based novelty measure, which looks at distances between the items in the user’s profile and the recommended items.
ILD Link It represents the diversity between the list of items recommended. This diversity is measured by the Pearson correlation of the item’s features vector.
Gini Coefficient Link Diversity is represented as the Gini coefficient – a measure of distributional inequality. It is measured as the inverse of cumulative frequency that each item is recommended.
Users Coverage Link It represents the percentage of distinct users that are interested in at least k items recommended (k ≥ 1).
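The first three metrics in the table have standard list-based definitions, which can be sketched as follows (these are the usual textbook formulas, not code taken from iRec):

```python
# Standard list-based accuracy metrics (illustrative definitions).
def hits(recommended, relevant):
    """Number of recommended items that are relevant."""
    return len(set(recommended) & set(relevant))

def precision(recommended, relevant):
    """Fraction of the recommended list that is relevant."""
    return hits(recommended, relevant) / len(recommended)

def recall(recommended, relevant):
    """Fraction of the relevant items that were recommended."""
    return hits(recommended, relevant) / len(relevant)

rec, rel = [1, 2, 3, 4], {2, 4, 5}
print(hits(rec, rel), precision(rec, rel), recall(rec, rel))
# 2 hits, precision 0.5, recall = 2/3
```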

API

To add anything new to the library (e.g., a value function or an agent), read the documentation.

Contributing

All contributions are welcome! Just open a pull request.

Related Projects


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

irec-1.2.8.tar.gz (20.1 kB view details)

Uploaded Source

Built Distribution

irec-1.2.8-py3-none-any.whl (16.3 kB view details)

Uploaded Python 3

File details

Details for the file irec-1.2.8.tar.gz.

File metadata

  • Download URL: irec-1.2.8.tar.gz
  • Upload date:
  • Size: 20.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.6

File hashes

Hashes for irec-1.2.8.tar.gz
Algorithm Hash digest
SHA256 39f3eed03085961a63a9675cb9eff760b309d60ccde679f48e050182608cf7ce
MD5 47107da9bc54d6ebcf88bf82a56d6dd5
BLAKE2b-256 451fa2bb28399ad919944b4781b47a824f070c4f3a5a414ac9715e4508bf8cf0

See more details on using hashes here.

File details

Details for the file irec-1.2.8-py3-none-any.whl.

File metadata

  • Download URL: irec-1.2.8-py3-none-any.whl
  • Upload date:
  • Size: 16.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.6

File hashes

Hashes for irec-1.2.8-py3-none-any.whl
Algorithm Hash digest
SHA256 8f387e47bfd9354bf6741a5b75a0a60a3f4b027477dc1507dacb2b5b905568cd
MD5 4bc1f511f1b79a923be5d275b52b5b1a
BLAKE2b-256 6fb634c6c9800be5ac04b040639156ece6a7518c5e5cdd319bb98a9669611c77

See more details on using hashes here.
