Scalable collaborative filtering model based on a sparse approximate inverse
Project description
SANSA: how to compute EASE on million-item datasets
Official implementation of the scalable collaborative filtering model SANSA.
Scalable Approximate NonSymmetric Autoencoder for Collaborative Filtering
Spišák M., Bartyzal R., Hoskovec A., Peška L., Tůma M.
Paper: 10.1145/3604915.3608827 (Best Short Paper Runner-Up, 17th ACM Conference on Recommender Systems, ACM RecSys 2023)
Reproducibility
See the branch reproduce_our_results for the code used in our experiments and the complete experimental results.
About
SANSA is a scalable modification of EASE, a shallow autoencoder for collaborative filtering, specifically designed to handle item sets with millions of items.
- End-to-end sparse training procedure: instead of explicitly inverting the Gramian $X^TX$ of the user-item interaction matrix $X$, SANSA efficiently finds a sparse approximate inverse of $X^TX$ (the EASE closed form being approximated is recalled below).
- Training memory requirements are proportional to the number of non-zero elements in $X^TX$ (and this can be improved further).
- The model's density is prescribed via a hyperparameter.
- As a sparse neural network, SANSA offers very fast inference times.
Learn more in our short paper, or check out the conference poster.
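For context, the dense inverse that SANSA avoids is the one appearing in EASE's closed-form solution (recalled here from Steck, 2019; $B$ below denotes the item-item weight matrix and $\lambda$ the L2 regularization strength):

$$
P = (X^TX + \lambda I)^{-1}, \qquad
B_{ij} =
\begin{cases}
0 & \text{if } i = j, \\
-P_{ij} / P_{jj} & \text{otherwise},
\end{cases}
$$

whereas SANSA never forms the dense $P$: it replaces it with a sparse approximate inverse whose density is prescribed by a hyperparameter, as described above.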
Installation
pip install sansa
(make sure to install the prerequisites first; see the next section)
Prerequisites
Training of SANSA uses scikit-sparse, which depends on the SuiteSparse numerical library. To install SuiteSparse on Ubuntu and macOS, run the commands below:
# Ubuntu
sudo apt-get install libsuitesparse-dev
# macOS
brew install suite-sparse
Note that brew (and possibly other package managers) installs SuiteSparse to a non-standard location. Before installing the package, you need to point the build to SuiteSparse by setting the following two environment variables:
export SUITESPARSE_INCLUDE_DIR={PATH TO YOUR SUITESPARSE}/include/suitesparse
export SUITESPARSE_LIBRARY_DIR={PATH TO YOUR SUITESPARSE}/lib
For brew, you can find {PATH TO YOUR SUITESPARSE} by running brew info suite-sparse. To streamline this process, you can run
SUITESPARSE_DIR=$(brew info suite-sparse | sed -n 4p | awk '{print $1}') # path to the brew-installed package is on the 4th line, 1st column
export SUITESPARSE_INCLUDE_DIR=$SUITESPARSE_DIR/include/suitesparse
export SUITESPARSE_LIBRARY_DIR=$SUITESPARSE_DIR/lib
which should set the correct environment variables for you.
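If you want to double-check the two variables before building, a short sanity check along these lines may help (purely optional; cholmod.h is one of the SuiteSparse headers the build relies on):
# Optional sanity check of the environment variables exported above.
import os
from pathlib import Path

include_dir = Path(os.environ["SUITESPARSE_INCLUDE_DIR"])
library_dir = Path(os.environ["SUITESPARSE_LIBRARY_DIR"])

# If these print True, the exported paths most likely point at a usable SuiteSparse install.
print("cholmod.h found:", (include_dir / "cholmod.h").is_file())
print("library dir exists:", library_dir.is_dir())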
Installation from source
With the SuiteSparse paths correctly specified, simply run
pip install .
in the root directory of this repository.
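As a quick, optional smoke test of the installation (not an official step), you can check that the package and the CHOLMOD bindings it relies on import cleanly:
# Both imports should succeed if SuiteSparse was found during the build.
import sksparse.cholmod  # CHOLMOD bindings provided by scikit-sparse
import sansa

print(sansa.SANSA)  # the model class exported by the package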
Usage
Configuration
The SANSA model supports two methods for factorizing the Gramian matrix $X^TX$ and one method for inverting the lower triangular factor. Factorizers and inverters are configured separately, and their configurations are included in the model configuration:
from sansa import SANSAConfig
config = SANSAConfig(
    l2=20.0,  # regularization strength
    weight_matrix_density=5e-5,  # desired density of the weights
    gramian_factorizer_config=factorizer_config,  # factorizer configuration
    lower_triangle_inverter_config=inverter_config,  # inverter configuration
)
To get the configuration of a model instance, use the config property:
config = model.config
Factorizer configuration
Choose between two factorization techniques:
- CHOLMOD = exact Cholesky factorization, sparsified after the factorization is computed. More accurate but memory-hungry; recommended for smaller, denser matrices.
from sansa import CHOLMODGramianFactorizerConfig
factorizer_config = CHOLMODGramianFactorizerConfig() # no hyperparameters
- ICF = Incomplete Cholesky factorization. Less accurate but much more memory-efficient; recommended for very large, sparse matrices.
from sansa import ICFGramianFactorizerConfig
factorizer_config = ICFGramianFactorizerConfig(
    factorization_shift_step=1e-3,  # initial diagonal shift if the incomplete factorization fails
    factorization_shift_multiplier=2.0,  # multiplier for the shift in subsequent attempts
)
Inverter configuration
Currently, only one inverter is available: UMR (a residual-minimization approach).
from sansa import UMRUnitLowerTriangleInverterConfig
inverter_config = UMRUnitLowerTriangleInverterConfig(
    scans=1,  # number of scans through all columns of the matrix
    finetune_steps=5,  # number of finetuning steps, targeting the worst columns
)
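Putting the two parts together, a complete configuration could look as follows (the hyperparameter values are the same illustrative ones used in the snippets above, not tuned recommendations):
from sansa import (
    ICFGramianFactorizerConfig,
    SANSAConfig,
    UMRUnitLowerTriangleInverterConfig,
)

factorizer_config = ICFGramianFactorizerConfig(
    factorization_shift_step=1e-3,
    factorization_shift_multiplier=2.0,
)
inverter_config = UMRUnitLowerTriangleInverterConfig(scans=1, finetune_steps=5)

config = SANSAConfig(
    l2=20.0,  # regularization strength
    weight_matrix_density=5e-5,  # desired density of the weights
    gramian_factorizer_config=factorizer_config,
    lower_triangle_inverter_config=inverter_config,
)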
Training
from sansa import SANSA
X = ... # training data -- scipy.sparse.csr_matrix (rows=users, columns=items)
config = ... # specify configuration of SANSA model
# Instantiate model with the config
model = SANSA(config)
# Train model on the user-item matrix
model.fit(X)
# or on a precomputed symmetric item-item matrix
model.fit(X, compute_gramian=False)
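If it helps to see the expected input format, here is a hypothetical toy example of assembling the user-item matrix from raw (user index, item index) interaction pairs; the tiny shape and the IDs are illustrative only:
import numpy as np
import scipy.sparse as sp

# Hypothetical implicit-feedback interactions as (user index, item index) pairs.
user_ids = np.array([0, 0, 1, 2, 2, 2])
item_ids = np.array([0, 3, 1, 0, 2, 3])
values = np.ones(len(user_ids), dtype=np.float32)

# rows = users, columns = items, as expected by model.fit(X)
X = sp.csr_matrix((values, (user_ids, item_ids)), shape=(3, 4))
print(X.shape, X.nnz)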
Weights of a SANSA model can be accessed using the weights attribute:
w1, w2 = model.weights # tuple of scipy.sparse.csr_matrix of shape (num_items, num_items)
Weights can be loaded into a model using the load_weights method:
w1, w2 = ... # tuple of scipy.sparse.csr_matrix of shape (num_items, num_items)
model.load_weights((w1, w2))
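As a small sketch of how config, weights and load_weights fit together (using only the API shown above), a trained model can be cloned without retraining:
from sansa import SANSA

# `model` is assumed to be a trained SANSA instance.
w1, w2 = model.weights        # trained weight matrices
clone = SANSA(model.config)   # new instance with the same hyperparameters
clone.load_weights((w1, w2))  # `clone` is now ready for inference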
Inference
1. High-level inference: recommendation for a batch of users
X = ... # input interactions -- scipy.sparse.csr_matrix (rows=users, columns=items)
# Get indices of top-k items for each user + corresponding scores
# if mask_input=True, input items get score=0
top_k_indices, top_k_scores = model.recommend(X, k=10, mask_input=True) # np.ndarrays of shape (X.shape[0], k)
2. Low-level inference: forward pass
X = ... # input interactions -- scipy.sparse.csr_matrix (rows=users, columns=items)
# Forward pass
scores = model.forward(X) # scipy.sparse.csr_matrix of shape X.shape
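For completeness, one possible post-processing step (a sketch; item_id_lookup is an assumed, user-provided mapping, not part of the package): the returned arrays contain item column indices, which you typically map back to your catalog's item identifiers:
import numpy as np

# Hypothetical mapping from column index to catalog item ID.
item_id_lookup = np.array(["item_a", "item_b", "item_c", "item_d"])

top_k_indices, top_k_scores = model.recommend(X, k=2, mask_input=True)
recommended_ids = item_id_lookup[top_k_indices]  # shape (num_users, k)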
License
Copyright 2023 Inspigroup s.r.o.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://github.com/glami/sansa/blob/main/LICENSE
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Cite us
Please consider citing our paper:
@inproceedings{10.1145/3604915.3608827,
author = {Spi\v{s}\'{a}k, Martin and Bartyzal, Radek and Hoskovec, Anton\'{\i}n and Peska, Ladislav and T\r{u}ma, Miroslav},
title = {Scalable Approximate NonSymmetric Autoencoder for Collaborative Filtering},
year = {2023},
isbn = {9798400702419},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3604915.3608827},
doi = {10.1145/3604915.3608827},
abstract = {In the field of recommender systems, shallow autoencoders have recently gained significant attention. One of the most highly acclaimed shallow autoencoders is easer, favored for its competitive recommendation accuracy and simultaneous simplicity. However, the poor scalability of easer (both in time and especially in memory) severely restricts its use in production environments with vast item sets. In this paper, we propose a hyperefficient factorization technique for sparse approximate inversion of the data-Gram matrix used in easer. The resulting autoencoder, sansa, is an end-to-end sparse solution with prescribable density and almost arbitrarily low memory requirements — even for training. As such, sansa allows us to effortlessly scale the concept of easer to millions of items and beyond.},
booktitle = {Proceedings of the 17th ACM Conference on Recommender Systems},
pages = {763–770},
numpages = {8},
keywords = {Algorithm scalability, Numerical approximation, Sparse approximate inverse, Sparse autoencoders},
location = {Singapore, Singapore},
series = {RecSys '23}
}
File details
Details for the file sansa-1.1.0.tar.gz.
File metadata
- Download URL: sansa-1.1.0.tar.gz
- Upload date:
- Size: 31.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: python-httpx/0.27.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | 1d3dd8ac9d762fcc5246ea33c20ed2887474becb9bf10c3e41350f30db457423
MD5 | f0343c0c80e52fa4f0cab89ad6f4adcc
BLAKE2b-256 | faf9e78d3c3e2211ba327086b43c6ad308d044d8bade5f3e96ba31fdd89fae7a
File details
Details for the file sansa-1.1.0-py3-none-any.whl.
File metadata
- Download URL: sansa-1.1.0-py3-none-any.whl
- Upload date:
- Size: 30.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: python-httpx/0.27.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | 52edc1534f0b6c1f25b118212d16f7d5f14a2c50bbb489fc0ecf7c03bb7bb993
MD5 | ac4a0694398442e1b0460934be733654
BLAKE2b-256 | e46492c8bb11aaa4489e077540cdb5cf8b70936008682412c703c71d9e656ee8