A configurable, tunable, and reproducible library for CTR prediction
Click-through rate (CTR) prediction is a critical task in many industrial applications such as online advertising, recommender systems, and sponsored search. FuxiCTR provides an open-source library for CTR prediction, with key features in configurability, tunability, and reproducibility. We hope this project benefits both researchers and practitioners, with the goal of open benchmarking for CTR prediction.
- **Configurable**: Both data preprocessing and models are modularized and configurable.
- **Tunable**: Models can be tuned automatically through simple configurations.
- **Reproducible**: All the benchmarks can be easily reproduced.
- **Extensible**: Both PyTorch and TensorFlow models are supported, and new models can be added easily.
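For a feel of what "tuned through simple configurations" means in practice: automated tuning is typically driven by a config that lists candidate values for each hyper-parameter, and the tuner sweeps their combinations. The grid below is a made-up illustration of that idea, not a verbatim FuxiCTR tuner file; consult the tuner configs shipped with the library for the exact schema.

```yaml
# Hypothetical tuning grid, for illustration only.
tuner_space:
    learning_rate: [1.0e-3, 5.0e-4]
    embedding_dim: [16, 32]
    net_dropout: [0, 0.1, 0.2]
```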
- :point_right: See reusable dataset splits for CTR prediction.
- :point_right: See benchmarking configurations and steps.
- :point_right: See the BARS benchmark leaderboard.
FuxiCTR has the following dependencies:

- python 3.6+
- pytorch 1.10+ (required only for torch models)
- tensorflow 2.1+ (required only for tf models)

Other packages can be installed via:

```bash
pip install -r requirements.txt
```
Run the demo examples
Examples are provided in the demo directory to show basic usage of FuxiCTR. Users can run these examples for a quick start and to understand the workflow.
```bash
cd demo
python example1_build_dataset_to_h5.py
python example2_DeepFM_with_h5_input.py
```
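Conceptually, the dataset-building step maps raw categorical feature values to integer ids before packing them into h5 arrays. The snippet below is a minimal, framework-agnostic sketch of that encoding step; `build_vocab` and `encode` are hypothetical helpers written for illustration, not FuxiCTR's actual API.

```python
# Toy illustration of categorical feature encoding for CTR data.
# `build_vocab` and `encode` are hypothetical helpers, not FuxiCTR APIs.

def build_vocab(values, min_count=1):
    """Map each categorical value to an integer id; 0 is reserved for OOV."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    vocab = {"__OOV__": 0}
    for v, c in sorted(counts.items()):
        if c >= min_count:
            vocab[v] = len(vocab)
    return vocab

def encode(values, vocab):
    """Replace raw values with their ids, falling back to the OOV id 0."""
    return [vocab.get(v, 0) for v in values]

ad_categories = ["sports", "news", "sports", "games"]
vocab = build_vocab(ad_categories)
ids = encode(ad_categories + ["unseen"], vocab)  # unseen value maps to 0
```

Real preprocessing additionally handles numeric features, frequency thresholds, and serialization, but the id-mapping idea is the core of it.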
Run an existing model
Users can easily run any model in the model zoo following the commands below, which demonstrate running DCN. Users can also modify the dataset config and model config files to run on their own datasets or with new hyper-parameters. More details can be found in the readme file.
```bash
cd model_zoo/DCN/DCN_torch
python run_expid.py --expid DCN_test --gpu 0

# Change `MODEL` according to the target model name
cd model_zoo/MODEL_PATH
python run_expid.py --expid MODEL_test --gpu 0
```
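The dataset and model configs referenced above are plain YAML files keyed by experiment id. The snippet below is a hypothetical illustration of the general shape such a config takes; the keys and values here are invented for illustration, so consult the config files shipped with each model for the exact schema.

```yaml
# Hypothetical model config, for illustration only.
DCN_test:
    model: DCN
    dataset_id: tiny_h5
    learning_rate: 1.0e-3
    embedding_dim: 16
    dnn_hidden_units: [64, 32]
    epochs: 10
    batch_size: 1024
```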
Implement a new model
The FuxiCTR code structure is modularized, so every part can be overwritten by users according to their needs. In many cases, only the model class needs to be implemented for a new customized model. If the data preprocessing or data loader is not directly applicable, one can also overwrite it through the core APIs. We show a concrete example implementing our new model FinalMLP, recently published at AAAI 2023. More examples can be found in the model zoo.
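To give a feel for what "only the model class" means, here is a framework-agnostic toy: a class that embeds each feature field, sums pairwise dot-product interactions (the factorization-machine idea), and squashes the score through a sigmoid. It deliberately avoids FuxiCTR's real base-class API; the name `SimpleCTRModel` and its methods are invented for illustration only.

```python
import math
import random

class SimpleCTRModel:
    """Toy CTR model: per-field embeddings + pairwise dot-product interactions.
    Invented for illustration; NOT FuxiCTR's actual BaseModel API."""

    def __init__(self, field_vocab_sizes, embedding_dim=4, seed=0):
        rng = random.Random(seed)
        # One randomly initialized embedding table per feature field.
        self.tables = [
            [[rng.gauss(0, 0.1) for _ in range(embedding_dim)]
             for _ in range(size)]
            for size in field_vocab_sizes
        ]

    def forward(self, feature_ids):
        """feature_ids: one integer id per field. Returns P(click)."""
        embs = [table[i] for table, i in zip(self.tables, feature_ids)]
        # Sum dot products over all pairs of field embeddings.
        score = 0.0
        for a in range(len(embs)):
            for b in range(a + 1, len(embs)):
                score += sum(x * y for x, y in zip(embs[a], embs[b]))
        return 1.0 / (1.0 + math.exp(-score))

model = SimpleCTRModel(field_vocab_sizes=[10, 5, 3])
p = model.forward([2, 1, 0])  # a probability strictly between 0 and 1
```

A real FuxiCTR model would instead subclass the library's base model and reuse its shared embedding, training, and evaluation machinery, replacing only the interaction logic sketched here.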
:bell: If you find our code or benchmarks helpful in your research, please kindly cite the following papers.
Jieming Zhu, Jinyang Liu, Shuai Yang, Qi Zhang, Xiuqiang He. Open Benchmarking for Click-Through Rate Prediction. The 30th ACM International Conference on Information and Knowledge Management (CIKM), 2021. [Bibtex]
Jieming Zhu, Quanyu Dai, Liangcai Su, Rong Ma, Jinyang Liu, Guohao Cai, Xi Xiao, Rui Zhang. BARS: Towards Open Benchmarking for Recommender Systems. The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2022. [Bibtex]
Welcome to join our WeChat group for questions and discussion. We also have open positions for internships and full-time jobs. If you are interested in research and practice on recommender systems, please reach out via our WeChat group.