Contribute to BARS

Datasets

Recommender systems are a practice-oriented research topic. Real-world recommendation systems are often trained on tens of millions of samples and serve millions of users and items. However, many popular academic datasets (e.g., MovieLens-1M) contain far fewer training samples, which creates a discrepancy between research and practice. We therefore call for the open release of more industrial-scale datasets to narrow this gap. Specifically, a submitted dataset should be well preprocessed and split for evaluation. It should also come with a description file covering basic information such as the data source, statistics, important features, and suggested evaluation metrics.
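For illustration only, here is a minimal Python sketch of such a preprocessing step: a deterministic shuffle-and-split plus the basic statistics that would go into the description file. The file names, column names, and the 8:1:1 split ratio are hypothetical assumptions, not BARS requirements.

```python
# A minimal sketch of a reproducible dataset split (hypothetical file and
# column names; adapt to your own data).
import pandas as pd

SEED = 2021  # fixed seed so the split can be reproduced exactly

df = pd.read_csv("ratings.csv")  # hypothetical raw interaction file

# Shuffle once with a fixed seed, then split 8:1:1 into train/valid/test.
df = df.sample(frac=1.0, random_state=SEED).reset_index(drop=True)
n = len(df)
train = df[: int(0.8 * n)]
valid = df[int(0.8 * n) : int(0.9 * n)]
test = df[int(0.9 * n) :]

for name, part in [("train", train), ("valid", valid), ("test", test)]:
    part.to_csv(f"{name}.csv", index=False)

# Basic statistics to include in the description file.
print(f"#users={df.user_id.nunique()}, "
      f"#items={df.item_id.nunique()}, #interactions={n}")
```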

New Models and Results

Recommender systems are a flourishing area in both academia and industry, with effective new models emerging quickly. Keeping track of the latest models over a long period of time is an arduous task, so we would love for you to contribute the complete code of your model to advance benchmarking in this area. In addition to the model code itself, please submit its performance results on at least one public dataset, together with a detailed script for easy reproduction. The script should at least specify the running environment, the parameter settings (with tuning ranges if available), the run commands, and the training log. See xue-pai/Open-Match-Benchmark for a reference script; a rough sketch is also given below.
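As an illustration only (not the official BARS template), the following Python sketch shows one way a reproduction script might record the environment, the parameter settings with their tuning ranges, the command, and a training log. The training loop itself is elided, and all names are hypothetical placeholders for your own code.

```python
# A minimal sketch of a reproducible run script: it fixes seeds and logs
# the environment, command, and parameters before training starts.
import json
import logging
import platform
import random
import sys

import numpy as np

logging.basicConfig(filename="training.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

params = {
    "embedding_dim": 64,    # tuned over {16, 32, 64}
    "learning_rate": 1e-3,  # tuned over {1e-2, 1e-3, 1e-4}
    "batch_size": 1024,
    "seed": 2021,
}

# Fix seeds so the run can be reproduced.
random.seed(params["seed"])
np.random.seed(params["seed"])

# Record the running environment, the exact command, and the settings.
logging.info("python=%s platform=%s",
             sys.version.split()[0], platform.platform())
logging.info("command=%s", " ".join(sys.argv))
logging.info("params=%s", json.dumps(params))

# ... training loop goes here; log per-epoch metrics, e.g.:
# logging.info("epoch=%d loss=%.4f Recall@20=%.4f", epoch, loss, recall)
```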

We will carefully double-check your generous submissions and format them for this open benchmark. We will also continuously test different models on different datasets to supplement the benchmark results. Building and maintaining a comprehensive open benchmark is challenging but worthwhile work that needs support from the whole community. We appreciate and respect every contribution, and your name will be credited on your contribution in the benchmark.

Looking forward to your support!