Openness is key to fostering progress in science. BARS is a project aimed at open BenchmArking for Recommender Systems, enabling better reproducibility and replicability of quantitative studies. The ultimate goal of BARS is to drive more reproducible research in the development of recommender systems. In summary, BARS is built with the following key features:

  • Open datasets: BARS collects a set of widely-used public datasets for recommendation research, and assigns a unique dataset ID to each specific data partition of a dataset. This allows researchers to share and experiment with the datasets in a uniform way.
  • Open-source code: BARS follows open-source principles and provides a list of open-source model implementations for recommendation research.
  • Benchmarking pipeline: BARS builds an open benchmarking pipeline to ensure transparency and availability of all artifacts produced at each investigative step.
  • Comprehensive results: BARS provides the most comprehensive benchmarking results to date, covering dozens of SOTA models and dozens of dataset partitions. These results can be easily reused in future research.
  • Reproducing steps: The core of BARS is to ensure reproducibility of each benchmarking result through a detailed record of the reproducing steps, following the open benchmarking pipeline.
  • Editable by anyone: BARS is open to the community. Anyone can contribute new datasets, new models, or new benchmarking results through a pull request on GitHub. Contributors are credited in the benchmark accordingly.
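To illustrate how a unique dataset ID can pin down a specific data partition, here is a minimal sketch in Python. The hashing scheme, the `dataset_id` helper, and the `criteo_x1` prefix are illustrative assumptions, not the actual BARS implementation; the point is that the ID is derived from the split files themselves, so any change to the partition yields a different ID.

```python
import hashlib
from pathlib import Path

def dataset_id(partition_files, prefix="criteo_x1"):
    """Derive a unique dataset ID by hashing the partition files.

    NOTE: hypothetical helper for illustration; BARS's actual ID
    scheme may differ. The ID changes whenever any split file
    changes, so a result tagged with this ID always refers to the
    exact same train/valid/test data.
    """
    digest = hashlib.md5()
    # Sort paths so the ID is independent of file ordering.
    for path in sorted(str(p) for p in partition_files):
        digest.update(Path(path).read_bytes())
    return f"{prefix}_{digest.hexdigest()[:8]}"
```

Anyone who downloads the same split files can recompute the ID and verify that their copy matches the one the benchmark results were produced on.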

By setting up an open benchmarking standard, together with freely available datasets, source code, and reproducing steps, we hope that the BARS project will benefit all researchers, practitioners, and educators in the community.

Figure: Open benchmarking pipeline (covering Open-CTR-Benchmark and Open-Match-Benchmark)