Berlin SPARQL Benchmark (BSBM) Specification - V3.1

Authors:
Chris Bizer (Web-based Systems Group, Freie Universität Berlin, Germany)
Andreas Schultz (Institut für Informatik, Freie Universität Berlin, Germany)
 
This version:
http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark/spec/20110607/
Latest version:
http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark/spec/
Publication Date: 2011-06-07

Abstract

This document gives an overview of the Berlin SPARQL Benchmark (BSBM) for measuring the performance of storage systems that expose SPARQL endpoints. The benchmark suite is built around an e-commerce use case, where a set of products is offered by different vendors and different consumers have posted reviews about products. The BSBM benchmark suite defines three use case motivated query mixes which focus on different aspects of the SPARQL query language.

Table of Contents

  1. Introduction
  2. Benchmark Dataset
  3. Benchmark Use Cases
    3.1 Explore Use Case
    3.2 Explore and Update Use Case
    3.3 Business Intelligence Use Case
  4. Benchmark Rules
    4.1 Benchmark Metrics
    4.2 Rules for Running the Benchmark and Reporting Results
    4.3 Reporting Results
    4.4 Data Generator and Test Driver
  5. References
  Appendix A: Changes
  Appendix B: Acknowledgements

1. Introduction

The SPARQL Query Language for RDF, SPARQL Update and the SPARQL Protocol for RDF are implemented by a growing number of storage systems and are used within enterprise and open web settings. As SPARQL is taken up by the community there is a growing need for benchmarks to compare the performance of storage systems that expose SPARQL endpoints via the SPARQL protocol. Such systems include native RDF stores, Named Graph stores, systems that map relational databases into RDF, and SPARQL wrappers around other kinds of data sources.

The Berlin SPARQL Benchmark (BSBM) defines a suite of benchmarks for comparing the performance of these systems across architectures. The benchmark is built around an e-commerce use case in which a set of products is offered by different vendors and consumers have posted reviews about products.

The Berlin SPARQL Benchmark (BSBM) consists of a scalable data generator, three use case motivated query mixes, and a test driver that executes the query mixes against a SPARQL endpoint and measures query performance.

The Berlin SPARQL Benchmark was designed along three goals: First, the benchmark should allow the comparison of different storage systems that expose SPARQL endpoints across architectures. Second, testing storage systems with realistic workloads of use case motivated queries is a well established benchmarking technique in the database field, implemented for instance by the TPC benchmarks; the Berlin SPARQL Benchmark should apply this technique to systems that expose SPARQL endpoints. Third, as an increasing number of Semantic Web applications do not rely on heavyweight reasoning but focus on the integration and visualization of large amounts of data from autonomous data sources on the Web, the Berlin SPARQL Benchmark should not require complex reasoning but should measure the performance of queries against large amounts of RDF data.

This document defines Version 3.1 of the BSBM Benchmark. In the step from Version 3 to 3.1 the Business Intelligence use case has been refined. Compared to BSBM Version 2, which was released in September 2008, Version 3.1 is split into three use case scenarios:

  1. The Explore Use Case: A read-only scenario that simulates the search and navigation pattern of a consumer looking for a product.
  2. The Explore and Update Use Case: Simulates a read/write scenario in which SPARQL 1.1 Update queries are executed in addition to the queries from the Explore use case.
  3. The Business Intelligence Use Case: Simulates different stakeholders asking analytical questions against the dataset. The queries rely on SPARQL 1.1 features such as grouping and aggregation.

The rest of this document is structured as follows: Section 2 defines the schema of the benchmark dataset and describes the rules that are used by the data generator for populating the dataset according to the chosen scale factor. Section 3 defines the different use case scenarios that can be tested individually or in combination. Each use case is defined in its own document. Section 4 describes how to carry out proper test runs and how to report benchmark results.

2. Benchmark Dataset

All three scenarios use the same benchmark dataset. The dataset is built around an e-commerce use case, where a set of products is offered by different vendors and different consumers have posted reviews about products. The content and the production rules for the dataset are described in the BSBM Dataset Specification.
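
To give an impression of the shape of the data, the following SPARQL query sketches the core relationships: offers connect vendors to products, and reviews refer to products. The property names used here (bsbm:product, bsbm:vendor, bsbm:reviewFor) are illustrative assumptions; the authoritative vocabulary is defined in the BSBM Dataset Specification.

  PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>

  # Sketch: list products together with the vendors offering them
  # and the reviews posted about them (property names are assumptions).
  SELECT ?product ?vendor ?review
  WHERE {
    ?offer  bsbm:product   ?product ;
            bsbm:vendor    ?vendor .
    ?review bsbm:reviewFor ?product .
  }
  LIMIT 10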

3. Benchmark Use Cases

This section defines a suite of benchmark use cases, each with a different focus and thus a different query mix. Currently, three use cases have been devised:

3.1 Explore Use Case

The benchmark query mix of the Explore use case illustrates the search and navigation pattern of a consumer looking for a product. The query mix consists of 12 distinct queries that comply with the SPARQL 1.0 standard and use different features of the query language. The query mix equals the query mix from BSBM Version 2 without Query 6 (regex).
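
As an illustration of the kind of query in this mix, the following sketch retrieves products of a given type that carry a given feature, ordered by label. The %...% tokens stand for parameters that the test driver substitutes with concrete values; the property names are illustrative assumptions, and the exact queries are defined in the Explore use case document.

  PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
  PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>

  # Sketch in the spirit of the Explore queries: products of a given type
  # with a given feature and a numeric property above a threshold.
  SELECT DISTINCT ?product ?label
  WHERE {
    ?product rdfs:label ?label .
    ?product rdf:type %ProductType% .
    ?product bsbm:productFeature %ProductFeature1% .
    ?product bsbm:productPropertyNumeric1 ?value1 .
    FILTER (?value1 > %x%)
  }
  ORDER BY ?label
  LIMIT 10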

3.2 Explore and Update Use Case

The query mix of the Explore and Update use case consists of the query mix from the Explore use case, which illustrates the search and navigation pattern of a consumer looking for a product. In addition, the query mix incorporates update queries that simulate changes to the dataset: adding new product information, reviews and offers, and deleting outdated offers. Updates are expressed as SPARQL 1.1 Update queries.
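
The following SPARQL 1.1 Update request sketches the two kinds of changes in this mix: inserting a new offer and deleting an outdated one. The ex: resources and the chosen properties are hypothetical and for illustration only; the actual update operations are defined in the Explore and Update use case document.

  PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>
  PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>
  PREFIX ex:   <http://example.org/>

  # Sketch: add a new offer for an existing product (illustrative triples).
  INSERT DATA {
    ex:Offer42 bsbm:product ex:Product7 ;
               bsbm:vendor  ex:Vendor3 ;
               bsbm:price   "199.99"^^xsd:double .
  } ;
  # Sketch: remove all triples describing an outdated offer.
  DELETE WHERE {
    ex:Offer17 ?p ?o .
  }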

3.3 Business Intelligence Use Case

The query mix of the Business Intelligence use case simulates different stakeholders asking analytical questions against the dataset. It consists of 8 distinct queries representing such analytical questions. The queries conform to the SPARQL 1.1 Working Draft, which is already implemented by several stores.
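
As an illustration of the analytical style of this mix, the following sketch uses SPARQL 1.1 grouping and aggregation to compute the average rating per product for products with at least ten reviews. The property names are illustrative assumptions; the actual queries are defined in the Business Intelligence use case document.

  PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>

  # Sketch: average review rating per product, restricted to products
  # with at least ten reviews (property names are assumptions).
  SELECT ?product (AVG(?rating) AS ?avgRating) (COUNT(?review) AS ?reviewCount)
  WHERE {
    ?review bsbm:reviewFor ?product ;
            bsbm:rating1   ?rating .
  }
  GROUP BY ?product
  HAVING (COUNT(?review) >= 10)
  ORDER BY DESC(?avgRating)
  LIMIT 10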

4. Benchmark Rules

All the information about running the BSBM benchmark, such as carrying out test runs and reporting results, is described in its own document. Here we give a short overview.

4.1 Benchmark Metrics

Benchmark metrics are the units used to represent benchmark results of a test run against a System Under Test (SUT). These metrics are defined in the Performance Metrics section.
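
For example, assuming the central throughput metric is the number of query mixes per hour (QMpH), a test run in which the test driver completes 128 query mixes in 460.8 seconds of total runtime would be reported as 128 / 460.8 × 3600 = 1000 QMpH. The exact definitions, including per-query metrics, are given in the Performance Metrics section.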

4.2 Rules for Running the Benchmark and Reporting Results

The rules for carrying out benchmark test runs and for reporting the results are described here.

4.3 Reporting Results

In order to compare benchmark results generated by different parties, the results have to be published together with accompanying information about the test run. How to do proper reporting is described in the reporting section.

4.4 Data Generator and Test Driver

The section about the data generator and the test driver describes how to pick the right options for dataset generation and how to run the test driver.

5. References

For more information about RDF and SPARQL Benchmarks please refer to:

ESW Wiki Page about RDF Benchmarks

Other SPARQL Benchmarks

Papers about RDF and SPARQL Benchmarks

Appendix A: Changes

Appendix B: Acknowledgements

The work on the BSBM Benchmark Version 3 is funded through the LOD2 project.