IBM/spark-tpc-ds-performance-test

Name: spark-tpc-ds-performance-test

Owner: International Business Machines

Description: Use the TPC-DS benchmark to test Spark SQL performance

Created: 2017-09-12 19:54:03.0

Updated: 2018-01-18 02:04:43.0

Pushed: 2017-12-01 19:46:45.0

Homepage:

Size: 372324

Language: C

README

Explore Spark SQL and its performance using TPC-DS workload

Apache Spark is a popular distributed data processing engine that is built around speed, ease of use, and sophisticated analytics, with APIs in Java, Scala, Python, R, and SQL. Like other data processing engines, Spark has a unified optimization engine that computes the optimal way to execute a workload, with the main purpose of reducing disk I/O and CPU usage.

We can evaluate and measure the performance of Spark SQL using the TPC-DS benchmark. TPC-DS is a widely used, industry-standard decision support benchmark for evaluating the performance of data processing engines. Given that TPC-DS exercises some key data warehouse features, running TPC-DS successfully reflects the readiness of Spark to address the needs of a data warehouse application. Apache Spark v2.0 supports all ninety-nine decision support queries that are part of the TPC-DS benchmark.

This Code Pattern is aimed at helping Spark developers quickly setup and run the TPC-DS benchmark in their own development setup.

When the reader has completed this Code Pattern, they will understand the following:

Architecture diagram

Flow
Included components
Featured technologies

Steps

There are two modes of exercising this Code Pattern:

Run locally
  1. Clone the repository
  2. Setup development tools
  3. Install Spark
  4. Run the script
1. Clone the repository

Clone the spark-tpc-ds-performance-test repo locally. In a terminal, run:

git clone https://github.com/IBM/spark-tpc-ds-performance-test
2. Setup development tools

Make sure the required development tools are installed on your platform. This Code Pattern is supported on Mac and Linux platforms only. Depending on your platform, run the following command to install the necessary development tools:
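For example, on a Mac the Xcode command-line tools provide the needed compiler toolchain, while on a Debian/Ubuntu system a typical package set for building the TPC-DS toolkit (which needs a C compiler, make, flex, and yacc) would be something like the following; treat these as illustrative commands and adjust for your distribution:

# macOS: install the Xcode command-line tools (clang, make, etc.)
xcode-select --install

# Debian/Ubuntu: typical packages for building the TPC-DS toolkit (illustrative)
sudo apt-get install -y gcc make flex bison byacc git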

3. Install Spark

To successfully run the TPC-DS tests, Spark must be installed and pre-configured to work with an Apache Hive metastore.

Perform one or more of the following options to ensure that Spark is installed and configured correctly. Once completed, modify `bin/tpcdsenv.sh` to set `SPARK_HOME` to point to your Spark installation directory.
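As an illustration, a filled-in `bin/tpcdsenv.sh` might look something like the sketch below; the `SPARK_HOME` path is just an example, and the commented-out variables are the optional overrides described later in this README, which can usually be left at their defaults:

#!/bin/bash
# bin/tpcdsenv.sh - environment used by the TPC-DS driver script (illustrative values)

# Required: location of your Spark installation
export SPARK_HOME=/usr/local/spark-2.2.0-bin-hadoop2.7

# Optional overrides (defaults are the names and directories described below)
# export TPCDS_DBNAME=TPCDS
# export TPCDS_GENDATA_DIR=gendata
# export TPCDS_GEN_QUERIES_DIR=genqueries
# export TPCDS_WORK_DIR=work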

Option 1 - If you already have Spark installed, complete the following steps to ensure your Spark version is properly configured:

cd $SPARK_HOME
bin/spark-shell

Enter the following command at the scala prompt:

scala> spark.conf.get("spark.sql.catalogImplementation")
res5: String = hive
scala> <ctrl-c>

Note: You must exit out of the spark-shell process or you will encounter errors when performing the TPC-DS tests.

If the prompt returns String = hive, then your installation is properly configured.

Option 2 - If you don't have an installed Spark version, or your current installation is not properly configured, we suggest trying to pull down version 2.2.0 from the Spark downloads page. This version should be configured to work with Apache Hive, but please run the test in the previous option to make sure.

Option 3 - The last option is to download the Spark source and build it yourself. The first step is to clone the Spark repo:

git clone https://github.com/apache/spark.git

Then build it using these instructions. Please make sure to build Spark with Hive support by following the Building With Hive and JDBC Support section.
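For reference, a Maven build of Spark 2.x with Hive support typically looks something like the command below (taken from the general Spark build documentation rather than from this repository):

# Build Spark with Hive and the JDBC/Thrift server enabled, skipping tests
./build/mvn -Pyarn -Phive -Phive-thriftserver -DskipTests clean package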

4. Run the script

Note: Verify that the bin/tpcdsenv.sh script has SPARK_HOME set correctly.

Now that we have Spark set up and the TPC-DS scripts downloaded, we are ready to set up and start running the TPC-DS queries using the bin/tpcdsspark.sh utility script. This driver script will allow you to compile the TPC-DS toolkit to produce the data and the queries, and then run them to collect results.

Perform the following steps to complete the execution of the script:

cd spark-tpc-ds-performance-test
bin/tpcdsspark.sh

==========================================
TPC-DS On Spark Menu
------------------------------------------
SETUP
 (1) Compile TPC-DS toolkit
 (2) Generate TPC-DS data with 1GB scale
 (3) Create spark tables
 (4) Generate TPC-DS queries
RUN
 (5) Run a subset of TPC-DS queries
 (6) Run All (99) TPC-DS Queries
CLEANUP
 (7) Cleanup toolkit
 (Q) Quit
------------------------------------------
Please enter your choice followed by [ENTER]:
Setup Option: “(1) - Compile TPC-DS toolkit”

The most recent toolkit can be downloaded from http://www.tpc.org/tpcds/. To make it easier for users, a toolkit based on v2.4 is included locally in src/toolkit. If you download a newer toolkit from the official TPC-DS site, make sure to overlay the code in src/toolkit before proceeding with this option.

This option compiles the toolkit to produce the data generation (dsdgen) and query generation (dsqgen) binaries.
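Under the hood this step is roughly equivalent to building the toolkit sources yourself; a manual compile might look like the sketch below (the tools subdirectory and the OS value are illustrative, and the script itself uses OS=MACOS on a Mac, as shown in the screenshot below):

# Compile dsdgen and dsqgen from the bundled toolkit sources (illustrative)
cd src/toolkit/tools
make OS=LINUX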

Below is the screenshot when this option is chosen.

==========================================
TPC-DS On Spark Menu
------------------------------------------
SETUP
 (1) Compile TPC-DS toolkit
 (2) Generate TPC-DS data with 1GB scale
 (3) Create spark tables
 (4) Generate TPC-DS queries
RUN
 (5) Run a subset of TPC-DS queries
 (6) Run All (99) TPC-DS Queries
CLEANUP
 (7) Cleanup toolkit
 (Q) Quit
------------------------------------------
Please enter your choice followed by [ENTER]: 1
------------------------------------------

INFO: Starting to compile..
INFO: make OS=MACOS
INFO: Completed building toolkit successfully..
Press any key to continue
Setup Option: “(2) - Generate TPC-DS data with 1GB scale”

This option uses the data generation binary produced in the previous step to generate the test data at a 1GB scale factor. The data is generated in the directory TPCDS_GENDATA_DIR. The default location of TPCDS_GENDATA_DIR is the local directory gendata. This can be changed by modifying the script bin/tpcdsenv.sh.

Technically, this option can be used to generate data at a different scale. However, since this Code Pattern is targeted at a developer environment, the scale has been fixed at 1GB. To modify the script to generate data at a different scale factor, see the discussion in the scaling up to 100TB section below.
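For reference, the scale is ultimately just an argument passed to the dsdgen binary built in option (1); a manual invocation might look roughly like this (the flag casing and relative output path are illustrative):

# Generate data at a 1GB scale factor into the gendata directory (illustrative)
cd src/toolkit/tools
./dsdgen -SCALE 1 -DIR ../../../gendata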

Below is the screenshot when this option is chosen.

==========================================
TPC-DS On Spark Menu
------------------------------------------
SETUP
 (1) Compile TPC-DS toolkit
 (2) Generate TPC-DS data with 1GB scale
 (3) Create spark tables
 (4) Generate TPC-DS queries
RUN
 (5) Run a subset of TPC-DS queries
 (6) Run All (99) TPC-DS Queries
CLEANUP
 (7) Cleanup toolkit
 (Q) Quit
------------------------------------------
Please enter your choice followed by [ENTER]: 2
------------------------------------------

INFO: Starting to generate data. Will take a few minutes ...
INFO: Progress : [########################################] 100%
INFO: TPCDS data is generated successfully at spark-tpc-ds-performance-test/gendata
Press any key to continue
Setup Option: “(3) - Create Spark Tables”

After data generation has completed, this option creates the tables in the database whose name is specified by TPCDS_DBNAME, defined in bin/tpcdsenv.sh. The default name is TPCDS but can be changed if needed.

The SQL statements to create the tables can be found in src/ddl/create_tables.sql, and the tables are created in parquet format.
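Once this option completes, you can sanity-check the result directly from Spark; for example (assuming the default database name TPCDS):

# List the TPC-DS tables that were just created (assumes the default TPCDS database)
$SPARK_HOME/bin/spark-sql -e "SHOW TABLES IN TPCDS"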

Below is the screenshot when this option is chosen.

==========================================
TPC-DS On Spark Menu
------------------------------------------
SETUP
 (1) Compile TPC-DS toolkit
 (2) Generate TPC-DS data with 1GB scale
 (3) Create spark tables
 (4) Generate TPC-DS queries
RUN
 (5) Run a subset of TPC-DS queries
 (6) Run All (99) TPC-DS Queries
CLEANUP
 (7) Cleanup toolkit
 (Q) Quit
------------------------------------------
Please enter your choice followed by [ENTER]: 3
------------------------------------------

INFO: Creating tables. Will take a few minutes ...
INFO: Progress : [########################################] 100%
INFO: Spark tables created successfully..
Press any key to continue
Setup Option: “(4) - Generate TPC-DS queries”

This option uses the query generation binary (dsqgen) produced in “option (1)” to generate the 99 TPC-DS queries. The queries are generated in TPCDS_GEN_QUERIES_DIR, with a default location of genqueries. This can be changed by modifying the `bin/tpcdsenv.sh` script.
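For reference, dsqgen expands the query templates that ship with the toolkit into runnable SQL; a manual invocation might look roughly like the following (the template paths, dialect, and flag casing are illustrative):

# Expand the 99 query templates into SQL at a 1GB scale factor (illustrative)
cd src/toolkit/tools
./dsqgen -DIRECTORY ../query_templates -INPUT ../query_templates/templates.lst \
         -SCALE 1 -DIALECT ansi -OUTPUT_DIR ../../../genqueries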

Below is the screenshot when this option is chosen.

==========================================
TPC-DS On Spark Menu
------------------------------------------
SETUP
 (1) Compile TPC-DS toolkit
 (2) Generate TPC-DS data with 1GB scale
 (3) Create spark tables
 (4) Generate TPC-DS queries
RUN
 (5) Run a subset of TPC-DS queries
 (6) Run All (99) TPC-DS Queries
CLEANUP
 (7) Cleanup toolkit
 (Q) Quit
------------------------------------------
Please enter your choice followed by [ENTER]: 4
------------------------------------------

INFO: Generating TPC-DS qualification queries.
INFO: Completed generating TPC-DS qualification queries.
Press any key to continue
Run Option: “(5) - Run a subset of TPC-DS queries”

A comma-separated list of queries can be specified in this option. The result of each query in the supplied list is written to TPCDS_WORK_DIR, with a default directory location of work. The result file for each query is named query<number>.res.

A summary file named run_summary.txt is also generated. It contains information about query number, execution time and number of rows returned.

Note: The query number is a two-digit number, so for query 1 the results will be in query01.res.

Note: If you are debugging and running queries using this option, make sure to save run_summary.txt after each of your runs.
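For example, after a run you might inspect and preserve the results like this (the paths assume the default work directory described above):

# Per-query execution time and row counts
cat work/run_summary.txt

# Rows returned by query 1
head work/query01.res

# Keep a copy of the summary before the next run overwrites it
cp work/run_summary.txt work/run_summary_previous.txt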

==========================================
TPC-DS On Spark Menu
------------------------------------------
SETUP
 (1) Compile TPC-DS toolkit
 (2) Generate TPC-DS data with 1GB scale
 (3) Create spark tables
 (4) Generate TPC-DS queries
RUN
 (5) Run a subset of TPC-DS queries
 (6) Run All (99) TPC-DS Queries
CLEANUP
 (7) Cleanup toolkit
 (Q) Quit
------------------------------------------
Please enter your choice followed by [ENTER]: 5
------------------------------------------

Enter a comma separated list of queries to run (ex: 1, 2), followed by [ENTER]:

INFO: Checking pre-reqs for running TPC-DS queries. May take a few seconds..
INFO: Checking pre-reqs for running TPC-DS queries is successful.
INFO: Running TPCDS queries. Will take a few minutes depending upon the number of queries specified..
INFO: Progress : [########################################] 100%
INFO: TPCDS queries ran successfully. Below are the result details
INFO: Individual result files: spark-tpc-ds-performance-test/work/query<number>.res
INFO: Summary file: spark-tpc-ds-performance-test/work/run_summary.txt
Press any key to continue
Run Option: “(6) - Run all (99) TPC-DS queries”

The only difference between this and option (5) is that all 99 TPC-DS queries are run instead of a subset.

Note: If you are running this on your laptop, it can take a few hours to run all 99 TPC-DS queries.

==========================================
TPC-DS On Spark Menu
------------------------------------------
SETUP
 (1) Compile TPC-DS toolkit
 (2) Generate TPC-DS data with 1GB scale
 (3) Create spark tables
 (4) Generate TPC-DS queries
RUN
 (5) Run a subset of TPC-DS queries
 (6) Run All (99) TPC-DS Queries
CLEANUP
 (7) Cleanup toolkit
 (Q) Quit
------------------------------------------
Please enter your choice followed by [ENTER]: 6
------------------------------------------
INFO: Checking pre-reqs for running TPC-DS queries. May take a few seconds..
INFO: Checking pre-reqs for running TPC-DS queries is successful.
INFO: Running TPCDS queries. Will take a few minutes depending upon the number of queries specified..
INFO: Progress : [########################################] 100%
INFO: TPCDS queries ran successfully. Below are the result details
INFO: Individual result files: spark-tpc-ds-performance-test/work/query<number>.res
INFO: Summary file: spark-tpc-ds-performance-test/work/run_summary.txt
Press any key to continue
Cleanup option: “(7) - Cleanup toolkit”

This will clean up all of the files generated during options 1, 2, 3, and 4. If you use this option, make sure to run the setup steps (1, 2, 3, 4) again before running queries using options 5 and 6.

Cleanup option: “(Q) - Quit”

This will exit the script.

Run using a Jupyter notebook in the IBM Data Science Experience
  1. Sign up for the Data Science Experience
  2. Create the notebook
  3. Run the notebook
  4. Save and Share
1. Sign up for the Data Science Experience

Sign up for IBM's Data Science Experience. When you sign up for the Data Science Experience, two services, DSX-Spark and DSX-ObjectStore, will be created in your IBM Cloud account. If these services do not exist, or if you are already using them for some other application, you will need to create new instances.

To create these services:

Note: When creating your Object Storage service, select the Swift storage type in order to avoid having to pay an upgrade fee.

Take note of your service names as you will need to select them in the following steps.

2. Create the notebook

First you must create a new Project:

Create the Notebook:

3. Run the notebook

When a notebook is executed, what is actually happening is that each code cell in the notebook is executed, in order, from top to bottom.

Each code cell is selectable and is preceded by a tag in the left margin. The tag format is In [x]:. Depending on the state of the notebook, the x can be:

There are several ways to execute the code cells in your notebook:

4. Save and Share
How to save your work:

Under the File menu, there are several ways to save your notebook:

How to share your work:

You can share your notebook by selecting the "Share" button located in the top right section of your notebook panel. The end result of this action will be a URL link that will display a "read-only" version of your notebook. You have several options to specify exactly what you want shared from your notebook:

Considerations while increasing the scale factor.

This Code Pattern walks us through the steps needed to run the TPC-DS benchmark at the qualification scale factor (1GB). Since this is a performance benchmark, we typically need to run the benchmark with varying scale factors to gauge the throughput of the underlying data processing engine. The section below briefly touches on things to consider when increasing the data size and running the workload against a production cluster.
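For example, at larger scale factors the raw data is usually produced in parallel chunks rather than in a single dsdgen run; the toolkit supports this directly, along the lines of the illustrative invocation below (the scale, chunk numbers, and output path are placeholders):

# Generate chunk 42 of 100 for a 10TB (scale factor 10000) data set (illustrative)
./dsdgen -SCALE 10000 -DIR /path/to/staging -PARALLEL 100 -CHILD 42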

Learn more

License

Apache 2.0

