efficient/cicada-exp-sigmod2017-silo

Name: cicada-exp-sigmod2017-silo

Owner: Efficient Computing at Carnegie Mellon

Description: A fork of Silo for Cicada SIGMOD 2017 evaluation

Created: 2016-08-22 07:30:57

Updated: 2017-10-28 15:39:49

Pushed: 2017-06-09 22:10:05

Size: 3548 KB

Language: C++

README

Silo

This project contains the prototype of the database system described in

Speedy Transactions in Multicore In-Memory Databases 
Stephen Tu, Wenting Zheng, Eddie Kohler, Barbara Liskov, Samuel Madden 
SOSP 2013. 
http://people.csail.mit.edu/stephentu/papers/silo.pdf

This code is a work in progress.

Build

There are several build options. MODE is the most important variable, governing the type of build; the default is MODE=perf (see the Makefile for more options). DEBUG=1 triggers a debug build (off by default), and CHECK_INVARIANTS=1 enables invariant checking. There are two targets: the default target, which builds the test suite, and dbtest, which builds the benchmark suite. Examples:

MODE=perf DEBUG=1 CHECK_INVARIANTS=1 make -j
MODE=perf make -j dbtest

Each combination of MODE, DEBUG, and CHECK_INVARIANTS builds into its own output directory; for example, the first command above builds to out-perf.debug.check.masstree.
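The naming convention appears to append one suffix per enabled option; the following mapping is inferred from the example above, not exhaustively verified:

MODE=perf make -j                              # builds to out-perf.masstree
MODE=perf DEBUG=1 make -j                      # builds to out-perf.debug.masstree
MODE=perf DEBUG=1 CHECK_INVARIANTS=1 make -j   # builds to out-perf.debug.check.masstree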

Silo now uses Masstree as the default index tree. To use the old tree, set MASSTREE=0.
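For example, to build the benchmark suite against the old tree:

MASSTREE=0 MODE=perf make -j dbtest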

Running

To run the tests, simply invoke <outdir>/test with no arguments. To run the benchmark suite, invoke <outdir>/benchmarks/dbtest. For now, look in benchmarks/dbtest.cc for documentation on the command line arguments. An example invocation for TPC-C is:

<outdir>/benchmarks/dbtest \
    --verbose \
    --bench tpcc \
    --num-threads 28 \
    --scale-factor 28 \
    --runtime 30 \
    --numa-memory 112G 
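To run the test suite from a default perf build, assuming the output directory follows the naming scheme sketched above:

out-perf.masstree/test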

Benchmarks

To reproduce the graphs from the paper:

$ cd benchmarks
$ python runner.py /unused-dir <results-file-prefix>

If you set DRYRUN = True in runner.py, the script prints all the commands it would issue without executing them.
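For example, assuming the DRYRUN flag is defined near the top of runner.py with that exact spelling (otherwise, edit the flag by hand):

$ cd benchmarks
$ sed -i 's/^DRYRUN = False/DRYRUN = True/' runner.py
$ python runner.py /unused-dir <results-file-prefix>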

