Name: rocksplicator
Owner: Pinterest
Description: RocksDB Replication
Created: 2016-10-24 14:45:53.0
Updated: 2018-05-19 17:26:19.0
Pushed: 2018-05-18 21:43:18.0
Size: 67091
Language: C++
Rocksplicator is a set of C++ libraries and tools for building large-scale RocksDB-based stateful services. Its goal is to help application developers solve common difficulties of building large-scale stateful services, such as data replication, request routing and cluster management. With Rocksplicator, application developers just need to focus on their application logic, and won't need to deal with data replication, request routing or cluster management.
Rocksplicator includes libraries and tools covering data replication, request routing and cluster management.
An introduction to Rocksplicator can be found in our presentation at the 2016 Annual RocksDB meetup at FB HQ and in our @Scale presentation (starting at 17:30).
Currently, we have 9 different online services based on Rocksplicator running at Pinterest, which together consist of nearly 30 clusters and over 4000 hosts, and process tens of PB of data per day.
The third-party dependencies of Rocksplicator can be found in docker/Dockerfile.
Docker is used for building Rocksplicator. Follow the Docker installation instructions to get Docker running on your system.
You can build your own Docker image (if you want to change the Dockerfile and test it locally):
cd docker && docker build -t rocksplicator-build .
Or pull the one we uploaded:
docker pull angxu/rocksplicator-build:latest
cd rocksplicator && git submodule update --init
Get into the docker build environment. We are assuming the rocksplicator repo is under $HOME/code/, and $HOME/docker-root is an existing directory.
docker run -v <SOURCE-DIR>:/rocksplicator -v $HOME/docker-root:/root -ti angxu/rocksplicator-build:latest bash
Run the following command in the docker bash to build Rocksplicator:
cd /rocksplicator && mkdir -p build && cd build && cmake .. && make -j
Run the following command in the docker bash to run the tests:
cd /rocksplicator && mkdir -p build && cd build && cmake .. && make -j && make test
There is an example counter service under examples/counter_service/, which demonstrates a typical usage pattern for the RocksDB replicator.
Please check cluster_management directory for Helix powered automated cluster management and recovery.
The cluster management tool rocksdb_admin.py is under rocksdb_admin/tool/.
Before using the tool, we need to generate Python client code for the Admin interface as follows:
cd rocksplicator/rocksdb_admin/tool/ && ./sync.sh
host_file is a text file listing all hosts in the cluster, one host per line, in the format "ip:port:zone". For example: "192.168.0.101:9090:us-east-1c".
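The "ip:port:zone" line format above can be validated with a small helper. This is an illustrative sketch, not part of rocksdb_admin.py; the function name parse_host_line is our own.

```python
import ipaddress

def parse_host_line(line):
    """Parse one host_file line of the form "ip:port:zone".

    Returns an (ip, port, zone) tuple, or raises ValueError.
    Illustrative helper only; not part of rocksdb_admin.py.
    """
    parts = line.strip().split(":")
    if len(parts) != 3:
        raise ValueError("expected ip:port:zone, got %r" % line)
    ip, port, zone = parts
    ipaddress.ip_address(ip)  # raises ValueError if not a valid IP
    if not 0 < int(port) < 65536:
        raise ValueError("port out of range: %s" % port)
    if not zone:
        raise ValueError("empty zone")
    return ip, int(port), zone

# The example host from the README:
print(parse_host_line("192.168.0.101:9090:us-east-1c"))
# ('192.168.0.101', 9090, 'us-east-1c')
```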
python rocksdb_admin.py new_cluster_name config --host_file=./host_file --segment=test --shard_num=1000 --overwrite
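To give a feel for what --shard_num means, here is a toy round-robin layout of shards across the hosts from a host_file. This is purely illustrative; the actual placement logic used by rocksdb_admin.py may differ.

```python
def assign_shards(hosts, shard_num):
    """Round-robin shard-to-host assignment.

    Illustration only; not the tool's actual placement algorithm.
    """
    assignment = {h: [] for h in hosts}
    for shard in range(shard_num):
        assignment[hosts[shard % len(hosts)]].append(shard)
    return assignment

hosts = ["192.168.0.101:9090:us-east-1c",
         "192.168.0.102:9090:us-east-1d"]
layout = assign_shards(hosts, 1000)
print({h: len(s) for h, s in layout.items()})  # 500 shards per host
```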
python rocksdb_admin.py cluster_name ping
python rocksdb_admin.py cluster_name remove_host "ip:port:zone"
python rocksdb_admin.py cluster_name promote
python rocksdb_admin.py cluster_name add_host "ip:port:zone"
python rocksdb_admin.py cluster_name rebalance
python rocksdb_admin.py "cluster" load_sst "segment" "s3_bucket" "s3_prefix" --concurrency 64 --rate_limit_mb 64
python rocksdb_admin.py cluster_name remove_host old_ip:old_port:zone_a
python rocksdb_admin.py cluster_name promote
python rocksdb_admin.py cluster_name add_host new_ip:new_port:zone_a
python rocksdb_admin.py cluster_name rebalance
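The four-step host-replacement sequence above can be scripted. The sketch below only builds the command lines (nothing is executed), so it can be inspected as a dry run; the wrapper function replacement_commands is our own, and it assumes rocksdb_admin.py is invoked exactly as shown above.

```python
def replacement_commands(cluster, old_host, new_host):
    """Build the rocksdb_admin.py command sequence for replacing a host.

    Mirrors the four steps above: remove_host, promote, add_host,
    rebalance. Returns command lists; nothing is executed here.
    """
    base = ["python", "rocksdb_admin.py", cluster]
    return [
        base + ["remove_host", old_host],
        base + ["promote"],
        base + ["add_host", new_host],
        base + ["rebalance"],
    ]

for cmd in replacement_commands("cluster_name",
                                "old_ip:old_port:zone_a",
                                "new_ip:new_port:zone_a"):
    print(" ".join(cmd))
```

Each list could be passed to subprocess.run() once the hosts and cluster name are filled in.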