elodina/kafka

Name: kafka

Owner: Elodina

Description: Apache Kafka on Apache Mesos

Created: 2016-01-04 01:15:24.0

Updated: 2016-03-28 12:03:59.0

Pushed: 2016-04-08 13:56:46.0

Homepage:

Size: 1428

Language: Scala

README

Kafka Mesos Framework

For issues: https://github.com/mesos/kafka/issues

Installation

Typical Operations

Navigating the CLI

Using the REST API

Project Goals

Installation

Install OpenJDK 7 (or higher): http://openjdk.java.net/install/

Install Gradle: http://gradle.org/installation

Clone and build the project

# git clone https://github.com/mesos/kafka
# cd kafka
# ./gradlew jar
# wget https://archive.apache.org/dist/kafka/0.8.2.2/kafka_2.10-0.8.2.2.tgz
Environment Configuration

Before running ./kafka-mesos.sh, set the location of libmesos:

# export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so

If the host running the scheduler has several IP addresses, you may also need to set:

# export LIBPROCESS_IP=<IP_ACCESSIBLE_FROM_MASTER>
Scheduler Configuration

The scheduler is configured through the command line or kafka-mesos.properties file.

The following options are available:

# ./kafka-mesos.sh help scheduler
Start scheduler
Usage: scheduler [options] [config.properties]

Option               Description
------               -----------
--api                Api url. Example: http://master:7000
--bind-address       Scheduler bind address (master, 0.0.0.0, 192.168.50.*, if:eth1). Default - all
--debug <Boolean>    Debug mode. Default - false
--framework-name     Framework name. Default - kafka
--framework-role     Framework role. Default - *
--framework-timeout  Framework timeout (30s, 1m, 1h). Default - 30d
--jre                JRE zip-file (jre-7-openjdk.zip). Default - none.
--log                Log file to use. Default - stdout.
--master             Master connection settings. Examples:
                      - master:5050
                      - master:5050,master2:5050
                      - zk://master:2181/mesos
                      - zk://username:password@master:2181
                      - zk://master:2181,master2:2181/mesos
--principal          Principal (username) used to register framework. Default - none
--secret             Secret (password) used to register framework. Default - none
--storage            Storage for cluster state. Examples:
                      - file:kafka-mesos.json
                      - zk:/kafka-mesos
                     Default - file:kafka-mesos.json
--user               Mesos user to run tasks. Default - none
--zk                 Kafka zookeeper.connect. Examples:
                      - master:2181
                      - master:2181,master2:2181
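
For example, all scheduler options can be passed directly on the command line. A sketch, assuming Mesos and ZooKeeper both run on host master:

# ./kafka-mesos.sh scheduler --master zk://master:2181/mesos --zk master:2181 --api http://master:7000 --storage zk:/kafka-mesos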

Alternatively, you can create a kafka-mesos.properties file containing values for the scheduler's CLI options.

Example of kafka-mesos.properties:

storage=file:kafka-mesos.json
master=zk://master:2181/mesos
zk=master:2181
api=http://master:7000

Now, if you run the scheduler via ./kafka-mesos.sh scheduler (with no options specified), it will read option values from the file above. You can also specify an alternative config file by passing it as the scheduler's config argument.
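
For instance, with a hypothetical file named my-cluster.properties:

# ./kafka-mesos.sh scheduler my-cluster.properties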

Run the scheduler

Start the Kafka scheduler using this command:

# ./kafka-mesos.sh scheduler

Note: you can also use Marathon to launch the scheduler process so it gets restarted if it crashes.
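
A minimal sketch of such a Marathon application definition, assuming Marathon listens on master:8080 and that the build output (kafka-mesos.sh, the jar, and the Kafka tgz) is packaged into an archive Marathon can fetch; the app id, archive URL, and resource sizes below are placeholders:

# cat > kafka-mesos-scheduler.json <<'EOF'
{
  "id": "kafka-mesos-scheduler",
  "instances": 1,
  "cpus": 0.5,
  "mem": 512,
  "cmd": "export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so && ./kafka-mesos.sh scheduler",
  "uris": ["http://<your-repo>/kafka-mesos-dist.tar.gz"]
}
EOF
# curl -X POST -H "Content-Type: application/json" -d @kafka-mesos-scheduler.json http://master:8080/v2/apps

Replace the uris entry with wherever you actually host the archive.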

Starting and using 1 broker

First let's start up and use 1 broker with the default settings. Later in this README you can see how to change these defaults.

# ./kafka-mesos.sh broker add 0
broker added:
  id: 0
  active: false
  state: stopped
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m

You now have a cluster with 1 broker that is not started.

# ./kafka-mesos.sh broker list
broker:
  id: 0
  active: false
  state: stopped
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m

Now let's start the broker.

# ./kafka-mesos.sh broker start 0
broker started:
  id: 0
  active: true
  state: running
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m, hostname:slave0
  task:
    id: broker-0-d2d94520-2f3e-4779-b276-771b4843043c
    running: true
    endpoint: 172.16.25.62:31000
    attributes: rack=r1

Great! Now let's produce and consume from the cluster. Let's use kafkacat, a handy third-party command line tool for Kafka written in C.

ho "test"|kafkacat -P -b "172.16.25.62:31000" -t testTopic -p 0

And let's read it back.

# kafkacat -C -b "172.16.25.62:31000" -t testTopic -p 0 -e
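
If kafkacat is not available, the console clients from the Kafka distribution downloaded during installation can be used instead. A sketch, reusing the endpoint and topic from the example above and assuming the tgz is unpacked in the current directory:

# tar xzf kafka_2.10-0.8.2.2.tgz
# kafka_2.10-0.8.2.2/bin/kafka-console-producer.sh --broker-list 172.16.25.62:31000 --topic testTopic
# kafka_2.10-0.8.2.2/bin/kafka-console-consumer.sh --zookeeper master:2181 --topic testTopic --from-beginning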

This is a beta version.

Typical Operations

Changing the location where data is stored
# ./kafka-mesos.sh broker stop 0
broker stopped:
  id: 0
  active: false
  state: stopped
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m, hostname:slave0, expires:2015-07-10 15:51:43+03

# ./kafka-mesos.sh broker update 0 --options log.dirs=/mnt/array1/broker0
broker updated:
  id: 0
  active: false
  state: stopped
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  options: log.dirs=/mnt/array1/broker0
  failover: delay:1m, max-delay:10m
  stickiness: period:10m, hostname:slave0, expires:2015-07-10 15:51:43+03

# ./kafka-mesos.sh broker start 0
broker started:
  id: 0
  active: true
  state: running
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m, hostname:slave0
  task:
    id: broker-0-d2d94520-2f3e-4779-b276-771b4843043c
    running: true
    endpoint: 172.16.25.62:31000
    attributes: rack=r1
Starting 3 brokers
# ./kafka-mesos.sh broker add 0..2 --heap 1024 --mem 2048
brokers added:
  id: 0
  active: false
  state: stopped
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m

  id: 1
  active: false
  state: stopped
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m

  id: 2
  active: false
  state: stopped
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m

# ./kafka-mesos.sh broker start 0..2
brokers started:
  id: 0
  active: true
  state: running
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m, hostname:slave0
  task:
    id: broker-0-d2d94520-2f3e-4779-b276-771b4843043c
    running: true
    endpoint: 172.16.25.62:31000
    attributes: rack=r1

  id: 1
  active: true
  state: running
  ...
View broker log

Lines are always read from the end of the file. Get the last 100 lines from the stdout file of broker 0:

# ./kafka-mesos.sh broker log 0

or from stderr

# ./kafka-mesos.sh broker log 0 --name stderr

or any file in kafka-*/log/, for example: server.log

# ./kafka-mesos.sh broker log 0 --name server.log

or maybe more lines

# ./kafka-mesos.sh broker log 0 --name server.log --lines 200

The current limit is 100 KB, no matter how many lines are requested.

High Availability Scheduler State

The scheduler supports storing the cluster state in ZooKeeper. It currently shares a znode within the Mesos ensemble. To turn this on, set the following in the properties file:

clusterStorage=zk:/kafka-mesos
Failed Broker Recovery

When a broker fails, the Kafka Mesos scheduler assumes that the failure is recoverable. The scheduler will try to restart the broker after waiting failover-delay (e.g. 30s, 2m). The initial wait equals the failover-delay setting. After each consecutive failure this delay is doubled until it reaches the failover-max-delay value.

If failover-max-tries is defined and the consecutive failure count exceeds it, the broker will be deactivated.

The following failover settings exist:

  failover-delay     - initial failover delay to wait after failure, required
  failover-max-delay - max failover delay, required
  failover-max-tries - max failover tries to deactivate broker, optional
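
These settings can be changed per broker via broker update (see the CLI reference below). A sketch with assumed values:

# ./kafka-mesos.sh broker update 0 --failover-delay 30s --failover-max-delay 5m --failover-max-tries 5

With these values the waits between restart attempts would be 30s, 1m, 2m, 4m, then capped at 5m, and the broker would be deactivated once the consecutive failure count exceeds 5.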
Broker Placement Stickiness

If a broker is started within the stickiness-period interval from its stop time, the scheduler will place it on the same node it ran on during the last successful start. This applies to both failover and manual restarts.

The following stickiness settings exist:

  stickiness-period  - period of time during which the broker would be restarted on the same node
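
The period can be tuned per broker via broker update (the value below is illustrative, not a recommendation):

# ./kafka-mesos.sh broker update 0 --stickiness-period 30m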
Passing multiple options

A common use case is to supply multiple log.dirs, or provide other options. To do this you may use comma escaping like this:

# ./kafka-mesos.sh broker update 0 --options log.dirs=/mnt/array1/broker0\\,/mnt/array2/broker0,num.io.threads=16
broker updated:
  id: 0
  active: false
  state: stopped
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  options: log.dirs=/mnt/array1/broker0\,/mnt/array2/broker0,num.io.threads=16
  failover: delay:1m, max-delay:10m
  stickiness: period:10m, hostname:slave0, expires:2015-07-29 11:54:39Z
Broker metrics

The executor sends broker metrics to the scheduler every 30 seconds; they appear in the broker list output:

# ./kafka-mesos.sh broker list
brokers:
  id: 0
  active: true
  state: running
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m, hostname:slave0
  task:
    id: broker-0-826e8075-5dd3-49ab-b86e-6432fa03ef66
    state: running
    endpoint: slave0:9092 (slave0)
  metrics:
    collected: 2016-01-18 11:53:36Z
    under-replicated-partitions: 0
    offline-partitions-count: 0
    is-active-controller: 1

  id: 1
  active: true
  state: running
  resources: cpus:1.00, mem:2048, heap:1024, port:auto
  failover: delay:1m, max-delay:10m
  stickiness: period:10m, hostname:slave1
  task:
    id: broker-1-4648130a-9aa0-4c7e-b8af-761c03aa111c
    state: running
    endpoint: slave1:9092 (slave1)
  metrics:
    collected: 2016-01-18 11:53:36Z
    under-replicated-partitions: 0
    offline-partitions-count: 0
    is-active-controller: 0
Rolling restart

For example, there are 3 running brokers, and you want to add a new log dir /mnt/array2/broker$id alongside the already used /mnt/array1/broker$id.

# ./kafka-mesos.sh broker list
brokers:
  id: 0
  active: true
  state: running
  resources: cpus:0.50, mem:2048, heap:1024, port:9092
  bind-address: slave0
  constraints: hostname=like:slave0
  options: log.dirs=/mnt/array1/broker$id
  ...

  id: 1
  active: true
  state: running
  resources: cpus:0.50, mem:2048, heap:1024, port:9092
  bind-address: slave1
  constraints: hostname=like:slave1
  options: log.dirs=/mnt/array1/broker$id
  ...

  id: 2
  active: true
  state: running
  resources: cpus:0.50, mem:2048, heap:1024, port:9092
  bind-address: slave2
  constraints: hostname=like:slave2
  options: log.dirs=/mnt/array1/broker$id
  ...

Update brokers by adding new log dir (they were already using dir /mnt/array1/broker$id).

NOTE: you can update brokers while they are running, but the configuration is updated only in the scheduler, so you need to restart the brokers to apply the changes. Whenever you update a running broker, broker list will show a notification (modified, needs restart) right in the state description; the notice disappears once the broker is restarted.

# ./kafka-mesos.sh broker update 0..2 --options log.dirs=/mnt/array1/broker\$id\\,/mnt/array2/broker\$id
brokers updated:
  id: 0
  active: true
  state: running (modified, needs restart)
  resources: cpus:0.50, mem:2048, heap:1024, port:9092
  bind-address: slave0
  constraints: hostname=like:slave0
  options: log.dirs=/mnt/array1/broker$id\,/mnt/array2/broker$id
  ...

  id: 1
  active: true
  state: running (modified, needs restart)
  resources: cpus:0.50, mem:2048, heap:1024, port:9092
  bind-address: slave1
  constraints: hostname=like:slave1
  options: log.dirs=/mnt/array1/broker$id\,/mnt/array2/broker$id
  ...

  id: 2
  active: true
  state: running (modified, needs restart)
  resources: cpus:0.50, mem:2048, heap:1024, port:9092
  bind-address: slave2
  constraints: hostname=like:slave2
  options: log.dirs=/mnt/array1/broker$id\,/mnt/array2/broker$id
  ...

A rolling restart (sequentially stopping and then starting each broker) lets you combine stop and start for each broker in a single action. See the restart CLI options.

# ./kafka-mesos.sh broker restart 0..2 --timeout 5m
brokers restarted:
  id: 0
  active: true
  state: running
  resources: cpus:0.50, mem:2048, heap:1024, port:9092
  bind-address: slave0
  constraints: hostname=like:slave0
  options: log.dirs=/mnt/array1/broker$id\,/mnt/array2/broker$id
  ...

  id: 1
  active: true
  state: running
  resources: cpus:0.50, mem:2048, heap:1024, port:9092
  bind-address: slave1
  constraints: hostname=like:slave1
  options: log.dirs=/mnt/array1/broker$id\,/mnt/array2/broker$id
  ...

  id: 2
  active: true
  state: running
  resources: cpus:0.50, mem:2048, heap:1024, port:9092
  bind-address: slave2
  constraints: hostname=like:slave2
  options: log.dirs=/mnt/array1/broker$id\,/mnt/array2/broker$id
  ...

A broker may time out on stop or start; in that case the restart halts with a notice:

# ./kafka-mesos.sh broker restart 0..2 --timeout 5m
Error: broker 1 timeout on stop

or for start

# ./kafka-mesos.sh broker restart 0..2 --timeout 5m
Error: broker 1 timeout on start

Navigating the CLI

Adding brokers to the cluster
# ./kafka-mesos.sh help broker add
Add broker
Usage: broker add <broker-expr> [options]

Option                Description
------                -----------
--bind-address        broker bind address (broker0, 192.168.50.*, if:eth1). Default - auto
--constraints         constraints (hostname=like:master,rack=like:1.*). See below.
--cpus <Double>       cpu amount (0.5, 1, 2)
--failover-delay      failover delay (10s, 5m, 3h)
--failover-max-delay  max failover delay. See failoverDelay.
--failover-max-tries  max failover tries. Default - none
--heap <Long>         heap amount in Mb
--jvm-options         jvm options string (-Xms128m -XX:PermSize=48m)
--log4j-options       log4j options or file. Examples:
                       log4j.logger.kafka=DEBUG\, kafkaAppender
                       file:log4j.properties
--mem <Long>          mem amount in Mb
--options             options or file. Examples:
                       log.dirs=/tmp/kafka/$id,num.io.threads=16
                       file:server.properties
--port                port or range (31092, 31090..31100). Default - auto
--stickiness-period   stickiness period to preserve same node for broker (5m, 10m, 1h)
--volume              pre-reserved persistent volume id

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

broker-expr examples:
  0      - broker 0
  0,1    - brokers 0,1
  0..2   - brokers 0,1,2
  0,1..2 - brokers 0,1,2
  *      - any broker
attribute filtering:
  *[rack=r1]           - any broker having rack=r1
  *[hostname=slave*]   - any broker on host with name starting with 'slave'
  0..4[rack=r1,dc=dc1] - any broker having rack=r1 and dc=dc1

constraint examples:
  like:master     - value equals 'master'
  unlike:master   - value not equals 'master'
  like:slave.*    - value starts with 'slave'
  unique          - all values are unique
  cluster         - all values are the same
  cluster:master  - value equals 'master'
  groupBy         - all values are the same
  groupBy:3       - all values are within 3 different groups
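
For example (a sketch; the resource sizes are arbitrary), the unique constraint places three brokers on three distinct hosts:

# ./kafka-mesos.sh broker add 0..2 --constraints "hostname=unique" --cpus 0.5 --mem 1024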
Updating broker configurations
# ./kafka-mesos.sh help broker update
Update broker
Usage: broker update <broker-expr> [options]

Option                Description
------                -----------
--bind-address        broker bind address (broker0, 192.168.50.*, if:eth1). Default - auto
--constraints         constraints (hostname=like:master,rack=like:1.*). See below.
--cpus <Double>       cpu amount (0.5, 1, 2)
--failover-delay      failover delay (10s, 5m, 3h)
--failover-max-delay  max failover delay. See failoverDelay.
--failover-max-tries  max failover tries. Default - none
--heap <Long>         heap amount in Mb
--jvm-options         jvm options string (-Xms128m -XX:PermSize=48m)
--log4j-options       log4j options or file. Examples:
                       log4j.logger.kafka=DEBUG\, kafkaAppender
                       file:log4j.properties
--mem <Long>          mem amount in Mb
--options             options or file. Examples:
                       log.dirs=/tmp/kafka/$id,num.io.threads=16
                       file:server.properties
--port                port or range (31092, 31090..31100). Default - auto
--stickiness-period   stickiness period to preserve same node for broker (5m, 10m, 1h)
--volume              pre-reserved persistent volume id

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

broker-expr examples:
  0      - broker 0
  0,1    - brokers 0,1
  0..2   - brokers 0,1,2
  0,1..2 - brokers 0,1,2
  *      - any broker
attribute filtering:
  *[rack=r1]           - any broker having rack=r1
  *[hostname=slave*]   - any broker on host with name starting with 'slave'
  0..4[rack=r1,dc=dc1] - any broker having rack=r1 and dc=dc1

constraint examples:
  like:master     - value equals 'master'
  unlike:master   - value not equals 'master'
  like:slave.*    - value starts with 'slave'
  unique          - all values are unique
  cluster         - all values are the same
  cluster:master  - value equals 'master'
  groupBy         - all values are the same
  groupBy:3       - all values are within 3 different groups

Note: use "" arg to unset an option
Starting brokers in the cluster
# ./kafka-mesos.sh help broker start
Start broker
Usage: broker start <broker-expr> [options]

Option     Description
------     -----------
--timeout  timeout (30s, 1m, 1h). 0s - no timeout

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

broker-expr examples:
  0      - broker 0
  0,1    - brokers 0,1
  0..2   - brokers 0,1,2
  0,1..2 - brokers 0,1,2
  *      - any broker
attribute filtering:
  *[rack=r1]           - any broker having rack=r1
  *[hostname=slave*]   - any broker on host with name starting with 'slave'
  0..4[rack=r1,dc=dc1] - any broker having rack=r1 and dc=dc1
Stopping brokers in the cluster
# ./kafka-mesos.sh help broker stop
Stop broker
Usage: broker stop <broker-expr> [options]

Option     Description
------     -----------
--force    forcibly stop
--timeout  timeout (30s, 1m, 1h). 0s - no timeout

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

broker-expr examples:
  0      - broker 0
  0,1    - brokers 0,1
  0..2   - brokers 0,1,2
  0,1..2 - brokers 0,1,2
  *      - any broker
attribute filtering:
  *[rack=r1]           - any broker having rack=r1
  *[hostname=slave*]   - any broker on host with name starting with 'slave'
  0..4[rack=r1,dc=dc1] - any broker having rack=r1 and dc=dc1
Restarting brokers in the cluster
# ./kafka-mesos.sh help broker restart
Restart broker
Usage: broker restart <broker-expr> [options]

Option     Description
------     -----------
--timeout  time to wait until broker restarts (30s, 1m, 1h). Default - 2m

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

broker-expr examples:
  0      - broker 0
  0,1    - brokers 0,1
  0..2   - brokers 0,1,2
  0,1..2 - brokers 0,1,2
  *      - any broker
attribute filtering:
  *[rack=r1]           - any broker having rack=r1
  *[hostname=slave*]   - any broker on host with name starting with 'slave'
  0..4[rack=r1,dc=dc1] - any broker having rack=r1 and dc=dc1
Removing brokers from the cluster
# ./kafka-mesos.sh help broker remove
Remove broker
Usage: broker remove <broker-expr> [options]

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

broker-expr examples:
  0      - broker 0
  0,1    - brokers 0,1
  0..2   - brokers 0,1,2
  0,1..2 - brokers 0,1,2
  *      - any broker
attribute filtering:
  *[rack=r1]           - any broker having rack=r1
  *[hostname=slave*]   - any broker on host with name starting with 'slave'
  0..4[rack=r1,dc=dc1] - any broker having rack=r1 and dc=dc1
Retrieving broker log
Retrieve broker log
Usage: broker log <broker-id> [options]

Option             Description
------             -----------
--lines <Integer>  maximum number of lines to read from the end of file.
                     Default - 100
--name             name of log file (stdout, stderr, server.log). Default
                     - stdout
--timeout          timeout (30s, 1m, 1h). Default - 30s

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000
Listing Topics
# ./kafka-mesos.sh help topic list
List topics
Usage: topic list [<topic-expr>]

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

topic-expr examples:
  t0        - topic t0
  t0,t1     - topics t0, t1
  *         - any topic
  t*        - topics starting with 't'
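
For example, to list only topics whose names start with 't':

# ./kafka-mesos.sh topic list "t*"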
Adding Topic
# ./kafka-mesos.sh help topic add
Add topic
Usage: topic add <topic-expr> [options]

Option                  Description
------                  -----------
--broker                <broker-expr>. Default - *. See below.
--options               topic options. Example: flush.ms=60000,retention.ms=6000000
--partitions <Integer>  partitions count. Default - 1
--replicas <Integer>    replicas count. Default - 1

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

topic-expr examples:
  t0        - topic t0
  t0,t1     - topics t0, t1
  *         - any topic
  t*        - topics starting with 't'

broker-expr examples:
  0      - broker 0
  0,1    - brokers 0,1
  0..2   - brokers 0,1,2
  0,1..2 - brokers 0,1,2
  *      - any broker
attribute filtering:
  *[rack=r1]           - any broker having rack=r1
  *[hostname=slave*]   - any broker on host with name starting with 'slave'
  0..4[rack=r1,dc=dc1] - any broker having rack=r1 and dc=dc1
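
For example (the topic name, partition count, and replica count are assumed), spreading a new topic over brokers 0..2:

# ./kafka-mesos.sh topic add t0 --broker 0..2 --partitions 3 --replicas 2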
Updating Topic
# ./kafka-mesos.sh help topic update
Update topic
Usage: topic update <topic-expr> [options]

Option     Description
------     -----------
--options  topic options. Example: flush.ms=60000,retention.ms=6000000

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

topic-expr examples:
  t0        - topic t0
  t0,t1     - topics t0, t1
  *         - any topic
  t*        - topics starting with 't'
Rebalancing topics
# ./kafka-mesos.sh help topic rebalance
Rebalance topics
Usage: topic rebalance <topic-expr>|status [options]

Option                Description
------                -----------
--broker              <broker-expr>. Default - *. See below.
--replicas <Integer>  replicas count. Default - 1
--timeout             timeout (30s, 1m, 1h). 0s - no timeout

Generic Options
Option  Description
------  -----------
--api   Api url. Example: http://master:7000

topic-expr examples:
  t0        - topic t0
  t0,t1     - topics t0, t1
  *         - any topic
  t*        - topics starting with 't'

broker-expr examples:
  0      - broker 0
  0,1    - brokers 0,1
  0..2   - brokers 0,1,2
  0,1..2 - brokers 0,1,2
  *      - any broker
attribute filtering:
  *[rack=r1]           - any broker having rack=r1
  *[hostname=slave*]   - any broker on host with name starting with 'slave'
  0..4[rack=r1,dc=dc1] - any broker having rack=r1 and dc=dc1
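
For example (a sketch; the values are assumed), rebalance all topics onto brokers 0..2 with 2 replicas, waiting up to 5 minutes, then check progress with the status form of the command:

# ./kafka-mesos.sh topic rebalance "*" --broker 0..2 --replicas 2 --timeout 5m
# ./kafka-mesos.sh topic rebalance status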

Using the REST API

The scheduler REST API fully exposes all of the features of the CLI with the following request format:

/api/broker/<cli command>?broker={broker-expr}&<setting>=<value>
/api/topic/<cli command>?topic={topic-expr}&<setting>=<value>
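
For example, the CLI command ./kafka-mesos.sh broker update 0 --failover-delay 30s would map onto the API roughly like this (a sketch; query parameter names are assumed to mirror the CLI option names, and values must be URL-encoded):

# curl "http://localhost:7000/api/broker/update?broker=0&failover-delay=30s"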

Listing brokers

rl "http://localhost:7000/api/broker/list"
okers" : [{"id" : "0", "mem" : 128, "cpus" : 0.1, "heap" : 128, "failover" : {"delay" : "10s", "maxDelay" : "60s", "failures" : 5, "failureTime" : 1426651240585}, "active" : true}, {"id" : "5", "mem" : 128, "cpus" : 0.5, "heap" : 128, "failover" : {"delay" : "10s", "maxDelay" : "60s"}, "active" : false}, {"id" : "8", "mem" : 43008, "cpus" : 8.0, "heap" : 128, "failover" : {"delay" : "10s", "maxDelay" : "60s"}, "active" : true}]}

Adding a broker

rl "http://localhost:7000/api/broker/add?broker=0&cpus=8&mem=43008"
okers" : [{"id" : "0", "mem" : 43008, "cpus" : 8.0, "heap" : 128, "failover" : {"delay" : "10s", "maxDelay" : "60s"}, "active" : false}]}

Starting a broker

rl "http://localhost:7000/api/broker/start?broker=0"
ccess" : true, "ids" : "0"}

Stopping a broker

rl "http://localhost:7000/api/broker/stop?broker=0"
ccess" : true, "ids" : "0"}

Restarting a broker

rl "http://localhost:7000/api/broker/restart?broker=0"
atus" : "restarted", "brokers" : [{"task" : {"hostname" : "slave0", "state" : "running", "slaveId" : "fd935975-5db0-4732-bfa4-3063b534972d-S3", "executorId" : "broker-0-a8e0d084-b890-4482-800e-12e72ed7f9ed", "attributes" : {}, "id" : "broker-0-ff11db36-206a-4019-9cd2-6993376831eb", "endpoint" : "slave0:9092"}, "stickiness" : {"period" : "10m", "hostname" : "slave0"}, "bindAddress" : "slave0", "options" : "log.dirs=\/tmp\/kafka\/$id", "id" : "2", "port" : "9092", "constraints" : "hostname=like:slave0", "mem" : 1024, "cpus" : 0.5, "metrics" : {"underReplicatedPartitions" : 0, "offlinePartitionsCount" : 0, "activeControllerCount" : 0, "timestamp" : 1455557472857}, "heap" : 1024, "failover" : {"delay" : "1m", "maxDelay" : "14m"}, "active" : true}]}

Removing a broker

rl "http://localhost:7000/api/broker/remove?broker=0"
s" : "0"}

Listing topics

rl "http://localhost:7000/api/topic/list"
pics" : [{"name" : "t", "partitions" : {"0" : "0, 1"}, "options" : {"flush.ms": "1000"}}]}

Adding topic

rl "http://localhost:7000/api/topic/add?topic=t"
pic" : {"name" : "t", "partitions" : {"0" : "1"}, "options" : {}}}

Updating topic

rl "http://localhost:7000/api/topic/update?topic=t&options=flush.ms%3D1000"
pic" : {"name" : "t", "partitions" : {"0" : "0, 1"}, "options" : {"flush.ms" : "1000"}}}

Project Goals

