confluentinc/securing-kafka-blog

Name: securing-kafka-blog

Owner: Confluent Inc.

Description: Secure Kafka cluster (in a VM) for development and testing

Created: 2016-02-02 16:36:02

Updated: 2018-05-16 15:47:18

Pushed: 2016-07-14 10:20:22

Homepage: http://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption

Size: 43 KB

Language: Puppet


README

Secure Kafka Cluster (VM for testing and development)


Table of Contents

  • Overview
  • What's included in the VM
  • Usage
      • Starting the VM and the secure Kafka cluster
      • Test-driving the secure Kafka cluster
      • Stopping the VM
  • Troubleshooting
      • Configuration files
      • Log files
  • Useful references


Overview

Based on the instructions in the Confluent blog post Apache Kafka Security 101, this project provides a pre-configured virtual machine to run a secure Kafka cluster using the Confluent Platform.

This VM is intended for development and testing purposes, and is not meant for production use.

What's included in the VM

The VM is provisioned (via Puppet) with the Confluent Platform, including Kafka and ZooKeeper, plus Kerberos, pre-configured for the security setup (SSL/TLS and SASL/Kerberos) described in the blog post above.

Usage

Starting the VM and the secure Kafka cluster

First, you must install two prerequisites on your local machine (e.g. your laptop): Vagrant and VirtualBox.

Then you can launch the VM from your local machine:

# Clone this git repository
$ git clone https://github.com/confluentinc/securing-kafka-blog
$ cd securing-kafka-blog

# Start and provision the VM (this may take a few minutes).
# This step will boot the VM as well as install and configure
# Kafka, ZooKeeper, Kerberos, etc.
$ vagrant up

Once the VM is provisioned, the last step is to log into the VM and start ZooKeeper and Kafka with security enabled:

# Connect from your local machine to the VM via SSH
$ vagrant ssh default

# You will see the following prompt if you're successfully connected to the VM
[vagrant@kafka ~]$

# Start secure ZooKeeper and secure Kafka
[vagrant@kafka ~]$ sudo /usr/sbin/start-zk-and-kafka

The services that will now be running inside the VM include ZooKeeper (listening on port 2181) and the Kafka broker (with an SSL listener on port 9093), the two endpoints used by the example commands below.

Your local machine (the host of the VM) cannot access these ports: because the VM has no port forwarding configured (cf. Vagrantfile), you can only reach Kafka and ZooKeeper from inside the VM, not directly from your local machine.
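As a quick sanity check from inside the VM, you can verify that these two endpoints accept TCP connections. This is a minimal sketch that assumes only bash's /dev/tcp pseudo-device and the coreutils timeout command, both of which are normally available in the VM:

# Run inside the VM: verify that ZooKeeper (2181) and the broker's SSL listener (9093) accept connections
[vagrant@kafka ~]$ timeout 1 bash -c 'cat < /dev/null > /dev/tcp/localhost/2181' && echo "ZooKeeper (2181) is reachable"
[vagrant@kafka ~]$ timeout 1 bash -c 'cat < /dev/null > /dev/tcp/localhost/9093' && echo "Kafka SSL listener (9093) is reachable"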

Test-driving the secure Kafka cluster

You can use the example commands in Apache Kafka Security 101 to test-drive this environment.

Simple example:


# The following commands assume that you're connected to the VM!
# Run `vagrant ssh default` on your local machine if you are not connected yet.


# Create the Kafka topic `securing-kafka`
[vagrant@kafka ~]$ export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
[vagrant@kafka ~]$ kafka-topics --create --topic securing-kafka \
                                --replication-factor 1 \
                                --partitions 3 \
                                --zookeeper localhost:2181

# Launch the console consumer to continuously read from the topic `securing-kafka`
# You may stop the consumer at any time by entering `Ctrl-C`.
[vagrant@kafka ~]$ kafka-console-consumer --bootstrap-server localhost:9093 \
                                          --topic securing-kafka \
                                          --new-consumer \
                                          --consumer.config /etc/kafka/consumer_ssl.properties \
                                          --from-beginning

# In another terminal:
# Launch the console producer to write some data to the topic `securing-kafka`.
# You can then enter input data by writing some line of text, followed by ENTER.
# Every line you enter will become the message value of a single Kafka message.
# You may stop the producer at any time by entering `Ctrl-C`.
[vagrant@kafka ~]$ kafka-console-producer --broker-list localhost:9093 \
                                          --topic securing-kafka \
                                          --producer.config /etc/kafka/producer_ssl.properties

# Now when you manually enter some data via the console producer,
# then your console consumer in the other terminal will show you
# the same data again.
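If you prefer a scripted, non-interactive version of the same round trip, a variant along these lines (reusing the topic and the client property files from the steps above, plus kafka-console-consumer's `--max-messages` flag) should work:

# Produce a single message without an interactive prompt
[vagrant@kafka ~]$ echo "hello, secure kafka" | \
                   kafka-console-producer --broker-list localhost:9093 \
                                          --topic securing-kafka \
                                          --producer.config /etc/kafka/producer_ssl.properties

# Read one message from the beginning of the topic and then exit
[vagrant@kafka ~]$ kafka-console-consumer --bootstrap-server localhost:9093 \
                                          --topic securing-kafka \
                                          --new-consumer \
                                          --consumer.config /etc/kafka/consumer_ssl.properties \
                                          --from-beginning \
                                          --max-messages 1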

Another example is to run a secure Kafka Streams application against the secure Kafka cluster in this VM.

Stopping the VM

Once you're done experimenting, you can stop the VM and thus the ZooKeeper and Kafka instances via:

# Run this command on your local machine (i.e. the host of the VM)
$ vagrant destroy
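Note that `vagrant destroy` deletes the VM entirely, so the next `vagrant up` has to provision it from scratch. If you only want to pause your work, Vagrant's standard lifecycle commands (not specific to this repository) can be used instead; after booting the VM again, re-run the start script inside the VM:

# Run these commands on your local machine (i.e. the host of the VM)
$ vagrant halt       # shut the VM down but keep its disk and provisioning
$ vagrant up         # boot the same VM again later

# Then, inside the VM, start ZooKeeper and Kafka again
[vagrant@kafka ~]$ sudo /usr/sbin/start-zk-and-kafka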

Troubleshooting

Configuration files

Main configuration files for both Kafka and ZooKeeper are stored under /etc/kafka.

Notably:

  • /etc/kafka/server.properties – Kafka broker configuration file
  • /etc/kafka/zookeeper.properties – ZooKeeper configuration file

Security-related configuration files are located at:

  • /etc/security/keytabs
  • /etc/security/tls
  • /etc/krb5.conf
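For example, to get a quick read-only overview of the security-related broker settings, you can grep the broker configuration (the exact property names present may vary with the Confluent Platform version installed in the VM):

# Run inside the VM: list listener, SSL, SASL, and authorizer related settings (if present) in the broker config
[vagrant@kafka ~]$ grep -E '^(listeners|advertised|ssl\.|sasl\.|security\.|super\.users|authorizer)' /etc/kafka/server.properties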

Log files

Inside the VM you can find log files in the following directories:

  • Kafka: /var/log/kafka – notably the server.log
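For example, to follow the broker log while reproducing a problem:

# Run inside the VM: follow the Kafka broker log
[vagrant@kafka ~]$ tail -f /var/log/kafka/server.log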

Useful references

  • Apache Kafka Security 101 (Confluent blog post): http://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption

