allegro/consul-registration-hook

Name: consul-registration-hook

Owner: Allegro Tech

Description: Hook that can be used for synchronous registration and deregistration in Consul discovery service on Kubernetes or Mesos cluster with Allegro executor

Created: 2018-02-13 13:42:05.0

Updated: 2018-05-22 08:03:00.0

Pushed: 2018-05-22 08:03:27.0

Homepage:

Size: 2228

Language: Go

README

Consul Registration Hook


A hook that can be used for synchronous registration and deregistration in the Consul discovery service on a Kubernetes or Mesos cluster with the Allegro executor.

Why the hook uses synchronous communication

Synchronous communication with Consul allows a graceful shutdown of the old application version during deployment. New instances are considered running and healthy only when they have been registered successfully in the discovery service. Old instances are first deregistered and then killed after a configurable delay, which gives the deregistration time to propagate across the whole Consul cluster and its clients.

Synchronous communication has one drawback: deregistration from Consul may never take place. This is mitigated by requiring the DeregisterCriticalServiceAfter field in Consul checks, which automatically deregisters instances that stay unhealthy for too long. That window can be long enough for another application to start up on the same address and begin responding to Consul checks; this is mitigated by using a service ID composed of the IP and port of the instance being registered. Registering a new instance then overwrites the stale one, accelerating cleanup of the Consul service catalog.
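The registration described above can be sketched against the Consul agent HTTP API (PUT /v1/agent/service/register). This is a minimal illustration, not the hook's actual code: the ID format, health-check endpoint, and timeout values below are assumptions chosen for the example.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// serviceCheck mirrors the fields of a Consul service check relevant here,
// including DeregisterCriticalServiceAfter for automatic cleanup.
type serviceCheck struct {
	HTTP                           string `json:"HTTP"`
	Interval                       string `json:"Interval"`
	DeregisterCriticalServiceAfter string `json:"DeregisterCriticalServiceAfter"`
}

// serviceRegistration is the payload for PUT /v1/agent/service/register.
type serviceRegistration struct {
	ID      string       `json:"ID"`
	Name    string       `json:"Name"`
	Address string       `json:"Address"`
	Port    int          `json:"Port"`
	Check   serviceCheck `json:"Check"`
}

// registrationFor builds a registration whose ID is derived from the
// instance's IP and port, so a new instance on the same address
// overwrites a stale catalog entry. The "/health" path and the "10s"/"10m"
// values are illustrative assumptions.
func registrationFor(name, ip string, port int) serviceRegistration {
	return serviceRegistration{
		ID:      fmt.Sprintf("%s_%d", ip, port),
		Name:    name,
		Address: ip,
		Port:    port,
		Check: serviceCheck{
			HTTP:     fmt.Sprintf("http://%s:%d/health", ip, port),
			Interval: "10s",
			// Remove instances that stay critical for too long.
			DeregisterCriticalServiceAfter: "10m",
		},
	}
}

// register sends the payload synchronously to the local Consul agent;
// the caller only proceeds (i.e. the instance is considered started)
// if this call succeeds.
func register(agentAddr string, reg serviceRegistration) error {
	body, err := json.Marshal(reg)
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPut,
		fmt.Sprintf("http://%s/v1/agent/service/register", agentAddr),
		bytes.NewReader(body))
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("register failed: %s", resp.Status)
	}
	return nil
}

func main() {
	reg := registrationFor("myservice", "10.0.0.12", 8080)
	fmt.Println(reg.ID)
}
```

Deregistration is the mirror image (PUT /v1/agent/service/deregister/<serviceID>), followed by the configurable delay before the process is killed.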

Usage
Kubernetes

On Kubernetes the hook is fired by using Container Lifecycle Hooks:

# ... container definition ...
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "/hooks/consul-registration-hook register k8s"]
  preStop:
    exec:
      command: ["/bin/sh", "-c", "/hooks/consul-registration-hook deregister k8s"]

The hook requires additional configuration passed via environment variables. Because the pod name and namespace are not exposed to the container by default, they have to be passed manually:

# ... container definition ...
env:
- name: KUBERNETES_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: KUBERNETES_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace

Optionally, if the Consul agent requires a token for authentication, it can be passed using Secrets:

containers:
- # ... other configuration ...
  volumeMounts:
  - name: consul-acl
    mountPath: /consul-acl
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "/hooks/consul-registration-hook --consul-acl-file /consul-acl/token register k8s"]
    preStop:
      exec:
        command: ["/bin/sh", "-c", "/hooks/consul-registration-hook --consul-acl-file /consul-acl/token deregister k8s"]
# ... other configuration ...
volumes:
- name: consul-acl
  secret:
    secretName: consul-acl
    items:
    - key: agent-token
      path: token
      mode: 511
Production

It is recommended to keep a local copy of the hook in the production environment. For example, on Google Cloud Platform you can store a copy of the hook in a dedicated Cloud Storage bucket and authorize the Compute Engine service account with read-only access to that bucket. Once everything is prepared, you can use an Init Container to download the hook and expose it on a shared volume to the main container:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-consul-hook
  labels:
    consul: service-name
spec:
  initContainers:
  - name: hook-init-container
    image: google/cloud-sdk:alpine
    imagePullPolicy: Always
    command: ["/bin/sh"]
    args: ["-c", "gsutil cat ${GS_URL} | tar -C /hooks -zxvf -"]
    env:
    - name: GS_URL
      valueFrom:
        configMapKeyRef:
          name: consul-registration-hook
          key: GS_URL
    volumeMounts:
    - name: hooks
      mountPath: /hooks
  containers:
  - name: service-with-consul-hook-container
    image: python:2
    command: ["python", "-m", "SimpleHTTPServer", "8080"]
    env:
    - name: KUBERNETES_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: KUBERNETES_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: CONSUL_HTTP_ADDR
      value: "$(HOST_IP):8500"
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: hooks
      mountPath: /hooks
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "/hooks/consul-registration-hook register k8s"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/hooks/consul-registration-hook deregister k8s"]
  volumes:
  - name: hooks
    emptyDir: {}
Mesos

Registration based on data provided by the Mesos API is only partially supported. Because the Mesos API does not provide health check definitions, we are unable to sync them with the Consul agent.

Development
Kubernetes integration

To develop the hook locally you need the following things to be installed on your machine:

When everything is installed and set up properly, you can build the hook for the Linux operating system (as Minikube starts the Kubernetes cluster on a Linux virtual machine):

make build-linux

After a successful build, you can start your local mini Kubernetes cluster with the project root mounted into the Kubernetes virtual machine:

minikube start --mount --mount-string .:/hooks
Simple use case: Consul agent in a separate container in the pod

Create a pod with a Consul agent in development mode and the hooks mounted:

kubectl create -f ./examples/service-for-dev.yaml

You can log in to the container with the hooks using the following command:

kubectl exec -it myservice-pod -- /bin/bash
Consul ACL & DaemonSet use case

Create the Consul secret:

kubectl create -f ./examples/secret-for-consul-agent.yaml

Create the Consul agent DaemonSet:

kubectl create -f ./examples/daemonset-with-acl-bootstrapping.yaml

Create the service pod:

kubectl create -f ./examples/service-with-consul-lifecycle-hooks-and-acl-support.yaml

You can find the hook binary in the /hooks folder in the container. All required environment variables are set up, so you can run the commands without any additional configuration.

Mesos integration

To develop the hook locally you need the following things to be installed on your machine:

When everything is installed and set up properly, you can build the hook for the Linux operating system (we will use a dockerized Mesos cluster for development):

make build-linux

After a successful build, you can start your local Mesos + Marathon cluster:

docker-compose up

The hook binary is available in the /opt/consul-registration-hook/ folder on the Mesos slave container, and can be used directly when deploying apps with Marathon (localhost:8080).

