linux-on-ibm-z/typha

Name: typha

Owner: LinuxONE and Linux on z Systems Open-source Team

Description: Beta: Calico's Felix datastore fan-out daemon.

Created: 2018-02-12 12:46:31.0

Updated: 2018-02-12 12:46:33.0

Pushed: 2018-02-16 12:17:53.0

Homepage:

Size: 506

Language: Go


README


Project Calico

This repository contains the source code for Project Calico's optional Typha daemon, which is currently in beta. An instance of Typha sits between the datastore (such as the Kubernetes API server) and many instances of Felix.

A small cluster of Typha nodes fans out updates to many Felix instances.

This architecture has several advantages, chief among them reducing load on the datastore by fanning updates out to many clients.

How can I try Typha?

We're still in the process of adding Typha to our documentation. In the meantime, if you'd like to try it out, follow the instructions below.

Since Typha has the most impact when Felix uses the Kubernetes datastore driver (KDD), we're focusing on that to begin with. Install the Kubernetes manifests below to create a 3-node Deployment of Typha and expose it as a service called calico-typha. A three-node deployment is enough for roughly 600 Felix instances. Typha scales horizontally, so feel free to increase or reduce the number of replicas. If you're running a small cluster, you may wish to reduce the CPU request proportionately.

apiVersion: v1
kind: Service
metadata:
  name: calico-typha
  namespace: kube-system
  labels:
    k8s-app: calico-typha
spec:
  ports:
    - port: 5473
      protocol: TCP
      targetPort: calico-typha
      name: calico-typha
  selector:
    k8s-app: calico-typha
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-typha
  namespace: kube-system
  labels:
    k8s-app: calico-typha
spec:
  replicas: 3
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        k8s-app: calico-typha
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      hostNetwork: true
      containers:
      - image: calico/typha:v0.2.2
        name: calico-typha
        ports:
        - containerPort: 5473
          name: calico-typha
          protocol: TCP
        env:
          - name: TYPHA_LOGFILEPATH
            value: "none"
          - name: TYPHA_LOGSEVERITYSYS
            value: "none"
          - name: TYPHA_LOGSEVERITYSCREEN
            value: "info"
          - name: TYPHA_PROMETHEUSMETRICSENABLED
            value: "true"
          - name: TYPHA_PROMETHEUSMETRICSPORT
            value: "9093"
          - name: TYPHA_DATASTORETYPE
            value: "kubernetes"
          - name: TYPHA_CONNECTIONREBALANCINGMODE
            value: "kubernetes"
        volumeMounts:
        - mountPath: /etc/calico
          name: etc-calico
          readOnly: true
        resources:
          requests:
            cpu: 1000m
      volumes:
      # Mount in the Calico config directory from the host.
      - name: etc-calico
        hostPath:
          path: /etc/calico
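The manifest above requests 3 replicas. As a rough illustration of the sizing guidance (one Typha node per ~200 Felix instances, taken from the 3-nodes-for-~600-instances figure above), a hypothetical helper like `replicasFor` could compute a replica count; this is a sketch, not official sizing advice:

```go
package main

import "fmt"

// replicasFor suggests a Typha replica count, assuming the rough ratio
// of ~200 Felix instances per Typha node (3 nodes for ~600 instances).
// It keeps a floor of 3 replicas for redundancy. This heuristic is an
// illustration only, not official guidance from the Typha project.
func replicasFor(felixInstances int) int {
	const perNode = 200
	n := (felixInstances + perNode - 1) / perNode // ceiling division
	if n < 3 {
		n = 3
	}
	return n
}

func main() {
	fmt.Println(replicasFor(600))  // 3
	fmt.Println(replicasFor(1000)) // 5
}
```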

Once you have a Typha service running, you can tell Felix v2.3.0+ (calico/node:v1.3.0+) to connect to it by setting the following environment variable in your calico/node pod spec, which tells Felix to discover Typha using the Kubernetes service API:

- name: FELIX_TYPHAK8SSERVICENAME
  value: "calico-typha"
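For context, that variable goes in the env list of the calico/node container; a sketch of the surrounding pod spec fragment (the container name, image tag, and the extra DATASTORE_TYPE setting are illustrative assumptions, not from this README):

```yaml
# Fragment of a calico-node DaemonSet pod spec (illustrative).
containers:
  - name: calico-node
    image: calico/node:v1.3.0
    env:
      - name: DATASTORE_TYPE
        value: "kubernetes"
      - name: FELIX_TYPHAK8SSERVICENAME
        value: "calico-typha"
```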

Note:

How can I get support for contributing to Project Calico?

The best place to ask a question or get help from the community is the calico-users channel on Slack. We also have an IRC channel.

Who is behind Project Calico?

Tigera, Inc. is the company behind Project Calico and is responsible for the ongoing management of the project. However, it is open to any members of the community, whether individuals or organizations, to get involved and contribute code.

Contributing

Thanks for thinking about contributing to Project Calico! The success of an open source project is entirely down to the efforts of its contributors, so we do genuinely want to thank you for even thinking of contributing.

Before you do so, you should check out our contributing guidelines in the CONTRIBUTING.md file, to make sure it's as easy as possible for us to accept your contribution.

How do I build Typha?

Typha mostly uses Docker for builds. We develop on Ubuntu 16.04, but other Linux distributions should work (there are known Makefile issues that prevent building on OS X).
To build Typha, you will need:

Then, as a one-off, run

    make update-tools

which will install a couple more go tools that we haven't yet containerised.

Then, to build the calico-typha binary:

    make bin/calico-typha

or, the calico/typha docker image:

    make calico/typha

How can I run Typha's unit tests?

To run all the UTs:

    make ut

To start a ginkgo watch, which will re-run the relevant UTs as you update files:

    make ut-watch

To get coverage stats:

    make cover-report

or

    make cover-browser

How can I run a subset of the go unit tests?

If you want to be able to run unit tests for specific packages for more iterative development, first run make update-tools to install ginkgo, the test tool used to run Typha's unit tests.

There are several ways to run ginkgo. One option is to change directory to the package you want to test, then run ginkgo. Another is to use ginkgo's watch feature to monitor files for changes:

    ginkgo watch -r

Ginkgo will re-run tests as files are modified and saved.

How do I build packages/run Typha?
Docker

After building the docker image (see above), you can run Typha and log to screen with, for example:

    docker run --privileged --net=host -e TYPHA_LOGSEVERITYSCREEN=INFO calico/typha

