openshift/console

Name: console

Owner: OpenShift

Description: OpenShift Cluster Console UI

Created: 2018-04-13 17:54:59

Updated: 2018-05-24 12:17:29

Pushed: 2018-05-24 12:17:25

Homepage:

Size: 86329

Language: JavaScript

README

OpenShift Cluster Console

Codename: “Bridge”

Build Status

quay.io/coreos/tectonic-console

The console is a more friendly kubectl in the form of a single-page webapp. It also integrates with other services like monitoring, chargeback, ALM, and identity. Some things that go on behind the scenes include:

External Service Integration Criteria

Any external service that integrates with the console should satisfy the following requirements:

Quickstart
Dependencies:
  1. node.js >= 8 & yarn >= 1.3.2
  2. go >= 1.8 & glide >= 0.12.0 (go get github.com/Masterminds/glide) & glide-vc
  3. kubectl and a k8s cluster
  4. jq (for contrib/environment.sh)
  5. Google Chrome/Chromium >= 60 (needs --headless flag) for integration tests
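
A quick way to sanity-check the toolchain above (binary names, especially Chrome's, vary by platform):

node --version     # expect >= 8
yarn --version     # expect >= 1.3.2
go version         # expect >= 1.8
glide --version
kubectl version --client
jq --version
google-chrome --version    # or chromium-browser, depending on platform
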
Build everything:
./build.sh

Backend binaries are output to /bin.

Configure the application
Tectonic

If you have a working kubectl on your path, you can run the application with:

export KUBECONFIG=/path/to/kubeconfig
source ./contrib/environment.sh
./bin/bridge

The script in contrib/environment.sh sets sensible defaults in the environment, and uses kubectl to query your cluster for endpoint and authentication information.
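
To see what it configured, you can list the Bridge-related variables it exported; this assumes they all share the BRIDGE_ prefix used elsewhere in this README:

env | grep '^BRIDGE_'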

To configure the application to run by hand (or if environment.sh doesn't work for some reason), you can manually provide a Kubernetes bearer token with the following steps.

First get the secret ID that has a type of kubernetes.io/service-account-token by running:

kubectl get secrets

then get the secret contents:

kubectl describe secrets/<secret-id-obtained-previously>

Use this token value to set the BRIDGE_K8S_BEARER_TOKEN environment variable when running Bridge.
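
For example, a minimal sketch of wiring the token into Bridge (the secret name is a placeholder, and the jsonpath/base64 pipeline is just one way to pull the token out):

# base64 flags differ between macOS and Linux
export BRIDGE_K8S_BEARER_TOKEN=$(kubectl get secret <secret-id-obtained-previously> -o jsonpath='{.data.token}' | base64 --decode)
./bin/bridge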

OpenShift

Registering an OpenShift OAuth client requires administrative privileges for the entire cluster, not just a local project. Run the following command to log in as cluster admin:

oc login -u system:admin

To run bridge locally connected to a remote OpenShift cluster, create an OAuthClient resource with a generated secret and read that secret:

oc process -f examples/tectonic-console-oauth-client.yaml | oc apply -f -
export OAUTH_SECRET=$( oc get oauthclient tectonic-console -o jsonpath='{.secret}' )

If the CA bundle of the OpenShift API server is unavailable, fetch the CA certificates from a service account secret. Otherwise copy the CA bundle to examples/ca.crt:

oc get secrets -n default --field-selector type=kubernetes.io/service-account-token -o json | \
    jq '.items[0].data."service-ca.crt"' -r | python -m base64 -d > examples/ca.crt
# Note: use "openssl base64" because the "base64" tool is different between mac and linux

Set the OPENSHIFT_API environment variable to tell the script the API endpoint:

export OPENSHIFT_API="https://127.0.0.1:8443"
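
If you are not sure of the endpoint, one way to discover it (assuming you are already logged in with oc) is:

export OPENSHIFT_API=$(oc whoami --show-server)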

Finally run the Console and visit localhost:9000:

./examples/run-bridge.sh
Docker

The builder-run.sh script will run any command inside a Docker container to ensure a consistent build environment. For example, to build with Docker, run:

./builder-run.sh ./build.sh

The docker image used by builder-run is itself built and pushed by the script push-builder, which uses the file Dockerfile-builder to define an image. To update the builder-run build environment, first make your changes to Dockerfile-builder, then run push-builder, and then update the BUILDER_VERSION variable in builder-run to point to your new image. Our practice is to manually tag builder images in the form Builder-v$SEMVER once we're happy with the state of the push.
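
Putting that together, a rough sketch of the flow (scripts invoked by the names used above; adjust if your checkout names them differently):

# 1. edit Dockerfile-builder with the desired toolchain changes
# 2. build and push the new builder image
./push-builder
# 3. bump BUILDER_VERSION in builder-run to the new image, then verify the build still works
./builder-run.sh ./build.sh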

Compile, Build, & Push Docker Image

(There is almost no reason to ever do this manually; Jenkins handles this automation.)

Builds a Docker image, tags it with the current git sha, and pushes it to the quay.io/coreos/tectonic-console repo.

You must set the env vars DOCKER_USER and DOCKER_PASSWORD or have a valid .dockercfg file.
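
For example (credentials are placeholders):

export DOCKER_USER=<registry-username>
export DOCKER_PASSWORD=<registry-password>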

./build-docker-push.sh
Jenkins automation

Master branch:

Pull requests:

If changes are ever required for the Jenkins job configuration, apply them to both the regular console job and PR image job.

Hacking

See CONTRIBUTING for workflow & convention details.

See STYLEGUIDE for file format and coding style guide.

Dev Dependencies

go, glide, glide-vc, nodejs/yarn, kubectl

Frontend Development

All frontend code lives in the frontend/ directory. The frontend uses node, yarn, and webpack to compile dependencies into self-contained bundles which are loaded dynamically at run time in the browser. These bundles are not committed to git. Tasks are defined in package.json in the scripts section and are aliased to yarn run <cmd> (in the frontend directory).
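
For example, yarn will list the available tasks if you invoke it without a script name (run from the frontend directory):

cd frontend && yarn run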

Install Dependencies

To install the build tools and dependencies:

yarn install

You must run this command once, and every time the dependencies change. node_modules are not committed to git.

Interactive Development

The following build task will watch the source code for changes and compile automatically. You must reload the page in your browser!

yarn run dev
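
A typical local loop (assuming Bridge is already built and configured as described above) keeps both processes running:

# terminal 1: serve the console backend on localhost:9000
./bin/bridge
# terminal 2: rebuild the frontend bundles on change (from the frontend/ directory)
yarn run dev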
Tests

Run all unit tests:

./test.sh

Run backend tests:

./test-backend.sh

Run frontend tests:

./test-frontend.sh
Integration Tests

Integration tests are run in a headless Chrome driven by protractor. Requirements include Chrome, a working cluster, kubectl, and bridge itself (see building above).

Setup (run once, and any time node_modules changes via yarn add or yarn install):

cd frontend && yarn run webdriver-update

Run integration tests:

yarn run test-gui

Run integration tests on an OpenShift cluster:

yarn run test-gui-openshift

This will include the normal k8s CRUD tests and CRUD tests for OpenShift resources. It doesn't include ALM tests since it assumes ALM is not set up on an OpenShift cluster.

Hacking Integration Tests

Remove the --headless flag to Chrome (chromeOptions) in frontend/integration-tests/protractor.conf.ts to see what the tests are actually doing.

Local Dex

Check out and build dex.
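
A minimal sketch of that step (the repo URL and relative paths are illustrative; they assume a GOPATH-style layout with dex and console checked out as siblings):

# from the console repo root
git clone https://github.com/coreos/dex ../../coreos/dex
cd ../../coreos/dex && make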

./bin/dex serve ../../openshift/console/contrib/dex-config-dev.yaml

Run bridge with the following options:

./bin/bridge \
    --user-auth=oidc \
    --user-auth-oidc-issuer-url='http://127.0.0.1:5556' \
    --user-auth-oidc-client-id='example-app' \
    --user-auth-oidc-client-secret='ZXhhbXBsZS1hcHAtc2VjcmV0' \
    --base-address='http://localhost:9000/' \
    --kubectl-client-id='example-app' \
    --kubectl-client-secret='ZXhhbXBsZS1hcHAtc2VjcmV0'
Dependency Management

Dependencies should be pinned to an exact semver, sha, or git tag (eg, no ^).

Backend

Whenever making vendor changes:

  1. Finish updating dependencies & writing changes
  2. Commit everything except vendor/ (eg, server: add x feature)
  3. Make a second commit with only vendor/ (eg, vendor: revendor)
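
For example, the two-commit flow might look like:

# first commit: everything except vendor/
git add <changed files>
git commit -m "server: add x feature"
# second commit: only the vendored dependencies
git add vendor/
git commit -m "vendor: revendor"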

Add new backend dependencies:

  1. Edit glide.yaml
  2. ./revendor.sh

Update existing backend dependencies:

  1. Edit the glide.yaml file to the desired version (most likely a git hash)
  2. Run ./revendor.sh
  3. Verify the update was successful. glide.lock will have been updated to reflect the changes to glide.yaml, and the package will have been updated in vendor/.
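
Putting those steps together, a sketch of a typical update might be:

# 1. pin the desired version (exact tag or sha, no ^ ranges) in glide.yaml
# 2. regenerate the lock file and vendor/ tree
./revendor.sh
# 3. confirm glide.lock and vendor/ picked up the change
git status glide.lock vendor/
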
Frontend

Add new frontend dependencies:

yarn add <package@version>

Update existing frontend dependencies:

yarn upgrade <package@version>
Supported Browsers

We support the latest versions of the following browsers:

