cluster-operator

Owner: OpenShift
Language: Go
Created: 2017-12-05
Updated: 2018-05-24
Deploying with oc cluster up:

Edit `/etc/sysconfig/docker` so that `OPTIONS=` includes `--insecure-registry 172.30.0.0/16`, then enable and start Docker:

```
sudo systemctl enable docker
sudo systemctl start docker
```

Install the Ansible dependencies:

```
sudo pip install kubernetes openshift
dnf install python2-libselinux
```

Clone this repository to `$HOME/go/src/github.com/openshift/cluster-operator` and install cfssl:

```
go get -u github.com/cloudflare/cfssl/cmd/...
```

Get an `oc` client binary (it doesn't have to be 3.10) from origin/releases, or build `oc` from source, and put the `oc` binary somewhere in your path. Then bring up the cluster:

```
oc cluster up --image="docker.io/openshift/origin"
```

Log in and grant the admin user the cluster-admin role:

```
oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin admin
oc login -u admin -p password
```
- `$HOME/.aws/credentials` - your AWS credentials. The default section will be used, but it can be overridden by vars when running the create cluster playbook.
- `$HOME/.ssh/libra.pem` - the SSH private key to use for AWS.

Deploy and then start a build of the operator image:

ansible-playbook contrib/ansible/deploy-devel-playbook.yml
oc start-build cluster-operator -n openshift-cluster-operator
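The `$HOME/.aws/credentials` file mentioned above uses the standard AWS shared-credentials INI format; a minimal sketch with placeholder values:

```ini
; Placeholder values -- substitute your own keys.
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```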
Build the images and push them to the integrated registry:

```
eval $(minishift docker-env)   # if using minishift
NO_DOCKER=1 make images
make integrated-registry-push
```
Provision a cluster:

```
ansible-playbook contrib/ansible/create-cluster-playbook.yml
```

Use `-e cluster_version` to use a real cluster version and provision an actual cluster in AWS (see `oc get clusterversions -n openshift-cluster-operator` for the list of the defaults we create). `-e cluster_name`, `-e cluster_namespace`, and other variables you can override are defined at the top of the playbook. You can then check the provisioning status of your cluster by running `oc describe cluster <cluster_name>`.
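Combining those overrides, an invocation might look like the following (the values here are illustrative, not project defaults):

```
ansible-playbook contrib/ansible/create-cluster-playbook.yml \
  -e cluster_name=mycluster \
  -e cluster_namespace=mycluster-namespace \
  -e cluster_version=origin-v3-10
```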
If you are actively working on controller code you can save some time by compiling and running locally:
First scale down or remove the deployed controller manager. To disable all controllers:

```
oc scale -n openshift-cluster-operator --replicas=0 dc/cluster-operator-controller-manager
```

Or, to run only some of them:

```
oc edit -n openshift-cluster-operator DeploymentConfig cluster-operator-controller-manager
```

and add an argument for `--controllers=-disableme`, or `--controllers=c1,c2,c3` for just the controllers you want. Alternatively, delete the DeploymentConfig entirely:

```
oc delete -n openshift-cluster-operator DeploymentConfig cluster-operator-controller-manager
```
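In the DeploymentConfig, the flag is passed as a container argument. A rough sketch of the relevant YAML (the container name and surrounding fields are assumptions, not copied from the actual template):

```yaml
spec:
  template:
    spec:
      containers:
      - name: cluster-operator-controller-manager   # assumed container name
        args:
        - controller-manager
        - --controllers=c1,c2,c3   # or --controllers=-disableme
```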
make build
go install ./cmd/cluster-operator
bin/cluster-operator controller-manager --log-level debug --k8s-kubeconfig ~/.kube/config
--controllers clusterapi,machineset,etc

Use `--help` to see the full list.

The Cluster Operator uses its own Ansible image, which layers our playbooks and roles on top of the upstream [OpenShift Ansible](https://github.com/openshift/openshift-ansible) images. Typically our Ansible changes only require work in this repo. See the `build/cluster-operator-ansible`
directory for the Dockerfile and playbooks we layer in.
To build the cluster-operator-ansible image you can just run `make images` normally.
WARNING: This image is built using OpenShift Ansible v3.10. This can be adjusted by specifying the CO_ANSIBLE_URL and CO_ANSIBLE_BRANCH environment variables to use a different branch/repository for the base openshift-ansible image.
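For example, to base the image on a different openshift-ansible repository and branch (the URL and branch below are placeholders):

```
# Placeholder repository URL and branch -- substitute your own fork.
CO_ANSIBLE_URL=https://github.com/youruser/openshift-ansible \
CO_ANSIBLE_BRANCH=release-3.11 \
make images
```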
You can run cluster-operator-ansible playbooks standalone by creating an inventory like:
```
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_become=true
ansible_ssh_user=centos
openshift_deployment_type=origin
openshift_release="3.10"
oreg_url=openshift/origin-${component}:v3.10.0
openshift_aws_ami=ami-833d37f9

[masters]

[etcd]

[nodes]
```
You can then run ansible with the above inventory file and your cluster ID:
ansible-playbook -i ec2-hosts build/cluster-operator-ansible/playbooks/cluster-operator/node-config-daemonset.yml -e openshift_aws_clusterid=dgoodwin-cluster
We're using the Cluster Operator deployment Ansible as a testing ground for the kubectl-ansible modules, which wrap `apply` and `oc process`. These roles are vendored in, similar to how golang vendoring works, using a tool called gogitit. The required gogitit manifest and cache are committed, but only the person updating the vendored code needs to install the tool or worry about the manifest. For everyone else the roles are just available normally, and this allows us to avoid requiring developers to periodically re-run `ansible-galaxy install`.
Updating the vendored code can be done with:
```
cd contrib/ansible/
gogitit sync
```
For OpenShift CI, our roles template (which we do not have permissions to apply ourselves) had to be copied to https://github.com/openshift/release/blob/master/projects/cluster-operator/cluster-operator-roles-template.yaml. Our copy in this repo is authoritative: whenever the auth/roles definitions change, we need to remember to copy the file, submit a PR, and request that someone run the make target for us.