Name: container-builder-workshop
Owner: Google Cloud Platform
Description: null
Created: 2018-04-10 17:41:05.0
Updated: 2018-04-14 17:13:23.0
Pushed: 2018-04-14 17:13:17.0
Homepage: null
Size: 39
Language: Go
The included scripts are intended to demonstrate how to use Google Cloud Container Builder as a continuous integration system deploying code to GKE. This is not an official Google product.
The example here follows a pattern where pushes to development branches, the master branch, and tags each trigger their own build and deployment: development branches deploy to a per-branch development environment, master deploys a canary, and tags roll out to production.
There are 5 scripts included as part of the demo.
This lab shows you how to set up a continuous delivery pipeline for GKE using Google Cloud Container Builder. We'll run through the following steps.
export PROJECT=[[YOUR PROJECT NAME]]
# On Cloudshell
# export PROJECT=$(gcloud info --format='value(config.project)')
export CLUSTER=gke-deploy-cluster
export ZONE=us-central1-a
gcloud config set compute/zone $ZONE
gcloud services enable container.googleapis.com --async
gcloud services enable containerregistry.googleapis.com --async
gcloud services enable cloudbuild.googleapis.com --async
gcloud services enable sourcerepo.googleapis.com --async
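The four enable commands can also be expressed as a loop. This dry-run sketch only prints the commands (remove the `echo` to actually run them; it assumes gcloud is installed and authenticated):

```shell
# Dry-run: print the enable command for each required API.
# Remove the leading "echo" to actually enable them with gcloud.
for api in container containerregistry cloudbuild sourcerepo; do
  echo gcloud services enable "${api}.googleapis.com" --async
done
```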
gcloud container clusters create ${CLUSTER} \
--project=${PROJECT} \
--zone=${ZONE} \
--quiet
gcloud container clusters get-credentials ${CLUSTER} \
--project=${PROJECT} \
--zone=${ZONE}
To run kubectl commands against GKE, you'll need to grant the Container Builder service account the container.developer role on your cluster.
PROJECT_NUMBER="$(gcloud projects describe \
$(gcloud config get-value core/project -q) --format='get(projectNumber)')"
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
--role=roles/container.developer
kubectl create ns production
kubectl apply -f kubernetes/deployments/prod -n production
kubectl apply -f kubernetes/deployments/canary -n production
kubectl apply -f kubernetes/services -n production
kubectl scale deployment gceme-frontend-production -n production --replicas 4
kubectl get pods -n production -l app=gceme -l role=frontend
kubectl get pods -n production -l app=gceme -l role=backend
kubectl get service gceme-frontend -n production
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
curl http://$FRONTEND_SERVICE_IP/version
gcloud alpha source repos create default
git init
git config credential.helper gcloud.sh
git remote add gcp https://source.developers.google.com/p/[PROJECT]/r/default
git add .
git commit -m "Initial Commit"
git push gcp master
Ensure you have credentials available
gcloud auth application-default login
Branches
cat <<EOF > branch-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "branchName": "[^(?!.*master)].*"
  },
  "description": "branch",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-dev.yaml"
}
EOF
curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    --data-binary @branch-build-trigger.json
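Before POSTing, it can be worth confirming the generated file is well-formed JSON. This sketch regenerates the file with placeholder values and parses it (it assumes python3 is available; the PROJECT, ZONE, and CLUSTER values here are illustrative stand-ins for your own):

```shell
# Regenerate the trigger file with placeholder values, then parse it
# to confirm it is well-formed JSON before sending it to the API.
PROJECT=my-project ZONE=us-central1-a CLUSTER=gke-deploy-cluster
cat <<EOF > branch-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "branchName": "[^(?!.*master)].*"
  },
  "description": "branch",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-dev.yaml"
}
EOF
python3 -m json.tool branch-build-trigger.json > /dev/null && echo "valid JSON"
```

The same check applies to the master and tag trigger files below.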
Master
cat <<EOF > master-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "branchName": "master"
  },
  "description": "master",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-canary.yaml"
}
EOF
curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    --data-binary @master-build-trigger.json
Tag
cat <<EOF > tag-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "tagName": ".*"
  },
  "description": "tag",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-prod.yaml"
}
EOF
curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
    --data-binary @tag-build-trigger.json
Review that the triggers are set up on the Build Triggers page.
The following submits a build to Cloud Builder and deploys the results to a user's namespace.
gcloud container builds submit \
--config builder/cloudbuild-local.yaml \
--substitutions=_VERSION=someversion,_USER=$(whoami),_CLOUDSDK_COMPUTE_ZONE=${ZONE},_CLOUDSDK_CONTAINER_CLUSTER=${CLUSTER} .
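The `_VERSION` and `_USER` values passed via `--substitutions` are user-defined substitutions; Cloud Builder requires their names to begin with an underscore, and the build config references them as `${_NAME}`. The demo's actual cloudbuild-local.yaml is not reproduced in this README, so the fragment below is a hypothetical sketch of the syntax only:

```yaml
# Hypothetical fragment -- not the demo's real cloudbuild-local.yaml.
# The kubectl builder image reads the target zone/cluster from env vars.
steps:
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'kubernetes/deployments/dev', '-n', '${_USER}']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
```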
Development branches are a set of environments your developers use to test their code changes before submitting them for integration into the live site. These environments are scaled-down versions of your application, but need to be deployed using the same mechanisms as the live environment.
To create a development environment from a feature branch, you can push the branch to the Git server and let Cloud Builder deploy your environment.
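The dev build deploys each branch into a namespace named after the branch (later steps query `-n new-feature`). The build config itself isn't reproduced in this README, but as an illustrative sketch, a branch name can be sanitized into a valid Kubernetes namespace like so:

```shell
# Illustrative only: lowercase the branch name and replace any character
# that is not valid in a Kubernetes namespace name with a hyphen.
BRANCH=new-feature
NAMESPACE=$(printf '%s' "$BRANCH" | tr '[:upper:]' '[:lower:]' | tr -c 'a-z0-9-' '-')
echo "$NAMESPACE"
```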
Create a development branch and push it to the Git server.
git checkout -b new-feature
In order to demonstrate changing the application, you will change the gceme cards from blue to orange.
Step 1 Open html.go and replace the two instances of blue with orange.
Step 2 Open main.go and change the version number from 1.0.0 to 2.0.0. The version is defined in this line:
const version string = "1.0.0"
Step 1
Commit and push your changes. This will kick off a build of your development environment.
git add html.go main.go
git commit -m "Version 2.0.0"
git push gcp new-feature
Step 2
After the change is pushed to the Git repository, navigate to the Build History page, where you can see that your build started for the new-feature branch.
Click into the build to review the details of the job
Step 3
Once that completes, verify that your application is accessible. You should see it respond with 2.0.0, which is the version that is now running.
Retrieve the external IP for the production services.
It can take several minutes before you see the load balancer external IP address.
kubectl get service gceme-frontend -n new-feature
Once an external IP is provided, store it for later use.
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=new-feature services gceme-frontend)
curl http://$FRONTEND_SERVICE_IP/version
Congratulations! You've set up a pipeline and deployed code to GKE with Cloud Builder.
The rest of this example follows the same pattern but demonstrates the triggers for Master and Tags.
Now that you have verified that your app is running your latest code in the development environment, deploy that code to the canary environment.
Step 1 Merge your feature branch into master and push it to the Git server.
git checkout master
git merge new-feature
git push gcp master
Again, after you've pushed to the Git repository, navigate to the Build History page, where you can see that your build started for the master branch.
Click into the build to review the details of the job
Step 2
Once complete, you can check the service URL to ensure that some of the traffic is being served by your new version. You should see about 1 in 5 requests returning version 2.0.0.
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done
You can stop this command by pressing Ctrl-C.
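To quantify the split rather than eyeballing it, you can pipe a batch of responses through `sort | uniq -c`. The loop here is simulated with a fixed set of responses standing in for the curl output (with 4 production replicas and 1 canary pod behind the same service, roughly 1 in 5 requests should return 2.0.0):

```shell
# Simulated responses standing in for the curl loop's output;
# uniq -c counts how many times each version was served.
printf '1.0.0\n1.0.0\n2.0.0\n1.0.0\n1.0.0\n' | sort | uniq -c | sort -rn
```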
Congratulations!
You have deployed a canary release. Next you will deploy the new version to production by creating a tag.
Now that your canary release was successful and you haven't heard any customer complaints, you can deploy to the rest of your production fleet.
Step 1 Tag your release and push it to the Git server.
git tag v2.0.0
git push gcp v2.0.0
Review the job on the Build History page, where you can see that your build started for the v2.0.0 tag.
Click into the build to review the details of the job
Step 2 Once complete, you can check the service URL to ensure that all of the traffic is being served by your new version, 2.0.0. You can also navigate to the site using your browser to see your orange cards.
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done
You can stop this command by pressing Ctrl-C.
Congratulations!
You have successfully deployed your application to production!