starkandwayne/bucc-walk-thru-aws

Name: bucc-walk-thru-aws

Owner: Stark & Wayne

Description: This is an example repository that complements a walk-thru video of provisioning AWS networking, a public Jumpbox, a private BOSH/UAA/CredHub/Concourse (BUCC), and an example 5-node ZooKeeper cluster.

Created: 2018-03-14 11:47:18

Updated: 2018-03-16 21:31:26

Pushed: 2018-03-16 21:31:25

Homepage: https://github.com/starkandwayne/bucc

Size: 46 KB

Language: HCL

README

Walk thru of BUCC on AWS

This is an example repository that complements a walk-thru video of provisioning AWS networking, a public Jumpbox, a private BOSH/UAA/CredHub/Concourse (BUCC), and an example 5-node ZooKeeper cluster.

All the state files created by terraform and bosh are listed in .gitignore. After going thru this walk-thru, copy + paste liberally into your own private Git repository so that you can commit your state files there.
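
For example, a minimal way to seed such a private repository might look like this (the ../my-bucc-env location and the choice to drop .gitignore are assumptions for illustration, not part of this repo):

# Hypothetical sketch: copy the walk-thru into a sibling repo where state files can be committed
git init ../my-bucc-env
rsync -a --exclude .git ./ ../my-bucc-env/
cd ../my-bucc-env
rm .gitignore                      # optional: stop ignoring the terraform/bosh state files
git add .
git commit -m "Initial BUCC walk-thru environment"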

Clone this repo
git clone https://github.com/starkandwayne/bucc-walk-thru-aws
cd bucc-walk-thru-aws
Configure and terraform AWS
cp envs/aws/aws.tfvars{.example,}

Populate envs/aws/aws.tfvars with your AWS API credentials.

Create a key pair called bucc-walk-thru in the AWS console.

The private key will automatically be downloaded by your browser. Copy it into envs/ssh/bucc-walk-thru.pem:

cp ~/Downloads/bucc-walk-thru.pem envs/ssh/bucc-walk-thru.pem

Allocate an elastic IP and store the IP in envs/aws/aws.tfvars at jumpbox_ip = "<your-ip>". This will be used for your jumpbox/bastion host later.
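
If you prefer the AWS CLI to the console for these two manual steps, something along these lines should work (region/profile flags are left to your local AWS CLI configuration; treat this as a sketch rather than part of the walk-thru):

# Create the key pair and save the private key where the later steps expect it
aws ec2 create-key-pair --key-name bucc-walk-thru \
  --query 'KeyMaterial' --output text > envs/ssh/bucc-walk-thru.pem
chmod 600 envs/ssh/bucc-walk-thru.pem

# Allocate an elastic IP for the jumpbox; paste the printed IP into envs/aws/aws.tfvars as jumpbox_ip
aws ec2 allocate-address --domain vpc --query 'PublicIp' --output text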

envs/aws/bin/update

This will create a new VPC, a NAT to allow egress Internet access, and two subnets - a public DMZ for the jumpbox, and a private subnet for BUCC.
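
If you are curious about exactly what terraform created, you can inspect its outputs directly (this assumes envs/aws is the terraform working directory used by the wrapper script above):

cd envs/aws
terraform output   # the IDs and IPs that the later wrapper scripts feed into bosh
cd -               # back to the repo root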

Deploy Jumpbox

This repository already includes jumpbox-deployment as a submodule, plus some wrapper scripts. You are ready to go.

git submodule update --init
envs/jumpbox/bin/update

This will create a tiny EC2 VM, with a 64G persistent disk, and a jumpbox user whose home folder is placed upon that large persistent disk.

Looking inside envs/jumpbox/bin/update you can see that it is a wrapper script around bosh create-env. Variables from the terraform output are consumed via the envs/jumpbox/bin/vars-from-terraform.sh helper script.

bosh create-env src/jumpbox-deployment/jumpbox.yml \
  -o src/jumpbox-deployment/aws/cpi.yml \
  -l <(envs/jumpbox/bin/vars-from-terraform.sh) \
  ...

If you ever need to enlarge the disk, edit envs/jumpbox/operators/persistent-homes.yml, change disk_size: 65_536 to a larger number, and run envs/jumpbox/bin/update again. That's the beauty of bosh create-env.
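
For example, to roughly double the disk (GNU sed shown; on macOS use sed -i '' or simply edit the file by hand):

# Bump the persistent disk from 64G to 128G, then let bosh create-env resize it
sed -i 's/disk_size: 65_536/disk_size: 131_072/' envs/jumpbox/operators/persistent-homes.yml
envs/jumpbox/bin/update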

SSH into Jumpbox

To SSH into the jumpbox we first need to store the jumpbox user's private key in a file. There is a helper script for this:

envs/jumpbox/bin/ssh

You can see that the jumpbox user's home directory is placed on the /var/vcap/store persistent disk:

jumpbox/0:~$ pwd
/var/vcap/store/home/jumpbox
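
As a quick sanity check (not part of the original walk-thru), you can confirm the home directory really lives on the persistent disk by looking at the mount:

jumpbox/0:~$ df -h /var/vcap/store   # the 64G persistent disk should be mounted here
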
SOCKS5 magic tunnel thru Jumpbox
envs/jumpbox/bin/socks5-tunnel

The output will look like:

Starting SOCKS5 on port 9999...
export BOSH_ALL_PROXY=socks5://localhost:9999
export CREDHUB_PROXY=socks5://localhost:9999
Unauthorized use is strictly prohibited. All access and activity
is subject to logging and monitoring.
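
Under the hood, a SOCKS5 tunnel is just ssh with dynamic port forwarding. A rough equivalent of what the helper script sets up, with placeholder key path and jumpbox IP, would be:

# -D 9999 opens a local SOCKS5 proxy; -N skips running a remote command.
# The key path and IP are placeholders; the helper script extracts both for you.
ssh -N -D 9999 -i <path-to-jumpbox-private-key> jumpbox@<your-jumpbox-ip>
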
Deploying BUCC
source <(envs/bucc/bin/env)
bucc up --cpi aws --spot-instance

This will create an envs/bucc/vars.yml stub for you to populate. Fortunately, we have a helper script to fill in most of it:

envs/bucc/bin/vars-from-terraform.sh > envs/bucc/vars.yml

Now run bucc up again:

bucc up

BUT… you are about to download a few hundred GB of BOSH releases, AND THEN upload them thru the jumpbox to your new BUCC/BOSH VM on AWS. Either this will take a few hours, or you can move to the jumpbox and run the commands there.

So let's use the jumpbox. For your convenience, there is a nice wrapper script which uploads this project to your jumpbox, runs bucc up, and then downloads the modified state files created by bucc up/bosh create-env back into this project locally:

envs/bucc/bin/update-upon-jumpbox

Inside the SSH session:

cd ~/workspace/walk-thru
envs/bucc/bin/update
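
Once the wrapper returns, the refreshed state files are back in your local working copy. Because they are gitignored they will not appear in a plain git status, but you can still spot them:

git status --short --ignored   # the "!!" entries include the state/creds files brought back from the jumpbox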

To access your BOSH/CredHub, remember to have your SOCKS5 magic tunnel running in another terminal:

envs/jumpbox/bin/socks5-tunnel

After it has bootstrapped your BUCC/BOSH from either the jumpbox or your local machine:

export BOSH_ALL_PROXY=socks5://localhost:9999
export CREDHUB_PROXY=socks5://localhost:9999

source <(envs/bucc/bin/env)

bosh env

If you get a connection error like below, your SOCKS5 tunnel is no longer up. Run it again.

Fetching info:
  Performing request GET 'https://10.10.1.4:25555/info':
    Performing GET request:
      Retry: Get https://10.10.1.4:25555/info: dial tcp [::1]:9999: getsockopt: connection refused

Instead, the output should look like:

Using environment '10.10.1.4' as client 'admin'

Name      bucc-walk-thru
UUID      71fec1e3-d34b-4940-aba8-e53e8c848dd1
Version   264.7.0 (00000000)
CPI       aws_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          dns: disabled
          snapshots: disabled
User      admin

Succeeded

The envs/bucc/bin/update script also pre-populated a stemcell:

bosh stemcells
Using environment '10.10.1.4' as client 'admin'

Name                                     Version  OS             CPI  CID
bosh-aws-xen-hvm-ubuntu-trusty-go_agent  3541.9   ubuntu-trusty  -    ami-02f7c167 light

And it pre-populates the cloud config with our two subnets and with vm_types that all use AWS spot instances:

bosh cloud-config

vm_types:
- cloud_properties:
    ephemeral_disk:
      size: 25000
    instance_type: m4.large
    spot_bid_price: 10
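
Because every vm_type bids for spot capacity, you can watch the resulting spot requests from the AWS CLI if you are curious (again, not part of the walk-thru itself):

aws ec2 describe-spot-instance-requests \
  --query 'SpotInstanceRequests[].{id:SpotInstanceRequestId,state:State,type:LaunchSpecification.InstanceType}' \
  --output table
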
Deploy something

Five-node cluster of Apache ZooKeeper:

source <(envs/bucc/bin/env)

bosh deploy -d zookeeper <(curl https://raw.githubusercontent.com/cppforlife/zookeeper-release/master/manifests/zookeeper.yml)
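
Once the deploy converges, you can confirm the five nodes are healthy:

bosh -d zookeeper instances   # expect five zookeeper instances in the "running" state
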
Upgrade Everything
cd src/bucc
git checkout develop
git pull
cd -

envs/bucc/bin/update-upon-jumpbox
Backup & Restore

You can start cutting backups of your BUCC and its BOSH deployments immediately.

Read https://www.starkandwayne.com/blog/bucc-bbr-finally/ to learn more about the inclusion of BOSH Backup & Restore (BBR) into your BUCC VM.

Backup within Jumpbox

BBR does not currently support SOCKS5, so we will request the backup from inside the jumpbox.

envs/jumpbox/bin/ssh
mkdir -p ~/workspace/backups
cd ~/workspace/backups
source <(~/workspace/walk-thru/envs/bucc/bin/env)
bucc bbr backup
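
bbr writes one timestamped artifact directory per backup into the current working directory; it is worth confirming it is there before you rely on it (the restore step below matches these directories with the same pattern):

find . -maxdepth 1 -type d -regex ".+_.+Z"
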
Restore within Jumpbox

Our backup is meaningless if we haven't tried to restore it. So, let's do it.

From within the jumpbox session above, destroy the BUCC VM:

bucc down

Now re-deploy it without any fancy pre-populated stemcells etc:

bucc up

When it comes up, it is empty. It has neither stemcells nor deployments.

bosh deployments
Name  Release(s)  Stemcell(s)  Team(s)  Cloud Config

0 deployments

But if you check the AWS console you'll see the ZooKeeper cluster is still running. Let's restore the BOSH/BUCC data.

cd ~/workspace/backups/
last_backup=$(find . -type d -regex ".+_.+Z" | sort -r | head -n1)
bucc bbr restore --artifact-path=${last_backup}

Our BOSH now remembers its ZooKeeper cluster:

bosh deployments
Using environment '10.10.1.4' as client 'admin'

Name       Release(s)       Stemcell(s)                                     Team(s)  Cloud Config
zookeeper  zookeeper/0.0.7  bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3541.9  -        latest
Sync State Files

In the example above we rebuilt our BUCC/BOSH from within the Jumpbox. This means that our local laptop does not have the updated state files. From your laptop, sync them back:

envs/jumpbox/bin/rsync from . walk-thru
Destroy Everything

To discover all your running deployments:

source <(envs/bucc/bin/env)

bosh deployments

To delete each one:

bosh delete-deployment -d zookeeper
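
If you end up with more than one deployment, a small loop saves some typing (a sketch; -n makes bosh non-interactive):

# Delete every deployment this director knows about
for d in $(bosh deployments --column=name); do
  bosh -n delete-deployment -d "$d"
done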

To destroy your BUCC/BOSH VM and its persistent disk, and the orphaned disks from your deleted zookeeper deployment:

bosh clean-up --all
bucc down

To destroy your jumpbox and its persistent disk:

envs/jumpbox/bin/update delete-env

To destroy your AWS VPC, subnets, etc:

cd envs/aws
make destroy

