Name: cf-mysql-deployment
Owner: Cloud Foundry
Created: 2017-01-26 18:34:13.0
Updated: 2018-05-22 12:15:37.0
Pushed: 2018-05-15 23:04:18.0
Size: 105
Language: Shell
This repo contains a BOSH 2 manifest that defines tested topologies of cf-mysql-release.
It serves as the reference for the compatible release and stemcell versions.
This repo takes advantage of new BOSH features such as cloud config and operations files.
Please refer to the BOSH documentation for more details. If you're having trouble with the prerequisites, please contact the BOSH team for help (perhaps on Slack).
- A deployment of BOSH
- A deployment of Cloud Foundry, final release 193 or greater
Instructions for installing BOSH and Cloud Foundry can be found at http://docs.cloudfoundry.org/.
Routing release v0.145.0 or later is required to register the proxy and broker routes with Cloud Foundry:
bosh2 -e YOUR_ENV upload-release https://bosh.io/d/github.com/cloudfoundry-incubator/cf-routing-release?v=0.145.0
Standalone deployments (i.e. deployments that do not interact with Cloud Foundry) do not require the routing release.
The latest final release expects the Ubuntu Trusty (14.04) go_agent stemcell version 2859 by default. Older stemcells are not recommended. Stemcells can be downloaded from http://bosh.io/stemcells; choose the appropriate stemcell for your infrastructure (vSphere ESXi, AWS HVM, or OpenStack KVM).
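As a sketch, composing the stemcell download URL might look like the following. The stemcell slug shown is for AWS HVM and is an assumption; verify the exact name for your IaaS on bosh.io/stemcells before uploading:

```shell
# Compose the stemcell URL for your IaaS.
# ASSUMPTION: the slug below is the AWS HVM Ubuntu Trusty name; other IaaSes differ.
STEMCELL_VERSION=2859
STEMCELL_SLUG=bosh-aws-xen-hvm-ubuntu-trusty-go_agent
STEMCELL_URL="https://bosh.io/d/stemcells/${STEMCELL_SLUG}?v=${STEMCELL_VERSION}"

# Print the upload command for review rather than running it directly:
echo "bosh2 -e YOUR_ENV upload-stemcell ${STEMCELL_URL}"
```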
You can use a pre-built final release or build a dev release from any of the branches described in Getting the Code.
Final releases are stable releases created periodically for completed features. They also contain pre-compiled packages, which makes deployment much faster. To deploy the latest final release, simply check out the master branch; it contains the latest final release and accompanying materials to generate a manifest. If you would like to deploy an earlier final release, use git checkout <tag> to obtain both the release and the corresponding manifest generation materials. It's important that the manifest generation materials are consistent with the release.
If you'd like to deploy the latest code, build a release yourself from the develop branch.
Build the development release:

cd ~/workspace/cf-mysql-release
git checkout release-candidate
./scripts/update
bosh2 create-release

Upload the release to your BOSH environment:

bosh2 -e YOUR_ENV upload-release
Prior to deployment, the operator should define three subnets via their infrastructure provider. The MySQL release is designed to be deployed across three subnets to ensure availability in the event of a subnet failure.
In order to route requests to both proxies, the operator should create a load balancer. Manifest changes required to configure a load balancer can be found in the proxy documentation. Once a load balancer is configured, the brokers will hand out the address of the load balancer rather than the IP of the first proxy.
There are two ways to configure a load balancer: either automatically through your IaaS, or by supplying static IPs for the proxies.
In order for the MySQL deployment to attach the proxy instances to your configured load balancer, you need to use the proxy-elb.yml opsfile. This opsfile requires a vm_extension in your cloud-config which references your load balancer and also defines the specific requirements for your IaaS. Consult your IaaS documentation as well as your BOSH CPI documentation for the specifics of the cloud_properties definitions to use in your vm_extension. You can read more about configuring the proxies here.
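For illustration only, a vm_extension for an AWS Elastic Load Balancer might look like the sketch below. The extension name and ELB name are placeholders, and the elbs key applies to the AWS CPI; other CPIs use different cloud_properties:

```shell
# Write an illustrative cloud-config fragment to a local file.
# ASSUMPTIONS: "mysql-proxy-lb" and "my-proxy-elb" are placeholder names;
# `elbs` is the AWS CPI key -- consult your CPI docs for other IaaSes.
cat > vm-extension-snippet.yml <<'EOF'
vm_extensions:
- name: mysql-proxy-lb
  cloud_properties:
    elbs:
    - my-proxy-elb
EOF
```

After merging such an extension into your cloud-config, the proxy-elb.yml opsfile can reference it by name.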
If you would like to use a custom load balancer, you can manually configure your proxies to use static IP addresses which your load balancer can point to. To do that, create an operations file that looks like the following, with static IPs that make sense for your network:
- type: replace
  path: /instance_groups/name=proxy/networks
  value:
  - name: default
    static_ips:
    - 10.10.0.1
    - 10.10.0.2
The number of mysql nodes should always be odd, with a minimum count of three, to avoid split-brain. When a failed node comes back online, it will automatically rejoin the cluster and sync data from one of the healthy nodes.
The MariaDB cluster nodes are configured by default with 10GB of persistent disk. This can be changed using an operations file that modifies instance_groups/name=mysql/persistent_disk and properties/cf_mysql/mysql/persistent_disk; however, your deployment will fail if this is less than 3GB.
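A sketch of such an operations file is below. The 20GB value is illustrative, and it assumes BOSH's convention of expressing persistent_disk in megabytes; the value must stay above the 3GB minimum:

```shell
# Write an illustrative ops file that raises the mysql persistent disk to ~20GB.
# ASSUMPTION: persistent_disk values are in megabytes (20480 MB = 20 GB).
cat > increase-mysql-disk.yml <<'EOF'
- type: replace
  path: /instance_groups/name=mysql/persistent_disk
  value: 20480
- type: replace
  path: /properties/cf_mysql/mysql/persistent_disk
  value: 20480
EOF
```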
There are two proxy instances. The second proxy is intended to be used in a failover capacity. In the event the first proxy fails, the second proxy will still be able to route requests to the mysql nodes.
There are also two broker instances. The brokers each register a route with the router, which load balances requests across the brokers.
New deployments will work “out of the box” with little additional configuration. There are two mechanisms for providing credentials to the deployment:

- Operator-provided: pass a variables file with -l <path-to-vars-file> (see below for more information on variables files).
- CLI-generated: pass --vars-store <path-to-vars-store-file> to let the CLI generate secure passwords and write them to the provided vars store file.

By default the deployment manifest will not deploy brokers, nor try to register routes for the proxies with a Cloud Foundry router. To enable integration with Cloud Foundry, operations files are provided to add brokers and register proxy routes.
If you require static IPs for the proxy instance groups, these IPs should be added to the networks section of the cloud-config, as well as to an operations file which sets them on the proxy instance groups. See below for more information on operations files.
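As an illustration, the matching cloud-config network entry might reserve those IPs as shown below. The network name, range, gateway, and AZ are assumptions to adapt to your environment:

```shell
# Illustrative cloud-config fragment reserving static IPs for the proxies.
# ASSUMPTIONS: network name, range, gateway, and az are placeholders.
cat > proxy-network-snippet.yml <<'EOF'
networks:
- name: default
  type: manual
  subnets:
  - range: 10.10.0.0/24
    gateway: 10.10.0.254
    az: z1
    static:
    - 10.10.0.1
    - 10.10.0.2
EOF
```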
bosh2 \
  -e my-director \
  -d cf-mysql \
  deploy \
  ~/workspace/cf-mysql-deployment/cf-mysql-deployment.yml \
  -o <path-to-operations-file>
If you are upgrading an existing deployment of cf-mysql-release with a manifest that does not take advantage of these new features (for example, if the manifest was generated via the spiff templates and stubs provided in the cf-mysql-release repository), then be aware:

- The manifest assumes availability zones named z1, z2, and z3. If your cloud-config doesn't have those AZs, it will result in an error.
- You will need to migrate the existing jobs and static IPs to their new BOSH 2 instance_groups. See the section below for more information.
- Using --vars-store is not recommended, as it will result in credentials being rotated, which can cause issues.

To deploy:

bosh2 \
  -e my-director \
  -d my-deployment \
  deploy \
  ~/workspace/cf-mysql-deployment/cf-mysql-deployment.yml \
  -o <path-to-deployment-name-operations> \
  [-o <path-to-additional-operations>] \
  -l <path-to-vars-file> \
  [-l <path-to-additional-vars-files>]
Refer to these docs on migrating from a BOSH 1 style manifest, then create an ops file to mix those migrations into the base deployment manifest. For example:
- type: replace
  path: /instance_groups/name=mysql/migrated_from?
  value:
  - name: mysql_z1
    az: z1
  - name: mysql_z2
    az: z2
  - name: mysql_z3
    az: z3
- type: replace
  path: /instance_groups/name=mysql/networks
  value:
  - name: default
    static_ips:
    - 10.10.0.1
    - 10.10.0.2
    - 10.10.0.3
Additional example operations files used for common configurations of cf-mysql-release (e.g. adding a broker for Cloud Foundry integration) can be found in the operations directory. See the README in that directory for a description of which (combinations of) files to use for enabling each common feature set.
The manifest template is not intended to be modified; any changes you need to make should be added to operations files.
The syntax for operations files is detailed here.
Operations files can be provided at deploy-time as follows:
bosh2 \
  deploy \
  -o <path-to-operations-file>
Variables files are flat-format key-value YAML files containing sensitive information such as passwords, SSL keys/certs, etc.
They can be provided at deploy-time as follows:
bosh2 \
  deploy \
  -l <path-to-vars-file>
We provide a default set of variables intended for a local bosh-lite environment here.
Use this as an example for your environment-specific variables file.
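For illustration, a minimal variables file might look like the sketch below. The variable names are hypothetical; match them to the ((variable)) placeholders actually used in the manifest and the bosh-lite defaults linked above:

```shell
# Write an illustrative variables file.
# ASSUMPTION: the variable names here are placeholders, not the manifest's real ones.
cat > deployment-vars.yml <<'EOF'
cf_mysql_external_host: p-mysql.example.com
cf_mysql_broker_auth_username: broker-admin
cf_mysql_broker_auth_password: some-secure-password
EOF
```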
By default, this deployment assumes that some variables (e.g. nats) are provided by cross-deployment links from a deployment named cf. This will be true if Cloud Foundry was deployed via cf-deployment. If you wish to disable cross-deployment links, use the disable-cross-deployment-links.yml operations file. Disabling cross-deployment links will require these values to be provided manually (e.g. by passing -v nats={...} to the bosh deploy command).
By default, applications cannot connect to IP addresses on the private network, preventing applications from connecting to the MySQL service. To enable access to the service, create a new security group for the IP configured in your manifest for the property jobs.cf-mysql-broker.mysql_node.host. Note: This is not required for CF running on bosh-lite, as these application groups are pre-configured.
Add the rules to a file in the following JSON format; multiple rules are supported.

[
  {
    "destination": "10.10.163.1-10.10.163.255",
    "protocol": "all"
  },
  {
    "destination": "10.10.164.1-10.10.164.255",
    "protocol": "all"
  },
  {
    "destination": "10.10.165.1-10.10.165.255",
    "protocol": "all"
  }
]
Create a security group from the rule file:

cf create-security-group p-mysql rule.json

Enable the rule for all apps:

cf bind-running-security-group p-mysql
Security group changes are only applied to new application containers; existing apps must be restarted.
To register the service broker, run the broker-registrar errand:

bosh2 -e YOUR_ENV -d cf-mysql run-errand broker-registrar

Alternatively, register the broker manually using the cf CLI. You must be logged in as an admin:

cf create-service-broker p-mysql BROKER_USERNAME BROKER_PASSWORD URL

After registering the service broker, the MySQL service will be visible in the Services Marketplace; using the CLI, run cf marketplace.
BROKER_USERNAME and BROKER_PASSWORD are the credentials Cloud Foundry will use to authenticate when making API calls to the service broker. Use the values of the manifest properties jobs.cf-mysql-broker.properties.auth_username and jobs.cf-mysql-broker.properties.auth_password.

URL specifies where the Cloud Controller will access the MySQL broker. Use the value of the manifest property jobs.cf-mysql-broker.properties.external_host. By default, this value is set to p-mysql.<properties.domain> (in spiff: "p-mysql." .properties.domain).
For more information, see Managing Service Brokers.
The smoke tests are useful for verifying a deployment. The MySQL release contains a “smoke-tests” job which is deployed as a BOSH errand.
Run the smoke tests via bosh errand as follows:
bosh2 -e YOUR_ENV -d cf-mysql run-errand smoke-tests
The following commands are destructive and are intended to be run in conjunction with deleting your BOSH deployment.
bosh2 -e YOUR_ENV -d cf-mysql run-errand deregister-and-purge-instances
Run the following:
cf purge-service-offering p-mysql
cf delete-service-broker p-mysql