Name: openstack-ansible-host-prep
Owner: CyVerse
Description: Ansible to set up hosts for OpenStack Ansible Deployment (OSA/OSAD)
Created: 2016-06-08 21:52:03.0
Updated: 2017-12-01 10:28:49.0
Pushed: 2017-11-23 00:59:23.0
Homepage: null
Size: 568
Language: null
The OpenStack-Ansible project has specific requirements for host layout and networking. This project, OSA Host Prep, automates most of this required configuration, prior to running OpenStack Ansible.
This is confirmed working for OpenStack Newton on Ubuntu 16.04. Your mileage may vary if you try deploying other versions/distros.
This Ansible code will handle most of OSA's requirements, but you should still familiarize yourself with the host layout and networking requirements.
Overview: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/overview.html
Installation requirements: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/overview-requirements.html
This repository follows the OSA host layout exactly, except for the following differences:

- **Deployment Host**: requires identical host networking as all OSA nodes, so instead of using a separate machine, we use one of the Infrastructure Control Plane Hosts, i.e. `infra1`.
- **Infrastructure Control Plane Hosts**: … which resides on the host-level operating system of those hosts.
- **Elasticsearch + Kibana**: not deployed by this project. This is not required, but if desired, it must be implemented separately.
- **Block Storage Host**: …, so account for that.

Host networking: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/overview-network-arch.html
Configuring the switching fabric between hosts is up to you, but it is straightforward: every host-facing port must carry the native, container, tunnel, and storage VLANs.
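As one illustration (a hypothetical Cisco-style configuration; the port name is an assumption, and the VLAN IDs are the ones used in the VLAN table later in this document), each host-facing switch port would trunk all four VLANs:

```
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk native vlan 100
 switchport trunk allowed vlan 100,101,102,103
```

Any fabric works, as long as every host NIC can see the native, container, tunnel, and storage VLANs.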
Get Ansible on your deployment host:

```
sudo su
apt-get install software-properties-common
apt-add-repository ppa:ansible/ansible
apt-get update
# passlib is also required to set root passwords
apt-get install ansible python-passlib
```
Ensure that you can SSH to all of the target hosts using SSH key authentication, and that you have accepted their host keys into your known_hosts file. In other words, if you haven't done so already, generate an SSH keypair on your deployment host, copy it to each target host's authorized_keys file, and test passwordless SSH from the deployment host to each target host.
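A minimal sketch of that key setup (the key path is a throwaway example; in practice you would use `~/.ssh/id_rsa` and run `ssh-copy-id` against each target host):

```shell
# Generate a keypair in a scratch directory (example path only).
KEYDIR="$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -N "" -f "$KEYDIR/id_rsa"

# Distribution (not run here): push the public key to each target host,
# then confirm passwordless login works, e.g.:
#   ssh-copy-id -i "$KEYDIR/id_rsa.pub" root@infra1
#   ssh -i "$KEYDIR/id_rsa" -o BatchMode=yes root@infra1 true
ls "$KEYDIR"
```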
Clone this repo to your deployment host, and populate the Ansible inventory file (`ansible/inventory/hosts`) with the actual hostnames and IP addresses of your target hosts. If you want to use a separate inventory file that is stored elsewhere, change line 17 of the `ansible/ansible.cfg` file to point to that hosts file, e.g.:

```
hostfile = <your-private-repo-here>/ansible/inventory/hosts
```
Prepare host networking by determining interfaces and IP addresses, and populating `ansible/inventory/group_vars/all` with IPs and other variables. If you are storing your hosts file somewhere else, consider storing the group_vars folder alongside it.
You can use the following commands to retrieve the network interfaces and IP addresses of your target hosts, for reference:

```
ansible target-hosts -m shell -a "ip addr show" > all-interfaces.txt
cat all-interfaces.txt | grep -v "$(cat all-interfaces.txt | grep 'lo:' -A 3)"
```
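As an alternative to the grep pipeline, an awk pass over the same output gives a compact interface/IPv4 listing (a sketch, run here against a canned sample rather than live ansible output):

```shell
# Sample `ip addr show` output (abbreviated) standing in for all-interfaces.txt.
cat > /tmp/sample-interfaces.txt <<'EOF'
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    inet 10.1.1.10/24 brd 10.1.1.255 scope global eth0
3: br-mgmt: <BROADCAST,MULTICAST,UP> mtu 1500
    inet 192.168.100.10/24 scope global br-mgmt
EOF

# Remember the interface name from each numbered header line,
# then print it alongside every inet address that follows.
awk '/^[0-9]+:/{iface=$2; sub(":","",iface)} /inet /{print iface, $2}' /tmp/sample-interfaces.txt
```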
Set OSA_VERSION in group_vars/all to the appropriate branch or tag of the Openstack-Ansible project, or leave the default.
~~Create an apt-mirror by editing the `mirror` host group in the `ansible/inventory/hosts` file in this directory and running the playbook below.~~ This role currently deploys a broken sources.list to target hosts running Ubuntu 16.04. Skip this step until the role is fixed. An APT mirror is not strictly required.

```
ansible-playbook playbooks/apt-mirror.yml
```
Create and set credentials for all nodes, for cases where manual console login is required to fix networking. (If there is a problem with the networking configuration, you need a "backdoor" into the systems.)

```
ansible-playbook playbooks/host_credentials.yml
```
Set up host networking for VLAN tagged interfaces and Linux bridges. Consider running with `--skip-tags restart-networking` and manually checking hosts to ensure proper configuration, then running `ansible target-hosts -m shell -a "ifdown -a && ifup -a"` to bounce the interfaces.

```
ansible-playbook playbooks/configure_networking.yml
```
Test basic connectivity after network configuration. Basic tests from the deployment host:

```
ansible target-hosts -m ping
ansible target-hosts -m shell -a "ip a | grep -v 'lo:' -A 3"
ansible target-hosts -m shell -a "ifconfig | grep br-mgmt -A 1 | grep inet"
```
Further manual testing (log in to a node to test bridges):

```
# where X = low range and Y = high range.
X=<low-last-octet-ip>;Y=<high-last-octet-ip>;nmap -sP 172.29.236.${X}-${Y}
X=<low-last-octet-ip>;Y=<high-last-octet-ip>;nmap -sP 172.29.240.${X}-${Y}
X=<low-last-octet-ip>;Y=<high-last-octet-ip>;nmap -sP 172.29.244.${X}-${Y}
```

```
# Add 172.29.${subnet}.Z as needed if your IP range is non-contiguous
interface="br-mgmt" ; subnet="236" ; for i in 172.29.${subnet}.{X..Y} 172.29.${subnet}.Z;do echo "Pinging host on ${interface}: $i"; ping -c 3 -I $interface $i;done
interface="br-vxlan" ; subnet="240" ; for i in 172.29.${subnet}.{X..Y} 172.29.${subnet}.Z;do echo "Pinging host on ${interface}: $i"; ping -c 3 -I $interface $i;done
interface="br-storage" ; subnet="244" ; for i in 172.29.${subnet}.{X..Y} 172.29.${subnet}.Z;do echo "Pinging host on ${interface}: $i"; ping -c 3 -I $interface $i;done
```
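Note that `{X..Y}` brace expansion only works with literal numbers (and only in bash), so substitute real octets before running. A portable way to build the same target list is `seq`; for example, with hosts .11 through .13 plus a non-contiguous .20 (the ping itself is omitted here so the loop is illustrative):

```shell
interface="br-mgmt" ; subnet="236"
low=11 ; high=13
# seq accepts variables, unlike bash brace expansion {low..high}.
for last in $(seq "$low" "$high") 20; do
  echo "Pinging host on ${interface}: 172.29.${subnet}.${last}"
done
```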
If using LVM backing for Cinder, manually partition the Block Storage node's LVM volume:

```
/sbin/parted /dev/sd<device> -s mklabel gpt
/sbin/parted /dev/sd<device> -s mkpart primary 0% 100%
/sbin/parted /dev/sd<device> -s set 1 lvm on
/sbin/parted /dev/sd<device> -s p
```
Configure and prep all nodes, including the deployment node, for the OSA deploy. If using LVM backing for cinder-volumes, be sure to enable (i.e. uncomment) the `CINDER_PHYSICAL_VOLUME.create_flag` in `ansible/inventory/group_vars/all`.

```
ansible-playbook playbooks/configure_targets.yml
```
To ensure an easy install, be sure to disable `ufw` or any other firewall (e.g. `iptables` rules) on all OpenStack nodes BEFORE deploying OSA, as it could cause the install to hang or fail.

```
ansible target-hosts -m shell -a "ufw disable"
```
If SSH on the hosts is configured with a port other than 22, the following `~/.ssh/config` must be used on the deployment host. Replace all fields containing `< >`, including the `<SSH-PORT>` sections:

```
Host 172.29.236.<IP-RANGE-HERE>?
    User root
    Port <SSH-PORT>
Host 172.29.236.<INDIVIDUAL-IP-HOST-HERE>
    User root
    Port <SSH-PORT>
Host *
    User root
    Port 22
```
Log in to the deployment node, and start filling out the configuration files (or symlink to files stored somewhere else if you already have them):

```
cd /etc/openstack_deploy/
cp openstack_user_config.yml.example openstack_user_config.yml
vim openstack_user_config.yml
```
Follow the documentation to populate the configuration files here: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/configure.html

Begin filling out the configuration file with the `br-mgmt` IPs for each host to be used. DO NOT use the host's physical IP address. Fill out `openstack_user_config.yml` and `user_variables.yml`.
Generate OpenStack credentials as described here: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/configure.html

```
cd /opt/openstack-ansible/scripts
python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
```
Configure HAProxy as described here: http://docs.openstack.org/developer/openstack-ansible/liberty/install-guide/configure-haproxy.html#making-haproxy-highly-available (update this for Newton!)
From here, this guide more-or-less follows the OSA installation docs. We probably shouldn't maintain parallel documentation.
Check the syntax of the configuration files: http://docs.openstack.org/developer/openstack-ansible/liberty/install-guide/configure-configurationintegrity.html

```
cd /opt/openstack-ansible/playbooks/
openstack-ansible setup-infrastructure.yml --syntax-check --ask-vault-pass
```
Hosts file
Run the Foundation playbook: http://docs.openstack.org/developer/openstack-ansible/liberty/install-guide/install-foundation.html#running-the-foundation-playbook Consider `--skip-tags=mail` if you already have sendmail installed and don't want Postfix (e.g. at CyVerse).

```
openstack-ansible setup-hosts.yml --ask-vault-pass
```
Run the infrastructure playbook found here: http://docs.openstack.org/developer/openstack-ansible/install-guide/install-infrastructure.html#running-the-infrastructure-playbook

```
openstack-ansible setup-infrastructure.yml --ask-vault-pass
```
Manually verify that the infrastructure was set up correctly (mainly a verification of Galera): http://docs.openstack.org/developer/openstack-ansible/install-guide/install-infrastructure.html#verify-the-database-cluster

```
. /usr/local/bin/openstack-ansible.rc
ansible galera_container -m shell -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
lxc-ls | grep galera
lxc-attach -n infra1_galera_container-XXXXXXX
mysql -u root -p
show status like 'wsrep_cluster%';
```
That command should display a numeric cluster size equal to the number of infra nodes used. Do not proceed if the Galera cluster size does not equal the number of infra nodes, as it could cause deployment issues. Be sure to resolve this before proceeding to the next step.
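If you want to script that check, the comparison amounts to the following sketch (the sample line stands in for real `mysql` output, and `expected` is an assumed node count):

```shell
expected=3  # assumed number of infra nodes
# Sample output line standing in for the live query result.
actual=$(printf 'wsrep_cluster_size\t3\n' | awk '$1 == "wsrep_cluster_size" {print $2}')
if [ "$actual" = "$expected" ]; then
  echo "Galera cluster size OK ($actual)"
else
  echo "Galera cluster size MISMATCH: got ${actual:-none}, expected $expected" >&2
fi
```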
Run the playbook to set up OpenStack, found here: http://docs.openstack.org/developer/openstack-ansible/install-guide/install-openstack.html#running-the-openstack-playbook

```
openstack-ansible setup-openstack.yml --ask-vault-pass
```
Now that you have a running cloud, other things need to be set up in order to use OpenStack.
For steps on how to do this, see post-deployment.
OSA uses dynamically created groups of hosts and containers for targeting. To see a list of groups, run the following from the deployment host:

```
source /opt/ansible-runtime/bin/activate
/opt/openstack-ansible/scripts/inventory-manage.py -G
```
Check /openstack/log/ansible-logging
on the deployment host. :)
`lxc-attach` to the rsyslog container on your logging server, and look in `/var/log/log-storage`. Everything that logs to rsyslog, including most of the services that OSA sets up, will end up here.
http://docs.openstack.org/developer/openstack-ansible/newton/developer-docs/ops-lxc-commands.html
A minimum of 5 nodes is required, plus 1 optional node for Cinder LVM (if not using Ceph):

- Control Plane
- Logging
- Compute
- Cinder
You must run OSA from a deployment host which has access to all subnets and VLANs in your deployment. This can be one of the primary infrastructure / control plane hosts.
Networking diagram and description found here: http://docs.openstack.org/developer/openstack-ansible/liberty/install-guide/overview-hostnetworking.html
In order to deploy OpenStack using OSA, 4 VLAN tags in total are required.

Role | Description | Number
--- | --- | ---
Native | Native tag used within your subnet | 100
Container | Tag for container management network | 101
Tunnel | Tag for tunneling network | 102
Storage (Optional) | Tag for cinder storage network | 103
Required network bridges:

- `br-mgmt` (on `bond0`/Primary eth adapter)
- `br-storage` (on `bond0`/Primary eth adapter)
- `br-vxlan` (on `bond1`/Secondary eth adapter)
- `br-vlan` (on `bond1`/Secondary eth adapter)

This configuration may be present on only a subset of containers, whereas some of them will only have a single interface.

DHCP agent + L3 Agent and Linux Bridge

LVM volume groups: `cinder-volumes` and `lxc`
Interface | IP | Bridge Interface | Manual?
--- | --- | --- | ---
eth{primary} | 10.1.1.10 | N/A | Yes
lxcbr0 | Unnumbered | eth{primary} | No
br-mgmt | 192.168.100.10 | eth{primary}.101 | Yes
br-vxlan | 192.168.95.10 | eth{secondary}.102 | Yes
br-storage | 192.168.85.10 | eth{primary}.103 | Yes
br-vlan | Unnumbered | eth{secondary} | Yes
`/etc/network/interfaces`:

```
auto lo
iface lo inet loopback

source /etc/network/interfaces.d/*
```

`/etc/network/interfaces.d/device-eth`:

```
# Primary Interface / Bond
auto eth{primary}
iface eth{primary} inet static
    address 10.1.1.10
    description management interface
    broadcast 10.1.1.255
    gateway 10.1.1.1
    netmask 255.255.255.0
    network 10.1.1.0
    dns-nameservers 8.8.8.8
    dns-search domain.com

# Container Management VLAN
auto eth{primary}.101
iface eth{primary}.101 inet manual
    vlan-raw-device eth{primary}

# Container Storage VLAN
auto eth{primary}.103
iface eth{primary}.103 inet manual
    vlan-raw-device eth{primary}

# Secondary Interface / Bond
auto eth{secondary}
iface eth{secondary} inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down

# Container Tunnel VLAN
auto eth{secondary}.102
iface eth{secondary}.102 inet manual
    vlan-raw-device eth{secondary}
```
`/etc/network/interfaces.d/device-bridges`:

```
# Container management bridge
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port references tagged interface
    bridge_ports eth{primary}.101
    address 192.168.100.10
    netmask 255.255.255.0
    dns-nameservers 8.8.8.8

# OpenStack Networking VXLAN (tunnel/overlay) bridge
auto br-vxlan
iface br-vxlan inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port references tagged interface
    bridge_ports eth{secondary}.102
    address 192.168.95.10
    netmask 255.255.255.0

# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port references untagged interface
    bridge_ports eth{secondary}

# Storage bridge (optional)
auto br-storage
iface br-storage inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port references tagged interface
    bridge_ports eth{primary}.103
    address 192.168.85.10
    netmask 255.255.255.0
```
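If you are adapting these stanzas by hand rather than via the playbook, a small templating sketch can reduce copy/paste errors (the function name is hypothetical; bridge names, ports, and IPs are this guide's examples):

```shell
# Render one ifupdown bridge stanza from variables.
render_bridge() {
  # $1=bridge name, $2=bridge port, $3=address (empty => inet manual)
  if [ -n "$3" ]; then mode=static; else mode=manual; fi
  printf 'auto %s\niface %s inet %s\n' "$1" "$1" "$mode"
  printf '    bridge_stp off\n    bridge_waitport 0\n    bridge_fd 0\n'
  printf '    bridge_ports %s\n' "$2"
  if [ -n "$3" ]; then
    printf '    address %s\n    netmask 255.255.255.0\n' "$3"
  fi
  printf '\n'
}

out=/tmp/device-bridges.example
{
  render_bridge br-mgmt    'eth{primary}.101'   192.168.100.10
  render_bridge br-vxlan   'eth{secondary}.102' 192.168.95.10
  render_bridge br-vlan    'eth{secondary}'     ''
  render_bridge br-storage 'eth{primary}.103'   192.168.85.10
} > "$out"
grep -c '^auto ' "$out"   # one stanza per bridge
```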
Hostname | Interface | IP | Bridge
--- | --- | --- | ---
external_lb_vip | N/A | 10.1.1.2 | eth{primary} network
internal_lb_vip | N/A | 192.168.100.10 | br-mgmt IP
infra_control_plane_host | eth{primary} | 10.1.1.10 | N/A
 | br-mgmt | 192.168.100.10 | eth{primary}.101
 | br-storage | 192.168.85.10 | eth{primary}.103
 | br-vxlan | 192.168.95.10 | eth{secondary}.102
 | br-vlan | Unnumbered | eth{secondary}
 | eth{secondary} | Unnumbered | N/A
 | infra1_container | 192.168.100.10 | br-mgmt
Install package dependencies. In this repo, use the `configure_targets.yml` playbook:

```
ansible-playbook configure_targets.yml -e "SSH_PUBLIC_KEY='ssh-rsa AAAA...'"
```
Set up NTP
Find the latest stable tag: https://github.com/openstack/openstack-ansible/releases and verify that the selected tag corresponds with the version of OpenStack you wish to deploy. You may see something similar to `meta:series: liberty` in the release notes.
Clone the repo on the deploy host (this can be done via Ansible, or on one of the Infrastructure Control Plane Hosts):

```
git clone -b 12.0.9 https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
```
Run bootstrap:

```
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh
```
Re-run the Ansible playbook to include changes for the block-storage node.
Run the Ansible playbook to set up bare-metal host credentials, SSH keys and root passwords. (It is very important to configure a root password, so that recovery of the configuration is still possible from the host's console.)

```
cd ansible
ansible-playbook playbooks/host_credentials.yml
```
Configure bare-metal host networking for the OSA setup (VLAN tagged interfaces and Linux bridges). At this point, you MUST modify the `hosts` file AND the `group_vars/all` variables under the `TARGET_HOST_NETWORKING`, `TARGET_HOSTS` and `CINDER_PHYSICAL_VOLUME` sections.

```
ansible-playbook playbooks/configure_networking.yml
```
Run the playbook to set up an Ubuntu apt-mirror, using a completely separate host (a target host could be used, but this is not recommended):

```
ansible-playbook playbooks/apt-mirror.yml --skip-tags "update"
```

Manually execute the command below on the apt-mirror host, since it could take a very long time (upwards of 4 hours):

```
su - apt-mirror -c apt-mirror
```
Prepare hosts for the OSA deployment. This playbook configures the deployment host AND the OSA target hosts. (Ensure that `hosts` and `group_vars/all` are filled out and accurate.)

```
ansible-playbook playbooks/configure_targets.yml
```
Manually copy and enable the configuration file for OSA:

```
cd /etc/openstack_deploy/ && cp openstack_user_config.yml.example openstack_user_config.yml
```