cyverse/openstack-ansible-host-prep

Name: openstack-ansible-host-prep

Owner: CyVerse

Description: Ansible to set up hosts for OpenStack Ansible Deployment (OSA/OSAD)

Created: 2016-06-08 21:52:03.0

Updated: 2017-12-01 10:28:49.0

Pushed: 2017-11-23 00:59:23.0


Size: 568



README

OpenStack Ansible Host Prep

The OpenStack-Ansible project has specific requirements for host layout and networking. This project, OSA Host Prep, automates most of this required configuration, prior to running OpenStack Ansible.

This is confirmed working for OpenStack Newton on Ubuntu 16.04. Your mileage may vary if you try deploying other versions/distros.

Issues/todo/questions
Deployment requirements

This Ansible code will handle most of OSA's requirements, but you should still familiarize yourself with the host layout and networking requirements.

Overview: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/overview.html

Installation requirements: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/overview-requirements.html

Host Layout

This repository follows the OSA host layout exactly, except for the following differences:

Host-Layout

Host Networking

Host networking: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/overview-network-arch.html

Configuring the switching fabric between hosts is up to you, but is straightforward. Suggestions:

Deployment Guide
Prepare for OpenStack Deploy
  1. Get Ansible on your deployment host.

     sudo su
    apt-get install software-properties-common
    apt-add-repository ppa:ansible/ansible
    apt-get update
    # passlib is also required to set root passwords
    apt-get install ansible python-passlib
    

    Ensure that you can SSH to all of the target hosts using SSH key authentication, and that you have accepted their host keys into your known_hosts file. In other words, if you haven't done so already, generate an SSH keypair on your deployment host, copy it to each target host's authorized_keys file, and test passwordless SSH from the deployment host to each target host.

  2. Clone this repo to your deployment host, and populate the Ansible inventory file (ansible/inventory/hosts) with the actual hostnames and IP addresses of your target hosts. If you want to use a separate inventory file that is stored elsewhere, change line 17 of the ansible/ansible.cfg file to point to that host file, e.g.:

    hostfile   = <your-private-repo-here>/ansible/inventory/hosts
    
  3. Prepare host networking by determining interfaces and IP addresses, and populating the ansible/inventory/group_vars/all with IPs and other variables. If you are storing your hosts file somewhere else, consider storing the group_vars folder alongside it.

    You can use the following commands to retrieve the network interfaces and IP addresses of your target hosts, for reference:

    ansible target-hosts -m shell -a "ip addr show" > all-interfaces.txt
    
    cat all-interfaces.txt | grep -v "$(cat all-interfaces.txt | grep 'lo:' -A 3)"
    
  4. Set OSA_VERSION in group_vars/all to the appropriate branch or tag of the OpenStack-Ansible project, or leave the default.

  5. ~~Create apt-mirror by editing the mirror host group in the ansible/inventory/hosts in this directory and running the playbook below.~~ This role currently deploys a broken sources.list to target hosts running Ubuntu 16.04. Skip this step until the role is fixed. An APT mirror is not strictly required.

    ansible-playbook playbooks/apt-mirror.yml
    
  6. Create and set credentials for all nodes for cases where manual console login is required to fix networking. (If there is a problem with the networking configuration, you need a “backdoor” into systems.)

    ansible-playbook playbooks/host_credentials.yml
    
  7. Set up host networking for VLAN tagged interfaces and Linux Bridges.

    1. Consider running the command below with the CLI argument --skip-tags restart-networking, manually checking hosts to ensure proper configuration, and then running ansible target-hosts -m shell -a "ifdown -a && ifup -a" to bounce the interfaces.
    ansible-playbook playbooks/configure_networking.yml
    
  8. Test basic connectivity after network configuration

    1. Basic Tests from the deployment host

      ansible target-hosts -m ping
      
      ansible target-hosts -m shell -a "ip a | grep -v 'lo:' -A 3"
      
      ansible target-hosts -m shell -a "ifconfig | grep br-mgmt -A 1 | grep inet"
      
    2. Further manual testing (Login to a node to test bridges)

      # Where X = low range and Y = high range.
      X=<low-last-octet-ip>;Y=<high-last-octet-ip>;nmap -sP 172.29.236.${X}-${Y}
      X=<low-last-octet-ip>;Y=<high-last-octet-ip>;nmap -sP 172.29.240.${X}-${Y}
      X=<low-last-octet-ip>;Y=<high-last-octet-ip>;nmap -sP 172.29.244.${X}-${Y}
      

    For the following, remove/add 172.29.${subnet}.Z as needed if your IP range is non-contiguous

    interface="br-mgmt" ; subnet="236" ; for i in 172.29.${subnet}.{X..Y} 172.29.${subnet}.Z;do echo "Pinging host on ${interface}: $i"; ping -c 3 -I $interface $i;done
    
    interface="br-vxlan" ; subnet="240" ; for i in 172.29.${subnet}.{X..Y} 172.29.${subnet}.Z;do echo "Pinging host on ${interface}: $i"; ping -c 3 -I $interface $i;done
    
    interface="br-storage" ; subnet="244" ; for i in 172.29.${subnet}.{X..Y} 172.29.${subnet}.Z;do echo "Pinging host on ${interface}: $i"; ping -c 3 -I $interface $i;done
    
  9. If using LVM backing for Cinder, manually partition the block-storage node's LVM volume:

    /sbin/parted /dev/sd<device> -s mklabel gpt
    /sbin/parted /dev/sd<device> -s mkpart primary 0% 100%
    /sbin/parted /dev/sd<device> -s set 1 lvm on
    /sbin/parted /dev/sd<device> -s p
    
  10. Configure and prep all nodes including deployment node for OSA deploy

    1. If this role is to handle LVM creation for cinder-volumes, be sure to enable (i.e. uncomment) the CINDER_PHYSICAL_VOLUME.create_flag in ansible/inventory/group_vars/all.
    ansible-playbook playbooks/configure_targets.yml
    
  11. To ensure an easy install, be sure to disable ufw or any other firewall (such as iptables) on all OpenStack nodes BEFORE deploying OSA, as a firewall could cause the install to hang or fail.

    ansible target-hosts -m shell -a "ufw disable"
    
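The connectivity tests in step 8 expand a last-octet range into full per-bridge addresses. As a minimal, runnable sketch of that expansion (the subnet and octet range below are illustrative placeholders, not values from this repository):

```shell
# Expand a last-octet range into full addresses for one bridge subnet.
# subnet/low/high are illustrative; substitute your own values.
subnet="236"
low=10
high=12
seq -f "172.29.${subnet}.%g" "$low" "$high"
# prints 172.29.236.10 through 172.29.236.12, one per line
```

The ping loops above use bash brace expansion (`172.29.${subnet}.{X..Y}`) for the same purpose; `seq` is simply the variable-friendly equivalent, since brace expansion does not accept variables for its endpoints.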
Deploy OpenStack using OpenStack-Ansible
  1. If SSH on the hosts is configured with a port other than 22, this ~/.ssh/config must be used on the deployment host. Replace all fields containing < >, including the <SSH-PORT> sections.

     Host 172.29.236.<IP-RANGE-HERE>?
        User root
        Port <SSH-PORT>
    
     Host 172.29.236.<INDIVIDUAL-IP-HOST-HERE>
        User root
        Port <SSH-PORT>
    
     Host *
        User root
        Port 22
    
  2. Login to the deployment node, and start filling out the configuration files (or symlink to files stored elsewhere if you already have them):

    cd /etc/openstack_deploy/
    cp openstack_user_config.yml.example openstack_user_config.yml
    vi openstack_user_config.yml
    
  3. Follow documentation to populate configuration files here: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/configure.html

  4. Begin filling out the configuration file with the br-mgmt IPs of each host to be used. DO NOT use the host's physical IP address.

  5. Fill out openstack_user_config.yml and user_variables.yml

  6. Generate OpenStack Credentials found here: http://docs.openstack.org/developer/openstack-ansible/newton/install-guide/configure.html

    cd /opt/openstack-ansible/scripts
    python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml
    
  7. Configure HAProxy found here: http://docs.openstack.org/developer/openstack-ansible/liberty/install-guide/configure-haproxy.html#making-haproxy-highly-available (update this for Newton!)

    From here, this guide more-or-less follows the OSA installation docs. We probably shouldn't maintain parallel documentation.

  8. Check syntax of configuration files: http://docs.openstack.org/developer/openstack-ansible/liberty/install-guide/configure-configurationintegrity.html

    cd /opt/openstack-ansible/playbooks/
    
    openstack-ansible setup-infrastructure.yml --syntax-check --ask-vault-pass
    
  9. Hosts file

  10. Run Foundation Playbook: http://docs.openstack.org/developer/openstack-ansible/liberty/install-guide/install-foundation.html#running-the-foundation-playbook Consider --skip-tags=mail if you already have sendmail installed and don't want Postfix (e.g. at CyVerse).

     openstack-ansible setup-hosts.yml --ask-vault-pass
    
  11. Run infrastructure playbook found here: http://docs.openstack.org/developer/openstack-ansible/install-guide/install-infrastructure.html#running-the-infrastructure-playbook

     openstack-ansible setup-infrastructure.yml --ask-vault-pass
    
  12. Manually verify that the infrastructure was set up correctly (Mainly a verification of Galera): http://docs.openstack.org/developer/openstack-ansible/install-guide/install-infrastructure.html#verify-the-database-cluster

    source /usr/local/bin/openstack-ansible.rc
    ansible galera_container -m shell -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"
    
    lxc-ls | grep galera
    
    lxc-attach -n infra1_galera_container-XXXXXXX
    
    mysql -u root -p
    
    show status like 'wsrep_cluster%';
    
    # That command should display a numeric cluster size equal to the number of infra nodes used.
    
  13. Do not proceed if the Galera cluster size is not equal to the number of infra nodes used, as this could cause deployment issues. Be sure to resolve it before proceeding to the next step.

  14. Run the playbook to setup OpenStack found here: http://docs.openstack.org/developer/openstack-ansible/install-guide/install-openstack.html#running-the-openstack-playbook

     openstack-ansible setup-openstack.yml --ask-vault-pass
    
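The Galera verification in step 12 can also be scripted. A minimal sketch that parses the reported wsrep_cluster_size and compares it against the expected infra-node count; the sample output line and expected count below are illustrative stand-ins, not real mysql output:

```shell
# Compare the reported wsrep_cluster_size against the expected number of
# infra nodes. sample_output stands in for the real mysql status line.
expected=3
sample_output="wsrep_cluster_size 3"
actual=$(echo "$sample_output" | awk '{print $2}')
if [ "$actual" -eq "$expected" ]; then
  echo "cluster OK: size $actual"
else
  echo "cluster DEGRADED: size $actual, expected $expected"
fi
# prints "cluster OK: size 3"
```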
What now?

Now that you have a running cloud, other things need to be set up in order to use OpenStack.

For steps on how to do this, see post-deployment.

Troubleshooting Tips
Dynamic Groups

OSA uses dynamically created groups of hosts and containers for targeting. To see a list of groups, run the following from the deployment host:

source /opt/ansible-runtime/bin/activate
/opt/openstack-ansible/scripts/inventory-manage.py -G
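A common follow-up is filtering that group listing for the service you care about, in the same style as the `grep galera` commands used elsewhere in this guide. A sketch using stand-in output (the group names below are illustrative):

```shell
# Filter a group listing for galera-related groups.
# sample_groups stands in for the output of inventory-manage.py -G.
sample_groups="compute_hosts
galera_container
log_hosts"
echo "$sample_groups" | grep galera
# prints "galera_container"
```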
Viewing Logs
Viewing Ansible run logs

Check /openstack/log/ansible-logging on the deployment host. :)

Viewing logs from services provisioned by OSA

lxc-attach to the rsyslog container on your logging server, and look in /var/log/log-storage. Everything that logs to rsyslog, including most of the services that OSA sets up, will end up here.

LXC Container commands

http://docs.openstack.org/developer/openstack-ansible/newton/developer-docs/ops-lxc-commands.html

Deploying OpenStack Liberty – everything below should be either worked into above sections or deprecated
Hosts

A minimum of 5 nodes is required, plus 1 optional node for Cinder LVM (if not using Ceph):

Control Plane
Logging
Compute
Cinder

You must run OSA from a deployment host that has access to all subnets and VLANs in your deployment. This can be one of the primary infrastructure / control plane hosts.

Networking

Networking diagram and description found here: http://docs.openstack.org/developer/openstack-ansible/liberty/install-guide/overview-hostnetworking.html

Required VLAN Tags

In order to deploy OpenStack using OSA, 4 total VLAN Tags are required.

Role | Description | Number
--- | --- | ---
Native | Native tag used within your subnet | 100
Container | Tag for container management network | 101
Tunnel | Tag for tunneling network | 102
Storage (Optional) | Tag for cinder storage network | 103

Target Host

Required Network bridges

Controller Container Networking Layout

This configuration may be present on only a subset of containers, whereas some of them will have only a single interface.

Neutron Controller Container

DHCP agent + L3 Agent and Linux Bridge

Installation Requirements
Target hosts
Cinder Target Host (Linus)
Security
Networking Reference Architecture

Interface | IP | Bridge Interface | Manual?
--- | --- | --- | ---
eth{primary} | 10.1.1.10 | N/A | Yes
lxcbr0 | Unnumbered | eth{primary} | No
br-mgmt | 192.168.100.10 | eth{primary}.101 | Yes
br-vxlan | 192.168.95.10 | eth{secondary}.102 | Yes
br-storage | 192.168.85.10 | eth{primary}.103 | Yes
br-vlan | Unnumbered | eth{secondary} | Yes

Physical Interfaces Files
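As an illustration of the reference architecture above, a minimal /etc/network/interfaces fragment for the br-mgmt bridge; the interface name, VLAN tag, and IP are placeholders taken from the table (eth0 stands in for eth{primary}), and this is a sketch, not the repository's actual template:

```
# Illustrative fragment only: br-mgmt on a VLAN-tagged sub-interface.
auto eth0.101
iface eth0.101 inet manual
    vlan-raw-device eth0

auto br-mgmt
iface br-mgmt inet static
    bridge_ports eth0.101
    bridge_stp off
    address 192.168.100.10
    netmask 255.255.255.0
```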
IP Layout

Hostname | Interface | IP | Bridge
--- | --- | --- | ---
external_lb_vip | N/A | 10.1.1.2 | eth{primary} network
internal_lb_vip | N/A | 192.168.100.10 | br-mgmt IP
infra_control_plane_host | eth{primary} | 10.1.1.10 | N/A
 | br-mgmt | 192.168.100.10 | eth{primary}.101
 | br-storage | 192.168.95.10 | eth{primary}.103
 | br-vxlan | 192.168.85.20 | eth{secondary}.102
 | br-vlan | Unnumbered | eth{secondary}
 | eth{secondary} | Unnumbered | N/A
infra1_container | | 192.168.100.10 | br-mgmt

Configure Targets
  1. Install package dependencies

    1. In this repo, use configure_targets.yml playbook

      ansible-playbook configure_targets.yml -e "SSH_PUBLIC_KEY='ssh-rsa AAAA...'"
      
  2. Set up NTP
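For step 2, a minimal illustrative /etc/ntp.conf fragment; the pool servers are the standard Ubuntu defaults, and this is a sketch, not the playbook's actual template:

```
# Illustrative NTP servers; replace with your site's time sources.
server 0.ubuntu.pool.ntp.org iburst
server 1.ubuntu.pool.ntp.org iburst
server 2.ubuntu.pool.ntp.org iburst
```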

Set up Deployment Host
  1. Find the latest stable tag: https://github.com/openstack/openstack-ansible/releases and verify that the selected tag corresponds to the version of OpenStack you wish to deploy. You may see something similar to this in the release notes: meta:series: liberty.

  2. Clone repo on deploy host (This can be done via Ansible, or on one of the “Infrastructure Control Plane Host”)

    git clone -b 12.0.9 https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
    
  3. Run bootstrap

    cd /opt/openstack-ansible
    scripts/bootstrap-ansible.sh
    
Prepare Target Hosts

Re-run the Ansible playbook to include changes for the block-storage node.

  1. Run the Ansible playbook to set up bare-metal host credentials, SSH keys, and root passwords. (It is very important to configure a root password, so that recovery of the configuration is still possible from the host's console.)

    cd ansible
    ansible-playbook playbooks/host_credentials.yml
    
  2. Configure Bare-Metal host networking for OSA setup (VLAN tagged interfaces and LinuxBridges). At this point, you MUST modify the hosts file AND group_vars/all variables under TARGET_HOST_NETWORKING, TARGET_HOSTS and CINDER_PHYSICAL_VOLUME sections.

    ansible-playbook playbooks/configure_networking.yml
    
  3. Run the playbook to set up an Ubuntu apt-mirror using a completely separate host (you could use a target host, but it is not recommended).

    ansible-playbook playbooks/apt-mirror.yml --skip-tags "update"
    
    # Manually execute the command below on the apt-mirror host, since it could take a very long time (upwards of 4 hours)
    
    su apt-mirror -c apt-mirror
    
  4. Prepare hosts for OSA deployment. This playbook configures the deployment host AND the OSA target hosts. (Ensure that hosts and group_vars/all are filled out and accurate.)

    ansible-playbook playbooks/configure_targets.yml
    
  5. Manually copy and enable configuration file for OSA

    cd /etc/openstack_deploy/ && cp openstack_user_config.yml.example openstack_user_config.yml
    

This work is supported by the National Institutes of Health's National Center for Advancing Translational Sciences, Grant Number U24TR002306. This work is solely the responsibility of the creators and does not necessarily represent the official views of the National Institutes of Health.