rcbops/CephAIO

Name: CephAIO

Owner: rcbops

Description: Creates an AIO for ceph using LXC containers for Rackspace Public Cloud

Created: 2017-06-19 20:07:44.0

Updated: 2017-06-23 19:17:16.0

Pushed: 2017-09-12 15:00:20.0

Homepage:

Size: 23

Language: Shell

README

CephAIO

Requires: Ubuntu 16.04 and Ansible 2.3.0 or higher

Creates an AIO for ceph using LXC containers

The goal of this project was to create an all-in-one environment for ceph that resembles what a production ceph cluster may look like. Each container is treated as a "server".

-----------------------------------------------------------------
| Container | IP        | Type                                  |
-----------------------------------------------------------------
| cephadmin | 10.0.3.51 | Deployment Host, ceph monitoring host |
| cephmon   | 10.0.3.52 | ceph monitoring host                  |
| cephosd1  | 10.0.3.53 | ceph osd host                         |
| cephosd2  | 10.0.3.54 | ceph osd host                         |
| cephosd3  | 10.0.3.55 | ceph osd host                         |
| cephrgw   | 10.0.3.56 | ceph rados gateway host               |
-----------------------------------------------------------------
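
The name-to-IP mapping in the table above can be expressed as plain inventory-style lines; the sketch below is my own illustration (the output format is an assumption, not the repo's actual inventory file):

```shell
# Container names and IPs copied from the table above; each entry is name:IP.
containers="cephadmin:10.0.3.51 cephmon:10.0.3.52 cephosd1:10.0.3.53 cephosd2:10.0.3.54 cephosd3:10.0.3.55 cephrgw:10.0.3.56"
for entry in $containers; do
  name=${entry%%:*}   # text before the colon
  ip=${entry##*:}     # text after the colon
  echo "$name ansible_host=$ip"
done
```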

Known to Work on:

Rackspace Public Cloud
- 15 GB Compute v1
- Ubuntu 16.04 LTS (Xenial Xerus) (PVHVM)
- 75 GB Standard SATA block device attached as /dev/xvdb

Bootstrap the Host

Use bootstrap.sh to stage the host:

Add the Ansible PPA and update the package lists

 apt-add-repository ppa:ansible/ansible
 apt-get update

Install ansible

 apt-get install -y ansible

Create a passwordless SSH key and authorize it for the host itself

 su $USER -c "echo | ssh-keygen -t rsa"
 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

Disable password entry

 sed -i '/PasswordAuthentication yes/c\PasswordAuthentication no' /etc/ssh/sshd_config
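
The sed edit above rewrites sshd_config in place (the `c\` command replaces the matched line). It can be tried safely on a scratch copy first; the temp-file dance here is purely illustrative:

```shell
# Exercise the same sed edit on a scratch file before touching /etc/ssh/sshd_config.
tmpconf=$(mktemp)
echo "PasswordAuthentication yes" > "$tmpconf"
sed -i '/PasswordAuthentication yes/c\PasswordAuthentication no' "$tmpconf"
cat "$tmpconf"
rm -f "$tmpconf"
```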
To Create the CephAIO

 cd CephAIO
 ./bootstrap.sh
 ansible-playbook -i inventory build.yml
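
Once build.yml finishes, cluster state can be checked from the cephadmin container. The helper below is a sketch of mine: the function name and the injectable runner argument are assumptions, though `ceph health` and `lxc-attach` are standard commands:

```shell
# check_cluster_health RUNNER -- RUNNER is a command prefix used to invoke
# "ceph health"; in practice something like: lxc-attach -n cephadmin --
check_cluster_health() {
  local run="$1" status
  status=$($run ceph health) || return 1
  case "$status" in
    HEALTH_OK*) echo "cluster healthy" ;;
    *)          echo "cluster reports: $status"; return 1 ;;
  esac
}
# assumed invocation from the host:
# check_cluster_health "lxc-attach -n cephadmin --"
```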
To Teardown

ansible-playbook -i inventory teardown.yml

After using the teardown playbook, ensure that the ceph disk has been completely wiped. Ceph leaves some residual data that interferes with future deployments. The teardown playbook currently uses shred to perform the data wipe (this takes a while), but I am pretty sure that removing the block device and adding a new one would accomplish the same end result.
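
The shred pass the playbook relies on can be exercised on an ordinary file to see its effect. The wrapper name and the one-random-pass-plus-zero flag choice below are my assumptions, not necessarily what teardown.yml uses:

```shell
# shred overwrites a file or block device in place; -n 1 does one random
# pass and -z follows it with zeros. Only run against the data device
# after teardown.
wipe_device() {
  shred -n 1 -z "$1"
}
# intended target would be the attached block device from the setup above, e.g.:
# wipe_device /dev/xvdb
```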

Clean Up

ansible-playbook -i inventory cleanup.yml

If the containers and drives are created but the installation of ceph fails, cleanup.yml will stop and destroy the containers, remove the 3 partitions from the external drive, and then run the `shred` command. This provides a way to reset the host back to a clean state if an error occurs while installing ceph.

Tests

The tests directory contains information on how to verify that some of the installed components are working.

TO DO:

