Name: internal-provisioning
Owner: Science For Life Laboratory
Description: Ansible repository for deployment stuff
Created: 2014-11-17 13:25:06.0
Updated: 2017-10-10 08:18:25.0
Pushed: 2017-10-10 08:18:19.0
Size: 127
Language: Python
This repository contains playbooks to deploy the preprocessing and NAS infrastructure at National Genomics Infrastructure.
Please refer to the Ansible documentation for more details about Ansible concepts and how it works.
The repository is structured as follows:

```
production              # inventory file for production servers
stage                   # inventory file for the stage environment
group_vars/
   all                  # here we assign variables shared between all systems
processing.yml          # playbook for preprocessing servers
nas.yml                 # playbook for NASes
roles/
   common/              # this hierarchy represents a "role"
      tasks/
         main.yml       # <-- Main file for any given role
      templates/        # <-- Any configs or scripts that use variables set in the role
         ntp.conf.j2    # <------- templates end in .j2
      files/            # <-- Any configs or scripts that require no modification by the role
      vars/
         main.yml       # <-- Variables uniquely associated with this role
```
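As an illustration of this layout, a role's tasks/main.yml might look like the sketch below. The package name and task details are hypothetical examples, not taken from this repository:

```yaml
# roles/common/tasks/main.yml -- hypothetical sketch of a role's task file
- name: Install the NTP service            # package name assumed for illustration
  package:
    name: ntp
    state: present

- name: Render the NTP configuration from a template
  template:
    src: ntp.conf.j2                       # lives in roles/common/templates/
    dest: /etc/ntp.conf
```

Variables referenced inside ntp.conf.j2 would be defined in roles/common/vars/main.yml or in group_vars/all.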
In order to deploy any of the playbooks, clone this repository to a machine that is able to ssh to the servers you want to deploy to. Usually your local machine has enough access rights.
NOTE: Install Ansible on your machine via:

```
sudo pip install ansible
```
You don't want to mess around with the production servers until you are sure that everything works, do you? Okay, that's easy to solve: either deploy on the staging server, or (if none are available) create a virtual mini-cluster using Vagrant and configure it the same way.
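If you go the Vagrant route, a minimal Vagrantfile for a two-node mini-cluster could look like this. The base box name and IP addresses are assumptions for illustration, not taken from this repository:

```ruby
# Vagrantfile -- hypothetical two-node mini-cluster for testing the playbooks
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"                          # assumed base box

  config.vm.define "preproc" do |node|                # stands in for a preprocessing server
    node.vm.network "private_network", ip: "192.168.56.10"
  end

  config.vm.define "nas" do |node|                    # stands in for a NAS
    node.vm.network "private_network", ip: "192.168.56.11"
  end
end
```

You can then point an inventory file at the private IPs and run the playbooks against the VMs instead of real hardware.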
To deploy a new preprocessing server, execute the processing playbook. The requirements for deploying are:

- sudo rights on the target machine
- access to the relevant production user account

If the requirements are met, simply use:

```
ansible-playbook processing.yml -u <your_username> -i <staging_servers | production_servers> --ask-vault-pass
```
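Before touching production, it can help to validate the playbook first. Ansible's built-in --syntax-check and --check (dry-run) flags are useful here; for example, assuming the inventory file names used above:

```shell
# Validate the playbook without contacting any hosts
ansible-playbook processing.yml --syntax-check

# Dry run against staging, showing what would change without changing it
ansible-playbook processing.yml -u <your_username> -i staging_servers --ask-vault-pass --check --diff
```

Note that tasks depending on the results of earlier tasks may behave differently in check mode.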
Currently the playbooks for deploying NASes are under development. That means it's up to you to make sure that everything looks good before you replace the production cluster.
In order to deploy a new NAS, execute the nas playbook. The same requirements as for deploying a preprocessing server apply. Additionally, the NASes use 2-factor authentication, so have fun with that. In theory, the same command as for deploying preprocessing servers should apply:

```
ansible-playbook nas.yml -u <your_username> -i <staging_servers | production_servers> --ask-vault-pass
```
For a typical deployment there are certain problem areas one is likely to run into, which are all easily mitigated:

- htop is useful for checking the activity of the bcl2fastq software.
- Update vars/main.yml accordingly.

The home directory of the deployment user (home/<user>) contains:

- .taca/taca.yaml: configuration file automatically applied to all TACA commands
- .bash_profile: loads $PATHs, environment and syntax highlighting
- config/: configuration files for $PATHs, 10x Chromium demultiplexing, and supervisord
- log/: log files from TACA, supervisord and flowcell transfers
- all installed software applied by the preprocessing cluster
In short, the roles provide the following features:

- Creates an SSH key.
- Creates the directories config, log and .taca.
- Sets a custom bash_profile file and a custom paths.sh file (to initialize $PATH).
- "Installs" and starts supervisord.
- Creates sequencing archive directories.
- Copies the processing-specific supervisord configuration.
- Starts cronjobs, which at the time of writing only concern TACA.
- Creates the .irods and nosync directories.
- Creates configurations for logrotate, lsyncd, supervisord, taca and irodsEnv.
- Starts cronjobs, which at the time of writing concern taca storage and logrotate.
- Downloads and installs the mentioned software.

Additionally, the miniconda role creates the master venv if it does not already exist.
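To give a feel for the supervisord pieces mentioned above, a program section in a supervisord configuration generally looks like the fragment below. The program name, command and paths are invented for illustration and are not taken from this repository:

```ini
; Hypothetical supervisord program section (names and paths are examples only)
[program:example_worker]
command=/usr/local/bin/example_worker --verbose
autostart=true
autorestart=true
stdout_logfile=/home/deploy/log/example_worker.log
```

The processing-specific configuration copied by the roles would define similar sections for the services that should run on each server.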
In order to run a playbook on a server with 2-factor authentication enabled, you have two options:

1. Clone this repository onto the server and run the playbook locally using --connection=local.
2. Slightly more complicated: tunnel the connection through an already opened connection using the ControlMaster ssh option, as follows.
Edit your ~/.ssh/config file and add the following to create a master connection per server:

```
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm_socket/%r@%h:%p
```
Edit your ~/.ansible.cfg configuration file to use that ControlMaster connection:

```
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPath=<your_home_directory>/.ssh/cm_socket/%r@%h:%p
```
Open a regular connection to the server, like ssh user@server, and enter the vault token and your password. Keep the connection open. In a separate terminal, run the playbook as usual.
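Putting the second option together, the workflow looks roughly like this (host and user names are placeholders):

```shell
# Terminal 1: open the master connection, complete 2-factor authentication,
# then leave this session open so the ControlMaster socket stays alive
ssh <your_username>@<nas_server>

# Terminal 2: ssh (and therefore Ansible) reuses the authenticated socket,
# so no second 2-factor prompt is needed
ansible-playbook nas.yml -u <your_username> -i production_servers --ask-vault-pass
```

Note that the socket directory (~/.ssh/cm_socket) must exist before the first connection is opened.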