iPlantCollaborativeOpenSource/irods4-upgrade-env

Name: irods4-upgrade-env

Owner: iPlant Collaborative Open Source

Description: A collection of docker containers that mimic our production environment intended to test our iRODS 4 upgrade process

Created: 2015-02-23 21:10:31.0

Updated: 2017-08-16 18:31:44.0

Pushed: 2017-08-16 18:31:42.0

Size: 166

Language: C++

README

irods4-upgrade-env

This is a collection of docker containers that mimic CyVerse's production iRODS grid. It is intended to be the grid used to test CyVerse's process for upgrading its production grid to iRODS 4.

CyVerse's current production deployment of iRODS

CyVerse's production iRODS grid uses a patched version of iRODS 3.3.1. It consists of an IES (iCAT-enabled server) that is not also a resource server. The ICAT database is hosted on a dedicated PostgreSQL DBMS running on a separate server. There are currently 19 resource servers spread across 5 institutions.

CyVerse's production iRODS zone, iplant, consists of 20 resources and 3 resource groups. The default resource group, iplantRG, acts as the default resource. It consists of a pool of resources from which one is chosen at random for new data. Currently, the pool consists of the resources lucyRes and pennyRes. The aegisRG group contains remote replicas of data written to the /iplant/home/shared/aegis collection. It consists of the resources aegisASU1Res and aegisNAU1Res. A resource is chosen randomly for new data. The iclimateRG resource group contains only the aegisUA1Res resource.
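The group memberships above could be reproduced on an iRODS 3.x grid with iadmin; the following is a sketch only. It assumes the resources already exist and uses the 3.x atrg subcommand, which adds a resource to a resource group.

```shell
# Sketch: recreate the resource-group membership described above
# (assumes all five resources have already been created with iadmin mkresc).
iadmin atrg iplantRG lucyRes
iadmin atrg iplantRG pennyRes
iadmin atrg aegisRG aegisASU1Res
iadmin atrg aegisRG aegisNAU1Res
iadmin atrg iclimateRG aegisUA1Res
```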

There is a largely one-to-one mapping between resource servers and resources, with one exception. The server shelby hosts both the apolloResc and shelbyRes resources. We planned to move the files in apolloResc into shelbyRes before the migration, but we ran out of time.

The aegisUA1Res resource is special. Besides being the sole member of the iclimateRG resource group, it receives all new data destined for the /iplant/home/shared/aegis collection. This is the data that is replicated to the aegisRG group.

The production zone depends on the existence of an AMQP message broker. This broker is a RabbitMQ service running on its own server. The zone uses rules to push messages to the broker. The exchange is configured by a script deployed with iRODS and called by the rules.
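Such a publish script might look roughly like the following; the payload fields, exchange name, and broker URL are illustrative assumptions, not details of the actual deployment, and amqp-publish comes from the amqp-tools package.

```shell
#!/bin/bash
# Sketch of a message-publishing helper of the kind the zone's rules call.
# Payload shape, exchange name, and broker URL are assumptions.

# Build a small JSON payload describing a data-object event.
build_msg() {
  local author="$1" path="$2"
  printf '{"author": "%s", "entity": "%s"}' "$author" "$path"
}

# A rule would invoke something along these lines (hypothetical broker URL):
# build_msg "$userName" "$objPath" \
#   | amqp-publish --url amqp://guest:guest@amqp:5672 \
#       --exchange irods --routing-key "data-object.add"
```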

The test grid

The test grid consists of nine containers.

There is a container, irods_icommands_run_1, that acts as a client for interacting with the test grid.

Because random selection within a resource group is already represented by iplantRG, the representation of aegisRG contains only one resource.

Requirements

This requires docker compose version 1.3.3 or newer.
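A script could guard on this requirement with a version-sort comparison; version_ge here is a hypothetical helper, not part of the repository.

```shell
#!/bin/bash
# Sketch: refuse to run under an old docker-compose.

version_ge() {
  # True when $1 >= $2 under version-sort ordering (GNU sort -V).
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# have=$(docker-compose version --short)   # e.g. 1.3.3
# version_ge "$have" 1.3.3 \
#   || { echo "docker-compose >= 1.3.3 required" >&2; exit 1; }
```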

Usage

This collection of containers is intended to be managed by docker compose. However, because of the intricacies of the interactions between the containers during start up, docker compose needs a little help to orchestrate start up.
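The help needed is mostly ordering: start a container, wait until its service is ready, then start its dependents. A minimal sketch of such a retry helper follows; wait_for and the commented commands are illustrative, not code from up.sh.

```shell
#!/bin/bash
# Sketch: block until a dependency is ready before starting the next container.

wait_for() {
  # Retry a command until it succeeds or the attempt budget runs out.
  local attempts="$1"; shift
  local i
  for i in $(seq "$attempts"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# up.sh would do something along these lines (commands are hypothetical):
# docker-compose -p irods up -d dbms
# wait_for 30 docker exec irods_dbms_1 pg_isready
# docker-compose -p irods up -d ies
```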

Before running the container collection, the containers need to be built. Use the build.sh script.

build.sh

When the build is finished, docker images should show the following.

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
irods_icommands     latest              a50b673757ed        2 minutes ago       582.9 MB
irods_ies           latest              ccae523913a9        2 minutes ago       878.2 MB
irods_dbms          latest              5a1ec381ac94        5 minutes ago       264.3 MB
irods_rs            latest              19f76bb08bc9        5 minutes ago       709.7 MB
irods_server        latest              9adc848ce02b        5 minutes ago       709.7 MB
irods_base          latest              61d6c5903260        6 minutes ago       523.7 MB
postgres            9.3                 4489c15e5c90        2 weeks ago         264.3 MB
centos              6                   72703a0520b7        3 weeks ago         190.6 MB

To bring up the collection of containers, use the up.sh script.

up.sh

Once the containers have been brought up, you should see the following processes running.

$ docker-compose -p irods ps
      Name                     Command               State            Ports          
--------------------------------------------------------------------------------
irods_aegisasu1_1   /bootstrap.sh                    Up       1247/tcp               
irods_aegisua1_1    /bootstrap.sh                    Up       1247/tcp               
irods_amqp_1        /docker-entrypoint.sh rabb ...   Up       15672/tcp, 5672/tcp    
irods_data_1        /true                            Exit 0                          
irods_dbms_1        /docker-entrypoint.sh postgres   Up       5432/tcp               
irods_hades_1       /bootstrap.sh                    Up       1247/tcp               
irods_ies_1         /bootstrap.sh                    Up       0.0.0.0:1247->1247/tcp 
irods_lucy_1        /bootstrap.sh                    Up       1247/tcp               
irods_snoopy_1      /bootstrap.sh                    Up       1247/tcp   

The IES can be connected to via localhost on port 1247. The admin user is ipc_admin and has a password of password.
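For reference, a minimal ~/.irods/.irodsEnv for the 3.3.1 icommands pointing at the test grid might look like the following sketch (the zone name iplant comes from the production description above):

```
irodsHost 'localhost'
irodsPort 1247
irodsUserName 'ipc_admin'
irodsZone 'iplant'
```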

There is also an icommands container that can be launched with the client.sh script. Optionally, the name of the user to connect as can be provided as the first argument. If no argument is provided, the script will attempt to connect with the admin account, ipc_admin.

./client.sh
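The argument handling described above can be sketched as follows; pick_user and the commented docker-compose line are illustrative, not the actual contents of client.sh.

```shell
#!/bin/bash
# Sketch: connect as the named user, defaulting to the admin account.

pick_user() {
  # First argument if present, otherwise the admin account.
  echo "${1:-ipc_admin}"
}

user=$(pick_user "$@")
echo "connecting as $user"

# A wrapper like client.sh might then hand off to something like:
# exec docker-compose -p irods run --rm icommands "$user"
```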

To bring down the collection of containers, use the down.sh script.

./down.sh

Notes

Because of the need for bidirectional communication between the IES and the resource servers, containers need to be able to talk to each other through IP ports on the docker0 interface. Make sure your firewall allows this traffic.
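With iptables, for example, the docker0 traffic could be opened along these lines (a sketch only; adapt it to whatever firewall tooling your host uses):

```shell
# Accept traffic arriving on docker0 and traffic forwarded between containers.
iptables -I INPUT -i docker0 -j ACCEPT
iptables -I FORWARD -i docker0 -o docker0 -j ACCEPT
```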


This work is supported by the National Institutes of Health's National Center for Advancing Translational Sciences, Grant Number U24TR002306. This work is solely the responsibility of the creators and does not necessarily represent the official views of the National Institutes of Health.