AAFC-BICoE/snakemake-mothur

Name: snakemake-mothur

Owner: Biological Informatics CoE @ Agriculture and Agri-Food Canada

Description: Example of a snakemake workflow implementation of the mothur MiSEQ SOP

Created: 2016-02-16 16:32:24

Updated: 2018-03-21 00:58:17

Pushed: 2016-11-07 18:40:38

Homepage: none

Size: 133

Language: Python

README

snakemake-mothur

Example of a snakemake workflow implementation of the mothur MiSeq SOP. Please see http://www.mothur.org/wiki/MiSeq_SOP

This workflow has errors in its last several commands. I ran into an issue with remove.groups(), which claimed to create a file that was never actually written, and unfortunately I did not have time to correct these problems.
Everything up to remove.groups was tested and ran correctly on the AAFC biocluster with both DRMAA and qsub.

Workflow

Here is an image of the current workflow implemented in the Snakefile. The image was generated using snakemake (see Creating Workflow Diagrams below).

[Image: mothur workflow diagram (dag.final.png)]

Requirements

I used Python 3.5.1 and Snakemake 3.6.0.

Install Instructions

Install Python and snakemake with sudo privileges:

sudo apt-get install python3
sudo apt-get install python3-pip  # may already be installed with Python 3.4+
pip3 install snakemake
If you don't have sudo privileges, it's still possible but a bit more involved.

1) Download the Python source

wget https://www.python.org/ftp/python/3.5.1/Python-3.5.1.tgz
gunzip Python-3.5.1.tgz
tar -xf Python-3.5.1.tar

2) Compile it to a local directory and add it to your path

cd Python-3.5.1
./configure --prefix=/home/<username>/Python35
make && make altinstall
export PATH=/home/<username>/Python35/bin:$PATH

3) Make a virtualenv and install snakemake

mkdir workdir
cd workdir
python3.5 -m venv smenv
source smenv/bin/activate
pip3 install snakemake
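
You can confirm the install worked while the virtualenv is still active:

> snakemake --version
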
Getting this repo

Download a copy of this repo manually, or with git
> git clone https://github.com/tomsitter/snakemake-mothur

Getting the test data

Data files are excluded from this repository because many of them are too large for GitHub. Links to the test data can be found in the MiSeq SOP wiki linked at the top of this page.

All files should be placed in the data/ folder of this repository.

cd snakemake-mothur/data
wget http://www.mothur.org/w/images/d/d6/MiSeqSOPData.zip
wget http://www.mothur.org/w/images/9/98/Silva.bacteria.zip
wget http://www.mothur.org/w/images/5/59/Trainset9_032012.pds.zip

unzip MiSeqSOPData.zip
unzip Silva.bacteria.zip
unzip Trainset9_032012.pds.zip

Mothur looks in the working directory for the sequences, so copy everything from the MiSeq_SOP folder here:

> cp MiSeq_SOP/* .

This gives me the following files:


cluster.json
commands.txt
config.json
dag.final.png
data
├── F3D0_S188_L001_R1_001.fastq
├── F3D0_S188_L001_R2_001.fastq
├── F3D141_S207_L001_R1_001.fastq
├── F3D141_S207_L001_R2_001.fastq
├── F3D142_S208_L001_R1_001.fastq
├── F3D142_S208_L001_R2_001.fastq
├── F3D143_S209_L001_R1_001.fastq
├── F3D143_S209_L001_R2_001.fastq
├── F3D144_S210_L001_R1_001.fastq
├── F3D144_S210_L001_R2_001.fastq
├── F3D145_S211_L001_R1_001.fastq
├── F3D145_S211_L001_R2_001.fastq
├── F3D146_S212_L001_R1_001.fastq
├── F3D146_S212_L001_R2_001.fastq
├── F3D147_S213_L001_R1_001.fastq
├── F3D147_S213_L001_R2_001.fastq
├── F3D148_S214_L001_R1_001.fastq
├── F3D148_S214_L001_R2_001.fastq
├── F3D149_S215_L001_R1_001.fastq
├── F3D149_S215_L001_R2_001.fastq
├── F3D150_S216_L001_R1_001.fastq
├── F3D150_S216_L001_R2_001.fastq
├── F3D1_S189_L001_R1_001.fastq
├── F3D1_S189_L001_R2_001.fastq
├── F3D2_S190_L001_R1_001.fastq
├── F3D2_S190_L001_R2_001.fastq
├── F3D3_S191_L001_R1_001.fastq
├── F3D3_S191_L001_R2_001.fastq
├── F3D5_S193_L001_R1_001.fastq
├── F3D5_S193_L001_R2_001.fastq
├── F3D6_S194_L001_R1_001.fastq
├── F3D6_S194_L001_R2_001.fastq
├── F3D7_S195_L001_R1_001.fastq
├── F3D7_S195_L001_R2_001.fastq
├── F3D8_S196_L001_R1_001.fastq
├── F3D8_S196_L001_R2_001.fastq
├── F3D9_S197_L001_R1_001.fastq
├── F3D9_S197_L001_R2_001.fastq
├── HMP_MOCK.v35.fasta
├── Mock_S280_L001_R1_001.fastq
├── Mock_S280_L001_R2_001.fastq
├── silva.bacteria
│   ├── silva.bacteria.fasta
│   ├── silva.bacteria.gg.tax
│   ├── silva.bacteria.ncbi.tax
│   ├── silva.bacteria.pcr.8mer
│   ├── silva.bacteria.pcr.fasta
│   ├── silva.bacteria.rdp6.tax
│   ├── silva.bacteria.rdp.tax
│   ├── silva.bacteria.silva.tax
│   └── silva.gold.ng.fasta
├── stability.batch
├── stability.files
├── trainset9_032012.pds.fasta
└── trainset9_032012.pds.tax
preprocess.mothur2.Snakefile
preprocess.mothur.Snakefile
README.md
Snakefile
Running

Please read the official snakemake tutorial and documentation

This guide assumes you have read them.

Navigate to the directory containing the Snakefile

To see what commands will be run (and check your syntax), perform a dry-run:

snakemake -npr

Note: If you are using the run: directive in any of your rules (as opposed to the shell: directive), the actual command will be missing from this printout because it is not interpreted until runtime.
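
For example, compare a shell: rule with a run: rule (both rules below are made-up illustrations, not part of this workflow). The first prints its command during a dry-run; the second only reports that it will execute Python code:

rule count_lines_shell:
    input: "in.txt"
    output: "out.shell.txt"
    # This shell string is shown verbatim by `snakemake -npr`.
    shell: "wc -l {input} > {output}"

rule count_lines_run:
    input: "in.txt"
    output: "out.run.txt"
    run:
        # This Python body only executes at runtime, so a dry-run
        # cannot show what it will do.
        with open(input[0]) as fin, open(output[0], "w") as fout:
            fout.write(str(sum(1 for _ in fin)) + "\n")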

You can run this pipeline locally by just typing snakemake

Or, you can submit it to the cluster using DRMAA or just qsub. With qsub:

snakemake --cluster "qsub -S /bin/bash" --jobs 32

I found I needed to add "-S /bin/bash" in order to initialize the PATH to run the commands I wanted.

Alternatively, you can run using DRMAA.

DRMAA

The Python DRMAA bindings require the environment variable DRMAA_LIBRARY_PATH to be set on the head node (the node running snakemake). I used

> export DRMAA_LIBRARY_PATH=$SGE_ROOT/lib/linux-x64/libdrmaa.so

In the following example we are using drmaa to run jobs on up to 32 nodes of the cluster. In order to get a shell, I had to pass some additional parameters through to qsub:

snakemake -j32 --drmaa ' -b n -S /bin/bash'

Note the leading whitespace in the parameters string! This is essential!
The above command tells qsub that these are not binary commands, and that I want the /bin/bash shell

You can pass additional parameters to qsub such as the number of slots/threads, where {threads} is defined in each snakemake rule.

snakemake -j32 --drmaa ' -n {threads} -b n -S /bin/bash'
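
As a sketch of where {threads} comes from (the rule name and command here are hypothetical), a rule declares its thread requirement like this:

rule align:
    input: "seqs.fasta"
    output: "seqs.align"
    threads: 8
    # snakemake substitutes the declared thread count into {threads}
    shell: "aligner --threads {threads} {input} > {output}"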

snakemake can automatically figure out which rules can be run in parallel, and will schedule them at the same time (if -j is > 1).

The following is a qstat taken directly after running the above command. As you can see, snakemake submitted two jobs in parallel (make_contigs and pcr; see the graph at the top of this file).

ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-------------------------------------------------------------------------------------------------------------
228 0.00000 snakejob.m bctraining03 qw    03/23/2016 15:06:22                                    8        
229 0.00000 snakejob.p bctraining03 qw    03/23/2016 15:06:23                                    8        
Cluster Configuration

Snakemake allows you to create a JSON configuration file with per-rule settings for running jobs on a cluster. See the documentation for cluster configuration.
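
This repo includes a cluster.json. As a rough illustration of the shape such a file takes (the keys and values here are hypothetical; check the documentation for the schema your snakemake version expects), rule names map to submission settings, with __default__ as the fallback:

{
    "__default__": {
        "queue": "all.q",
        "slots": 1
    },
    "make_contigs": {
        "slots": 8
    }
}

These values can then be referenced in the submission command, e.g. snakemake --cluster-config cluster.json --drmaa ' -q {cluster.queue} -b n -S /bin/bash'.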

Additional Information

Specifying Queue

You can also specify which queue each job should run in directly in the snakemake rule. This can be useful if you have tasks with particular cluster requirements.

rule example_rule:
    input: "somefile.fasta"
    output: "someotherfile.result"
    params: queue="all.q"
    shell: "cmd {input} > {output}"

> snakemake -j --drmaa ' -q {params.queue} -b n -S /bin/bash'

Parsing Intermediate Results

Instead of shell commands, you can run a python script. I used this in one example to parse a file from the previous rule and pass the contents into the following rule.

import os
from collections import namedtuple

Summary = namedtuple('Summary', ['start', 'end'])

def parse_summary(filename):
    # Pull the Median start/end columns out of a mothur summary file.
    if os.path.isfile(filename):
        with open(filename) as f:
            for line in f:
                if line.startswith("Median"):
                    _, start, end, *extra = line.split('\t')
                    return Summary(start=start, end=end)
    return Summary(start=0, end=0)

rule screen2:
version: "1.36.1"
input:
    fasta = "{dataset}.trim.contigs.good.unique.align".format(dataset=dataset),
    count = "{dataset}.trim.contigs.good.count_table".format(dataset=dataset),
    summary = "{dataset}.trim.contigs.good.unique.summary".format(dataset=dataset),
output:
    "{dataset}.trim.contigs.good.unique.good.align".format(dataset=dataset),
    "{dataset}.trim.contigs.good.good.count_table".format(dataset=dataset),
run:
    summary = parse_summary(input.summary)
    cmd = "mothur \"#screen.seqs(fasta={},count={},summary={},start={},end={},maxhomop={})\"".format(
           input.fasta, input.count, input.summary, summary.start, summary.end, config["maxhomop"])
    os.system(cmd)

Creating Workflow Diagrams

To automatically generate an image like the one at the top of this file, you can run:

> snakemake --dag | dot -Tpdf > dag.pdf
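
If the full job graph is too busy, snakemake also offers a --rulegraph flag (check that your version supports it) that condenses the output to one node per rule:

> snakemake --rulegraph | dot -Tpdf > rulegraph.pdf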

Things I didn't have time to look into

There are a number of snakemake command line flags that aren't adequately documented but may be very useful. They are:

--resources [NAME=INT [NAME=INT ...]], --res [NAME=INT [NAME=INT ...]]
                    Define additional resources that shall constrain the
                    scheduling analogously to threads (see above). A
                    resource is defined as a name and an integer value.
                    E.g. --resources gpu=1. Rules can use resources by
                    defining the resource keyword, e.g. resources: gpu=1.
                    If now two rules require 1 of the resource 'gpu' they
                    won't be run in parallel by the scheduler.
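
As a sketch (the rule, files, and command are made up for illustration), a rule claims a resource like this, and running snakemake --resources gpu=1 then stops two such rules from being scheduled together:

rule train:
    input: "data.csv"
    output: "model.txt"
    # Claims one unit of the user-defined 'gpu' resource
    resources: gpu=1
    shell: "train_model {input} > {output}"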

--immediate-submit, --is
                    Immediately submit all jobs to the cluster instead of
                    waiting for present input files. This will fail,
                    unless you make the cluster aware of job dependencies,
                    e.g. via: $ snakemake --cluster 'sbatch --dependency
                    {dependencies}'. Assuming that your submit script (here
                    sbatch) outputs the generated job id to the first
                    stdout line, {dependencies} will be filled with space
                    separated job ids this job depends on.
--jobscript SCRIPT, --js SCRIPT
                    Provide a custom job script for submission to the
                    cluster. The default script resides as 'jobscript.sh'
                    in the installation directory.

Caveats

This repo demonstrates only a small subset of the snakemake language and its features.

One large omission is how to specify the filenames used as input to the pipeline. Please see the tutorial and documentation for other (more powerful and flexible) methods of doing this; one common pattern is sketched below.
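
As a hedged example (the file patterns and rule here are hypothetical, not from this repo), input files can be discovered with glob_wildcards() and fanned out with expand():

# Collect every sample that has a forward read in data/
SAMPLES, = glob_wildcards("data/{sample}_L001_R1_001.fastq")

rule all:
    # Request one result file per discovered sample
    input: expand("results/{sample}.done", sample=SAMPLES)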

There are also many more directives that can help you track rules and specify how they work (e.g. logging, benchmarking, resource usage).

