Thursday, 30 November 2017

bedtools intersect for large genomes works with the -sorted option

When trying to intersect two BED files with regions of the barley genome, I got the following error:


bedtools intersect -a A_input.bed -b B_input.bed 
ERROR: Received illegal bin number 37453 from getBin call.
ERROR: Unable to add record to tree.


This seems to be due to the very large size of the barley chromosomes, which are up to almost 770 Mbp long. Curiously, when intersecting with the -sorted option, bedtools can handle the files. Using the -sorted option is recommended anyway, because it makes bedtools intersect faster and more memory efficient.

So, after sorting the files with

sort -k1,1 -k2,2n input.bed > input.sorted.bed

or the slower

bedtools sort -i input.bed > input.sorted.bed

the intersection can be accomplished with

bedtools intersect -sorted -a A_input.bed -b B_input.bed  
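Putting it all together, a minimal end-to-end sketch (file names taken from the example above; adjust to your data):

# sort both BED files by chromosome, then by start position
sort -k1,1 -k2,2n A_input.bed > A_input.sorted.bed
sort -k1,1 -k2,2n B_input.bed > B_input.sorted.bed

# intersect with the faster, memory-efficient sweep algorithm
bedtools intersect -sorted -a A_input.sorted.bed -b B_input.sorted.bed > AB_intersect.bed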

Thursday, 22 June 2017

Run a Jupyter notebook on a compute cluster, use it locally




For those of you who want to run a Jupyter notebook on a cluster server, so that you can run large processes or submit cluster jobs directly from within the notebook:


- start a shell
- type "ssh -N -f -L localhost:8888:localhost:8889 yourusername@hpc002" to open a "tunnel" between the server and your computer
Don't worry if you don't see anything. The tunnel is there, it's just invisible (magic).
- then, in the same shell, type "ssh yourusername@hpc002" to log in to the same server
- now start a Jupyter notebook, but without opening it in the browser. Instead, make it use the tunnel: "jupyter-notebook --no-browser --port=8889"
You get an output like this:


"    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8889/?token=e76a6b5e3b22d1ae1cc985be277c2d81e120faf10fa0014a
"
Open a browser and paste the link into it, replacing localhost:8889 with localhost:8888.
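The whole procedure in one sketch (hostname, username, and ports as in the example above; adjust to your setup):

# on your local machine: open the tunnel (-f sends ssh to the background)
ssh -N -f -L localhost:8888:localhost:8889 yourusername@hpc002

# log in to the server
ssh yourusername@hpc002

# on the server: start the notebook on the tunnelled port
jupyter-notebook --no-browser --port=8889

# then open the printed URL in your local browser, with the port changed to 8888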


Voila.

Inspired by
https://coderwall.com/p/ohk6cg/remote-access-to-ipython-notebooks-via-ssh

Wednesday, 15 February 2017

Install R locally with anaconda to use ballgown for RNA-Seq

Just a mail I sent to a couple of colleagues, explaining how I installed the RNA-Seq analysis package ballgown on Debian "Jessie".
It should work on Unix systems in general.



Hello,

after mapping the RNA-Seq reads with Hisat2
and calculating transcripts and read counts with stringtie,
it is time to analyse the data with

ballgown
https://github.com/alyssafrazee/ballgown


First it needs to be installed.
This is not straightforward, because you cannot install it in
the default R. At least for me it didn't work.

This is how I did it:

install anaconda (this automatically installs a local R version in
/home/ries/anaconda2/)
 
https://docs.continuum.io/anaconda/install


install the essential R packages. I found the command to install a number of popular R packages:

conda install -c r r-essentials

This also automatically installs the Bioconductor installer,
which can then be used to install ballgown:

start your local R:
/home/ries/anaconda2/bin/R

and from within R:
source("http://bioconductor.org/biocLite.R")
biocLite("ballgown")


Good bye from
ries@home

Tuesday, 22 November 2016

GATK use-cases: Getting started with Queue and IntelliJ IDEA

If you use the GATK, at some point you might want to start

using Queue

to

a) build pipelines, thus saving time and effort when rerunning your analyses and making them less error-prone
b) use a compute cluster like LSF or Univa Grid Engine to distribute your jobs, potentially speeding up your pipeline many-fold.


I started out with just downloading the 'Queue.jar' from GATK and writing my qscripts in a simple text editor. Most of my time was spent finding the right functions and classes to use and debugging the qscripts.

To write your own qscripts efficiently, you should use the GATK development version in combination with IntelliJ IDEA. Properly set up, IntelliJ is tremendously helpful: it automatically generates import statements, suggests valid class methods, highlights code, and much more.

It took me a while to get to a working qscript, so I am giving some examples (see the other GATK use-cases).

Here is a short summary of how to set up the development environment.


I am on Debian 8. I know it likewise works on 7 (since our compute cluster has 7 installed), but I generally recommend the latest stable release of your Unix operating system as well as of GATK (GATK 3.x, that is; none of this will work on the upcoming GATK 4 release).

You also need Java 8; OpenJDK works for me, as does Oracle Java.
If you need Java 7 for most of your applications, then export Java 8 temporarily to your path. For me it looks like this (shell commands):


export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$JAVA_HOME/bin:$PATH



Although it is 'only for developers', use the gatk-protected repository!

So get the latest GATK-protected release here:
https://github.com/broadgsa/gatk-protected.

And the latest IntelliJ IDEA here:
https://www.jetbrains.com/idea/

If you don't already have it, get and install Maven:
http://maven.apache.org/install.html

Check that JAVA_HOME is set correctly to Java 8:



mvn --version

Now go to the gatk-protected directory, and compile with:


mvn clean
mvn verify

This will compile the whole thing, and produce a GenomeAnalysisTK.jar and Queue.jar in


gatk-protected/target/

Later on, when you write your own classes, you need to recompile your version of GATK for your changes to take effect. This can again be done with "mvn verify", or faster with "mvn -Ddisable.shadepackage verify".

Now to set up IntelliJ IDEA (modified from here):

  • Run mvn test-compile in your git clone's root directory.
  • In IntelliJ, open File -> Import Project and select your git clone directory, then click "OK".
  • On the next screen, select "Import project from external model", then "Maven", then click "Next".
  • Click "Next" on the following screen without changing any defaults -- in particular:
    • DON'T check "Import maven projects automatically"
    • DON'T check "Create module groups for multi-module maven projects"
  • On the "Select Profiles" screen, make sure private and protected ARE checked, then click "Next".
  • On the next screen, the "gatk-aggregator" project should already be checked for you -- if not, then check it.
  • Click "Next".
  • Select the 1.8 SDK, then click "Next".
  • Select an appropriate project name (can be anything), then click "Next" (or "Finish", depending on your version of IntelliJ).
  • Click "Finish" to create the new IntelliJ project.


Using qscripts

I haven't found an official recommendation for this, but I suggest you create your new qscripts (Scala scripts) in

~/gatk-protected/protected/gatk-queue-extensions-distribution/src/main/qscripts/org/broadinstitute/gatk/queue/qscripts/

Putting them in other folders eventually results in their being "hard-coded"
during compilation, so that code changes won't take effect until you recompile the whole thing.

To make this clearer: in general, you only have to recompile after creating new Java or Scala classes, such as a new filter or walker, not after changing a pipeline qscript.

Now you are good to go.
To get an overview (or rather a glimpse), read the recent threads on GATK development and pipelining, but make sure to go through the comments. Many of the threads refer to old GATK versions, from when Ant was used instead of Maven and before the Sting-to-GATK renaming, so keep that in mind.

For starters, you could play around with the HaplotypeCaller qscript in

~/gatk-protected/protected/gatk-queue-extensions-distribution/src/main/qscripts/org/broadinstitute/gatk/queue/qscripts/examples/ExampleHaplotypeCaller.scala

You only need a reference sequence and a BAM file mapped to it to test the script. Ideally, starting it (on an LSF cluster) would be as easy as


java -jar ~/gatk-protected/target/Queue.jar -S ~/gatk-protected/protected/gatk-queue-extensions-distribution/src/main/qscripts/org/broadinstitute/gatk/queue/qscripts/examples/ExampleHaplotypeCaller.scala -O raw.vcf -R ref.fa -I reads.bam -bsub -run

If this works, you can start playing around with the HaplotypeCaller Parameters, or processing multiple input files at once ('-I 1.bam -I 2.bam').
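For example, a run on two samples might look like this (a sketch based on the command above; the BAM file names are placeholders):

java -jar ~/gatk-protected/target/Queue.jar \
    -S ~/gatk-protected/protected/gatk-queue-extensions-distribution/src/main/qscripts/org/broadinstitute/gatk/queue/qscripts/examples/ExampleHaplotypeCaller.scala \
    -O raw.vcf -R ref.fa \
    -I 1.bam -I 2.bam \
    -bsub -run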

Thursday, 13 October 2016

GATK use-cases: Implementing a new read filter

Another day, another GATK hack.

Stacks, the tool I use to analyze my RAD-Seq data, has the constraint that it should only be run with reads of the exact same length to calculate the RAD tags. Depending on your setup, this concerns both forward and reverse reads (in the case of a double-digest RAD-Seq), or only the forward reads, with the reverse reads not used to calculate RAD stacks. So why not just filter for reads of the exact length?
Well, most read-filtering tools apply a single length range to both reads. So the whole read pair is filtered out if one of the reads doesn't match the length criteria, even if the other read was perfectly fine.
What I needed was a filter that could apply different criteria to the two respective reads of a pair. This could rather easily be done with Python and ngs_plumbing to filter the fastq files before mapping. But I wanted to filter the reads after mapping. Here's why:
One of the major advantages of paired reads over single reads is the more accurate mapping, because the information of both reads of a pair is considered to find the best matching position. Therefore people (me included) tend to use
only the properly paired reads for their analysis. By filtering the fastq files before mapping, I lose both reads of a pair if the forward read is filtered out due to the length constraints. In contrast, when filtering after the mapping, it is okay to retain the reverse read, because the information of the forward read has already been used to find the best matching position. Thereby, I lose only half as many reads by filtering after the mapping instead of before.


Since I use GATK and Queue a lot, I set out to implement such a read filter in GATK.

If you have gatk-protected 3.6 and IntelliJ IDEA installed (using Java 8), you're ready to go.

First, I created a new Java class, "forwardReadLengthFilter.java", in


gatk-protected/public/gatk-engine/src/main/java/org/broadinstitute/gatk/engine/filters/




package org.broadinstitute.gatk.engine.filters;
import htsjdk.samtools.SAMRecord;
import org.broadinstitute.gatk.utils.commandline.Argument;

/**
 * Filter out forward reads based on length
 *
 * <p>This filter removes forward reads (the first of a pair) that are longer or shorter than the given threshold sizes.
 *  Implemented because, for the Stacks RAD-Seq analysis pipeline, all forward reads need to have the exact same length.</p>
 *
 * <h3>Usage example</h3>
 *
 * <pre>
 *     java -jar GenomeAnalysisTK.jar \
 *         -T ToolName \
 *         -R reference.fasta \
 *         -I input.bam \
 *         -o output.file \
 *         -rf forwardReadLength \
 *         -minForwardRead 50 \
 *         -maxForwardRead 101
 * </pre>
 *
 * @author ries
 * @version 0.1
 */

public class forwardReadLengthFilter extends ReadFilter {
    @Argument(fullName = "maxForwardReadLength", shortName = "maxForwardRead", doc = "Discard forward reads with length greater than the specified value", required = true)
    private int maxReadLength;

    @Argument(fullName = "minForwardReadLength", shortName = "minForwardRead", doc = "Discard forward reads with length shorter than the specified value", required = true)
    private int minReadLength = 1;

    public boolean filterOut(SAMRecord read) {
        // getFirstOfPairFlag() is only valid for paired reads, so check pairing first,
        // then discard first-of-pair reads whose length is out of bounds
        return (read.getReadPairedFlag() && read.getFirstOfPairFlag()
                && ((read.getReadLength() > maxReadLength) || (read.getReadLength() < minReadLength)));
    }
}

By importing SAMRecord, you can use all the nice getter methods to check the properties of your reads. I only needed to check whether the read is the first read of a pair (getFirstOfPairFlag()) and whether it is too long (getReadLength() > maxReadLength) or too short (getReadLength() < minReadLength). But if you want to design your own filters (and you surely do), have a look here:

SAM record methods

The maxReadLength and minReadLength values are passed to the read filter via the @Argument annotations.

All the real filtering is done by the ReadFilter class we extended.
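Once compiled (see below), the filter can be used with any GATK walker, just like the built-in read filters. A sketch mirroring the usage example from the javadoc (tool and file names are placeholders):

java -jar ~/gatk-protected/target/GenomeAnalysisTK.jar \
    -T PrintReads \
    -R reference.fasta \
    -I input.bam \
    -o output.bam \
    -rf forwardReadLength \
    -minForwardRead 88 \
    -maxForwardRead 90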

To really use the filter, it has to be compiled and incorporated into the Queue.jar file. This is done by calling from the terminal


mvn clean; mvn verify


in the gatk-protected folder.

This takes a while (so have a Martini), but finally the new Queue.jar can be found in:




 ../gatk-protected/target/Queue.jar

So far, so good.

Now I had to actually use the new filter. The easiest way to do this is to run a "PrintReads" instance with the filter: I created a "FilterUniqBAM.scala" (never mind the name) in a directory where I store my RAD-Seq-related scripts:


/home/ries/gatk-protected/public/gatk-queue-extensions-public/src/main/scala/org/broadinstitute/gatk/queue/extensions/RADseq/filterUniq/FilterUniqBAM.scala





package org.broadinstitute.gatk.queue.extensions.RADseq.filterUniq
import org.broadinstitute.gatk.queue.QScript
import org.broadinstitute.gatk.queue.extensions.gatk._
import org.broadinstitute.gatk.queue.util.QScriptUtils
/**
  * Created by ries on 10/12/16.
  */

class FilterUniqBAM extends QScript{
  @Input(doc="File containing a list of input SAM or BAM files to analyze. Files must be coordinate sorted.", shortName = "I", fullName = "input_bam_files", required = true)
  //var bamFiles: Seq[File] = Nil
  var bamFiles: File = _

  @Input(doc="The reference file for the bam files.", shortName="R", required=true)
  var referenceFile: File = _

  @Argument(fullName = "maxForwardReadLength", shortName = "maxForwardRead", doc="Discard forward reads with length greater than the specified value", required=false)
  var maxForwardLength: Int = 1;

  @Argument(fullName = "minForwardReadLength", shortName = "minForwardRead", doc="Discard forward reads with length shorter than the specified value", required=false)
  var minForwardLength: Int = 100000;

  def script() {

    val bamFilesList = QScriptUtils.createSeqFromFile(bamFiles)
    for (bamFile <- bamFilesList){
      val filterBam = new PrintReads with forwardReadLength

      filterBam.input_file = Seq(bamFile)
      filterBam.reference_sequence = referenceFile
      filterBam.out = swapExt(bamFile,".bam",".uniq.bam")
      filterBam.maxForwardReadLength = maxForwardLength
      filterBam.minForwardReadLength = minForwardLength
      add(filterBam)
    }
  }

}




The "with" statement allows to directly set the filter arguments via "maxForwardReadLength" and
"minForwardReadLength" (remember, we defined those in the forwardReadLengthFilter.java class).

Finally, to run the scala script on the cluster:



java -jar ~/gatk-protected/target/Queue.jar -S  /home/ries/gatk-protected/public/gatk-queue-extensions-public/src/main/scala/org/broadinstitute/gatk/queue/extensions/RADseq/filterUniq/FilterUniqBAM.scala -bsub  -I P2_A04_a.rmdup.bam -R /biodata/irg/grp_stich/RAD-Seq_ries/data/B.napus_reference/referencesequence.fa -run --maxForwardReadLength 90 --minForwardReadLength 88

This generated a file "P2_A04_a.rmdup.uniq.bam" with forward reads of exactly 89 bp and reverse reads of all kinds of lengths. Keep in mind, though, that some of the reverse reads have lost their forward mate: they were properly paired when mapped to the reference sequence, but are not anymore.
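A quick sanity check of the result (assuming samtools is installed; -f 64 selects first-of-pair reads):

# count how many forward reads remain per read length
samtools view -f 64 P2_A04_a.rmdup.uniq.bam | awk '{print length($10)}' | sort -n | uniq -c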


Cheers,

Dave


Wednesday, 12 October 2016

GATK use-cases: removing duplicate reads with queue

Having used GATK for some years and somehow managed to glue a Queue-based BAM -> VCF pipeline together, I decided that it is time to introduce myself to the wonderful world of Queue development.

So here are my fickle attempts to become a GATK developer. One at a time.

Today I needed to remove duplicate reads from a number of BAM files from a RAD-Seq experiment. Removing duplicate reads is generally not recommended for RAD-Seq experiments, but in some cases it is:
"Because of the mechanical shearing step, this method can also be used to identify PCR duplicates in sequence data generated using original RADseq with paired-end sequencing" (Andrews 2016, DOI: 10.1038/nrg.2015.28)
Since I have hundreds of files, I wanted to distribute the work over our Dell cluster, running "IBM Platform LSF 8.3.0.196409" (bsub).

So, to use GATK's Queue, I had to write a Scala script, "removeDuplicates.scala".

The main points were:
  • MarkDuplicates has to be imported from the picard extension of queue
  • the input is a file containing a list of bam files
  • duplicates are not merely marked, but removed, thus the name of the script
The implementation of  "removeDuplicates.scala" is as follows:

package org.broadinstitute.gatk.queue.extensions.RADseq

import org.broadinstitute.gatk.queue.QScript
import org.broadinstitute.gatk.queue.extensions.picard.MarkDuplicates
import org.broadinstitute.gatk.queue.util.QScriptUtils

/**
  * Created by ries on 10/12/16.
  */
class removeDuplicates extends QScript{
  @Input(doc="File containing a list of input SAM or BAM files to analyze. Files must be coordinate sorted.", shortName = "I", fullName = "input_bam_files", required = true)
  //var bamFiles: Seq[File] = Nil
  var bamFiles: File = _
  @Argument(doc="If true do not write duplicates to the output file instead of writing them with appropriate flags set.", shortName = "remdup", fullName = "remove_duplicates", required = false)
  var REMOVE_DUPLICATES: Boolean = false

  @Argument(doc = "Maximum number of file handles to keep open when spilling read ends to disk.  Set this number a little lower than the per-process maximum number of file that may be open.  This number can be found by executing the 'ulimit -n' command on a Unix system.", shortName = "max_file_handles", fullName ="max_file_handles_for_read_ends_maps", required=false)
  var MAX_FILE_HANDLES_FOR_READ_ENDS_MAP: Int = -1

  @Argument(doc = "This number, plus the maximum RAM available to the JVM, determine the memory footprint used by some of the sorting collections.  If you are running out of memory, try reducing this number.", shortName = "sorting_ratio", fullName = "sorting_collection_size_ratio", required = false)
  var SORTING_COLLECTION_SIZE_RATIO: Double = -1


  def script() {

    val bamFilesList = QScriptUtils.createSeqFromFile(bamFiles)
    for (bamFile <- bamFilesList){
      val dedupedBam = new MarkDuplicates

      dedupedBam.input = Seq(bamFile)
      dedupedBam.output = swapExt(bamFile,".bam",".rmdup.bam")
      dedupedBam.metrics = swapExt(bamFile,".bam",".rmdup.metrics")
      dedupedBam.REMOVE_DUPLICATES = true
      add(dedupedBam)
    }
  }

}  


You can run the script like this:


java -jar Queue.jar -S  removeDuplicates.scala -bsub  -I bamlist.input -run

For each input file, it will create a file with the duplicates removed ending in ".rmdup.bam", as well as a metrics file ending in ".rmdup.metrics" and a file containing job-specific information ending in ".out". The BAM file is indexed for convenience.

For the example input file

"P1_H08_b.bam",

it will generate

"P1_H08_b.rmdup.bam", "P1_H08_b.rmdup.bai", "P1_H08_b.rmdup.metrics", and "P1_H08_b.rmdup.bam.out".


Regarding the input: normally, these scripts want each input BAM file fed separately with the "-I" option. With large numbers of files this becomes tedious, so I decided to change the input behavior. The input can now also be a text file containing one BAM file per row. For example:

bamlist.input


P1_A07_a.bam
P1_A07_b.bam
P1_A08_a.bam
P1_A08_b.bam
P1_A09_a.bam
P1_A09_b.bam
P1_A10_a.bam
P1_A10_b.bam
P1_A11_a.bam
P1_A11_b.bam
P1_A12_a.bam
P1_A12_b.bam
P1_B07_a.bam
P1_B07_b.bam
P1_B08_a.bam
P1_B08_b.bam
P1_B09_a.bam
P1_B09_b.bam

All the files named in bamlist.input will be processed in parallel, provided the cluster has some resources to spare.
The old behavior with '-I P1_A07_a.bam -I P1_A07_b.bam ...' and so on still works.
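Such a list is easily generated, assuming all BAM files sit in the current directory:

ls *.bam > bamlist.input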


That's it.

Wednesday, 31 August 2016

How to make an image with fading round edges in GIMP

For a pirate-themed party, I wanted to send some nicely printed
invitation cards. They should be something special. One of the features I wanted was photos of me and my friends dressed up as pirates on the card.
But they should blend into the background.
It took me some time to figure out how to do it in GIMP, so here it is:

Open the photo.
Filters->Decor->Old Photo
Layer->Transparency->Add alpha channel
Ellipse select tool: select the part of the photo to keep
Select->Invert
Select->Feather (choose a generous radius)
Press Delete
Then copy and paste the photo as a new layer onto the card.

As a background, I used my local area from
http://www.yarrmaps.com/