23andMe and life insurance

I get lots of queries from blog readers (I’m always surprised people read this drivel), mostly about concerns regarding 23andMe and whether it will or could affect life insurance policies. I always answer in a non-committal, make-your-own-mind-up kind of way. Prompted by the latest query, I thought I’d ask the wisdom of the crowds, so I turned to Twitter. You can read the Storify of the exchange, but a précis follows:

Note: this does not constitute formal legal advice! I provide this information merely as a comment on the current situation within the UK.

It turns out that UK insurers (I have not investigated other jurisdictions) have a voluntary moratorium with the UK Government on the use of predictive genetic test results in insurance, valid until 2019. Please read the full text of the moratorium, but I include two sections of particular relevance here:

21. Insurers agree to the following:

a. customers will not be asked, nor will they be put under pressure, to take a predictive genetic test to obtain insurance cover;

b. customers who have taken a predictive test before the date of this Concordat will be treated in the same way as customers taking tests under the terms of the Concordat;

c. customers will not be required to disclose any of the following:

i. a predictive genetic test result from a test taken after the insurance cover has started, for as long as that cover is in force;
ii. the predictive test result of another person, such as a blood relative; or
iii. a predictive or diagnostic test result acquired as part of clinical research (for example, the 100,000 Genomes Project). To avoid doubt, customers may be asked to disclose details of any symptoms, diagnosis or treatment received outside of the clinical research programme, even if those relate to a condition they found out about through the research programme.

d. customers making relevant insurance applications will be required to disclose a predictive genetic test result only if all of the following apply:

i. the customer is seeking insurance cover above the financial limits set out in the Moratorium;
ii. the test has been assessed by a panel of experts and approved by Government. To date, the only test that people are required to disclose under the agreement is for Huntington’s Disease for life insurance where the insured sum is over £500,000. Any change to the list of approved tests would be notified on the ABI and Department of Health websites

iii. the insurer asks the customer to disclose the information.

And…

26. The terms of the Moratorium are as follows.

I. Customers will not be required to disclose the results of predictive genetic tests for policies up to £500,000 of life insurance, or £300,000 for critical illness insurance, or paying annual benefits of £30,000 for income protection insurance (the ‘financial limits’).

II. When the cumulative value of insurance exceeds the financial limits, insurers may seek information about, and customers must disclose, tests approved with the Government for use for a particular insurance product, subject to the restrictions in the Concordat.

III. The Government will announce and the ABI will publish on its website the date of the next review which will be three years before the expiry date of the current Moratorium.

Thanks to @ewanbirney and @TheABB for pointing me at this information.


OK OK, stop badgering! I’ll try Python…

Everybody goes on about Python these days. In Bioinformatics it’s one of the two must-know languages (along with R), often praised in comments designed to lambast my beloved Perl. So, I thought I’d have a go on the flying circus. I previously wrote about a multi-threaded Perl application and thought that a fun, simple exercise would be to recreate that in Python.

The code performs a silly but intensive task – counting the number of CpG sites in the human genome. The “p” in CpG is biological shorthand indicating that the C and G bases are on the same strand of the genome, so we are literally looking for string matches to “CG” in a 3 billion character string. This is particularly amenable to parallelisation as those 3 billion characters are split across 24 files, one for each chromosome as well as the mitochondrial genome. The answer, as defined in the previous post, is 27,999,538 and the best Perl I could write comes to that conclusion in a little over 2 seconds (2.285s to be exact, which is a bit faster than that original post as some hardware updates have occurred since then).

My first Python attempt was to simply recreate the final multi-threaded Perl code as closely as I could, except it turns out that Python’s Global Interpreter Lock prevents threads from running Python code in parallel, so threaded applications don’t behave as you might expect. Having said that, the code works, but takes a leisurely 59.7 seconds to run.

import glob
import threading
from queue import Queue

def countcpg(filename):
  with open (filename, "r") as myfile:
    data=myfile.read().replace('\n', '')
  index = 0
  count = 0
  while index < len(data):
    index = data.find('CG', index)
    if index == -1: # no more matches, i.e. we have reached the end of the string
      break
    index += 2
    count += 1
  with lock:
    global tot_cpg
    tot_cpg += count

dir = "/mnt/nas_omics/cpg"
files = glob.glob(dir+'/*.fa')
tot_cpg = 0
lock = threading.Lock()

def worker():
  while True:
    f = q.get()
    countcpg(f)
    q.task_done()

q = Queue()
for i in range(24):
  t = threading.Thread(target=worker)
  t.daemon = True
  t.start()

for f in files:
  q.put(f)

q.join()
print(tot_cpg)

Attempt number two, then, was to fall back on a simpler parallelised approach using map and the multiprocessing package, which “offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads”. This roars in at 5.381 seconds, which is good enough for me. Maybe some Python pros will suggest improvements and point out obvious flaws in what I’ve done.

import glob
from multiprocessing import Pool

def countcpg(filename):
  with open (filename, "r") as myfile:
    data=myfile.read().replace('\n', '')
  index = 0
  count = 0
  while index < len(data):
    index = data.find('CG', index)
    if index == -1: # means no match i.e. got to end of string
      break
    index += 2
    count += 1
  return(count)

# global vals:
dir = "/mnt/nas_omics/cpg"
files = glob.glob(dir+'/*.fa')
tot_cpg = 0

# using multiprocessing:
pool = Pool()
res = pool.map(countcpg, files)
print(sum(res))

However, the point of this was not a contest to see which language is faster, but rather to provide a learning opportunity. And it was useful – I get the gist of Python now. My impression is pretty superficial, but I liked the cleanliness of the Python code, and the fact that it’s more object oriented than base Perl makes it quite intuitive. But, the indentation! I guess that takes some getting used to. I also keep hearing about IPython, which I’ll have a go with one day.

Does this all add up to a compelling drive to switch over then? Maybe, but it seems to me that both Perl and Python have stiff competition from R as an everyday scripting and analysis language – the rise of the Hadleyverse and Shiny, coupled with BioConductor makes it a fabulous language to get things done in Bioinformatics. I wouldn’t want to put it up against either Python or Perl in a speed contest though…

Visualising stranded RNA-seq data with Gviz/Bioconductor

Update: Sep 2015 – in response to a reader’s question I have updated the code to a completely reproducible example based on public data. In addition, I hacked the import function so that it will plot data from stranded libraries in either orientation.

Gviz is a really great package for visualising genomics data in R. Recently I have been looking at stranded RNA-seq data, which provides the ability to differentiate sense and antisense expression from a genomic locus thanks to the way in which you generate the libraries (I won’t go into all that here). Most aligners are strand aware so retain this useful information, but there aren’t many (any?) well defined approaches for detecting antisense expression nor for visualising it (in R). So, here I set out to use Gviz for the visualisation of stranded RNA-seq data. The figure demonstrates the results with a particularly nice example from some rat data.

A quick google got me to this post on the BioC mailing list from the Gviz author in which he provides a function to separate the reads in a BAM based on strand. This is not the same thing as working with stranded data – stranded data still has reads aligning to both strands, it’s the orientation of the pairs that determines which strand of the genome was being expressed. But, the post was a useful starter.

To get Gviz to plot stranded data we have to define a new import function to pass to the DataTrack constructor, as the default pays no heed to strand. The function requires a path to a BAM file (with its index in the same directory) and a GRanges object that provides the location in the BAM file we are interested in. We use Rsamtools to read and parse the BAM file for the reads, setting specific flags that assess the orientation of each read and separate them accordingly. For the library I have, the forward/top/5′-3′ strand has reads in the orientation F2R1 and the reverse/bottom/3′-5′ strand is F1R2. For the former, this means the first read of the pair is always on the bottom/reverse strand of the double-stranded template that was sequenced (not the genome!). In TopHat parlance, this is “fr-secondstrand”. So, to get all reads from RNA produced from the forward/top/5′-3′ strand (F2R1 orientation) we scan the BAM file twice with the following flags:

# For the F2:
scanBamFlag(isUnmappedQuery = FALSE, isProperPair = TRUE, isFirstMateRead = FALSE, isMinusStrand = FALSE)

# For the R1:
scanBamFlag(isUnmappedQuery = FALSE, isProperPair = TRUE, isFirstMateRead = TRUE, isMinusStrand = TRUE)

We then combine these such that all pairs of reads originating from the forward/top/5′-3′ strand are together in one GRanges. This is repeated for the reverse/bottom/3′-5′ strand with reads in the F1R2 orientation and we can then calculate coverage over the region of interest and return a GRanges ready for plotting as shown in the figure. Read on for a reproducible example.
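Before that, for a flavour of what the import function does under the hood, here is a minimal sketch (my own simplification, not the strandedBamImport function from the gist used below); bamFile and region (a GRanges of the locus) are assumed to already exist:

library(GenomicAlignments)   # also loads Rsamtools and GenomicRanges

# Flags for an fr-secondstrand library: the F2 reads (second in pair, plus strand)
# and the R1 reads (first in pair, minus strand) together give the pairs
# transcribed from the forward/top strand.
flagF2 <- scanBamFlag(isUnmappedQuery = FALSE, isProperPair = TRUE,
                      isFirstMateRead = FALSE, isMinusStrand = FALSE)
flagR1 <- scanBamFlag(isUnmappedQuery = FALSE, isProperPair = TRUE,
                      isFirstMateRead = TRUE,  isMinusStrand = TRUE)

readsF2 <- readGAlignments(bamFile, param = ScanBamParam(flag = flagF2, which = region))
readsR1 <- readGAlignments(bamFile, param = ScanBamParam(flag = flagR1, which = region))

# Pool both mates into one GRanges and compute per-base coverage; repeating
# with the flags inverted gives the reverse strand.
fwdReads <- c(granges(readsF2), granges(readsR1))
fwdCov   <- coverage(fwdReads)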

Strand specific coverage can be plotted in a Gviz data track using a custom import function. The top track contains the per-base coverage with reads from the forward strand in blue and reads from the reverse strand in purple. The bottom track shows the genomic context of the reads with exons in thick lines, introns as thin lines and the direction of transcription indicated by the arrows on the thin lines. Here the two genes are on opposite strands and are transcribed toward each other, making for a nice example.


To use this in Gviz we create a custom importFunction for the DataTrack constructor. The code of this import function, strandedBamImport, is in this GitHub gist. What follows should be reproducible, but first you will need to download and index a stranded BAM file. The following will grab data from cardiac fibroblasts from ENCODE, but beware it’s an 11G file:

wget https://www.encodeproject.org/files/ENCFF680CQU/@@download/ENCFF680CQU.bam -O ENCFF680CQU.bam
samtools index ENCFF680CQU.bam

Now source the import function and set up the Gviz tracks in R. You need to change the path to the bam file on your system and you need to set the global variable “libType” to one of “fr-firststrand” or “fr-secondstrand”. If you don’t know what orientation your library is in, you can use infer_experiment.py from RSeQC to find out.

library(Gviz)
library(biomaRt)

# source the custom import function:
source("https://gist.githubusercontent.com/sidderb/e485c634b386c115b2ef/raw/e4a7fba665246764bb6953d23ab4d95d56c6f450/strandedBamImport")

# the ENCODE data was aligned to hg19, so we connect to the appropriate ensembl biomart:
mart = useMart(biomart="ENSEMBL_MART_ENSEMBL", host="grch37.ensembl.org", path="/biomart/martservice", dataset="hsapiens_gene_ensembl")

# Specify coordinate of the locus we want to plot, in this case the TGFB1 locus:
myChr   = 19
myStart = 41807492
myEnd   = 41859816

# Now we create the track with the gene model:
biomTrack = BiomartGeneRegionTrack(genome="hg19", biomart=mart, chromosome=myChr, start=myStart, end=myEnd, showId=T, geneSymbols=T, rotate.title=TRUE, col.line=NULL, col="orange", fill="orange",filters=list(biotype="protein_coding"), collapseTranscripts=FALSE )

### EDIT ###
bamFile = "/path/to/ENCFF680CQU.bam" 

### EDIT ###
# Set to one of "fr-firststrand" or "fr-secondstrand"
libType = "fr-secondstrand"

# Next create the data track using the new import function:
dataTrack = DataTrack(bamFile, genome="hg19", chromosome=myChr, importFunction=strandedBamImport, stream=TRUE, legend=TRUE, col=c("cornflowerblue","purple"), groups=c("Forward","Reverse"))

# Finally, plot the tracks:
plotTracks(list(dataTrack,biomTrack), from = myStart, to = myEnd, type="hist", col.histogram=NA, cex.title=1, cex.axis=1, title.width=1.2)

The figure below is what you should generate with the above code.

[Figure: Stranded_RNAseq_Human_TGFB1 – stranded RNA-seq coverage at the human TGFB1 locus]

Analysing squash performance using fitbit data

I previously wrote about accessing step data in R using the fitbit API and at the time said I could think of little to do with the data that was not available via the fitbit website. Since then I’ve rejoined the local squash league and thought that I might be able to learn something useful from an integration of my step counts and performance at squash. The hope is that whatever I learn can be used to inform my game plan and so improve my performance; it certainly can’t hurt!

In that first post I used the fitbit API to access daily step totals, but that isn’t going to be too useful here – I need the data for the period of the squash game itself. It turns out that fitbit don’t let any old programmer access the “intra-day” data, presumably to stop you ripping off their premium account. However, they were willing to let me have access to the intra-day data via the API on the basis that this is a personal project. Kudos fitbit. I won’t reproduce all the code here as it’s all in R and simply an extension of my previous post, but in short you can place an intra-day GET request that returns a whole day’s data in 1-minute or 15-minute bins, and you can also specify a particular time period; a rough sketch follows.
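To give a flavour (this is my own sketch with a made-up date and time window, reusing the OAuth signature sig from the earlier post), a request for 1-minute step bins over a particular period looks something like this:

library(httr)
library(jsonlite)

# Request steps in 1-minute bins for a hypothetical match played 19:00-19:45:
url  <- paste0("https://api.fitbit.com/1/user/-/activities/steps/",
               "date/2015-06-01/1d/1min/time/19:00/19:45.json")
resp <- GET(url, sig)

# The "dataset" element should hold one row per minute (columns: time, value):
intraday <- fromJSON(content(resp, as = "text"))$`activities-steps-intraday`$dataset
head(intraday)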

Luckily for this purpose I’m sad enough to track the date, time, result and score of every game of squash I play. I’ve been playing for 3 months now and have data for 11 matches: 6 wins and 5 losses. There were also a few games during which I didn’t wear my tracker (wins, of course!) for various silly reasons, but I have excluded them from the analysis.


Below I plot the per-minute step count for each game (transparent lines), coloured green for a win and red for a loss. I then fit a loess curve to the data for each outcome (win or lose), shown by the bold dashed lines.
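For anyone wanting to reproduce the idea, a minimal ggplot2 sketch (assuming a hypothetical data frame squash with columns match_id, minute, steps and result coded as "win"/"loss") might look like:

library(ggplot2)

ggplot(squash, aes(x = minute, y = steps, colour = result)) +
  geom_line(aes(group = match_id), alpha = 0.3) +                   # one transparent line per match
  geom_smooth(method = "loess", se = FALSE, linetype = "dashed") +  # dashed loess fit per outcome
  scale_colour_manual(values = c(win = "darkgreen", loss = "red")) +
  labs(x = "Minute of match", y = "Steps per minute")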

[Figure: per_min_squash_steps – per-minute step counts for each match, with loess fits for wins and losses]

What does it tell me?

  • I seem to peak at around 100 steps per minute, which seems like a lot, but I don’t have anything (or anybody) to compare it to. I would love to see what a professional game looks like.
  • There are four dramatic reductions in step counts: one at 8 minutes, two at about 28 minutes and another at 36 minutes. The first, I think, represents a quick injury break (sprained ankle); the two at 28 minutes correspond to matches I won quickly 3-0, so the dip is whilst we shake hands and decide to play on for the rest of the court time. The final dip is a match I lost 1-3, so clearly I put up enough of a fight to make it last a little longer than a 3-0 drubbing.
  • The fitted lines reveal that I maintain a higher intensity for longer in the games that I win. At 30 minutes in this is about an extra 10 steps per minute. If I’m doing 100 steps per minute then this is a 10% upping of my intensity level.
  • My intensity levels start higher when I win and drop faster when I’m losing.

There is more to be had from this data; for example, the above does not take into account the winning/losing margin. To demonstrate, the plot below shows the total number of steps I make per match (regardless of the outcome) bucketed by the number of games (squash is first to 3) that my opponent won. It’s clear that when my opponent wins no games it’s a bit of a walkover and I put less effort in. Conversely, when the match is close and my opponent wins one or two games I have to put in a bit of extra effort to win. And, as we saw above, when I lose I put less effort in.
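A sketch of that plot (again my own, assuming a hypothetical per-match summary data frame matches with columns total_steps and opp_games, the number of games won by my opponent) could be:

library(ggplot2)

ggplot(matches, aes(x = factor(opp_games), y = total_steps)) +
  geom_boxplot() +
  labs(x = "Games won by opponent", y = "Total steps per match")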

So, the lesson – fight harder and faster!

[Figure: steps_opp_perf – total steps per match, bucketed by the number of games won by my opponent]

Q: How many CpGs in the genome? A: multi-threaded Perl

This started out as a serious question. Whilst analysing some RRBS data I needed to know what percentage of the CpGs (that is, loci where a C nucleotide is followed immediately by a G) in the human genome were covered by our data. The question, then, is just how many CpGs are there in the human genome? “A ha!” I thought – just the kind of question that Perl was built to answer.

To lay out the task for you, the genome as I have it is split across 24 fasta files – one for each chromosome and another for the mitochondrial DNA. These files can be obtained from Ensembl here* and have a combined size of ~3Gb. I need to flaunt my hardware specs for any comparison you might make – we’re running two 6-core 2GHz CPUs (giving 24 threads) with 96Gb RAM. Obviously this is Linux (Red Hat) and I’m using the latest version of Perl (v5.18.1) compiled with multi-threading support enabled. I’m also working off a NAS, so network bandwidth might hold me back.

My first effort was actually quite speedy, I thought, especially as at that point I wasn’t trying to break any speed records. In essence, I slurp each file in turn into memory, remove all the newlines \n (so am essentially making one dirty long string) and then use a regular expression, $chr_seq =~ /CG/g;, to search for the CpGs, bunging each occurrence into an array. I confess I initially tried $chr_seq =~ /C\n?G/g;, but this was noticeably sluggish. I then count the number of matches in the array, add it to the running total and hey presto, a final answer of 27,999,538 is arrived at in just 38 seconds.

use strict ;

my $dir = $ARGV[0] ;
my $tot_cpg = 0 ;

opendir(DIR, $dir) or die $!;
while (defined(my $chr = readdir(DIR))) {
  next unless $chr =~ /\.fa/ ;
  local $/=undef;
  open GENOME,"<$dir$chr" or die $! ;
  my $chr_seq = <GENOME> ;
  close GENOME ;
  $chr_seq =~ s/\n//g ;
  my @matches = $chr_seq =~ /CG/g;
  my $cpg = scalar @matches;
  $tot_cpg = $tot_cpg + $cpg ;
}
closedir(DIR);

print $tot_cpg ;

Now, 38 seconds is pretty fast. If you consider the average rate of reading to be 250 words a minute and that there are 5 letters in the average word, then a person can read 1,250 characters a minute. There are roughly 3 billion characters in the human genome so it would take approximately 2.4 million minutes (or around four and a half years) to read. So, 38 seconds is nothing. But, Perl can do it even faster; we haven’t even considered parallelising the process yet.

#### multi-threaded – 18 secs
Perl, if compiled with the -Dusethreads option, can run multiple threads in parallel. This means we could go from sequential processing of the chromosome files (e.g. do chr 1, then chr 2, etc.) to processing them all at the same time. Here we do the counting as before, but this time it’s wrapped in a subroutine, count. As we read through all the files in the directory we create a new thread, passing it a reference to the sub as well as the file that thread is to process: my $thr = threads->create(\&count,$dir.$chr) ;. We have to keep track of the threads we create so that we can wait for them to finish and harvest the data they return. Luckily I have 24 files to process and 24 threads available to use. We now arrive at our answer in just 18 seconds, a better than two-fold increase in speed.

use strict ;
use threads ;

my $dir = $ARGV[0] ;
my $tot_cpg = 0 ;
my @threads ;

opendir(DIR, $dir) or die $! ;
while (defined(my $chr = readdir(DIR))) {
	next unless $chr =~ /\.fa/ ;
	my $thr = threads->create(\&count,$dir.$chr) ;
	push @threads, $thr ;

}
closedir(DIR) ;

foreach (@threads) {
	my $cpg = $_->join;
	$tot_cpg = $tot_cpg+$cpg ;
}

print $tot_cpg ;

sub count {
	$/=undef;
	open GENOME, "<$_[0]" or die $! ;
	my $chr_seq = <GENOME>;
	close GENOME;
	$chr_seq =~ s/\n//g ;
	my @matches = $chr_seq =~ /CG/g;
	return(scalar @matches) ;
}

#### remove regexes – 3 secs
18 seconds is pretty blazing, but now I’ve got the bug and wonder just what else I can optimise. The compilation and use of regexes, although highly optimised in Perl, is quite slow, so they had to go. I tried all sorts of ways of using split or unpack but in the end settled on index, which determines the position of a substring within a string. I also switched from using a substitution to remove the newlines to using the transliteration operator tr. We now count the CpGs in an astonishing 3 seconds!

sub count {
	$/ = undef ;
	open GENOME, "<$_[0]" or die $! ;
	my $chr_seq = <GENOME> ;
	close GENOME ;

	### replace s///g with tr///d:
	$chr_seq =~ tr/\n//d ;

	### replace the regex with index:
	my $pos = -1 ;
	my $c = 0 ;
	while (($pos = index($chr_seq, 'CG', $pos)) > -1) {
		$c++ ;
		$pos++ ;
	}
	return($c) ;
}


>time perl countGpGinGenome.pl ./data/
27999538

real 0m3.136s
user 0m27.759s
sys 0m14.491s

3 seconds! Now, I’m not a good enough programmer to go much quicker, but I still think there are a few areas that could be optimised further. For example, getting rid of the newline replacement altogether would be sensible. But, it doesn’t really matter. This started out as a biological question and got a bit geeky (too much so, if I’m honest) but I learnt so much through this exercise about threaded programming and Perl in general that it was worth it. Also, the next time I do RRBS on a new species I won’t waste any time finding out how many CpGs there are!

*Only use the files matching: Homo_sapiens.GRCh37.72.dna.chromosome.*.fa where * = 1..22|X|MT

Accessing FitBit data in R

I caught a pretty amazing episode of Horizon (the BBC’s in-depth science programme in the UK) a while back called “The future of medicine is apps”. The programme explored the health benefits of giving people data about their body, health and lifestyle. The more extreme examples included the tracking of the England rugby team during training, which allows the coaches to predict injury/flu before the players are aware of it, and the professor who monitored the level of every metabolite in his blood every day and was able to diagnose himself with Crohn’s disease prior to any symptoms. At the more practical level were the people who simply track their activity levels each day. The theory goes that if you are aware via a direct data feed of what you are doing (or not doing, I suppose) then you are able to make changes to your lifestyle for the better. Being just a bit of a geek I was inspired to get myself an activity tracker and see what it was like to collect some data on myself.

I settled on a FitBit Flex, which is essentially a pedometer that you wear on your wrist and which tracks activity (steps) as well as sleep patterns. I have to say it works really well and I am mightily addicted to trying to meet my activity goal each day – currently set to the default of 10,000 steps. FitBit provide a fairly slick website to display all the data you collect but, unfortunately, if you want to download the data and do any kind of analysis yourself you have to pay a pretty exorbitant subscription fee. Luckily, you can get at your data via their API if you have the know-how, so I decided to have a go in R.

First off you have to register an “app” with FitBit (mine is called StepTrack!) in order to get the credentials needed for authentication. I used the httr package for the OAuth authentication and data retrieval.

library(httr)

token_url = "http://api.fitbit.com/oauth/request_token"
access_url = "http://api.fitbit.com/oauth/access_token"
auth_url = "http://www.fitbit.com/oauth/authorize"
key = "my_key"
secret = "my_secret"

fbr = oauth_app('StepTrack',key,secret)
fitbit = oauth_endpoint(token_url,auth_url,access_url)
token = oauth1.0_token(fitbit,fbr)
sig = sign_oauth1.0(fbr, token=token$oauth_token, token_secret=token$oauth_token_secret)

# get all step data from my first day of use to the current date:
steps = GET("http://api.fitbit.com/1/user/-/activities/steps/date/2013-08-24/today.json",sig)

The data is returned as JSON, which can then be parsed and plotted to your heart’s content (a minimal sketch follows the response below). In the plot you can see a five-day gap – I went on holiday and forgot the charger!

> steps
Response [http://api.fitbit.com/1/user/-/activities/steps/date/2013-08-24/today.json]
Status: 200
Content-type: application/json;charset=UTF-8
{"activities-steps":[{"dateTime":"2013-08-24","value":"5455"},{"dateTime":"2013-08-25","value":"11822"},{"dateTime":"2013-08-26","value":"11692"},{"dateTime":"2013-08-27","value":"17028"},{"dateTime":"2013-08-28","value":"10225"},{"dateTime":"2013-08-29","value":"8632"},{"dateTime":"2013-08-30","value":"9920"},{"dateTime":"2013-08-31","value":"9321"},{"dateTime":"2013-09-01","value":"13581"},{"dateTime":"2013-09-02","value":"7465"},{"dateTime":"2013-09-03","value":"0"},{"dateTime":"2013-09-04","value":"0"},{"dateTime":"2013-09-05","value":"0"},{"dateTime":"2013-09-06","value":"0"},{"dateTime":"2013-09-07","value":"335"},{"dateTime":"2013-09-08","value":"9239"},{"dateTime":"2013-09-09","value":"17059"}]}

[Figure: step_plot – daily step counts retrieved via the API]

Admittedly I am struggling to come up with ideas of what to do with the data that FitBit doesn’t provide already through their website. But, it’s the principle of the thing – I should be able to get at my data and now I can. For all of the data shown in the above I was on holiday and in general much more active than when I’m plonked at my desk at work. It will be interesting to see what my daily step count is on a normal working day and whether knowing this will push me on to go for a run at lunchtime or take the very long route to the sandwich shop. Being a very competitive person, I suspect it will.

UPDATE: 4th April 2014 – @asrowe has made a nice comparison of two trackers (fitbit and jawbone) here.

iRefR – PPI data access from R

I use a lot of protein-protein interaction (PPI) data, as biological networks represent the systems within which our genes and proteins of interest function. There are many sources of PPI data, including BioGRID and IntAct. A recent effort has emerged that attempts to pull all of these databases together to provide a single standardised access point to PPI data: iRefIndex. Currently they integrate data from 13 different PPI databases. Why are there 13 in the first place? Because each has its own biological area of interest or specific criteria for curation (manual vs text-mined etc.). This post is an opportunity for me to have a play with the R package for iRefIndex (iRefR).

The following code downloads the current version of iRefIndex for human (other species are available):

> library("iRefR")
> library(stringr)
> iref = get_irefindex(tax_id="9606",iref_version="12.0",data_folder="/path/to/save/iRefIndex")

The resulting object is a dirty great 250Mb data frame in MITAB format, or PSI-MITAB2.6 to be exact, the format specs of which can be found here. This seems a little unwieldy to hold in memory, so eventually I’ll subset it to something more manageable:

> print(object.size(iref),units="Mb")
248.8 Mb

First though, some summary stats on the info contained within. The data contains a total of 533,551 interactions, 248,215 of which are unique:

> dim(iref)
[1] 533551 54

Duplications in the data arise when an interaction is taken from multiple source databases or papers etc. The irigid column contains a unique identifier, where any given pair of interactants will always have the same identifier.

> length(unique(iref$irigid))
[1] 248215

iRefIndex counts as human any interaction where just one member is a human protein; e.g. if a viral protein interacts with a human protein this is counted as human. Here I’m just interested in human-human interactions, so:

> human_human_list = data.frame(iref$taxa,iref$taxb)
> tmp = do.call(`paste`, c(unname(human_human_list), list(sep=".")))
> iref_human = iref[tmp == "taxid:9606(Homo sapiens).taxid:9606(Homo sapiens)" | tmp == "-.taxid:9606(Homo sapiens)",]

> dim(iref_human)
[1] 489796 54

> length(unique(iref_human$irigid))
[1] 220435

To subset the data frame to a more memory-friendly set of data I’m going to keep just a two-column data frame of protein names for each unique interaction. As I’m going to plot some graphs later I also want to keep the biologist-friendly HUGO name for each protein, which involves a bit of fiddling as this is not a field in its own right within MITAB – enter stringr:

> mA = str_locate(iref_human$aliasA, perl("hgnc:.*?\\|"))
> hugoA = str_sub(iref_human$aliasA,mA[,1]+5,mA[,2]-1)
> mB = str_locate(iref_human$aliasB, perl("hgnc:.*?\\|"))
> hugoB = str_sub(iref_human$aliasB,mB[,1]+5,mB[,2]-1)
> x = data.frame(iref_human$X.uidA,iref_human$uidB,hugoA,hugoB,iref_human$irigid)
> colnames(x) = c("uidA","uidB","hugoA","hugoB","irigid")
> dim(x)
[1] 489796 5

> head(x)
uidA uidB hugoA hugoB irigid
1 uniprotkb:O75554 uniprotkb:Q07666 WBP4 KHDRBS1 1139443
2 uniprotkb:Q13425 uniprotkb:B7Z6M3 SNTB2 DGKZ 1650951
3 uniprotkb:O75554 uniprotkb:Q8N684-3 WBP4 CPSF7 668917
4 uniprotkb:P60468 uniprotkb:Q9H0F7 SEC61B ARL6 658508
5 uniprotkb:O75554 uniprotkb:Q15233-2 WBP4 NONO 1338471
6 uniprotkb:O95816 uniprotkb:Q66LE6 BAG2 PPP2R2D 1478060

> length(unique(x$irigid))
[1] 220435

> print(object.size(x),units="Mb")
15.5 Mb

Now I want to build a PPI network from this data using igraph. First, we hack together a data frame of node annotations, in this case the HUGO name associated with each unique protein identifier. We then combine this with the interaction data into an igraph object:

> v = unique(data.frame(c(as.character(x$uidA),as.character(x$uidB)),c(as.character(x$hugoA),as.character(x$hugoB))))
> colnames(v) = c("uid","hugo")
> ppi.graph = graph.data.frame(x[,c(1:2,5)],vertices=v,directed=F)
> ppi.graph
IGRAPH UN-- 31476 489796 --
+ attr: name (v/c), hugo (v/c), irigid (e/n)

Our graph (or network) has 31,476 nodes (proteins) and 489,796 edges (or interactions). Each node has two annotations; the uid in “name” and the hugo symbol. Each edge has one annotation, the irigid.

Now, say we do an experiment and identify a bunch of genes/proteins and we want to see if they interact with each other at the protein level. For example, this might be the output of a gene expression experiment or all genes genetically associated with a disease. Here, for simplicity, I’ll just use a random selection of proteins from the network.

> myGenes = sample(as.character(v$uid),10)
> myGenes = myGenes[!is.na(myGenes)]

We can now extract a subgraph containing our experimentally identified proteins and those that connect them together. In the following code we first get all of the neighbours (adjoining nodes) of our genes from the main graph. We define order = 1 to specify that we will allow a distance of 1 interaction from our genes. We then subset the edges from the main graph to those where both nodes are in our set of neighbours. This gives us just the interactions between our genes and their immediate neighbours. We then build a graph in the same manner as above, using the edge list and a vertex metadata data frame.

> order = 1
> edges = get.edges(ppi.graph, 1:(ecount(ppi.graph)))
> neighbours.vid = unique(unlist(neighborhood(ppi.graph,order,which(V(ppi.graph)$name %in% myGenes))))
> rel.vid = edges[intersect(which(edges[,1] %in% neighbours.vid), which(edges[,2] %in% neighbours.vid)),]
> neighbour.names = data.frame(V(ppi.graph)[neighbours.vid]$name,V(ppi.graph)[neighbours.vid]$hugo, stringsAsFactors=F)
> names(neighbour.names) = c("name","hugo")
> rel = as.data.frame(cbind(V(ppi.graph)[rel.vid[,1]]$name, V(ppi.graph)[rel.vid[,2]]$name), stringsAsFactors=F)
> names(rel) = c("from","to")
> subgraph = graph.data.frame(rel, directed=F, vertices=neighbour.names)
> subgraph = simplify(subgraph)
> subgraph
IGRAPH UN-- 81 239 --
+ attr: name (v/c), hugo (v/c)

And finally, we plot the network with red nodes to indicate proteins from our experimentally identified list of genes and yellow for the rest.

ind = which(V(subgraph)$name %in% myGenes)
cols = rep("yellow",vcount(subgraph))
cols[ind] = "red"
plot(subgraph,layout=layout.fruchterman.reingold,vertex.size=5,vertex.color=cols,vertex.label=NA)

[Figure: the extracted subgraph, with experimentally identified proteins in red and their neighbours in yellow]

It’s at this point that the fun really starts from a biology point of view – we can mine our network for key hubs, overlay our favourite experimental data or compare it to networks built from other sets of input genes, etc. The possibilities are endless, and I for one feel that all of our experimental data should be interpreted within the context of the interaction network. Of course there are limitations; our networks will likely never be complete as we only have PPI data for proteins that have been studied. But, having a human interactome with nigh on 500,000 interactions is a good start…

The murky corners of my genome – 23andMe and the Ensembl VEP

I recently got my genotype data from 23andMe. The most exciting finding is that I’m slightly more Neanderthal than average (3% versus the 2.7% average). I always wondered how they calculated this percentage, presuming it was a measure of the SNPs shared between myself and the Hairy Caveman. It turns out that it’s a little more complicated – they calculate how far you are from the “Neanderthal axis”, which is a line that links Neanderthals and the average of 246 whole African genomes (who have no Neanderthal ancestry) on a PCA plot. But, I digress; see this paper for all the details.

This post was really inspired by this blog from Neil Saunders in which he describes how to run your 23andMe SNP data through Ensembl’s Variant Effect Predictor (VEP). I have largely followed the method he outlines in order to take a closer look at the SNPs in my genome and what they’re up to.

The first thing to enthuse upon is the VEP itself – what a fab tool. Running locally with default settings it took <15 mins to crunch through the 960,613 SNPs in the data. It produces a pretty nice HTML page with summary counts of variant consequence, chromosome distribution etc.

So what are my SNPs up to? Just over 50% of my SNPs are intronic, another 8% intergenic, etc. These are quite hard to interpret, so I’ll ignore the non-coding variants for now. The pie chart below summarises the consequences of the mutations within coding regions. Alarmingly, it seems I have 98 stop-gain mutations and 15,041 missense mutations in 3,687 genes! These are mutations where the variant could have an effect on protein function, either through premature termination of mRNA translation or by switching the amino acid coded for at that location. In the first case, the truncated protein is probably not produced at all due to nonsense-mediated decay, but either way there will be less protein than there should be. The missense mutations will not all have an impact – only those that change a key functional domain of the protein are likely to. Even then, the degree of impact will be mitigated by many things, including functional redundancy with other proteins.

[Figure: coding_vars – pie chart of variant consequences within coding regions]

The most obvious thing to investigate next is the functional impact these variants are likely to have. This can be done with SIFT and PolyPhen predictions, both of which the VEP calculates for you. However, it quickly becomes apparent that the VEP default settings don’t get you very much useful information past a classification of the variant consequences. But, there are many options available to pass to the VEP in order to get it to calculate all sorts of information. The following flags turn on SIFT and PolyPhen predictions and the global MAF from the 1000 Genomes Project:

--sift p --polyphen p --gmaf --fork 10

Happily, the authors also provide an --everything flag which returns, well, everything, including SIFT and PolyPhen predictions of each variant’s functional impact, the MAF and so on. As you can imagine this takes a lot longer to run! It’s sensible to undertake a quick bit of jiggery-pokery first to subset the original VCF file to just the variants that cause a stop-gain or missense mutation.

Quickly browsing the VEP summary HTML it’s apparent that PolyPhen and SIFT think some of my stop gains/missense mutations are going to have a damaging effect on protein function:

[Figure: sift_polyphen – VEP summary of the SIFT and PolyPhen predictions]

The 621 variants for which both PolyPhen and SIFT predict a deleterious/damaging consequence are found in 270 genes. Add that to the 46 genes that have gained a premature stop codon and I’m short 316 fully functional genes! This is by no means abnormal, however – the 1000 Genomes Project estimates that we all carry putative loss-of-function variants in 250-300 genes.

And, it’s not all bad news – it seems that one of my mutations, rs497116, is a well-known stop gain in caspase 12 (CASP12). The A allele (which I have) is the most common allele in European populations, but less so in those of African descent. The variant leads to a truncated, inactive form of caspase 12, which is protective against sepsis – the full-length protein renders the carrier susceptible to an over-the-top immune response to bacterial infection.

What’s more, I haven’t explored my genotype at these locations – am I heterozygous or homozygous? If heterozygous then I have a “spare”, perfectly normal copy of the gene that will hopefully compensate for the damaged one (leaving aside compound heterozygosity). If homozygous, then I’m potentially the human version of a knock-out mouse! I also want to know the frequency of these mutations in the general population, their minor allele frequency (MAF). If, like the caspase 12 example above, most of the mutations are common in the general population, the chances are they don’t have such drastic consequences. I think I’ll keep all of that to myself though…

A shiny app to display the human body map dataset

There was quite a lot of buzz around when the guys from RStudio launched Shiny, a new web framework for R that promises to “make it super simple for R users like you to turn analyses into interactive web applications that anyone can use”. Indeed, it looks really impressive.

So, in order to give Shiny a test I thought I’d analyse and then create a front end to the Illumina human body map data. This should be quite some test for Shiny, as R is slow and clunky for all but the smallest of data sets. I wanted the application to allow the user to enter a gene and have returned 1) a gene-level plot of the tissue distribution, 2) details of all the isoforms detected for that gene and 3) the expression of each isoform in any given tissue.

For those who haven’t heard of this dataset: it’s RNA-seq data generated by Illumina on a HiSeq 2000 from 16 different, healthy human tissues and freely downloadable from the above link. The libraries were prepared using poly-A selected mRNA and sequenced as either 50bp paired-end or 75bp single-end reads. No replicates, unfortunately. Here I will use just the paired-end reads, of which there are 70-80 million pairs per sample. The raw reads were aligned with TopHat and assembled with Cufflinks.

To save having to code up all of the visualisations I wanted from scratch I decided to use the cummeRbund package (from the TopHat/Cufflinks authors), which has some awesome ggplot2-based functions for generating track-like images from a Cufflinks/Cuffdiff-based analysis. The trade-off here is that cummeRbund maintains a huge (19Gb for this dataset) SQLite database in the background and is S.L.O.W. The stats for the data loaded into R:

CuffSet instance with:
16 samples
105441 genes
335696 isoforms
190081 TSS

Right, Shiny. I won’t go into detail – you can read the tutorial/docs yourself. But, suffice to say it’s dead simple, with no CSS, JavaScript or HTML to worry about. The only downside of this is that the layout and style of the page is largely fixed (unless you want to get your hands really dirty). Also key is that Shiny is reactive, i.e. if any of the input variables change, any functions that rely on them will automatically update themselves, as will functions that rely on those, and so on.

The first task was to get the input form hooked up to the server code, which literally just requires you to specify a text input box and a submit button:

textInput("gene", label="Gene", value = ""),
submitButton("Submit")

Then in the server code, specify a function that waits for the gene variable to be defined by the user and does something.  In this case it gets the data out of the cuffdiff database for the input gene and plots the FPKM of that gene in each of the 16 tissues:

output$genePlot = reactivePlot(function() {
  myGene = getGene(cuff, input$gene)
  if (is.null(myGene))
    return(plot(1, type="n", bty="n", yaxt="n", xaxt="n", ylab="", xlab=""))
  x = data.frame(
    tissue = fpkm(myGene)$sample_name,
    fpkm   = fpkm(myGene)$fpkm
  )
  print(ggplot(x, aes(fill=tissue, x=tissue, y=fpkm)) +
          geom_bar(position="dodge", stat="identity") +
          labs(title=annotation(myGene)$gene_short_name))
})

I rolled my own barplot here as the cummeRbund version is quite clunky.  The result for PATE1 (Prostate And Testes Expressed 1) looks like this:

[Figure: gene-level FPKM of PATE1 across the 16 tissues]

Next up, another function to create track-esque plots of the isoforms found for the input gene, which also includes some nice visual touches like the ideogram and the Ensembl annotated isoforms:

[Figure: track-style plot of the PATE1 isoforms, with ideogram and Ensembl annotation]

You’ll notice a drop down box has appeared under the input form. This is for the final hurrah – select a tissue and the expression of different isoforms in that tissue will be plotted:

[Figure: expression of each PATE1 isoform in the selected tissue]

OK OK, so it’s not very well laid out etc. – but for a first pass I think it’s great, and the ggplot2 graphics make up for it a little bit. I should also point out that it is not just slow, but very slow! If I stopped calling out to Ensembl, or ditched cummeRbund altogether, this could be improved.

The gist is on GitHub (https://gist.github.com/4672051) but I don’t provide the expression data (it’s huge!). I’m pretty sure that it should work with only minor tweaking for any RNA-seq data set analysed with a TopHat -> Cufflinks -> CuffDiff pipeline.

Global quantification of mammalian gene expression control

I’ve chosen as my first topic this paper:

“Global quantification of mammalian gene expression control”  Nature 473, 337-342, 2011

I’ve chosen this paper for several reasons. One, it’s cool. Two, it was the last thing I read and, three, it tackles a question I was concerned with during my PhD studies, albeit on a much larger scale and in mammals rather than bacteria. In prokaryotes the fundamental biological processes of transcription and translation are coupled. There is no nuclear membrane to divide the two, and so as soon as an mRNA transcript is produced (even as it’s being produced!) ribosomes bind and initiate translation. Therefore there is a strong correlation between the amount of an mRNA and the quantity of its resulting protein in prokaryotes. Obviously the rates of decay of both the transcript and the protein lead to cases where this is not true, but as a general rule it seems to hold up OK.

In eukaryotes it’s a whole different ball game. For one, the nucleus separates the physical process of transcription from translation. Higher organisms also have much more complex processes to prepare mRNA for translation – the removal of introns for a start – which are not present in prokaryotes. Because of this it has always been hard to categorically state that a gene’s transcript level is truly reflective of its protein level. This causes a problem. For a variety of reasons, most of which are technological and financial, modern molecular biology is largely based on the measurement of mRNA levels, from which the state of a cell is inferred. However, proteins are the real functional unit of a cell, and if we can’t be sure that the mRNA levels actually reflect their concentration then we can’t be sure of the cellular state as a whole.

We’ve been waiting for a systematic comparison of mRNA and protein levels on a global scale to tease apart this relationship. The technological limitations that held us back are now being overcome, and this paper is the first (to my knowledge) to provide such a comprehensive comparison. The authors have not only quantified the levels of both mRNA (with high-throughput sequencing) and protein (liquid chromatography coupled with tandem mass spec) but have also been able to generate half-lives (the rate of decay/turnover) for both as well. To do this they grew their cells (murine fibroblasts and, later, a human breast cancer cell line) in media containing labelled amino acids and a nucleoside analogue, allowing the team to differentiate newly synthesised mRNA and protein from the pre-existing pools. A ratio of the new and pre-existing concentrations compared to total RNA/protein allowed the group to calculate half-lives. In total they have data for 5,028 mRNA-protein pairs. It is worth noting that they were able to collect mRNA and protein data in the same cells (literally the same cells, not just the same cell type), which means the data is entirely compatible. Further, the method does not use any destructive chemicals to inhibit transcription or translation in order to calculate half-lives, meaning the cells remain intact and functioning normally throughout the experiment.
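As a back-of-the-envelope illustration (my own sketch, assuming simple first-order decay of a steady-state pool rather than the paper’s full model), the measured fraction of pre-existing molecules after a labelling time t relates to the half-life as:

\[
\frac{N_{\mathrm{pre\text{-}existing}}(t)}{N_{\mathrm{total}}} = e^{-k_{\mathrm{deg}}\,t}
\quad\Rightarrow\quad
k_{\mathrm{deg}} = -\frac{1}{t}\,\ln\!\left(\frac{N_{\mathrm{pre\text{-}existing}}(t)}{N_{\mathrm{total}}}\right),
\qquad
t_{1/2} = \frac{\ln 2}{k_{\mathrm{deg}}}
\]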

I guess the headline result is that they show that the correlation between mRNA and protein levels is approx. 0.4.  Although this is not massive, it’s greater than anyone had predicted in the past and is good news for all of us mRNA observers out there.  Next the group were able to construct a mathematical model that allowed them to explore the contribution of the four main processes involved (the synthesis and degradation of mRNA and protein).  They discover that the rate of translational initiation by the ribosome is the most fundamental check on protein abundance and not the rate of mRNA transcription, another reminder not to focus solely on the rate of transcription.

They classify proteins based on their mRNA and protein stabilities and find that those which are stable as both mRNA and protein are enriched for fundamental cellular processes such as translation and metabolism. Those which are unstable as both mRNA and protein are involved in signalling and regulatory systems (including epigenetic mechanisms). Those with unstable proteins but stable mRNAs are concerned with functions such as cellular defence, where the protein needs to be produced rapidly – hence the pre-existing pool of mRNA. This is largely as expected and indicates that the regulation of protein production evolved in a resource-constrained environment and has adapted to fit the needs of the cell at an energy-efficient optimum. The authors explore numerous other aspects which I won’t go into here.

It is important to note that these experiments were conducted on a large, non-synchronised population of cells, and as such the results reflect the average over the cell cycle. It will be the case that, at the level of the individual cell, a particular protein may have quite different synthesis/degradation characteristics. Nevertheless, such a resource will now be invaluable to scientists looking to create systems biology models of cellular pathways, where quantities and synthesis/turnover rates are required for accurate computation. It will also be the case that the data shown here, derived from mouse fibroblasts, will not be applicable to many models, but at least they will allow us to move on from our current uninformed guesstimates.