With the cost of sequencing a human genome dropping below $1,000, population-scale sequencing has become feasible. As projects that sequence more than 10,000 genomes become commonplace, there is a strong need for genome analysis tools that scale across distributed computing resources while reducing analysis cost. At the same time, these tools must provide programming interfaces and deployment models that biologists can use easily.
In this dissertation, we describe the ADAM system for processing large genomic datasets using distributed computing. ADAM provides a decoupled stack-based architecture that can accommodate many data formats, deployment models, and data access patterns. Additionally, ADAM defines schemas that describe common genomic datatypes. ADAM’s schemas and programming models enable the easy integration of disparate genomic datatypes and datasets into a single analysis.
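To give a flavor of this integration, the following Scala sketch joins aligned reads against a set of genomic features using ADAM's dataset API on Apache Spark. It is illustrative only and not drawn from the dissertation: the ADAMContext import path, the loadAlignments/loadFeatures load methods, and the broadcastRegionJoin method follow recent ADAM releases, and exact names and signatures may differ across versions.

    import org.apache.spark.{SparkConf, SparkContext}
    // Assumed import path: ADAMContext adds genomic load methods to the SparkContext.
    import org.bdgenomics.adam.rdd.ADAMContext._

    object JoinReadsWithTargets {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("adam-join-sketch"))

        // Load aligned reads and capture targets; ADAM detects the underlying
        // format (BAM, Parquet, BED, ...) and maps it onto its schemas.
        val reads   = sc.loadAlignments("sample.alignments.adam")
        val targets = sc.loadFeatures("capture_targets.bed")

        // Region join: pair each read with the features it overlaps.
        // A broadcast join is assumed here; a shuffle-based join also exists.
        val onTarget = reads.broadcastRegionJoin(targets)

        println(s"read/target overlaps: ${onTarget.rdd.count()}")
        sc.stop()
      }
    }

Because both inputs are expressed against ADAM's schemas, the join works the same whether the data originated as BAM, BED, or Parquet, which is the point of the decoupled stack architecture.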
To validate the ADAM architecture, we implemented an end-to-end variant calling pipeline using ADAM's APIs. To perform parallel alignment, we developed the Cannoli tool, which uses ADAM's APIs to automatically parallelize single-node aligners. We then implemented GATK-style alignment refinement as part of ADAM. Finally, we implemented a biallelic genotyping model and novel reassembly algorithms in the Avocado variant caller. This pipeline achieves state-of-the-art SNV calling accuracy along with high (97%) INDEL calling accuracy. To further validate this pipeline, we reanalyzed 270 samples from the Simons Genome Diversity Dataset.
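As an illustrative sketch of the refinement stage only (the Cannoli alignment and Avocado genotyping stages are omitted), the Scala fragment below chains duplicate marking and INDEL realignment through ADAM's dataset API. The method names markDuplicates, realignIndels, and saveAsParquet follow recent ADAM releases; base quality score recalibration is elided because its signature (a table of known variant sites) varies across versions.

    import org.apache.spark.{SparkConf, SparkContext}
    // Assumed import path for the ADAMContext implicits in recent ADAM releases.
    import org.bdgenomics.adam.rdd.ADAMContext._

    object RefineAlignments {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("refine-sketch"))

        // Reads produced by the (Cannoli-parallelized) aligner, stored in
        // ADAM's Parquet representation of the alignment schema.
        val aligned = sc.loadAlignments("sample.aligned.adam")

        // GATK-style refinement: flag duplicate reads, then locally realign
        // reads around candidate INDELs. Base quality score recalibration
        // would be inserted as a third step here.
        val refined = aligned
          .markDuplicates()
          .realignIndels()

        // Persist the refined reads for the downstream genotyper.
        refined.saveAsParquet("sample.refined.adam")
        sc.stop()
      }
    }

Each refinement step is a transformation on the same alignment dataset, so the stages compose into a single Spark job rather than a sequence of tools communicating through intermediate files.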