An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics
Bioinformatics researchers are now confronted with the analysis of ultra-large-scale data sets, a problem that will only grow in coming years. Recent developments in open source software, namely the Hadoop project and its associated software, provide a foundation for scaling to petabyte-scale data warehouses on commodity Linux clusters, providing fault-tolerant, parallelized analysis of such data using a programming style named MapReduce.
An overview is given of the current usage of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects within the bioinformatics community. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employs Hadoop is described. The focus is on next-generation sequencing, the leading application area to date.
Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis, both on commodity Linux clusters and in the cloud via data upload to vendors who have implemented Hadoop/HBase, and to the effectiveness and ease of use of the MapReduce method for parallelizing many data analysis algorithms.
Background on Hadoop/MapReduce/HBase
Due to new computational challenges (e.g., in next-generation sequencing [1,2] ), high performance computing (HPC) has become increasingly important in bioinformatics data analysis. HPC typically involves distribution of work across a cluster of machines which access a shared file system, hosted on a storage area network. Work parallelization has been implemented via such programming APIs as the Message Passing Interface (MPI) and, more recently, Hadoop’s MapReduce API. Another computer architecture/service model now being explored is cloud computing [3-5]. In brief, cloud computing equals HPC + web interface + ability to rapidly scale up and down for on-demand use. The server side is implemented in data centers operating on clusters, with remote clients uploading possibly massive data sets for analysis in the Hadoop framework or other parallelized environments operating in the data center.
Hadoop [6-9] is a software framework that can be installed on a commodity Linux cluster to permit large-scale distributed data analysis. No hardware modification is needed other than possible changes to meet the minimum recommended RAM, disk space, etc., per node (e.g., see Cloudera's guidelines [10]). The initial version of Hadoop was created in 2004 by Doug Cutting (and named after his son's stuffed elephant). Hadoop became a top-level Apache Software Foundation project in January 2008. There have been many contributors, both academic and commercial (Yahoo being the largest such contributor), and Hadoop has a broad and rapidly growing user community [11,12].
Components – Hadoop provides the robust, fault-tolerant Hadoop Distributed File System (HDFS), inspired by Google's file system [13], as well as a Java-based API that allows parallel processing across the nodes of the cluster using the MapReduce paradigm. Use of code written in other languages, such as Python and C, is possible through Hadoop Streaming, a utility that allows users to create and run jobs with any executables as the mapper and/or the reducer. Also, Hadoop comes with Job and Task Trackers that keep track of the programs' execution across the nodes of the cluster.
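For illustration, a Streaming mapper and reducer can be sketched in Python as plain line-in/line-out filters. This is a teaching sketch only: the function names are invented, and a local sort stands in for Hadoop's shuffle; under Hadoop Streaming each function would be a separate executable reading stdin and writing stdout.

```python
def streaming_mapper(lines):
    """Map step: emit one 'key<TAB>value' line per word (the tab-separated
    text convention Hadoop Streaming expects on stdout)."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def streaming_reducer(sorted_lines):
    """Reduce step: consume lines already sorted by key (Hadoop's shuffle
    guarantees this ordering) and emit one total per key."""
    current, total = None, 0
    for line in sorted_lines:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = key, 0
        total += int(value)
    if current is not None:
        yield f"{current}\t{total}"

# Simulate the pipeline locally: map, sort (standing in for the shuffle), reduce.
output = list(streaming_reducer(sorted(streaming_mapper(["b a b"]))))
# output == ["a\t1", "b\t2"]
```

Because the reducer sees only a sorted text stream, any executable obeying this line protocol, in any language, can serve as the mapper or reducer.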
Data locality – Hadoop tries to automatically colocate the data with the computing node. That is, Hadoop schedules Map tasks close to the data on which they will work, with "close" meaning the same node or, at least, the same rack. This is a principal factor in Hadoop's performance. In April 2008 a Hadoop program, running on a 910-node cluster, broke a world record, sorting a terabyte of data in less than 3.5 minutes. Speed improvements have continued as Hadoop has matured [14].
Fault-tolerant, shared-nothing architecture – tasks must have no dependence on each other, with the exception of mappers feeding into reducers under Hadoop control. Hadoop can detect task failure and restart programs on other healthy nodes; that is, node failures are handled automatically, with tasks restarted as needed. A single point of failure currently remains at the single name node for the HDFS file system.
Reliability – data is replicated across multiple nodes; RAID storage is not needed.
Programming support – unlike, for example, parallel programming using MPI, data flow is implicit and handled automatically; it does not need to be coded. For tasks fitting the MapReduce paradigm, Hadoop simplifies the development of large-scale, fault-tolerant, distributed applications on a cluster of (possibly heterogeneous) commodity machines.
MapReduce paradigm – Hadoop employs a Map/Reduce execution engine [15-17] to implement its fault-tolerant distributed computing system over the large data sets stored in the cluster's distributed file system. The MapReduce method was popularized by its use at Google, was recently patented by Google for use on clusters and licensed to Apache [18], and is now being further developed by an extensive community of researchers [19].
There are separate Map and Reduce steps, each done in parallel, each operating on sets of key-value pairs. Program execution is thus divided into a Map and a Reduce stage, separated by data transfer between nodes in the cluster, giving the workflow: Input → Map() → Copy()/Sort() → Reduce() → Output. In the first stage, a node executes a Map function on a section of the input data. Map output is a set of records in the form of key-value pairs, stored on that node. The records for any given key, possibly spread across many nodes, are aggregated at the node running the Reducer for that key. This involves data transfer between machines. The second, Reduce stage is blocked from progressing until all the data from the Map stage has been transferred to the appropriate machine; it then produces another set of key-value pairs as final output. This is a simple programming model, restricted to the use of key-value pairs, but a surprising number of tasks and algorithms fit into this framework. Also, while Hadoop is currently used primarily for batch analysis of very large data sets, nothing precludes its use for computationally intensive analyses, e.g., the Mahout machine learning project described below.
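The workflow above can be modeled in a few lines of Python. This toy `map_reduce` helper is purely illustrative (single process, no fault tolerance, invented record format), but it shows how a task is expressed as a Map function and a Reduce function over key-value pairs.

```python
from collections import defaultdict

def map_reduce(inputs, map_fn, reduce_fn):
    """Toy single-process model of Input -> Map() -> Copy()/Sort() ->
    Reduce() -> Output. No distribution or fault tolerance; for
    illustration only."""
    # Map stage: each input record yields zero or more key-value pairs.
    groups = defaultdict(list)
    for record in inputs:
        for key, value in map_fn(record):
            groups[key].append(value)      # Copy/Sort stage: group by key
    # Reduce stage: one call per key, over all values collected for that key.
    return {key: reduce_fn(key, values) for key, values in sorted(groups.items())}

# Example task (hypothetical records): count aligned reads per chromosome.
reads = [("chr1", "ACGT"), ("chr2", "TTAG"), ("chr1", "GGCA")]
result = map_reduce(
    reads,
    map_fn=lambda read: [(read[0], 1)],            # emit (chromosome, 1)
    reduce_fn=lambda key, values: sum(values),     # sum counts per chromosome
)
# result == {"chr1": 2, "chr2": 1}
```

Hadoop performs the same three stages, but with the Map and Reduce calls spread across the cluster and the grouping done by a distributed sort.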
HDFS file system – There are some drawbacks to HDFS use. HDFS handles continuous updates (write many) less well than a traditional relational database management system. Also, HDFS cannot be directly mounted onto the existing operating system. Hence getting data into and out of the HDFS file system can be awkward.
In addition to Hadoop itself, there are multiple open source projects built on top of Hadoop. Major projects are described below.
Hive [20] is a data warehouse framework built on top of Hadoop, developed at Facebook, used for ad hoc querying with an SQL-like query language and also for more complex analysis. Users define tables and columns; data is loaded into and retrieved through these tables. Hive QL, the SQL-like query language, is used to create summaries, reports, and analyses. Hive queries launch MapReduce jobs. Hive is designed for batch processing, not online transaction processing; unlike HBase (see below), Hive does not offer real-time queries.
Pig [21] is a high-level data-flow language (Pig Latin) and execution framework whose compiler turns (relatively short) Pig Latin programs into sequences of MapReduce programs for execution within Hadoop. Pig is designed for batch processing of data. Pig is a Java client-side application that users install locally; nothing is altered on the Hadoop cluster itself. Grunt is the Pig interactive shell.
Mahout and other expansions to Hadoop programming capabilities
Hadoop is not just for large-scale data processing. Mahout [22] is an Apache project for building scalable machine learning libraries, with most algorithms built on top of Hadoop. Current algorithm focus areas of Mahout are clustering, classification, data mining (frequent itemsets), and evolutionary programming. The Mahout clustering and classifier algorithms have direct relevance in bioinformatics, for example, for clustering of large gene expression data sets, and as classifiers for biomarker identification. In regard to clustering, Hadoop MapReduce-based clustering work has also been explored by, among others, M. Ngazimbi (2009 M.S. thesis [23]) and by K. Heafield at Google (Hadoop design and k-means clustering [24]). The many bioinformaticians who use R may be interested in the "R and Hadoop Integrated Processing Environment" (RHIPE), S. Guha's Java package [25] that integrates the R environment with Hadoop so that it is possible to code MapReduce algorithms in R. (Also note the IBM R-based Ricardo project [26].) For the growing community of Python users in bioinformatics, Pydoop [27], a Python MapReduce and HDFS API for Hadoop that allows complete MapReduce applications to be written in Python, is available. These are samplings from the large number of developers working on additional libraries for Hadoop. One last example in this limited space: the new programming language Clojure [28], a predominantly functional dialect of Lisp that targets the Java Virtual Machine, has been given a library (author S. Sierra [29]) to aid in writing Hadoop jobs.
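As a sketch of how a clustering algorithm fits the paradigm, one k-means pass can be phrased as a Map step (assign each point to its nearest centroid) and a Reduce step (average the points assigned to each centroid). The toy one-dimensional version below is illustrative only; it is not taken from Mahout or the cited theses.

```python
from collections import defaultdict

def kmeans_iteration(points, centroids):
    """One MapReduce-style k-means pass. Map: emit (nearest centroid
    index, point) pairs. Reduce: average the points per centroid index
    to produce the updated centroids."""
    # Map + shuffle: group points by index of the nearest centroid.
    grouped = defaultdict(list)
    for p in points:
        idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
        grouped[idx].append(p)
    # Reduce: new centroid = mean of its assigned points.
    return {i: sum(ps) / len(ps) for i, ps in grouped.items()}

# 1-D toy data with two obvious clusters (around 1 and around 10).
new_centroids = kmeans_iteration([0.0, 1.0, 2.0, 9.0, 10.0, 11.0], [0.0, 12.0])
# new_centroids == {0: 1.0, 1: 10.0}
```

Iterating this pass until the centroids stop moving gives the full algorithm; on Hadoop, each iteration would be one MapReduce job over the (possibly enormous) point set.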
Cascading [30] is a project providing a programming API for defining and executing fault-tolerant data processing workflows on a Hadoop cluster. Cascading is a thin, open source Java library that sits on top of the Hadoop MapReduce layer. It provides a query processing API that allows programmers to operate at a higher level than MapReduce, to more quickly assemble complex distributed processes, and to schedule them based on dependencies.
Lastly, an important Apache Hadoop-based project is HBase [31], which is modeled on Google's BigTable database [32]. HBase adds a distributed, fault-tolerant, scalable database, built on top of the HDFS file system, with random real-time read/write access to data. Each HBase table is stored as a multidimensional sparse map, with rows and columns, each cell having a time stamp. A cell value is uniquely identified by (Table, Row, Column-Family:Column, Timestamp) → Value. HBase has its own Java client API, and tables in HBase can be used both as an input source and as an output target for MapReduce jobs through TableInput/TableOutputFormat. There is no HBase single point of failure. HBase uses Zookeeper [33], another Hadoop subproject, for management of partial failures.
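The cell-addressing scheme can be illustrated with a toy in-memory model. This is a teaching sketch, not the HBase client API; the class, method names, and sample data are invented.

```python
class ToyHBaseTable:
    """Sparse map keyed by (row, 'family:qualifier', timestamp), mimicking
    HBase's cell addressing: (Row, Column-Family:Column, Timestamp) -> Value."""

    def __init__(self):
        self.cells = {}   # (row, column, timestamp) -> value

    def put(self, row, column, value, timestamp):
        self.cells[(row, column, timestamp)] = value

    def get(self, row, column):
        """Return the newest version of a cell, as HBase does by default."""
        versions = [(ts, v) for (r, c, ts), v in self.cells.items()
                    if r == row and c == column]
        return max(versions)[1] if versions else None

table = ToyHBaseTable()
table.put("gene42", "expr:liver", 0.8, timestamp=1)
table.put("gene42", "expr:liver", 0.9, timestamp=2)   # a newer version
# table.get("gene42", "expr:liver") returns 0.9; absent cells return None
```

Because the map is sparse, a row stores only the cells it actually has, which is what makes very wide, mostly-empty tables cheap in HBase.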
All table accesses are by the primary key. Secondary indices are possible through additional index tables; programmers need to denormalize and replicate. There is no SQL query language in base HBase. However, there is a Hive/HBase integration project [34,35] that allows Hive QL statements to access HBase tables for both reading and inserting. There is also the independent HBql project (author P. Ambrose [36]), which adds a dialect of SQL and JDBC bindings for HBase.
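The index-table pattern can be sketched with plain dictionaries standing in for HBase tables (the gene records are invented for illustration): a query on a non-key column becomes a lookup in the index table followed by lookups by primary key.

```python
# Primary "table": row key (gene id) -> record. In HBase, reads go
# through this key only.
genes = {
    "g1": {"symbol": "TP53", "chrom": "chr17"},
    "g2": {"symbol": "BRCA1", "chrom": "chr17"},
    "g3": {"symbol": "MYC", "chrom": "chr8"},
}

# Hand-maintained secondary index "table": chromosome -> list of row keys.
# The application, not the database, must keep this in sync with the primary.
chrom_index = {}
for gene_id, record in genes.items():
    chrom_index.setdefault(record["chrom"], []).append(gene_id)

# A query on the non-key column becomes: index lookup + primary-key lookups.
hits = [genes[g]["symbol"] for g in chrom_index.get("chr17", [])]
# hits == ["TP53", "BRCA1"]
```

This is the denormalize-and-replicate trade-off the text describes: fast keyed reads at the cost of the application maintaining redundant copies of the indexed values.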
A table is made up of regions. Each region is defined by a startKey and endKey, may live on a different node, and is made up of several HDFS files and blocks, each of which is replicated by Hadoop. Columns can be added on the fly to tables, with only the parent column families being fixed in a schema. Each cell is tagged by column family and column name, so programs can always identify what type of data item a given cell contains. In addition to being able to scale to petabyte-size data sets, we may note the ease of integrating disparate data sources into a small number of HBase tables for building a data workspace, with different columns possibly defined (on the fly) for different rows in the same table. Such a facility is also important. (See the biological data integration discussion below.)
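Region routing by row key can be sketched as a binary search over the sorted list of region startKeys. This is a simplified model (HBase's actual lookup goes through metadata tables and client-side caching), but it shows how a [startKey, endKey) partitioning maps any row key to exactly one region.

```python
import bisect

def find_region(start_keys, row_key):
    """Return the index of the region whose [startKey, next startKey)
    range contains row_key; start_keys is the sorted list of region
    startKeys, with "" for the first region."""
    i = bisect.bisect_right(start_keys, row_key) - 1
    return max(i, 0)

# Three regions starting at "", "m", "t" (row keys sort as byte strings).
starts = ["", "m", "t"]
# find_region(starts, "gene0001") == 0   ("" <= key < "m")
# find_region(starts, "probe77")  == 1   ("m" <= key < "t")
# find_region(starts, "zzz")      == 2   ("t" <= key)
```

Because row keys are sorted, a scan over a key range touches only the few regions covering that range rather than the whole table.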
In addition to HBase, other scalable random-access databases are now available. HadoopDB [37,38] is a hybrid of MapReduce and a standard relational database system. HadoopDB uses PostgreSQL for the database layer (one PostgreSQL instance per data chunk per node), Hadoop for the communication layer, and an extended version of Hive for the translation layer. There are also non-Hadoop scalable alternatives based on the Google BigTable concept, such as Hypertable [39] and Cassandra [40]. And there are other so-called NoSQL scalable databases of possible interest: Project Voldemort, Dynamo (used for Amazon's Simple Storage Service (S3)), and Tokyo Tyrant, among others. However, these non-Hadoop and non-BigTable database systems lie outside of our discussion here.
Use of Hadoop and HBase in Bioinformatics
Use in next-generation sequencing
The Cloudburst software [41] maps next-generation short-read sequencing data to a reference genome for SNP discovery and genotyping. Cloudburst was created by Michael C. Schatz at the University of Maryland (UMD). Schatz's Cloudburst paper [42], published in May 2009, put Hadoop "on the map" in bioinformatics. Following the release of Cloudburst, Schatz and colleagues at UMD and at Johns Hopkins University (e.g., B. Langmead) have developed a suite of algorithms that employ Hadoop for the analysis of next-generation sequencing data:
1) Crossbow [43,44] uses Hadoop for its calculations for whole genome resequencing analysis and SNP genotyping from short reads.
2) Contrail [45] uses Hadoop for de novo assembly from short sequencing reads (without using a reference genome), scaling up de Bruijn graph construction.
3) Myrna [46,47] uses Bowtie [48,49], another UMD tool for ultrafast short-read alignment, and R/Bioconductor [50] for calculating differential gene expression from large RNA-seq data sets. When running on a cluster, Myrna uses Hadoop. Myrna can also be run in the cloud using Amazon Elastic MapReduce [51].
Cloud computing results – Amazon Elastic Compute Cloud (EC2) [52] and Amazon Elastic MapReduce [51] are Web services that provide resizable compute capacity in the cloud. Among other batch processing software, they provide Hadoop [53]. Myrna was designed to function in Elastic MapReduce as well as on a local Hadoop-based cluster. Obviously, Langmead et al. believe that cloud computing is a worthwhile computing framework, and they report their results using it in [47]. Also, Schatz has tested Crossbow on EC2 and believes that running on EC2 can be quite cost-effective. (Note: non-commercial services such as the IBM/Google Cloud Computing Initiative [54] are also available to researchers.) Also, Indiana University (IU) researchers have performed comparisons [55] between MPI, Dryad (Microsoft [56,57]), Azure (Microsoft), and Hadoop MapReduce, measuring relative performance using three bioinformatics applications. This work was summarized by Judy Qiu of IU at BOSC 2010 [58]. The flexibility of clouds and MapReduce comes off quite well in the IU testing, suggesting "they will become preferred approaches".
Use in other bioinformatics domains
In addition to next-generation sequencing, Hadoop and HBase have been applied to other areas in bioinformatics. M. Gaggero and colleagues in the Distributed Computing Group at the Center for Advanced Studies, Research and Development in Sardinia have reported on implementing BLAST and Gene Set Enrichment Analysis (GSEA) in Hadoop [59]. BLAST was implemented using a Python wrapper for the NCBI C++ Toolkit and Hadoop Streaming to build an executable mapper for BLAST. GSEA was implemented by rewriting its functions in Python and using them with Hadoop Streaming for the MapReduce version. They are now working on the development of Biodoop [60], a suite of parallel bioinformatics applications based upon Hadoop, consisting of three qualitatively different algorithms: BLAST, GSEA, and GRAMMAR. They deem their results "very promising", with MapReduce being a "versatile framework".
In other work, Andrea Matsunaga and colleagues at the University of Florida have created CloudBLAST [61], a parallelized version of the NCBI BLAST2 algorithm (BLAST 2.2.18) using Hadoop. Their parallelization approach segmented the input sequences and ran multiple instances of the unmodified NCBI BLAST2 on each segment, using the Hadoop Streaming utility. Results across multiple input sets were compared against the publicly available version of mpiBLAST, a leading parallel version of BLAST. CloudBLAST exhibited better performance while also having advantages in simpler development and sustainability. Matsunaga et al. conclude that for applications that can fit into the MapReduce paradigm, use of Hadoop brings significant advantages in terms of management of failures, data, and jobs.
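The segmentation approach can be sketched as follows. This is a hypothetical round-robin splitter, not CloudBLAST's actual code; in the real system each segment would then be handed to an unmodified BLAST instance through Hadoop Streaming.

```python
def segment_queries(records, n_segments):
    """Round-robin split of query records into n_segments lists, a sketch
    of the input-partitioning step (the real CloudBLAST splitting logic
    may differ)."""
    segments = [[] for _ in range(n_segments)]
    for i, record in enumerate(records):
        segments[i % n_segments].append(record)   # spread load evenly
    return segments

# Invented FASTA query records, split across two BLAST workers.
queries = [">q1\nACGT", ">q2\nTTGA", ">q3\nGGCC", ">q4\nATAT", ">q5\nCGCG"]
parts = segment_queries(queries, 2)
# parts[0] holds q1, q3, q5; parts[1] holds q2, q4
```

Because each query is scored independently against the database, this embarrassingly parallel split requires no change to BLAST itself, which is exactly what makes the Streaming-wrapper approach attractive.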
In other work, Hadoop has been used for multiple sequence alignment [62]. In regard to HBase use, Brian O'Connor of the University of North Carolina at Chapel Hill recently described the use of HBase as a scalable backend for the SeqWare Query Engine [63] at the BOSC 2010 meeting. Recent work on the design of the Genome Analysis Toolkit at the Broad Institute has created a framework that supports MapReduce programming in bioinformatics [64,65]. Hadoop has also emerged as an enabling technology for large-scale graph processing, which is directly relevant to topological analysis of biological networks; Lin and Schatz have recently reported on improving the capabilities of Hadoop-based programs in this area [66].
As to future work not yet reported: starting in August 2010, A. Tiwari is maintaining a list of Hadoop/MapReduce applications in bioinformatics on his blog site [67].
Use in scientific cloud computing, biological data integration and knowledgebase construction
The U.S. Department of Energy (DOE) is exploring scientific cloud computing in the Magellan project, a joint research effort of the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and of the Leadership Computing Facility at Argonne National Laboratory (ANL). Hadoop and HBase have been installed on a cluster at NERSC (40 nodes reserved for Hadoop, soon to double), and studies have been run using Hadoop in Streaming mode for BLAST computations. NERSC is evaluating the use of solid-state (flash) storage on the Hadoop nodes [68]. Also, the DOE Joint Genome Institute has performed contig extension work using Hadoop on the NERSC cluster. The Hadoop cluster at ANL, now undergoing testing, will be available to researchers in late 2010 [69,70]. Users interested in using clouds for their research may fill out the Magellan cloud computing statement of interest form [71].
At the Environmental Molecular Sciences Laboratory, a national user facility located at DOE's Pacific Northwest National Laboratory (PNNL), we wish to develop a scientific data management system that will scale into the petabyte range, that will accurately and reliably store data acquired from our various instruments, and that will store the output of analysis software and relevant metadata. As a pilot project for such an effort, work started in August 2010 on a prototype data repository, i.e., a workspace for the integration of high-throughput transcriptomics and proteomics data. This database will have the capacity to store very large amounts of data from mass spectrometry-based proteomics experiments as well as from next-generation high-throughput sequencing platforms. The author (RCT) is building the pilot database on a 25-node cluster using Hadoop and HBase as the framework. In addition to such data warehousing and data integration work, we may envisage using Hadoop and HBase for the design of large knowledgebases operating on a cluster across the distributed file system. The U.S. Dept. of Energy is funding work on the construction of large biological knowledgebases [72], and Kandinsky, a 68-node, 1088-core Linux cluster (64 GB RAM, 8 TB disk per node) running Hadoop (Cloudera distribution, under CentOS 5) and HBase, was set up in 2010 at Oak Ridge National Laboratory as an exploratory environment [73,74]. Cloudburst has been installed as a sample Hadoop-based application, and the cluster is open to use by researchers wishing to conduct preliminary work towards knowledgebase construction and towards supporting grant proposals for such work.
Hadoop and its associated open source projects have a diverse and growing community in bioinformatics of both users and developers, as can be seen from the large number of projects described above. A concluding point, drawn from preliminary work for the Hadoop/HBase-based PNNL project, follows Dean and Ghemawat: for much bioinformatics work, not only is the scalability permitted by Hadoop and HBase important, but so also is the ease of integrating and analyzing various large, disparate data sources into one data warehouse under Hadoop, in relatively few HBase tables.
AWS: Amazon Web Services; API: application programming interface; BLAST: Basic Local Alignment Search Tool; BOSC 2010: Bioinformatics Open Source Conference, July 2010; DOE: U.S. Dept. of Energy; EC2: Elastic Compute Cloud; GSEA: Gene Set Enrichment Analysis; GB: gigabytes; HPC: High performance computing; HDFS: Hadoop Distributed File System; IU: Indiana University; JDBC: Java DataBase Connectivity; MPI: Message-Passing Interface standard for programming parallel computers; NCBI: National Center for Biotechnology Information; PNNL: Pacific Northwest National Laboratory, U.S. Dept. of Energy; S3: Simple Storage Service
The author has no competing interests.
RCT was sole author.
- Editorial. Gathering clouds and a sequencing storm. Nature Biotechnology. 2010;28(1):1. doi: 10.1038/nbt0110-1.
- Baker M. Next-generation sequencing: adjusting to data overload. Nature Methods. 2010;7(7):495–499. doi: 10.1038/nmeth0710-495.
- Sansom C. Up in a cloud? Nature Biotechnology. 2010;28(1):13–15. doi: 10.1038/nbt0110-13.
- Stein L. The case for cloud computing in genome informatics. Genome Biology. 2010;11:207. doi: 10.1186/gb-2010-11-5-207.
- Schatz MC, Langmead B, Salzberg SL. Cloud computing and the DNA data race. Nature Biotechnology. 2010;28:691–693. doi: 10.1038/nbt0710-691.
- Hadoop - Apache Software Foundation project home page. http://hadoop.apache.org/
- Lam C, Warren J. Hadoop in Action. Manning Publications; 2010.
- Venner J. Pro Hadoop. New York: A Press; 2009.
- White T. Hadoop: The Definitive Guide. Sebastopol: O'Reilly Media; 2009.
- Cloudera recommendations on Hadoop/HBase cluster capacity planning. http://www.cloudera.com/blog/2010/08/hadoophbase-capacity-planning/
- Hadoop user listing. http://wiki.apache.org/hadoop/PoweredBy
- Henschen D. Emerging Options: MapReduce, Hadoop: Young, But Impressive. Information Week. 2010;24
- Ghemawat S, Gobioff H, Leung S-T. 19th ACM Symposium on Operating Systems Principles. Lake George, NY: ACM Press; 2003. The Google file system.
- Hadoop Sorts a Petabyte in 16.25 Hours and a Terabyte in 62 Seconds (using Jim Gray's sort benchmark, on Yahoo's Hammer cluster of ~3800 nodes) http://developer.yahoo.com/blogs/hadoop/posts/2008/07/apache_hadoop_wins_terabyte_sort_benchmark/
- Dean J, Ghemawat S. Sixth Symposium on Operating System Design and Implementation: 2004; San Francisco, CA. Usenix Association; 2004. MapReduce: Simplified data processing on large clusters.
- Dean J, Ghemawat S. MapReduce: A Flexible Data Processing Tool. Communications of the ACM. 2010;53(1):72–77. doi: 10.1145/1629175.1629198.
- Can Your Programming Language Do This? (MapReduce concept explained in easy-to-understand way) http://www.joelonsoftware.com/items/2006/08/01.html
- Google blesses Hadoop with MapReduce patent license. http://www.theregister.co.uk/2010/04/27/google_licenses_mapreduce_patent_to_hadoop/
- The First International Workshop on MapReduce and its Applications (MAPREDUCE'10) - June 22nd, 2010 HPDC'2010, Chicago, IL, USA. http://graal.ens-lyon.fr/mapreduce/
- Hive - Apache Software Foundation project home page. http://hadoop.apache.org/hive/
- Pig - Apache Software Foundation project home page. http://pig.apache.org/
- Mahout - Apache Software Foundation project home page. http://lucene.apache.org/mahout
- Ngazimbi M. Data Clustering with Hadoop (masters thesis) Boise State University; 2009.
- Heafield K. Hadoop Design and k-means clustering. Google presentation; 2008.
- RHIPE - R and Hadoop Integrated Processing Environment project home page. http://www.stat.purdue.edu/~sguha/rhipe/
- Das S, Sismanis Y, Beyer KS, Gemulla R, Haas PJ, McPherson J. Ricardo: integrating R and Hadoop. 2010 International Conference on Management of Data (SIGMOD '10): 2010. 2010. pp. 987–998.
- Pydoop project home page. http://pydoop.sourceforge.net
- Clojure project home page. http://clojure.org
- Clojure-Hadoop library project home page. [ http://stuartsierra.com/software/clojure-hadoop] and [ http://github.com/stuartsierra/clojure-hadoop]
- Cascading - project home page. http://www.cascading.org
- HBase - Apache Software Foundation project home page. http://hadoop.apache.org/hbase/
- Chang F, Dean J, Ghemawat S, Hsieh WC, Wallach DA, Burrows M, Chandra T, Fikes A, Gruber RE. Seventh Symposium on Operating System Design and Implementation. Seattle, WA: Usenix Association; 2006. Bigtable: A distributed storage system for structured data.
- Zookeeper - Apache Software Foundation project home page. http://hadoop.apache.org/zookeeper/
- Hive HBase Integration project home page. http://wiki.apache.org/hadoop/Hive/HBaseIntegration
- Integrating Hive and HBase - Cloudera Developer Center. http://www.cloudera.com/blog/2010/06/integrating-hive-and-hbase/
- HBql project home page. http://www.hbql.com
- HadoopDB - project home page. http://db.cs.yale.edu/hadoopdb/hadoopdb.html
- Abouzeid A, Bajda-Pawlikowski K, Abadi D, Silberschatz A, Rasin A. VLDB '09 (August 24-28, 2009) Lyon, France: VLDB Endowment; 2009. HadoopDB: An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads.
- Hypertable - project home page. http://hypertable.org
- Cassandra - Apache Software Foundation project home page. http://cassandra.apache.org
- Cloudburst project home pages. [ http://www.cbcb.umd.edu/software/] and [ http://sourceforge.net/apps/mediawiki/cloudburst-bio/index.php?title=CloudBurst]
- Schatz M. Cloudburst: highly sensitive read mapping with MapReduce. Bioinformatics. 2009;25(11):1363–1369. doi: 10.1093/bioinformatics/btp236. (Excellent starting point for not just details of Cloudburst, but also for short coherent descriptions of such mapping algorithms in general and of Hadoop.)
- Crossbow project home page. http://bowtie-bio.sourceforge.net/crossbow/index.shtml
- Langmead B, Schatz MC, Lin J, Pop M, Salzberg SL. Searching for SNPs with cloud computing. Genome Biology. 2009;10(11):R134. doi: 10.1186/gb-2009-10-11-r134.
- Contrail project home page (Contrail: Assembly of Large Genomes using Cloud Computing) http://sourceforge.net/apps/mediawiki/contrail-bio/index.php?title=Contrail
- Myrna project home pages. [ http://bowtie-bio.sourceforge.net/myrna/index.shtml] and [ http://sourceforge.net/projects/bowtie-bio/files/myrna]
- Langmead B, Hansen KD, Leek JT. Cloud-scale RNA-sequencing differential expression analysis with Myrna. Genome Biology. 2010;11:R83. doi: 10.1186/gb-2010-11-8-r83.
- Langmead B, Trapnell C, Pop M, Salzberg SL. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biology. 2009;10(3):R25. doi: 10.1186/gb-2009-10-3-r25.
- Bowtie project home page. http://bowtie-bio.sourceforge.net/index.shtml
- Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, Ellis B, Gautier L, Ge Y, Iacus S. et al. Bioconductor: open software development for computational biology and bioinformatics. Genome Biology. 2004;5:R80. doi: 10.1186/gb-2004-5-10-r80.
- Amazon Elastic MapReduce. http://aws.amazon.com/elasticmapreduce/
- Amazon Elastic Compute Cloud (Amazon EC2) http://aws.amazon.com/ec2/
- Hadoop for Bioinformatics (presentation by D. Singh of Amazon Web Services at Hadoop World NY meeting, Oct 2009) http://vimeo.com/7351342
- Google and IBM look to the next generation of programmers. http://www.ibm.com/ibm/ideasfromibm/us/google/index.shtml
- Qiu X, Ekanayake J, Beason S, Gunarathne T, Fox G, Barga R, Gannon D. 2nd Workshop on Many-Task Computing on Grids and Supercomputers 2009. Portland, Oregon; 2009. Cloud technologies for bioinformatics applications.
- Microsoft Dryad infrastructure project for running data-parallel programs project home page. http://research.microsoft.com/en-us/projects/dryad/
- Isard M, Budiu M, Yu Y, Birrell A. Dryad: distributed data-parallel programs from sequential building blocks. 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems 2007. 2007. pp. 59–72.
- Qiu J, Ekanayake J, Gunarathne T, Choi JY, Bae S-H, Li H, Zhang B, Wu T-l, Ryan Y, Ekanayake S, Hughes A, Fox G. Hybrid Cloud and Cluster Computing Paradigms for Life Science Applications. BMC Bioinformatics. 2010;11(Suppl 12):S5.
- Gaggero M, Leo S, Manca S, Santoni F, Schiaratura O, Zanetti G. Parallelizing bioinformatics applications with MapReduce. Cloud Computing and Its Applications. 2008.
- Leo S, Santoni F, Zanetti G. Biodoop: Bioinformatics on Hadoop. 2009 International Conference on Parallel Processing Workshops. 2009.
- Matsunaga A, Tsugawa M, Fortes J. CloudBLAST: Combining MapReduce and Virtualization on Distributed Resources for Bioinformatics Applications. Fourth IEEE International Conference on eScience: 2008. 2008.
- Sadasivam G, Baktavatchalam G. A novel approach to multiple sequence alignment using hadoop data grids. 2010 Workshop on Massive Data Analytics on the Cloud: 2010. 2010. pp. 1–7.
- O'Connor BD, Merriman B, Nelson SF. SeqWare Query Engine: Storing and Searching Sequence Data in the Cloud. BMC Bioinformatics. 2010;11(Suppl 12):S2.
- Genome Analysis Toolkit project home page (Broad Institute) http://www.broadinstitute.org/gsa/wiki/index.php/The_Genome_Analysis_Toolkit
- McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, Garimella K, Altshuler D, Gabriel S, Daly M. et al. The Genome Analysis Toolkit: A MapReduce framework for analyzing next-generation DNA sequencing data. Genome Research. 2010. Epub ahead of print.
- Lin J, Schatz M. Design patterns for efficient graph algorithms in MapReduce. Eighth Workshop on Mining and Learning with Graphs (MLG '10): 2010. 2010. pp. 78–85.
- MapReduce and Hadoop Algorithms in Bioinformatics Papers (Abhishek Tiwari - blog site) http://www.abhishek-tiwari.com/2010/08/mapreduce-and-hadoop-algorithms-in-bioinformatics-papers.html
- Canon S. National Energy Research Scientific Computing Center. Lawrence Berkeley National Laboratory, pers. comm.; 2010.
- Coghlan S. Leadership Computing Facility. Argonne National Laboratory pers.comm.; 2010.
- Magellan project home page at Argonne National Laboratory. http://magellan.alcf.anl.gov/
- U.S. Dept of Energy Magellan Project user statement of interest form. http://www.nersc.gov/nusers/systems/magellan/
- DOE Systems Biology Knowledgebase for a New Era in Biology. [ http://genomicscience.energy.gov/compbio/] and [ http://www.systemsbiologyknowledgebase.org/]
- Cottingham B. Computational Biology & Bioinformatics. Oak Ridge National Laboratory pers. comm.; 2010.
- Kandinsky, the Systems Biology Knowledgebase computer cluster at Oak Ridge National Laboratory home page. [ http://sbkbase.wordpress.com/] and [ http://sbkbase.wordpress.com/about/]
Hadoop Thesis gives you the best opening to begin your intellectual voyage with a big goal and high motivation. These days, most scholars need sound guidance to prepare a thesis on their own. With this in view, our association of professionals has introduced the Hadoop Projects service for scholars worldwide. Our hundreds of thesis-writing experts and technocrats are highly talented on both the theoretical and technical sides, with the vision of delivering a well-structured thesis of the best quality, completed on time. Are you eager to use our Hadoop Thesis service? Pick up your mobile and dial our contact number right away.
Hadoop Thesis is our renowned service, created to give our best to students and research academicians so that they achieve victory in their scientific journey. We provide guidance and support in every part of thesis preparation, including the research proposal/abstract, introduction, literature review, problem statement, research methodology, derivation and writing of algorithms, mathematical equations, and pseudocode, complete implementation support, and experimental results. If you are ready to dive into your Hadoop Thesis, this article walks you step by step through the guidance offered by our top thesis writers.
Hadoop Thesis Format:
- Table of Contents
- List of Tables
- List of Figures
- Background Overview
- Motivation of the Research
- Aim of the Research
- Thesis Organization
- Research Methodologies
- Algorithm Description
- Pseudocode description
- Simulation setup
- Performance Analysis
- Comparative Study
- Future research
- Appendix Bioset
Main Hadoop Thesis Topics Covered:
- Big Data Introduction and Data Analytics
- Hadoop Fundamentals
- HDFS, Hive, MapReduce
- Sensor Datasets (e.g., Weather Datasets)
- Wordcount Problem
- Social Media Datasets, e.g., Twitter and YouTube Data Analysis
- HBase, Pig, and Sqoop
- Spark and Scala
- Apache Spark and Oozie
- Installation of Hadoop, Hive, and Sqoop
- Functioning of Complex MapReduce Jobs
- Data Ingestion into Hadoop
- HDFS Architecture and MapReduce Framework
- Understanding Hadoop Design Patterns
- Job Scheduling with Oozie
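The wordcount problem listed above is the canonical first MapReduce exercise. Below is a minimal pure-Python sketch of the map/shuffle/reduce flow it relies on; the function names are illustrative only and are not part of Hadoop's actual Java API.

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the input line.
    for word in line.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle phase: group all emitted values by key, as the Hadoop
    # framework does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reducer(key, values):
    # Reduce phase: sum the counts collected for each word.
    return (key, sum(values))

def wordcount(lines):
    mapped = (pair for line in lines for pair in mapper(line))
    return dict(reducer(k, vs) for k, vs in shuffle(mapped))

# Example: wordcount(["hadoop mapreduce", "hadoop hdfs"])
# yields a dict mapping each word to its total count.
```

In a real Hadoop job the mapper and reducer would run as distributed tasks over HDFS blocks; this sketch only shows the dataflow contract between them.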
Major Algorithms in Hadoop:
- User and item based recommendations
- Fuzzy-C-Means and K-Means Clustering
- Collaborative filtering algorithm
- Mean Shift Clustering
- Latent Dirichlet Allocation
- Dirichlet Process clustering
- Singular Value Decomposition
- Complementary Naïve Bayes Classifier
- Random Forest Decision Tree
- Parallel Frequent Pattern Mining
- High-Performance Java Collections
- KNN Algorithm
- Genetic Algorithm
- Scheduling using Tabular Approach
- Machine Learning algorithm
- Apriori based algorithms for MapReduce
- K-mer counting
- Secondary sorting
- DNA Sequencing
- Naïve Bayes Algorithm
- Linear Regression
- Bloom filtering on MapReduce
- PageRank Algorithm
- Job Scheduling Algorithms
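Of the algorithms listed above, k-mer counting maps especially naturally onto MapReduce: the mapper slides a window over each sequencing read and emits one pair per k-mer, and the reducer sums the pairs. A small pure-Python sketch follows; the names and the choice of k = 3 are illustrative assumptions, not a fixed convention.

```python
from collections import Counter

def kmer_mapper(read, k):
    # Map phase: slide a window of length k across the read,
    # emitting one (k-mer, 1) pair per position.
    for i in range(len(read) - k + 1):
        yield (read[i:i + k], 1)

def count_kmers(reads, k=3):
    # Shuffle and reduce collapsed into a Counter:
    # sum the emitted counts per distinct k-mer.
    counts = Counter()
    for read in reads:
        for kmer, one in kmer_mapper(read, k):
            counts[kmer] += one
    return dict(counts)

# Example: count_kmers(["ATGATG"]) counts the overlapping
# 3-mers ATG, TGA, GAT, ATG across the read.
```

Because each read is mapped independently, the map phase parallelizes across HDFS blocks of a large read file, which is why this pattern appears throughout next-generation sequencing tools built on Hadoop.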
Key Technologies over Hadoop:
- Machine learning automation
- Web Notebooks
- Data Security and Governance
- Global Resource Management
- Spread of Data Fabrics
- Messaging Platforms
- NoSQL Takeover
Stream Processing Technologies:
- Apache Flink
- Spark Streaming
- Apache Apex
- Apache Samza
- Apache Storm
- Akka Streams
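All of the stream processors listed above share one core abstraction: windowed aggregation over timestamped events. The sketch below illustrates a tumbling (fixed-size, non-overlapping) window count in plain Python; it is a conceptual model of what engines like Flink or Spark Streaming do, not code for any of those frameworks.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Group (timestamp, key) events into fixed-size tumbling windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event belongs to exactly one window, identified by
        # the window's start timestamp.
        window_start = (ts // window_size) * window_size
        windows[window_start][key] += 1
    # Return plain dicts, ordered by window start time.
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Example: events at t=0,3,5 fall in window 0; an event at t=12
# falls in window 10 (with window_size=10).
```

Real stream engines add what this sketch omits: incremental state, out-of-order event handling via watermarks, and fault-tolerant checkpointing.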
Topics Included in Hadoop Thesis:
- Flexible Replication Management for Frequently Accessed Data Files in Hadoop
- Hadoop MapReduce Paradigm for Mining Parallel Distributed Patterns
- A Novel Hadoop-Based Approach to Organize and Enhance Online Search Results in the Big Data Ecosystem
- Empirical and Theoretical Comparison of Sun Grid Engine and Apache Hadoop for Big Data Image Processing
- Clustering to Analyze Mobile Phone Usage with Pig and Spark MLlib
- Bandwidth Reduction Using Multicast-Based Replication in Hadoop HDFS
- Enhancement of the Local Scheduling Algorithm Policy in a Hadoop Cluster Platform
- Highly Scalable Distributed Processing and Storage Paradigm in a Big Data Framework for Unstructured Data
- Naive Bayes Classifier for Cancer Prediction, Report Generation, and Query Answering
- Survey of Cloud Robotics Frameworks for Solving the Simultaneous Localization and Mapping Problem
- Developing the Internet of Things for an Industrial/Educational IoT Case on a Cloud Framework
- Machine Learning Techniques for Analyzing Microarray Data in a Scalable Environment
- Hadoop Processing Interface for Offloading Computationally Intensive Services in a Private Cloud
- Iterative Hadoop-Based Ensemble Data Classification on Distributed Medical Databases
- Performance Comparison of Distributed Processing of Large Data Volumes on Docker- and Xen-Based Virtual Clusters