Abstract
The Map-Reduce paradigm has proven to be a simple and feasible way of filtering and analyzing large data sets in cloud and cluster systems. Algorithms designed for the paradigm must implement regular data distribution patterns so that appropriate use of resources is ensured. Good scalability and performance in Map-Reduce applications depend greatly on the design of regular intermediate data generation/consumption patterns at the map and reduce phases. We describe the data distribution patterns found in current Map-Reduce read-mapping bioinformatics applications and show some data decomposition principles that greatly improve their scalability and performance. © Springer Science+Business Media, LLC 2012.
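To illustrate the intermediate data generation/consumption pattern the abstract refers to, here is a minimal Python sketch of a Map-Reduce-style job counting k-mers in sequencing reads. This is not the paper's implementation; the k-mer counting task, the function names, and the parameter `K` are illustrative assumptions.

```python
# Minimal sketch of the map/shuffle/reduce intermediate-data pattern.
# NOT the paper's method; the k-mer task and names are assumptions.
from collections import defaultdict

K = 4  # assumed k-mer length for this toy example

def map_phase(reads):
    """Map: each read emits (k-mer, 1) pairs -- the intermediate data.
    A read of length L emits exactly L - K + 1 pairs, so the volume of
    intermediate data per input record is regular and predictable."""
    for read in reads:
        for i in range(len(read) - K + 1):
            yield read[i:i + K], 1

def shuffle(pairs):
    """Shuffle: group intermediate values by key before reduction."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: consume each key's group, here by summing the counts."""
    return {key: sum(values) for key, values in groups.items()}

if __name__ == "__main__":
    reads = ["ACGTACGT", "CGTACGTA"]  # toy input reads
    counts = reduce_phase(shuffle(map_phase(reads)))
    print(counts)  # e.g. {'ACGT': 3, 'CGTA': 3, ...}
```

The point of the sketch is the regularity the abstract emphasizes: because each read emits a fixed, length-determined number of intermediate pairs, the data volume flowing from the map phase into the shuffle and reduce phases stays balanced, which is what enables predictable scaling across workers.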
| Original language | English |
| --- | --- |
| Pages (from-to) | 1305-1317 |
| Journal | Journal of Supercomputing |
| Volume | 62 |
| DOIs | |
| Publication status | Published - 1 Dec 2012 |