A Critical Path File Location (CPFL) algorithm for data-aware multiworkflow scheduling on HPC clusters

César Acevedo, Porfidio Hernández, Antonio Espinosa, Víctor Méndez

Research output: Contribution to journal › Article › Research

8 Citations (Scopus)


© 2017 The Authors. A representative set of workflows found in bioinformatics pipelines must deal with large data sets. Most scientific workflows are defined as Directed Acyclic Graphs (DAGs). Although DAGs are useful for understanding dependence relationships, they provide no information about input, output, and temporary data files. Knowing the location of the files used by data-intensive applications helps avoid performance issues. This paper presents a multiworkflow store-aware scheduler for cluster environments, the Critical Path File Location (CPFL) policy, an extension of the classical list scheduling policies for settings where disk access time is more relevant than network time. Our purpose is to find the best location for data files in a hierarchical storage system. The resulting algorithm is tested on an HPC cluster and in a simulated cluster scenario with synthetic bioinformatics workflows and widely used benchmarks such as Montage and Epigenomics. The simulator is tuned and validated against the first test results from the real infrastructure. The evaluation of our proposal shows makespan improvements of up to 70% on benchmarks run on a real 128-core HPC cluster and up to 69% on simulated 512-core clusters, with a deviation between 0.9% and 3% with respect to the real HPC cluster.
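The abstract describes extending list scheduling with a critical-path analysis of the workflow DAG, then placing the files of critical-path tasks on the fastest storage tier. The paper's actual algorithm is not reproduced here; the following is a minimal sketch of that general idea, with hypothetical task runtimes, edge lists, and tier names chosen purely for illustration.

```python
from collections import defaultdict

def critical_path(tasks, deps):
    """Return the longest weighted path through a DAG of task runtimes.

    tasks: dict mapping task name -> estimated runtime
    deps:  list of (parent, child) dependency edges
    """
    children = defaultdict(list)
    indeg = defaultdict(int)
    for p, c in deps:
        children[p].append(c)
        indeg[c] += 1

    # Topological order via Kahn's algorithm
    order, queue = [], [t for t in tasks if indeg[t] == 0]
    while queue:
        t = queue.pop()
        order.append(t)
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)

    # Longest path (by cumulative runtime) ending at each task
    dist = {t: tasks[t] for t in tasks}
    pred = {t: None for t in tasks}
    for t in order:
        for c in children[t]:
            if dist[t] + tasks[c] > dist[c]:
                dist[c] = dist[t] + tasks[c]
                pred[c] = t

    # Backtrack from the task with the largest finish time
    end = max(dist, key=dist.get)
    path = []
    while end is not None:
        path.append(end)
        end = pred[end]
    return path[::-1]

# Hypothetical 4-task pipeline: a -> {b, c} -> d
runtimes = {"a": 2, "b": 5, "c": 1, "d": 3}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
cp = critical_path(runtimes, edges)  # ["a", "b", "d"]

# Store-aware placement: files of critical-path tasks go to the
# fastest tier of the (hypothetical) storage hierarchy.
placement = {t: ("local-ssd" if t in cp else "shared-fs") for t in runtimes}
```

Under this toy model, path a → b → d dominates (total runtime 10), so its files land on the fast tier while task c's files stay on shared storage; the paper's CPFL policy additionally weighs disk access times, which this sketch omits.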
Original language: English
Pages (from-to): 51-62
Journal: Future Generation Computer Systems
Publication status: Published - 1 Sep 2017


Keywords:
  • Cluster
  • Critical path
  • Data processing
  • Multiworkflows
  • Scheduler
  • Simulation


