A Workflow-Aware Storage System

Emalayan Vairavanathan
Samer Al-Kiswany, Lauro Beltrão Costa, Zhao Zhang, Daniel S. Katz, Michael Wilde, Matei Ripeanu

Slide 2: Workflow Example - ModFTDock
- Protein docking application: models the structure of a protein complex from two known proteins
- Applications: drug design, protein interaction prediction
Slide 3: Background - ModFTDock on the Argonne BG/P
- Workflow runtime engine dispatches 1.2 M docking tasks
- File-based communication between tasks; large IO volume
- Scale: 40,960 compute nodes, each with local storage
- IO rate to the backend file system (e.g., GPFS, NFS): 8 GBps aggregate, i.e., about 51 KBps per core
[Figure: application tasks on compute nodes, each with local storage, all funneling IO to the shared backend file system]
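The per-core figure follows from spreading the aggregate backend bandwidth across all cores (BG/P compute nodes have four cores each):

\[
\frac{8\ \mathrm{GBps}}{40960\ \text{nodes} \times 4\ \text{cores}}
= \frac{8 \times 2^{20}\ \mathrm{KBps}}{163840\ \text{cores}}
\approx 51\ \mathrm{KBps\ per\ core}
\]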
Slide 4: Background - Backend Storage Bottleneck
- Storage is one of the main bottlenecks for workflows
- Time breakdown for the Montage workflow (512 BG/P cores, GPFS backend file system): execution 29%, data management 30%, scheduling and idle 40%
- Source: [Zhao et al., MTAGS 2008]

Slide 5: Intermediate Storage Approach
- Application tasks access a shared intermediate storage space through the POSIX API
- The intermediate storage aggregates the local storage of the compute nodes (scale: 40,960 nodes)
- Data is staged in from, and staged out to, the backend file system (e.g., GPFS, NFS)
[Figure: workflow runtime engine and tasks on compute nodes; intermediate storage layered between node-local storage and the backend file system]
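To illustrate the approach, here is a minimal sketch of a stage-in / compute / stage-out driver. The mount points, file names, and run_task helper are hypothetical; the slides do not prescribe an API.

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical mount points, for illustration only.
BACKEND = Path("/gpfs/project")       # backend file system (e.g., GPFS, NFS)
INTERMEDIATE = Path("/intermediate")  # shared intermediate storage (aggregated node-local disks)

def stage_in(names):
    """Copy workflow inputs from the backend into intermediate storage."""
    for name in names:
        shutil.copy(BACKEND / name, INTERMEDIATE / name)

def run_task(exe, inputs, output):
    """Run one workflow task; all intermediate IO stays off the backend."""
    args = [exe] + [str(INTERMEDIATE / f) for f in inputs] + [str(INTERMEDIATE / output)]
    subprocess.run(args, check=True)

def stage_out(names):
    """Copy only final results back to the backend."""
    for name in names:
        shutil.copy(INTERMEDIATE / name, BACKEND / name)

stage_in(["protein_a.pdb", "protein_b.pdb"])
run_task("./dock", ["protein_a.pdb", "protein_b.pdb"], "dock_result.out")
stage_out(["dock_result.out"])
```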
Slide 6: Research Question
- How can we improve storage performance for workflow applications?

Slide 7: IO-Patterns in Workflow Applications (Justin Wozniak et al., PDSW'09)
- Pipeline -> locality and location-aware scheduling
- Broadcast -> replication
- Reduce -> collocation and location-aware scheduling
- Scatter and gather -> block-level data placement

Slide 8: IO-Patterns in ModFTDock
- Stage 1: broadcast pattern; Stage 2: reduce pattern; Stage 3: pipeline pattern
- A large run comprises 1.2 M Dock instances and 12,000 Merge and Score instances
- Average file sizes range from 100 KB to 75 MB
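To make the mapping concrete, here is a toy sketch of a ModFTDock-style run, assuming the conventional Dock, Merge, Score stage order; the function names and structure are illustrative, not ModFTDock's actual driver.

```python
# Illustrative only: shows where the broadcast, reduce, and pipeline
# patterns arise in a ModFTDock-style three-stage run.

def dock(model, fragment):   # Stage 1: one model is broadcast to many tasks
    return f"docked({model},{fragment})"

def merge(partials):         # Stage 2: many dock outputs reduce into one file
    return "+".join(partials)

def score(merged):           # Stage 3: pipeline, one task's output feeds the next
    return f"score({merged})"

model = "model.pdb"                                # broadcast: read by every dock task
partials = [dock(model, f"frag{i}") for i in range(4)]
merged = merge(partials)                           # reduce: collocate inputs on one node
print(score(merged))                               # pipeline: keep data local to the consumer
```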
Slide 9: Research Question - Our Answer
- Question: how can we improve storage performance for workflow applications?
- Our answer: workflow-aware storage, i.e., storage optimized for the application's IO patterns
- Traditional approach: one size fits all
- Our approach: file- and block-level optimizations

Slide 10: Integrating with the Workflow Runtime Engine
- Application hints (e.g., indicating access patterns) flow from the workflow runtime engine down to the storage system
- Storage hints (e.g., file location information) flow back up to the runtime engine
- Tasks access the shared workflow-aware storage through the POSIX API; data is staged in from and out to the backend file system (e.g., GPFS, NFS)
[Figure: compute nodes with local storage aggregated into a shared workflow-aware storage layer, exchanging hints with the workflow runtime engine]
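One plausible way to carry such hints across an unmodified POSIX API is through extended attributes. A minimal sketch, assuming hypothetical attribute names; the slides say only that hints cross the storage interface, not how they are encoded.

```python
import os

# Hypothetical attribute names (Linux xattr API), for illustration only.
PATTERN_ATTR = "user.wass.pattern"    # application -> storage hint
LOCATION_ATTR = "user.wass.location"  # storage -> application hint

def hint_pattern(path, pattern):
    """Runtime engine tags a file with its anticipated access pattern."""
    os.setxattr(path, PATTERN_ATTR, pattern.encode())

def query_location(path):
    """Runtime engine asks where a file's data lives, so it can
    schedule the consumer task on (or near) that node."""
    try:
        return os.getxattr(path, LOCATION_ATTR).decode()
    except OSError:
        return None  # storage has not published a location yet

hint_pattern("/intermediate/model.pdb", "broadcast")
node = query_location("/intermediate/dock_result.out")
```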
Slide 11: Outline
- Background
- IO Patterns
- Workflow-aware storage system: implementation
- Evaluation

Slide 12: Implementation: MosaStore
- Each file is divided into fixed-size chunks
- Chunks are stored on the storage nodes
- A manager maintains a block map for each file
- A POSIX interface provides access to the system
[Figure: MosaStore distributed storage architecture]

Slide 13: Implementation: Workflow-Aware Storage System
[Figure: workflow-aware storage architecture]

Slide 14: Implementation: Workflow-Aware Storage System
- Optimized data placement for the pipeline pattern: priority to local writes and reads
- Optimized data placement for the reduce pattern: collocating files on a single storage node
- Replication mechanism optimized for the broadcast pattern: parallel replication
- File location exposed to the workflow runtime engine
(See the sketch below for how these placement policies fit together.)
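A minimal sketch of how a metadata manager could combine the block map from slide 12 with the pattern-specific placement policies above; the class, method, and policy names are hypothetical, and the real system is more involved.

```python
import os
import random

class Manager:
    """Toy metadata manager: keeps a block map per file and chooses chunk
    placement from the file's hinted access pattern. A hypothetical sketch,
    not MosaStore's actual interfaces or policies."""

    def __init__(self, storage_nodes, chunk_size=1 << 20):
        self.nodes = storage_nodes
        self.chunk_size = chunk_size  # files are split into fixed-size chunks
        self.block_map = {}           # file path -> list of (chunk index, node)
        self.patterns = {}            # file path -> hinted pattern

    def place_chunk(self, path, index, writer_node):
        pattern = self.patterns.get(path, "default")
        if pattern == "pipeline":
            node = writer_node        # priority to local writes and reads
        elif pattern == "reduce":
            # collocate all files of one reduce (here: same directory) on one node
            group = os.path.dirname(path)
            node = self.nodes[hash(group) % len(self.nodes)]
        else:
            node = random.choice(self.nodes)  # default: spread chunks for load balance
        # broadcast-hinted files would additionally be replicated in parallel (not shown)
        self.block_map.setdefault(path, []).append((index, node))
        return node

    def lookup(self, path):
        """Expose chunk locations so the runtime can schedule consumers nearby."""
        return self.block_map.get(path, [])

mgr = Manager(["node1", "node2", "node3"])
mgr.patterns["/intermediate/out.dat"] = "pipeline"
mgr.place_chunk("/intermediate/out.dat", 0, writer_node="node2")
print(mgr.lookup("/intermediate/out.dat"))  # [(0, 'node2')]
```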
Slide 15: Outline
- Background
- IO Patterns
- Workflow-aware storage system: implementation
- Evaluation

Slide 16: Evaluation - Baselines
- Node-local storage: each task reads and writes on its own compute node
- NFS: all tasks share the backend file system
- MosaStore: shared intermediate storage without workflow-aware optimizations
- Workflow-aware storage: the same intermediate storage with the pattern-specific optimizations
[Figure: compute nodes running tasks over each storage configuration, with stage in/out to the backend file system (e.g., GPFS, NFS)]

Slide 17: Evaluation - Platform
- Cluster of 20 machines, each with an Intel Xeon 4-core 2.33-GHz CPU, 4-GB RAM, a 1-Gbps NIC, and RAID 1 over two 300-GB 7,200-rpm SATA disks
- Backend storage: an NFS server with an Intel Xeon E5345 8-core 2.33-GHz CPU, 8-GB RAM, a 1-Gbps NIC, and six SATA disks in a RAID 5 configuration
- Note: the NFS server is better provisioned than the cluster nodes
Slide 18: Evaluation - Benchmarks and Application
- Synthetic benchmark file sizes per pattern and workload:

  Workload | Pipeline              | Broadcast    | Reduce
  Small    | 100 KB, 200 KB, 10 KB | 100 KB, 1 KB | 10 KB, 100 KB
  Medium   | 100 MB, 200 MB, 1 MB  | 100 MB, 1 MB | 10 MB, 200 MB
  Large    | 1 GB, 2 GB, 10 MB     | 100 MB, 2 GB | 1 GB, 10 MB

- Application and workflow runtime engine: ModFTDock
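For concreteness, a toy version of what one pipeline benchmark run does with the medium-workload sizes; the three-stage structure and file names are assumptions, not the actual benchmark harness.

```python
# Medium pipeline workload from the table: 100 MB input, 200 MB intermediate,
# 1 MB output. File names and the stage structure are assumptions.
SIZES = [("input.dat", 100 * 2**20), ("mid.dat", 200 * 2**20), ("out.dat", 2**20)]

def write_file(path, size):
    with open(path, "wb") as f:
        f.write(b"\0" * size)

def stage(src, dst, size):
    """One pipeline stage: read the predecessor's output, write its own."""
    with open(src, "rb") as f:
        f.read()
    write_file(dst, size)

write_file(SIZES[0][0], SIZES[0][1])          # staged-in input
stage(SIZES[0][0], SIZES[1][0], SIZES[1][1])  # stage 1 -> intermediate file
stage(SIZES[1][0], SIZES[2][0], SIZES[2][1])  # stage 2 -> final (staged-out) result
```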
Slide 19: Synthetic Benchmark - Pipeline
- Optimization exercised: locality and location-aware scheduling
[Figure: average runtime for the medium workload]

Slide 20: Synthetic Benchmarks - Reduce
- Optimization exercised: collocation and location-aware scheduling
[Figure: average runtime for the medium workload]

Slide 21: Synthetic Benchmarks - Broadcast
- Optimization exercised: replication
[Figure: average runtime for the medium workload]

Slide 22: Not Everything Is Perfect!
[Figure: average runtime for the small workload (pipeline, broadcast, and reduce benchmarks)]
Slide 23: Evaluation - ModFTDock
[Figure: total application time on three different systems; the ModFTDock workflow]

Slide 24: Evaluation Highlights
- WASS shows considerable performance gains on all benchmarks with the medium and large workloads: up to 18x faster than NFS and up to 2x faster than MosaStore
- ModFTDock runs 20% faster on WASS than on MosaStore, and more than 2x faster than on NFS
- WASS delivers lower performance on the small benchmarks, due to metadata overheads and manager latency

Slide 25: Summary
- Problem: how can we improve storage performance for workflow applications?
- Approach: a workflow-aware storage system (WASS)
  - From backend storage to intermediate storage
  - Bidirectional communication using hints
- Future work: integrating more applications; large-scale evaluation

Slide 26: Thank You
- MosaStore: netsyslab.ece.ubc.ca/wiki/index.php/MosaStore
- Networked Systems Laboratory: netsyslab.ece.ubc.ca