A large-scale study of failures in high-performance computing systems (2006)

by Bianca Schroeder, Garth A. Gibson
Venue: Proceedings of the International Conference on Dependable Systems and Networks (DSN)
Citations: 206 - 7 self

BibTeX

@INPROCEEDINGS{Schroeder06alarge-scale,
    author = {Bianca Schroeder and Garth A. Gibson},
    title = {A large-scale study of failures in high-performance computing systems},
    booktitle = {Proceedings of the International Conference on Dependable Systems and Networks (DSN)},
    year = {2006}
}


Abstract

Designing highly dependable systems requires a good understanding of failure characteristics. Unfortunately, little raw data on failures in large IT installations is publicly available. This paper analyzes failure data recently made publicly available by one of the largest high-performance computing sites. The data was collected over the past 9 years at Los Alamos National Laboratory and includes 23,000 failures recorded on more than 20 different systems, mostly large clusters of SMP and NUMA nodes. We study the statistics of the data, including the root cause of failures, the mean time between failures, and the mean time to repair. We find, for example, that average failure rates differ wildly across systems, ranging from 20 to 1000 failures per year, and that time between failures is modeled well by a Weibull distribution with decreasing hazard rate. From one system to another, mean repair time varies from less than an hour to more than a day, and repair times are well modeled by a lognormal distribution.
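The abstract's claim about a "decreasing hazard rate" can be made concrete: a Weibull distribution with shape parameter k < 1 has a hazard function h(t) = (k/λ)(t/λ)^(k-1) that falls over time, i.e. the longer a node has gone since its last failure, the lower its instantaneous failure rate. A minimal sketch, with hypothetical parameters (the values below are illustrative and are not the fits reported in the paper):

```python
import math

def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = (k/lam) * (t/lam)**(k-1).

    shape (k) < 1 => decreasing hazard; k == 1 => constant hazard
    (exponential); k > 1 => increasing hazard (wear-out).
    """
    return (shape / scale) * (t / scale) ** (shape - 1)

# Hypothetical parameters for illustration only: shape 0.7 (< 1),
# scale 100 hours. Neither value is taken from the paper.
shape, scale = 0.7, 100.0
hazards = [weibull_hazard(t, shape, scale) for t in (1.0, 10.0, 100.0, 1000.0)]

# With shape < 1 the hazard is strictly decreasing in t.
assert all(a > b for a, b in zip(hazards, hazards[1:]))
```

For comparison, setting `shape = 1.0` reduces the hazard to the constant 1/scale of an exponential distribution, which is the memoryless model the decreasing-hazard finding argues against.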

