Cooperative caching for chip multiprocessors (2006)

by Jichuan Chang
Venue: Proceedings of the 33rd Annual International Symposium on Computer Architecture
Citations: 145 - 1 self

BibTeX

@INPROCEEDINGS{Chang06cooperativecaching,
    author = {Jichuan Chang},
    title = {Cooperative caching for chip multiprocessors},
    booktitle = {Proceedings of the 33rd Annual International Symposium on Computer Architecture},
    year = {2006},
    pages = {264--276}
}


Abstract

Chip multiprocessor (CMP) systems have made the on-chip caches a critical resource shared among co-scheduled threads. Limited off-chip bandwidth, increasing on-chip wire delay, destructive inter-thread interference, and diverse workload characteristics pose key design challenges. To address these challenges, we propose CMP cooperative caching (CC), a unified framework to efficiently organize and manage on-chip cache resources. By forming a globally managed, shared cache using cooperative private caches, CC can effectively support two important caching applications: (1) reduction of average memory access latency and (2) isolation of destructive inter-thread interference. CC reduces the average memory access latency by balancing between cache latency and capacity optimizations. Based on private caches, CC naturally exploits their access latency benefits. To improve the effective cache capacity, CC forms a “shared” cache using replication control and LRU-based global replacement policies. Via cooperation throttling, CC provides a spectrum of caching behaviors between the two extremes of private and shared caches, thus enabling dynamic adaptation to suit workload requirements. We show that CC can achieve a robust performance advantage over private and shared cache schemes across different processor, cache and memory configurations, and a wide selection of multithreaded and multiprogrammed workloads.
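The mechanism the abstract describes can be sketched in a toy model: each core keeps a private LRU cache, a local miss first probes peer caches (cache-to-cache transfer), and a block evicted from one private cache may be "spilled" into a peer instead of being discarded. This is an illustrative sketch, not the paper's simulator or exact policies; the class names and the `spill_prob` parameter (a stand-in for the paper's cooperation throttling) are invented for illustration.

```python
from collections import OrderedDict
import random

class PrivateCache:
    """Toy private cache with LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # address -> True, in LRU order

    def access(self, addr):
        """Return True on a hit and refresh the block's LRU position."""
        if addr in self.blocks:
            self.blocks.move_to_end(addr)
            return True
        return False

    def insert(self, addr):
        """Insert a block; return the evicted victim address, or None."""
        victim = None
        if addr not in self.blocks and len(self.blocks) >= self.capacity:
            victim, _ = self.blocks.popitem(last=False)  # evict LRU block
        self.blocks[addr] = True
        self.blocks.move_to_end(addr)
        return victim

class CooperativeCaching:
    """Private caches that serve each other's misses and host each
    other's evicted blocks.  `spill_prob` crudely mimics cooperation
    throttling: 1.0 leans toward shared-cache behavior (capacity),
    0.0 toward pure private caches (latency/isolation)."""
    def __init__(self, num_cores, capacity, spill_prob=1.0):
        self.caches = [PrivateCache(capacity) for _ in range(num_cores)]
        self.spill_prob = spill_prob
        self.local_hits = self.remote_hits = self.misses = 0

    def access(self, core, addr):
        if self.caches[core].access(addr):
            self.local_hits += 1
            return
        # Local miss: probe peer caches before going off-chip.
        for i, peer in enumerate(self.caches):
            if i != core and addr in peer.blocks:
                self.remote_hits += 1
                self._fill(core, addr)
                return
        self.misses += 1  # off-chip memory access
        self._fill(core, addr)

    def _fill(self, core, addr):
        victim = self.caches[core].insert(addr)
        # Spill the evicted block into a random peer instead of
        # discarding it, subject to throttling.
        if victim is not None and random.random() < self.spill_prob:
            peers = [i for i in range(len(self.caches)) if i != core]
            host = random.choice(peers)
            if victim not in self.caches[host].blocks:
                self.caches[host].insert(victim)
```

For example, with two cores and two-entry caches, core 0 touching addresses 1, 2, 3 evicts block 1 into core 1's cache, so a later access to 1 by core 0 is a remote hit rather than an off-chip miss. The real design additionally controls replication of on-chip blocks and uses a global, LRU-approximating replacement across the aggregate cache, which this sketch omits.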

Keyphrases

chip multiprocessor, cooperative caching, destructive inter-thread interference, shared cache, average memory access latency, cooperative private cache, cache scheme, key design challenge, diverse workload characteristic, memory configuration, workload requirement, wide selection, replication control, private cache, critical resource, capacity optimizations, access latency benefit, on-chip cache, CMP cooperative caching, LRU-based global replacement policy, effective cache capacity, on-chip cache resource, limited off-chip bandwidth, on-chip wire delay, different processor, cache latency, co-scheduled thread, robust performance advantage, unified framework, cooperation throttling, dynamic adaptation
