Results 1 - 8 of 8
Flash Caching on the Storage Client
- In Proceedings of the USENIX ATC, 2013
"... (Article begins on next page) The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
(Show Context)
(Article begins on next page) The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters.
FlexECC: Partially Relaxing ECC of MLC SSD for Better Cache Performance
- In Proceedings of the USENIX Annual Technical Conference (ATC), 2014
"... USENIX. ..."
(Show Context)
Centaur: Host-side SSD Caching for Storage Performance Control
"... Abstract—Host-side SSD caches represent a powerful knob for improving and controlling storage performance and improve performance isolation. We present Centaur, as a host-side SSD caching solution that uses cache sizing as a control knob to achieve storage performance goals. Centaur implements dynam ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
Host-side SSD caches represent a powerful knob for improving and controlling storage performance and for improving performance isolation. We present Centaur, a host-side SSD caching solution that uses cache sizing as a control knob to achieve storage performance goals. Centaur implements dynamically partitioned per-VM caches with per-partition local replacement to provide lower cache miss rates, better performance isolation, and performance control for VM workloads. It uses SSD cache sizing as a universal knob for meeting a variety of workload-specific goals, including per-VM latency and IOPS reservations, proportional-share fairness, and aggregate optimizations such as minimizing the average latency across VMs. We implemented Centaur for the VMware ESX hypervisor. With Centaur, the time to simultaneously boot 28 virtual desktops improves by 42% relative to a non-caching system and by 18% relative to a unified caching system. Centaur also implements per-VM shares for latency with less than 5% error when running microbenchmarks, and enforces latency and IOPS reservations on OLTP workloads with less than 10% error.
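The mechanism the abstract describes, per-VM cache partitions whose sizes serve as the control knob, can be illustrated with a minimal sketch. The proportional-share sizing below is a simplified stand-in for Centaur's controller, and all names are hypothetical:

```python
from collections import OrderedDict

class PartitionedCache:
    """Toy per-VM SSD cache with LRU local replacement per partition.

    Hypothetical sketch: partition sizes are the control knob; resizing
    one VM's partition changes its hit rate (and hence its latency)
    without disturbing other VMs, which is the isolation property the
    paper builds on.
    """

    def __init__(self, total_blocks, shares):
        # Proportional-share sizing: each VM gets a slice of the SSD
        # proportional to its share (a stand-in for Centaur's controller).
        total_shares = sum(shares.values())
        self.capacity = {vm: max(1, total_blocks * s // total_shares)
                         for vm, s in shares.items()}
        self.partitions = {vm: OrderedDict() for vm in shares}

    def access(self, vm, block):
        part = self.partitions[vm]
        if block in part:
            part.move_to_end(block)       # LRU hit within the VM's partition
            return True
        if len(part) >= self.capacity[vm]:
            part.popitem(last=False)      # evict only within this partition
        part[block] = True
        return False

cache = PartitionedCache(total_blocks=100, shares={"vm1": 3, "vm2": 1})
hits = sum(cache.access("vm1", b % 60) for b in range(600))
print(f"vm1 hit ratio: {hits / 600:.2f}")   # working set fits vm1's slice
```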
To ARC or not to ARC
- In Proceedings of USENIX HotStorage, 2015
"... Cache replacement algorithms have focused on man-aging caches that are in the datapath. In datapath caches, every cache miss results in a cache update. Cache up-dates are expensive because they induce cache insertion and cache eviction overheads which can be detrimental to both cache performance and ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
Cache replacement algorithms have focused on managing caches that are in the datapath. In datapath caches, every cache miss results in a cache update. Cache updates are expensive because they induce cache insertion and cache eviction overheads, which can be detrimental to both cache performance and cache device lifetime. Non-datapath caches, such as host-side flash caches, allow the flexibility of not having to update the cache on each miss. We propose the multi-modal adaptive replacement cache (mARC), a new cache replacement algorithm that extends the adaptive replacement cache (ARC) algorithm for non-datapath caches. Our initial trace-driven simulation experiments suggest that mARC improves cache performance over ARC while significantly reducing the number of cache updates for two sets of storage I/O workloads from MSR Cambridge and FIU.
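The defining property of a non-datapath cache is that a miss need not trigger an insertion. Below is a minimal sketch of that flexibility using a generic two-touch admission filter; this illustrates the non-datapath idea only and is not the mARC algorithm:

```python
from collections import OrderedDict, defaultdict

class SelectiveAdmissionCache:
    """Toy non-datapath cache: misses do not force insertions.

    Illustrative two-touch filter (not mARC): a block is admitted only on
    its second miss, so one-time accesses never cost an SSD write. Fewer
    cache updates means less insertion/eviction overhead and less flash
    wear, the two costs the abstract highlights.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.miss_count = defaultdict(int)
        self.updates = 0                  # SSD writes caused by insertions

    def access(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)
            return True
        self.miss_count[block] += 1
        if self.miss_count[block] >= 2:   # admit only re-referenced blocks
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)
            self.cache[block] = True
            self.updates += 1
        return False                      # a datapath cache must insert here

c = SelectiveAdmissionCache(capacity=4)
for b in [1, 2, 1, 3, 1]:
    c.access(b)
print(c.updates)   # 1: only the re-referenced block was ever inserted
```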
Non-blocking Writes to Files
"... Writing data to a page not present in the file-system page cache causes the operating system to synchronously fetch the page into memory first. Synchronous page fetch defines both policy (when) and mechanism (how), and al-ways blocks the writing process. Non-blocking writes eliminate such blocking b ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Writing data to a page not present in the file-system page cache causes the operating system to synchronously fetch the page into memory first. Synchronous page fetch defines both policy (when) and mechanism (how), and always blocks the writing process. Non-blocking writes eliminate such blocking by buffering the written data elsewhere in memory and unblocking the writing process immediately. Subsequent reads to the updated page locations are also made non-blocking. This new handling of writes to non-cached pages allows processes to overlap more computation with I/O and improves page fetch I/O throughput by increasing fetch parallelism. Our empirical evaluation demonstrates the potential of non-blocking writes to improve the overall performance of systems, with no loss of performance when workloads cannot benefit from them. Across the Filebench write workloads, non-blocking writes improve benchmark throughput by 7x on average (up to 45.4x) when using disk drives and by 2.1x on average (up to 4.2x) when using SSDs. For the SPECsfs2008 benchmark, non-blocking writes decrease the overall average latency of NFS operations by between 3.5% and 70%, and average write latency by between 65% and 79%. When replaying the MobiBench file system traces, non-blocking writes decrease average operation latency by 20-60%.
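The core buffering idea can be sketched as follows: a write to a non-resident page is recorded as a patch and the writer returns immediately, while the page fetch proceeds in the background and applies queued patches on arrival. This is a toy model of the idea in the abstract, not the paper's in-kernel implementation; all names are hypothetical:

```python
import threading

class NonBlockingPageCache:
    """Toy model of non-blocking writes to non-cached pages."""

    def __init__(self, backing_store):
        self.backing = backing_store       # page_no -> bytes ("slow" device)
        self.resident = {}                 # in-memory page cache
        self.patches = {}                  # page_no -> pending (offset, data)
        self.lock = threading.Lock()

    def write(self, page_no, offset, data):
        with self.lock:
            if page_no in self.resident:
                # Page is cached: apply the write in place, as usual.
                self.resident[page_no][offset:offset + len(data)] = data
                return
            # Non-blocking path: buffer the write and return immediately;
            # start at most one background fetch per page.
            fetching = page_no in self.patches
            self.patches.setdefault(page_no, []).append((offset, data))
            if not fetching:
                threading.Thread(target=self._fetch, args=(page_no,)).start()

    def _fetch(self, page_no):
        page = bytearray(self.backing[page_no])   # blocking device read
        with self.lock:
            # Apply all writes buffered while the fetch was in flight.
            for offset, data in self.patches.pop(page_no, []):
                page[offset:offset + len(data)] = data
            self.resident[page_no] = page

store = {7: bytes(4096)}
pc = NonBlockingPageCache(store)
pc.write(7, 0, b"hello")   # returns immediately even though page 7 is cold
```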
Virtualization-aware Access Control for Multitenant Filesystems
"... Abstract-In a virtualization environment that serves multiple tenants, storage consolidation at the filesystem level is desirable because it enables data sharing, administration efficiency, and performance optimizations. The scalable deployment of filesystems in such environments is challenging due ..."
Abstract
- Add to MetaCart
(Show Context)
In a virtualization environment that serves multiple tenants, storage consolidation at the filesystem level is desirable because it enables data sharing, administration efficiency, and performance optimizations. The scalable deployment of filesystems in such environments is challenging due to the intermediate translation layers required for networked file access or identity management. We first present several security requirements of multitenant filesystems. We then introduce the design of the Dike authorization architecture, which combines native access control with tenant namespace isolation and compatibility with object-based filesystems. We use a public cloud to experimentally evaluate a prototype implementation of Dike that we developed. At several thousand tenants, our prototype incurs a limited performance overhead of up to 16%, unlike an existing solution whose multitenancy overhead approaches 84% in some cases.
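The key property, resolving principals within each tenant's own namespace so that no intermediate identity-translation layer is needed, can be pictured with a toy check. This sketch is hypothetical and does not reflect Dike's actual data structures:

```python
class TenantFS:
    """Toy multitenant permission check: each tenant resolves principals
    in its own namespace, so UID 1000 of tenant A is unrelated to UID
    1000 of tenant B (hypothetical sketch, not Dike's architecture).
    """

    def __init__(self):
        self.acl = {}   # (tenant, path) -> {tenant_local_uid: permissions}

    def set_acl(self, tenant, path, uid, perms):
        self.acl[(tenant, path)] = {uid: set(perms)}

    def check(self, tenant, uid, path, perm):
        # Keying ACLs by tenant isolates namespaces natively inside the
        # filesystem instead of in an external translation layer.
        entry = self.acl.get((tenant, path), {})
        return perm in entry.get(uid, set())

fs = TenantFS()
fs.set_acl("tenantA", "/data/report", uid=1000, perms={"read"})
print(fs.check("tenantA", 1000, "/data/report", "read"))   # True
print(fs.check("tenantB", 1000, "/data/report", "read"))   # False: isolated
```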
Client-side Flash Caching for Cloud Systems
"... As the size of cloud systems and the number of hosted VMs rapidly grow, the scalability of shared VM storage systems becomes a serious issue. Client-side flash-based caching has the potential to improve the performance of cloud VM stor-age by employing flash storage available on the client-side of t ..."
Abstract
- Add to MetaCart
(Show Context)
As the size of cloud systems and the number of hosted VMs rapidly grow, the scalability of shared VM storage systems becomes a serious issue. Client-side flash-based caching has the potential to improve the performance of cloud VM storage by employing flash storage available on the client side of the storage system to exploit the locality inherent in VM I/Os. However, because of the limited capacity and durability of flash storage, it is important to determine the proper size and configuration of the flash caches used in cloud systems. This paper provides answers to the key design questions of cloud flash caching based on dm-cache, a block-level caching solution customized for cloud environments, and a large amount of long-term traces collected from real-world public and private clouds. The study first validates that cloud workloads have good cacheability and that dm-cache-based flash caching incurs low overhead with respect to commodity flash devices. It further reveals that write-back caching substantially outperforms write-through caching in typical cloud environments due to the reduction of server I/O load. It also shows that there is a tradeoff in making a flash cache persistent across client restarts, which saves hours of cache warm-up time but incurs considerable overhead from committing every metadata update persistently. Finally, to reduce the data loss risk of write-back caching, the paper proposes a new cache-optimized RAID technique, which minimizes the RAID overhead by introducing redundancy for dirty cache data only and is shown to be significantly faster than traditional RAID and write-through caching.
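The write-back versus write-through tradeoff the study measures comes down to where a write must land before it is acknowledged. The sketch below is a generic illustration of the two policies and of why write-back cuts server I/O load (overwrites coalesce locally); it is not dm-cache code:

```python
class FlashCache:
    """Toy client-side block cache illustrating the two write policies.

    write-through: every write also reaches the shared server before the
    acknowledgment, so the server sees the full client write load.
    write-back: writes are acknowledged once they hit local flash; dirty
    blocks reach the server later, reducing server I/O at the cost of
    data-loss risk on client failure (hence the paper's dirty-data-only
    RAID idea).
    """

    def __init__(self, policy):
        self.policy = policy
        self.flash = {}                   # block -> data on local flash
        self.dirty = set()
        self.server_writes = 0

    def write(self, block, data):
        self.flash[block] = data
        if self.policy == "write-through":
            self.server_writes += 1       # synchronous server I/O per write
        else:
            self.dirty.add(block)         # acked locally; flushed later

    def flush(self):
        self.server_writes += len(self.dirty)  # deferred, coalesced I/O
        self.dirty.clear()

wb = FlashCache("write-back")
for i in range(1000):
    wb.write(i % 10, b"x")               # overwrites coalesce in the cache
wb.flush()
print(wb.server_writes)                  # 10 server writes instead of 1000
```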
File-Level, Host-Side Flash Caching with Loris
"... Abstract—As enterprises shift from using direct-attached storage to network-based storage for housing primary data, flashbased, host-side caching has gained momentum as the primary latency reduction technique. In this paper, we make the case for integration of flash caching algorithms at the file le ..."
Abstract
- Add to MetaCart
(Show Context)
As enterprises shift from using direct-attached storage to network-based storage for housing primary data, flash-based, host-side caching has gained momentum as the primary latency reduction technique. In this paper, we make the case for integrating flash caching algorithms at the file level, as opposed to the conventional block-level integration. In doing so, we show how our extensions to Loris, a reliable, file-oriented storage stack, transform it into a framework for designing layout-independent, file-level caching systems. Using our Loris prototype, we demonstrate the effectiveness of Loris-based, file-level flash caching systems over their block-level counterparts, and investigate the effect of various write and allocation policies on the overall performance.
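The file-level versus block-level distinction is, at bottom, a question of what the cache is keyed by and what policies that key makes expressible. The following hypothetical fragment illustrates the contrast; it is not Loris code:

```python
# Block-level cache: keyed by device block number; it cannot tell which
# file, or what kind of file, a block belongs to.
block_cache = {}    # (device, block_no) -> data

# File-level cache: keyed by (inode, offset); per-file policy becomes
# possible, e.g. always cache metadata and small files while bypassing
# large sequential files that would pollute the cache.
file_cache = {}     # (inode, offset) -> data

def file_level_admit(inode, offset, data, file_size, is_metadata):
    # Hypothetical admission policy expressible only at the file level.
    if is_metadata or file_size < 1 << 20:   # metadata or files under 1 MiB
        file_cache[(inode, offset)] = data

file_level_admit(inode=42, offset=0, data=b"...", file_size=4096,
                 is_metadata=False)
print((42, 0) in file_cache)   # True: small file admitted
```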