
Implementation and Performance of Integrated Application-Controlled Caching, Prefetching and Disk Scheduling

June 1995

Although file caching and prefetching are known techniques for improving
the performance of file systems, little work has been done on
integrating caching and prefetching. Optimal prefetching is
nontrivial because prefetching may require early cache block
replacements. Moreover, the tradeoff between the latency-hiding
benefits of prefetching and the increase in the number of fetches
required must be considered.
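This tradeoff can be made concrete with a toy simulation (illustrative only, not from the paper): under a simple LRU cache, naively prefetching the next sequential block on every miss can force early replacements and increase the total number of fetches compared to pure demand fetching.

```python
# Toy illustration: count disk fetches for a reference string under a small
# LRU cache, with and without a naive sequential-prefetch policy.

def count_fetches(accesses, cache_size, prefetch=False):
    """Return the number of disk fetches. If prefetch is True, every miss
    also greedily prefetches the next sequential block, which may evict a
    still-needed block early (an early cache block replacement)."""
    cache = []  # LRU order: front = least recently used
    fetches = 0

    def admit(block):
        nonlocal fetches
        fetches += 1
        if len(cache) >= cache_size:
            cache.pop(0)  # replace the least recently used block
        cache.append(block)

    for block in accesses:
        if block in cache:
            cache.remove(block)
            cache.append(block)  # refresh LRU position
            continue
        admit(block)
        if prefetch and (block + 1) not in cache:
            admit(block + 1)  # prefetch may be wasted or evict a live block
    return fetches

trace = [1, 2, 1, 2, 1, 2, 5]
print(count_fetches(trace, cache_size=2))                 # demand fetching: 3
print(count_fetches(trace, cache_size=2, prefetch=True))  # naive prefetch: 4
```

Here the naive prefetcher issues one more fetch than demand fetching, showing why a prefetching policy must weigh latency hiding against extra fetches.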
This paper presents the design, implementation and performance of a
file system that integrates application-controlled caching,
prefetching and disk scheduling. We use a two-level cache management
strategy. The kernel uses the LRU-SP policy [pei:usenix94] to
allocate blocks to processes, and each process manages its own cache
with a policy previously shown to be near-optimal in a theoretical
sense. Each process then reduces its disk access latency by submitting
its prefetches in batches and scheduling the requests in each batch to
optimize disk access performance. Our measurements show that this
combination of techniques greatly improves the performance of the file
system. Average running time is reduced by 26% for single-process
workloads, and by 46% for multi-process workloads.
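The batch-scheduling step can be sketched as follows (a minimal illustration; the function name and elevator-style ordering are assumptions, not the paper's actual scheduler): once prefetches are submitted as a batch rather than one at a time, the requests in the batch can be reordered by block address so the disk head sweeps in one direction, reducing seek time.

```python
# Hedged sketch of elevator-style ordering for a batch of prefetch requests.
# Blocks at or beyond the current head position are served in ascending
# order on the outward sweep; the remainder are served on the return sweep.

def schedule_batch(pending, head_pos):
    """Order a batch of block requests to reduce total seek distance."""
    ahead = sorted(b for b in pending if b >= head_pos)
    behind = sorted((b for b in pending if b < head_pos), reverse=True)
    return ahead + behind

batch = [90, 12, 57, 33, 71]
print(schedule_batch(batch, head_pos=50))  # → [57, 71, 90, 33, 12]
```

Serving the batch in this order replaces five potentially random seeks with two monotonic sweeps, which is the kind of gain batching makes possible.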

This technical report has been published as:
Pei Cao, Edward W. Felten, Anna R. Karlin, and Kai Li.
"Implementation and Performance of Integrated Application-Controlled
File Caching, Prefetching and Disk Scheduling."
ACM Transactions on Computer Systems, vol. 14, no. 4,
pp. 311-343, Nov. 1996.