KEYWORDS: Multimedia, Operating systems, Video, Computing systems, Algorithm development, Systems modeling, Solid state electronics, Digital photography, Data storage, Manufacturing
Mobile multimedia computers require large amounts of data storage, yet must consume little power to prolong battery life. Solid-state storage offers low power consumption, but its capacity is an order of magnitude smaller than that of the hard disks needed for high-resolution photos and digital video. To create a device with the capacity of a hard drive and the low power consumption of solid-state storage, hardware manufacturers have proposed using flash memory as a write buffer on mobile systems. This paper evaluates the power savings of such an approach and also considers other possible flash allocation algorithms, using both hardware- and software-level flash management. The paper also contributes a set of typical multimedia-rich workloads for mobile systems and power models based on current disk and flash technology. Based on these workloads, we demonstrate an average power savings of 267 mW (53% of disk power) using hardware-only approaches. Next, we propose another algorithm, termed Energy-efficient Virtual Storage using Application-Level Framing (EVS-ALF), which uses both hardware and software for power management. By collecting information from the applications and using this metadata to perform intelligent flash allocation and prefetching, EVS-ALF achieves an average power savings of 307 mW (61%), a further 8% improvement over the hardware-only techniques.
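The core hardware-only idea described above, absorbing writes in flash so the disk can stay spun down and spinning it up only when the buffer fills, can be sketched as follows. This is a minimal illustration under assumed names; the class, capacity policy, and flush trigger are hypothetical and not the paper's actual implementation.

```python
class FlashWriteBuffer:
    """Sketch: buffer writes in flash, flushing to disk only when flash fills.

    Batching writes lets the disk stay spun down between flushes, so the
    (expensive) spin-up cost is paid once per batch rather than per write.
    """

    def __init__(self, flash_capacity):
        self.flash_capacity = flash_capacity
        self.buffered = []          # (block_id, data) pairs held in flash
        self.used = 0               # bytes currently buffered
        self.disk = {}              # stand-in for the hard disk's contents
        self.disk_spinups = 0       # each spin-up costs substantial energy

    def write(self, block_id, data):
        # If this write would overflow flash, flush the batch to disk first.
        if self.used + len(data) > self.flash_capacity:
            self.flush()
        self.buffered.append((block_id, data))
        self.used += len(data)

    def flush(self):
        # One spin-up services the whole batch of buffered writes.
        if self.buffered:
            self.disk_spinups += 1
            for block_id, data in self.buffered:
                self.disk[block_id] = data
            self.buffered = []
            self.used = 0
```

With a 10-byte flash and ten 2-byte writes, the disk spins up only twice (once mid-stream, once on the final flush) instead of ten times, which is the source of the power savings the paper measures.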
Modern mobile processors offer dynamic voltage and frequency scaling, which can be used to reduce the energy requirements of embedded and real-time applications by exploiting idle CPU resources while still maintaining all applications' real-time characteristics. However, accurate predictions of task run-times are key to computing the frequencies and voltages that ensure all tasks' real-time constraints are met. Past work has used feedback-based approaches, where applications' past CPU utilizations are used to predict future CPU requirements. Inaccurate predictions in these approaches can lead to missed deadlines, lower-than-expected energy savings, or large overheads due to frequent voltage and frequency changes. Previous solutions ignore other "indicators" of future CPU requirements, such as the frequency of I/O operations, memory accesses, or interrupts. This paper addresses this shortcoming for memory-intensive applications, where measured task run-times and cache miss rates are used as feedback for accurate run-time predictions. Cache miss rates indicate the frequency of memory accesses and enable us to derive the latencies introduced by these operations. The results shown in this paper indicate improvements in both the number of deadlines met and the amount of energy saved.
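The key observation in the second abstract is that memory-stall time, which cache miss rates let us estimate, does not shrink when the CPU is slowed down, so only the compute portion of a task's run-time needs to fit in the remaining deadline budget. A hedged sketch of frequency selection under that model follows; the function name, parameters, and fixed miss penalty are assumptions for illustration, not the paper's algorithm.

```python
def pick_frequency(cpu_cycles, cache_misses, miss_penalty_s, deadline_s,
                   available_freqs):
    """Return the lowest frequency (Hz) that meets the deadline.

    Treats memory-stall time (cache_misses * miss_penalty_s) as
    frequency-independent: slowing the CPU stretches compute time but
    not time spent waiting on memory. Falls back to the maximum
    frequency when no setting can meet the deadline.
    """
    # Time lost to cache misses is fixed regardless of CPU frequency.
    mem_time = cache_misses * miss_penalty_s
    compute_budget = deadline_s - mem_time
    if compute_budget <= 0:
        return max(available_freqs)   # deadline dominated by memory stalls
    required = cpu_cycles / compute_budget
    feasible = [f for f in available_freqs if f >= required]
    return min(feasible) if feasible else max(available_freqs)
```

For example, a task needing 10^6 cycles with a 10 ms deadline can run at 100 MHz when it has no cache misses, but if 50,000 misses at 100 ns each consume 5 ms of the deadline, the same task must run at 200 MHz; ignoring the miss-rate feedback would predict too low a frequency and miss the deadline.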