Search Results

(Total results 2)

  • 1. Morris, Nathaniel The Modeling and Management of Computational Sprinting

    Doctor of Philosophy, The Ohio State University, 2021, Computer Science and Engineering

    Sustainable computing, dark silicon, and approximate computing have ushered in a new era in which some processing capacity is available only in ephemeral bursts, a technique called computational sprinting. Computational sprinting speeds up query execution for short bursts by increasing power usage, dropping tasks, scaling precision, and so on. A sprinting policy decides when and how long to sprint; poor policies inflate response time significantly. However, sprinting alters query executions at runtime, creating a complex dependency between queuing and processing time: sprinting can speed up query processing and reduce queuing delay, but setting efficient policies is challenging. As sprinting mechanisms proliferate, system managers will need tools to set policies so that response time goals are met. I provide a method to measure the efficiency of sprinting policies and a framework to create response time models for sprinting mechanisms such as DVFS, CPU throttling, cache allocation, and core scaling. I compare sprinting policies used in competitive solutions with policies found using my models.

    Committee: Christopher Stewart PhD (Advisor); Radu Teodorescu PhD (Committee Member); Xiaorui Wang PhD (Committee Member); Xiaodong Zhang PhD (Committee Member) Subjects: Computer Science
  • 2. Saravanan, Indrajeet Exploring Computational Sprinting in New Domains

    Master of Science, The Ohio State University, 2019, Computer Science and Engineering

    The dawn of dark silicon and the utilization wall are the main issues that current processors face. Moore's law is virtually dead due to the breakdown of Dennard scaling. An array of novel approaches has been proposed to tackle these issues, and computational sprinting is the latest to be advocated. Computational sprinting is a set of management techniques that selectively speed up the execution of cores for short intervals of time, followed by idle periods, to achieve improved performance. This is physically feasible due to the inherent thermal capacitance that absorbs the heat generated by a rise in operating frequency or voltage. In our paper, we explore multiple avenues for how and where computational sprinting can be used. First, we apply the core scaling method to sprint web queries, which makes page loads faster. We observe a 5.86% and 12.6% decrease in average load time for average-case and best-case scenarios, respectively. Likewise, the number of page loads increases by 12.12% (average-case) and 21.88% (best-case). Second, we explore the Intel Cache Allocation Technology tool to enable sprinting for server workloads. Since toggling L3 cache capacity for workloads introduces interference and uncertain consequences, we study the impact of cache stealing from co-located workloads. With the base and polluted cache states being 4 MB and 1 MB respectively, we observe an average increase of 41.37 seconds in the runtime of the Jacobi workload for every 20% increase in interruption from other workloads. We propose a machine learning approach for future work. Finally, we study SLOs in practice to analyze the realities and myths surrounding their design and use. We find that single-digit response goals are challenging, that extreme percentiles for complex software are precluded by black swans, and we identify the parameters that matter for evaluating infrastructure and complex cloud services.

    Committee: Christopher Stewart (Advisor); Radu Teodorescu (Committee Member) Subjects: Computer Science
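Both abstracts center on sprinting policies that spend a limited power or thermal budget to shorten response times. A minimal sketch of such a policy, assuming a hypothetical single-server FIFO queue, a fixed sprint budget, and a waiting-time trigger (the function name, queue model, and parameters are illustrative assumptions, not taken from either thesis):

```python
def simulate_sprinting(arrivals, base_service, speedup, sprint_budget, wait_threshold):
    """Serve jobs FIFO on one server. Sprint a job (divide its service
    time by `speedup`) when it has waited at least `wait_threshold`
    and the sprint budget is not exhausted. Returns mean response time.
    `arrivals` must be a sorted list of arrival times."""
    clock = 0.0
    budget = sprint_budget
    response_times = []
    for arrival in arrivals:
        start = max(clock, arrival)        # job waits while server is busy
        if start - arrival >= wait_threshold and budget > 0:
            service = base_service / speedup   # sprint this job
            budget -= 1
        else:
            service = base_service             # normal execution
        clock = start + service
        response_times.append(clock - arrival)
    return sum(response_times) / len(response_times)

# A burst of four simultaneous jobs, 1 s base service, 2x sprint speedup:
with_sprint = simulate_sprinting([0.0, 0.0, 0.0, 0.0], 1.0, 2.0, sprint_budget=2, wait_threshold=0.5)
no_sprint = simulate_sprinting([0.0, 0.0, 0.0, 0.0], 1.0, 2.0, sprint_budget=0, wait_threshold=0.5)
```

In this toy burst, spending two sprints cuts mean response time from 2.5 s to 1.875 s. As the first thesis argues, real policies would tune the trigger and budget against a response-time model of the underlying mechanism (DVFS, core scaling, or cache allocation) rather than by hand.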