Search Results (1 - 8 of 8 Results)

Wang, Mingyang. Improving Performance and Reliability of Flash Memory Based Solid State Storage Systems
PhD, University of Cincinnati, 2016, Engineering and Applied Science: Computer Science and Engineering
Flash memory based Solid State Disk systems (SSDs) are becoming increasingly popular in enterprise applications where high performance and high reliability are paramount. While SSDs outperform traditional Hard Disk Drives (HDDs) in read and write operations, they pose unique and serious challenges to I/O and file system designers. The performance of an SSD is sensitive to access patterns: read operations are much faster than writes, and sequential accesses deliver much higher performance than random accesses. These unique properties, together with the asymmetric overheads of different operations, mean that many traditional solutions tailored for HDDs may not work well for SSDs. The close relation between performance overhead and access patterns motivates us to design a series of novel algorithms for I/O scheduling and buffer cache management. By exploiting refined access patterns such as sequential, page-clustered, and block-clustered accesses on a per-process, per-file basis, these algorithms deliver higher performance from the file system and the SSD device.

Beyond performance, SSDs also face unique reliability challenges rooted in the physical properties of flash memory. Along with the well-known limited write endurance, flash memory suffers from read disturb and write disturb: even repeatedly reading from an SSD may corrupt data, because the read voltage stresses neighboring memory cells. As the density of flash memory keeps increasing, these disturbance problems make it ever harder for memory cells to store data reliably. One structural merit of an SSD is its internal parallelism, which can be exploited to support data redundancy in a fashion similar to traditional HDD RAID. Recently, emerging non-volatile memories (NVMs) such as PCM have received increasing research interest, as they outperform flash memory by providing in-place updates along with better performance and reliability. Hybrid solutions that combine flash memory and NVM to balance performance and cost are under investigation to address the reliability and performance issues of flash memory based storage systems.

To address the reliability concerns, we present a novel storage architecture called i-RAID (internal RAID) that introduces RAID-like parity-based redundancy while avoiding many of RAID's problems. What makes i-RAID unique is its deferred parity maintenance, selective RAID protection, and dynamic RAID organization. It solves traditional RAID's small-update problem and avoids common SSD RAID pitfalls. Unlike traditional disk drives, SSDs cannot perform in-place updates. We view this characteristic as an opportunity rather than a hurdle: since old data is not overwritten by new data, we can design fundamentally new algorithms that defer the computation and updating of parity blocks until garbage collection time, significantly reducing overhead and potentially increasing the lifetime of SSDs. Our algorithms also dynamically and selectively construct parity stripes only on aged, error-prone blocks, and exploit the internal parallelism of SSDs to further improve performance.
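
The deferred-parity idea lends itself to a short illustration. The following is a minimal Python sketch of the concept only; the names, structures, and wear threshold are invented for illustration and are not the dissertation's implementation. Parity for a stripe is computed lazily at garbage-collection time, and only over aged, error-prone blocks:

    from functools import reduce

    ERASE_WEAR_THRESHOLD = 3000   # assumption: blocks past this count as "aged"

    class Block:
        def __init__(self, block_id, erase_count=0):
            self.block_id = block_id
            self.erase_count = erase_count
            self.pages = {}            # page number -> page data (equal-sized bytes)

    def xor_parity(pages):
        # byte-wise XOR over equally sized pages, as in RAID-5 parity
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pages)

    def garbage_collect(victims, parity_store):
        # deferred parity maintenance: parity is computed here, at GC time,
        # rather than on every small host write
        for blk in victims:
            if blk.erase_count < ERASE_WEAR_THRESHOLD:
                continue               # selective protection: skip young blocks
            valid_pages = list(blk.pages.values())
            if valid_pages:
                # dynamic RAID organization: the stripe is formed now
                parity_store[blk.block_id] = xor_parity(valid_pages)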

Committee:

Yiming Hu, Ph.D. (Committee Chair); Kenneth Berman, Ph.D. (Committee Member); Karen Davis, Ph.D. (Committee Member); Wen-Ben Jone, Ph.D. (Committee Member); Carla Purdy, Ph.D. (Committee Member)

Subjects:

Computer Engineering

Keywords:

Flash Memory; RAID; Solid State Disk; Non-Volatile Memory; Write Endurance; Read/Write Disturb

Yang, Jingpei. A Design of Buffer Scheme by Using Data Filter for Solid State Disk
MS, University of Cincinnati, 2010, Engineering : Computer Engineering
Solid-State Disks (SSDs), which provide high random read and write performance by eliminating mechanical movement, have been widely used in embedded systems and mobile devices, and have the potential to replace traditional storage devices. However, limited lifetime and the write amplification effect remain problems in today's flash memory based SSDs and have become a performance bottleneck. Much research has been done to address these problems. In this thesis, we analyze some design issues on this topic and introduce a new buffer management scheme that uses a data filter to improve write performance and extend the lifetime of SSD devices.
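
As a rough illustration of what a data-filter buffer scheme can look like, here is a hypothetical Python sketch; the class and parameter names are invented, and the thesis's actual scheme differs in its details. First-time writes are assumed cold, pages rewritten while buffered are promoted to a hot LRU queue, and evictions drain the cold queue first so frequently rewritten pages stay in RAM and fewer writes reach flash:

    from collections import OrderedDict

    class FilteredWriteBuffer:
        def __init__(self, capacity, flush):
            self.capacity = capacity
            self.flush = flush                  # callback: write a page to flash
            self.cold = OrderedDict()           # first-time writes, oldest first
            self.hot = OrderedDict()            # rewritten pages, in LRU order

        def write(self, page_no, data):
            if page_no in self.hot:
                self.hot.move_to_end(page_no)   # refresh LRU position
                self.hot[page_no] = data
            elif page_no in self.cold:
                del self.cold[page_no]          # second write: promote to hot
                self.hot[page_no] = data
            else:
                self.cold[page_no] = data       # data filter: assume cold first
            if len(self.cold) + len(self.hot) > self.capacity:
                self._evict()

        def _evict(self):
            # prefer evicting cold pages; fall back to the LRU hot page
            victim = self.cold if self.cold else self.hot
            page_no, data = victim.popitem(last=False)
            self.flush(page_no, data)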

Committee:

Yiming Hu, PhD (Committee Chair); Wen-Ben Jone, PhD (Committee Member); Carla Purdy, PhD (Committee Member)

Subjects:

Computer Science

Keywords:

flash; SSDs; flash memory; LRU; CFLRU; queues; cold queue

Ordosgoitti, Jorhan Rainier. Development of a Non-Volatile Memristor Device Based on a Manganese-Doped Titanium Oxide Material
Master of Science in Electrical Engineering, University of Toledo, 2010, Electrical Engineering
Flash memory is the current predominant technology in the non-volatile memory market. It has gained popularity due to rapid improvements in reliability and storage density and steady reductions in cost. This has been possible because the technology has borrowed a large body of accumulated knowledge from complementary metal oxide semiconductor (CMOS) technology, aiding its fast development and optimization. Unfortunately, as with CMOS transistor technology, reliable scaling of Flash devices is becoming harder to achieve due to the limitations of current lithography technology. As a result, semiconductor companies are finding it hard to meet the increasing market demand for higher-density non-volatile memories. The recent invention of memristor devices has given semiconductor companies hope by offering an easier way to increase storage density using current fabrication technology. This is possible because a memristor device requires only two terminals to operate, which uses less wafer space, reduces the complexity of circuit interconnections, and facilitates high-density integration when used in crossbar structures. Additionally, the main limitation of flash memory devices, the quality and reliability of the tunnel oxide, is not present in memristor devices. Due to these advantages, memristor devices are expected to outperform flash memory in scalability, speed, and endurance. However, memristor devices are relatively new compared to flash memory technology; for this reason, studies like this one are needed to gain a better understanding of memristor operation, to identify ways to improve its performance, and to understand its limitations. Eventually, the effort put into researching this device will pay off, and memristor devices may become the new predominant non-volatile technology, replacing aging flash memory devices.

Committee:

Dr. Rashmi Jha (Committee Chair); Dr. Christopher Melkonian (Committee Member); Dr. Mansoor Alam (Committee Member); Dr. Vijay K. Devabhaktuni (Committee Member)

Subjects:

Electrical Engineering

Keywords:

Volatile; Non-Volatile; Memristor; Manganese; Titanium; Oxide; Flash; Memory

Martin, Robert Rohan. Multi-Level Cell Flash Memory Fault Testing and Diagnosis
MS, University of Cincinnati, 2005, Engineering : Computer Engineering
Single-bit-per-cell flash memories have been widely used, and many efficient testing and diagnosis methodologies have been proposed for them. Their multi-level cell (MLC) counterparts, on the other hand, are relatively less well studied, even though they offer many advantages such as low area, high density, low power, and short access times. To the best of our knowledge, no research papers have been published on MLC flash memory testing. One reason is that conventional march algorithms cannot be directly applied. Another is that the faults affecting MLC are not formulated efficiently, and no theoretical analysis of such faults is available. This thesis attempts to bridge this gap by providing a simple solution to test and diagnose an MLC flash memory array. The proposed fault model takes into account many physical defects that cause the state of a memory cell to change. The diagonal flash test (FTX) and flash diagnosis (FDX) march algorithms proposed in this thesis are the first of their kind for MLC flash. Their 100% coverage of the proposed fault model, low complexity, and short test time make them an attractive methodology for testing and diagnosing faults in multi-level flash memories.
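
For readers unfamiliar with march tests, the sketch below runs a generic march-style element over a simulated 4-level MLC array in Python. It conveys only the flavor of the approach, not the FTX/FDX diagonal algorithms themselves, and it ignores the flash constraint that cells can only be programmed upward between erases:

    LEVELS = 4  # 2 bits per cell

    def march_element(cells):
        # write each level in ascending address order, then read it back in
        # descending order; return addresses whose stored level changed
        faults = set()
        for level in range(LEVELS):
            for addr in range(len(cells)):
                cells[addr] = level                  # w(level), ascending
            for addr in reversed(range(len(cells))):
                if cells[addr] != level:             # r(level), descending
                    faults.add(addr)
        return sorted(faults)

    print(march_element([0] * 8))   # a fault-free array reports []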

Committee:

Dr. Wen-Ben Jone (Advisor)

Keywords:

MLC; Multi-level Cell; Flash Memory; Fault Testing; Fault Diagnosis; Diagonal Algorithm; March Algorithm

Liang, Shuang. Algorithm Designs and Implementations for Page Allocation in SSD Firmware and SSD Caching in Storage Systems
Master of Science, The Ohio State University, 2010, Computer Science and Engineering

The emerging flash memory based SSD technology poses a new challenge for using it effectively in storage systems. Despite its many advantages, which promise a performance breakthrough for the next generation of storage systems, the technical limitations of flash memory write operations call for innovative designs that can overcome the long latency of small random writes and the limited lifetime of flash memory cells.

Previous studies have attempted to address the write performance limitations by trading capacity at the firmware level, transforming the write access pattern to avoid small random writes. Although this may improve write performance, it does not utilize the data access locality information available at the system level to achieve optimal performance. In this thesis, we study techniques to improve performance via system-level designs. We consider a major application scenario of flash SSDs: the SSD based disk cache. Based on a new disk cache model for SSD based caches, we propose cache management algorithms that utilize data access locality to transform the write access pattern with minimal cost to the cache hit ratio. We implemented the proposed algorithm in an SSD based second-level disk cache system. The results show that it can significantly improve the performance of existing cache management algorithms.
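
The write-pattern transformation can be pictured with a small, hypothetical sketch; the names and batch size here are invented and are not the thesis design. Dirty blocks evicted from the cache are staged and destaged in LBA-sorted batches, so the SSD sees large, mostly sequential writes instead of many small random ones:

    class SSDCache:
        BATCH = 64                      # assumed destage batch size (blocks)

        def __init__(self, ssd_write):
            self.ssd_write = ssd_write  # callback: (lba, data) -> None
            self.staged = {}            # dirty blocks awaiting destage

        def evict_dirty(self, lba, data):
            self.staged[lba] = data
            if len(self.staged) >= self.BATCH:
                self.destage()

        def destage(self):
            # sorting by LBA turns many small random writes into a sweep
            # the SSD's flash translation layer can absorb sequentially
            for lba in sorted(self.staged):
                self.ssd_write(lba, self.staged[lba])
            self.staged.clear()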

To maximize the lifetime of flash SSDs, we study techniques that can reduce the number of write operations, which consume the life-cycle of flash memory cells. Unlike previous approaches to flash memory life-cycle management, which balance the wear level of flash memory cells through data migration, we focus on reducing the internal data movement overhead introduced by SSD firmware to mask the long latency of small random writes. We found that this overhead can be managed effectively by capacity over-provisioning based on the write-request working-set size, and we present a theoretical analysis of the relationship between the minimal over-provisioning capacity and the page allocation policy required to achieve zero data movement overhead. Based on these findings, two SSD firmware page allocation algorithms are proposed to improve capacity efficiency. Our trace-driven simulation using the Microsoft SSD extension of the well-known DiskSim simulator shows that the proposed algorithms can significantly reduce the internal data movement overhead.
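
A toy calculation illustrates the over-provisioning relationship under a deliberately simplified model; the assumptions here are ours, not the thesis's analysis. If the spare capacity covers the write working set rounded up to whole blocks, a log-structured allocator can always find victim blocks whose pages have all been invalidated by rewrites, so garbage collection moves no valid data:

    def min_overprovisioning(working_set_pages, pages_per_block):
        # round the write working set up to whole blocks; under the simplified
        # model this much spare capacity suffices for zero-copy GC
        blocks = -(-working_set_pages // pages_per_block)  # ceiling division
        return blocks * pages_per_block

    print(min_overprovisioning(working_set_pages=10_000, pages_per_block=64))
    # -> 10048 spare pages under these assumed parameters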

Committee:

Xiaodong Zhang, PhD (Advisor); Feng Qin, PhD (Committee Member)

Subjects:

Computer Science

Keywords:

flash memory; disk cache; cache algorithms; page allocation algorithm; firmware

Chen, Bainan. Hardware Implementation of Error Control Decoders
Master of Sciences, Case Western Reserve University, 2008, Computer Engineering

In this thesis, an FPGA implementation of a factorization processor for algebraic soft-decision Reed-Solomon (RS) decoding is first presented. The design is based on the root-order prediction architecture and is extensible to the factorization of polynomials with designated degrees. Parallel processing is exploited to speed up the polynomial updating involved in the factorization. To resolve the data dependency issue in parallel polynomial updating, an efficient coefficient storage and transfer scheme with small memory requirements and low latency is proposed. Synthesis results show that the factorization processor for a (255, 239) RS code with maximum multiplicity four can achieve an average decoding speed of 226 Mbps on a Xilinx Virtex-II FPGA device when the frame error rate is less than 10^-2.

Next, an FPGA implementation of a factorization processor for algebraic soft-decision bit-level generalized minimum distance (BGMD) RS decoding is presented. The BGMD factorization processor utilizes a low-latency and prediction-free scheme for root computation. Furthermore, parallel processing architectures and efficient coefficient storage schemes are employed to reduce the latency. Synthesis results show that the BGMD factorization processor for a (255, 239) RS code with maximum multiplicity two can achieve a decoding speed of 815 Mbps on a Xilinx Virtex-II FPGA device.

Prior research efforts have focused on using BCH codes for error correction in multi-level cell (MLC) NAND flash memory. However, BCH codes often require highly parallel implementations to meet throughput requirements, and as a result need large area. In this thesis, RS codes are proposed for error correction in MLC flash memory. A (828, 820) RS code has almost the same rate and length in terms of bits as a (8248, 8192) BCH code, and at least the same error-correcting performance in flash memory applications. Nevertheless, with 70% of the area, the RS decoder can achieve a throughput 121% higher than the BCH decoder. A novel bit mapping scheme using Gray code is also proposed. Compared to direct bit mapping, the proposed scheme achieves 0.02 dB and 0.2 dB additional gains using RS and BCH codes, respectively, without any overhead.
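
The Gray-mapping idea is easy to show concretely. In the Python sketch below (illustrative only, not the thesis's evaluated scheme), adjacent voltage levels of a 2-bit MLC cell differ in exactly one bit under the Gray code, so the most likely read error, drifting to a neighboring level, corrupts a single bit, whereas a direct binary mapping flips two bits between levels 1 and 2:

    def gray_encode(level):
        # standard binary-reflected Gray code
        return level ^ (level >> 1)

    # direct mapping: levels 0..3 -> 00, 01, 10, 11  (1 <-> 2 flips two bits)
    # Gray mapping:   levels 0..3 -> 00, 01, 11, 10  (neighbors flip one bit)
    for level in range(4):
        print(level, format(gray_encode(level), "02b"))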

Committee:

Xinmiao Zhang (Advisor); Daniel Saab (Committee Member); Christos Papachristou (Committee Member)

Subjects:

Electrical Engineering

Keywords:

Reed-Solomon codes; factorization; algebraic soft-decision decoding; BCH codes; VLSI architecture; flash memory

Kolla, Purushotham Pothu Raju. Parallel Garbage Collection in Solid State Drives
MS, University of Cincinnati, 2012, Engineering and Applied Science: Computer Engineering

Flash memories are making their way into both desktop and server environments. Over the years, the major barrier to their wide adoption has been cost. However, with advancements in the semiconductor industry, the price-per-gigabyte (GB) gap between conventional disk drives and flash memories is narrowing. As such, flash memories can replace disks where disk utilization is low and extra spindles are added just to increase performance. Though they entered the storage architecture as caches and as hybrid counterparts to Hard Disk Drives (HDDs), they are gradually expected to replace the disk drives in servers and supercomputers [1]. The other major drawback of flash memory is its inability to sustain unlimited erase cycles, which directly limits its lifetime [2]. To improve reliability, it has been proposed to create redundancy [3].

Creating a Redundant Array of Independent Disks (RAID) is a conventional way of providing redundancy in hard disk drives (HDDs) [4]. The same idea has been adopted in Solid State Drives (SSDs). In addition to the conventional RAID techniques implemented at the device level (external RAID), redundancy can be created inside an SSD at a much lower level (internal RAID) [3]. The scope of this work is limited to internal RAID.

This work uses i-RAID [3], an architecture and simulator for internal RAID, as background and proposes two improvements. The first is to improve dynamic stripe formation using access patterns. The second is to utilize idle domains when i-RAID is not active by invoking parallel instances of garbage collection.

This thesis describes how these methods affect the performance of the device and explains how the internal parallelization of an SSD can be better exploited. Both methods are evaluated individually and the findings are presented. Though both methods have great potential to improve device performance, the earlier work on which the current work is based was structured in such a way that exploiting access patterns during stripe formation could not provide much improvement.
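
The parallel-GC idea can be sketched briefly. The Python below is a hypothetical illustration, not the simulator's code; the domain type, its is_idle() check, and garbage_collect_one_block() are placeholders. When i-RAID is inactive, garbage collection is launched concurrently on every idle domain instead of serially:

    from concurrent.futures import ThreadPoolExecutor

    def collect_one(domain):
        # placeholder per-domain GC step: pick a victim block, migrate its
        # valid pages, erase the block
        domain.garbage_collect_one_block()
        return domain.domain_id

    def parallel_gc(domains):
        idle = [d for d in domains if d.is_idle()]
        if not idle:
            return []
        # one GC instance per idle domain, running concurrently
        with ThreadPoolExecutor(max_workers=len(idle)) as pool:
            return list(pool.map(collect_one, idle))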

Committee:

Yiming Hu, PhD (Committee Chair); Wen-Ben Jone, PhD (Committee Member); Carla Purdy, PhD (Committee Member)

Subjects:

Computer Engineering

Keywords:

Solid State Drives; SSD; Garbage Collection; RAID; Flash Memory; Access Patterns

Chen, Feng. On Performance Optimization and System Design of Flash Memory based Solid State Drives in the Storage Hierarchy
Doctor of Philosophy, The Ohio State University, 2010, Computer Science and Engineering

As an emerging storage technology, the Flash Memory based Solid State Drive (SSD) has shown high potential to fundamentally change existing Hard Disk Drive (HDD) based storage systems. Unlike conventional magnetic disks, an SSD is built on semiconductor chips and has no mechanical components (e.g. rotating disk platters). This architectural difference brings many attractive technical features, such as high random data access performance and low power consumption. Most importantly, these unique features could address the long-standing technical limitations of conventional magnetic disks. For this reason, the SSD has been called a 'pivotal technology' that may completely revolutionize current computer storage systems.

On the other hand, SSDs also pose several critical challenges to application and system designers. First, owing to their divergent internal structure, SSDs are fundamentally different from rotating media, although the two share the same logical and physical interfaces. To date we still lack an insightful understanding of the performance characteristics of SSDs, both positive and negative, and of their implications for application and system designers. In this dissertation, we present a thorough experimental study of the unique features of SSDs. Second, although SSDs have shown great performance potential, especially for handling small and random data accesses, they are much more expensive than conventional hard disks. Even considering the decreasing price trend, the price gap between SSDs and HDDs will not disappear in the near future, and it significantly hinders the wide adoption of SSDs in practice, especially in cost-sensitive commercial systems. In this dissertation we present the design and implementation of a hybrid storage system, called Hystor, which integrates SSDs and HDDs to provide a cost-efficient solution for commercial applications with minimal changes to other system components and applications. Third, a unique merit of SSDs is their internal parallelism, which is the key to effectively exploiting their performance potential. Unfortunately, the existing literature mostly focuses on addressing the technical limitations of SSDs (e.g. random write issues) and rarely discusses internal parallelism. In this dissertation, we present our experimental studies of this unique opportunity. Our study shows that exploiting internal parallelism can bring great performance benefits, but we must also pay attention to some unexpected dynamics. Our case studies in database systems, a typical data-intensive application, indicate that internal parallelism can significantly improve the performance of real-world applications, and that many existing HDD-based application optimizations need to be revisited.
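
The role of queue depth in exploiting internal parallelism can be shown with a small, self-contained sketch (ours, not the dissertation's code): issuing reads concurrently keeps independent flash channels and planes busy, whereas a queue depth of one serializes them:

    from concurrent.futures import ThreadPoolExecutor

    BLOCK = 4096

    def read_block(path, offset):
        # one independent 4 KB read; many of these in flight at once let the
        # SSD service them on different channels/planes in parallel
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(BLOCK)

    def parallel_reads(path, offsets, depth=32):
        # 'depth' plays the role of the I/O queue depth
        with ThreadPoolExecutor(max_workers=depth) as pool:
            return list(pool.map(lambda off: read_block(path, off), offsets))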

Committee:

Xiaodong Zhang, PhD (Committee Chair); Ten-Hwang Lai, PhD (Committee Member); Russell Pitzer, PhD (Committee Member); Feng Qin, PhD (Committee Member)

Subjects:

Computer Science

Keywords:

solid state drive; flash memory; hard disk drive; hybrid storage system