Search Results

(Total results 24)

  • 1. Ziniel, Justin Message Passing Approaches to Compressive Inference Under Structured Signal Priors

    Doctor of Philosophy, The Ohio State University, 2014, Electrical and Computer Engineering

    Across numerous disciplines, the ability to generate high-dimensional datasets is driving an enormous demand for increasingly efficient ways of both capturing and processing this data. A promising recent trend for addressing these needs has developed from the recognition that, despite living in high-dimensional ambient spaces, many datasets have vastly smaller intrinsic dimensionality. When capturing (sampling) such datasets, exploiting this realization permits one to dramatically reduce the number of samples that must be acquired without losing the salient features of the data. When processing such datasets, the reduced intrinsic dimensionality can be leveraged to allow reliable inferences to be made in scenarios where it is infeasible to collect the amount of data that would be required for inference using classical techniques. To date, most approaches for taking advantage of the low intrinsic dimensionality inherent in many datasets have focused on identifying succinct (i.e., sparse) representations of the data, seeking to represent the data using only a handful of "significant" elements from an appropriately chosen dictionary. While powerful in their own right, such approaches make no additional assumptions regarding possible relationships between the significant elements of the dictionary. In this dissertation, we examine ways of incorporating knowledge of such relationships into our sampling and processing schemes. One setting in which it is possible to dramatically improve the efficiency of sampling schemes concerns the recovery of temporally correlated, sparse time series, and in the first part of this dissertation we summarize our work on this important problem. Central to our approach is a Bayesian formulation of the recovery problem, which allows us to access richly expressive models of signal structure. 
    While Bayesian sparse linear regression algorithms have often been shown to outperform their non-Bayesian counterparts, this frequently comes (open full item for complete abstract)

    Committee: Philip Schniter PhD (Advisor); Lee Potter PhD (Committee Member); Per Sederberg PhD (Committee Member) Subjects: Computer Science; Electrical Engineering
  • 2. Shah, Julin Mukeshkumar Compressive Sensing Analog Front End Design in 180 nm CMOS Technology

    Master of Science in Engineering (MSEgr), Wright State University, 2015, Electrical Engineering

    To reconstruct a signal waveform accurately, the signal must be sampled at a rate at least twice its bandwidth. Ultra Wideband (UWB) signals have extraordinary potential for high-rate information transmission, even though mobile communication has been the central focus of wireless research. UWB is an emerging area that involves the development of RF sensing and spectral applications over multi-GHz bandwidths. Even as technology improves, it is very challenging to build ADCs that keep pace with the growth of the ultra-wideband range. Compressive sensing performs “sampling” and “compressing” at the same time and exploits sparsity for commensurate power savings by enabling sub-Nyquist acquisition. The main idea behind compressive sensing is to recover specific signals from very few samples compared to conventional Nyquist sampling. In this thesis, a compressive sensing front end (CSFE) is designed and analyzed in a CMOS process to mitigate the sampling limitations of the architecture. The CSFE has four main components: a pseudo-random binary sequence generator (PRBS), a multiplier, an integrator, and an ADC. The PRBS (implemented by a Gold code generator) and the multiplier are designed in Cadence Spectre using TSMC 180 nm technology. The integrator and the 10-bit ADC are designed and verified using both Verilog-A and Matlab. Using a 4 GHz PRBS and an 800 MHz under-sampling ADC, the CSFE design can detect signal frequencies up to 2 GHz after applying the Orthogonal Matching Pursuit algorithm to reconstruct the under-sampled ADC data.

    Committee: Henry Chen Ph.D. (Committee Chair); Marian Kazimierczuk Ph.D. (Committee Member); Jiafeng Xie Ph.D. (Committee Member) Subjects: Electrical Engineering
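The acquisition-and-recovery chain in the abstract above (pseudo-random mixing, sub-Nyquist sampling, then Orthogonal Matching Pursuit) can be sketched in a few lines of NumPy. This is an illustrative toy, not code from the thesis: a random ±1 matrix stands in for the PRBS-mixer-integrator front end, and the signal length, measurement count, and sparsity level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 3                    # signal length, measurements, sparsity
# Random +/-1 rows stand in for the PRBS chipping/mixing front end
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

x = np.zeros(n)                          # k-sparse input signal
support = rng.choice(n, size=k, replace=False)
x[support] = np.array([1.5, -2.0, 1.0])

y = Phi @ x                              # sub-Nyquist measurements

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick atoms, re-fit by least squares."""
    residual, sel = y.copy(), []
    for _ in range(k):
        sel.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, sel], y, rcond=None)
        residual = y - Phi[:, sel] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[sel] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print(np.linalg.norm(x_hat - x) < 1e-6)
```

With far more measurements than nonzeros (64 versus 3 here), OMP identifies the support and recovers the coefficients essentially exactly.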
  • 3. Parker, Jason Approximate Message Passing Algorithms for Generalized Bilinear Inference

    Doctor of Philosophy, The Ohio State University, 2014, Electrical and Computer Engineering

    Recent developments in compressive sensing (CS) combined with increasing demands for effective high-dimensional inference techniques across a variety of disciplines have motivated extensive research into algorithms exploiting various notions of parsimony, including sparsity and low-rank constraints. In this dissertation, we extend the generalized approximate message passing (GAMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of CS, to handle several classes of bilinear inference problems. First, we consider a general form of noisy CS where there is uncertainty in the measurement matrix as well as in the measurements. Matrix uncertainty is motivated by practical cases in which there are imperfections or unknown calibration parameters in the signal acquisition hardware. While previous work has focused on analyzing and extending classical CS algorithms like the LASSO and Dantzig selector for this problem setting, we propose a new algorithm called Matrix Uncertain GAMP (MU-GAMP) whose goal is minimization of mean-squared error of the signal estimates in the presence of these uncertainties, without attempting to estimate the uncertain measurement matrix itself. Next, we extend GAMP to the generalized-bilinear case, in which the measurement matrix is estimated jointly with the signals of interest, enabling its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. We derive this Bilinear GAMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. 
In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assu (open full item for complete abstract)

    Committee: Philip Schniter (Advisor); Lee Potter (Committee Member); Emre Ertin (Committee Member) Subjects: Electrical Engineering
  • 4. Halman, Jennifer On the Use of Physical Basis Functions in a Sparse Expansion for Electromagnetic Scattering Signatures

    Master of Science, The Ohio State University, 2014, Electrical and Computer Engineering

    Radar images are created from measurements of the electromagnetic field scattered from an object or scene of interest. The scattered field defines the radar signature as a function of frequency and aspect angle. High resolution radar images and radar signatures are used for target recognition, tracking, and hardware-in-the-loop testing. High resolution radar images of electrically large targets may require a large amount of data to be measured, stored, and processed. A sparse representation of this data may allow the radar signature to be efficiently measured, stored, and rapidly reconstructed on demand. Compressed sensing is applied to obtain the sparse representation without measuring the full data set. “Compressed sensing” has different interpretations, but in this thesis it refers to using non-adaptive, random samples of the measured signal, with no a priori knowledge of the signal. According to compressed sensing theory, this is possible if the radar signature can be expressed in terms of a sparse basis. If a signal y can be approximated by K non-zero coefficients in the sparse basis (“K-sparse”), the coefficients may be obtained with random sampling of the signal at sub-Nyquist rates provided that K is much smaller than the total number of Nyquist samples. The random sampling is non-adaptive (i.e., future samples are independent of previous samples), and the number of samples required is primarily related to the sparseness of the signal, not to the bandwidth or the size of the dictionary from which the basis functions are selected. The objective of this thesis is to investigate the effectiveness of physical basis functions, defined as point scatter functions with frequency-dependent amplitudes characteristic of physical scattering mechanisms, to provide an improved sparse basis in which to expand radar signatures. The goal is to represent a radar signature accurately with the fewest terms possible and with the fewest measurements. 
Use of physical basi (open full item for complete abstract)

    Committee: Robert Burkholder (Advisor); Lee Potter (Committee Member) Subjects: Electrical Engineering; Electromagnetics
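The “K-sparse” notion defined above is easy to demonstrate numerically: a signal composed of a few tones is exactly represented by a handful of coefficients in a Fourier dictionary. A toy NumPy sketch, where the Fourier basis is only a stand-in for the thesis's physical basis functions, and the tone frequencies and grid size are arbitrary:

```python
import numpy as np

n, K = 256, 4
t = np.arange(n) / n
# Signal that is K-sparse in the Fourier dictionary (2 real tones -> 4 bins)
y = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 37 * t)

c = np.fft.fft(y)
idx = np.argsort(np.abs(c))[-K:]         # keep only the K largest coefficients
c_sparse = np.zeros_like(c)
c_sparse[idx] = c[idx]
y_hat = np.fft.ifft(c_sparse).real       # reconstruct from K coefficients

print(np.allclose(y, y_hat, atol=1e-9))  # four coefficients suffice
```

Because the tones fall exactly on FFT bins, the four retained coefficients reconstruct the signal to within floating-point roundoff; off-grid tones would need a richer dictionary, which is the motivation for physical basis functions.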
  • 5. Plummer, Joseph Improving techniques and clinical utility of hyperpolarized 129-Xenon lung MRI

    PhD, University of Cincinnati, 2024, Engineering and Applied Science: Biomedical Engineering

    Hyperpolarized 129Xe MRI is a monumental imaging advancement for studying pulmonary structure and function. Following an inhalation of hyperpolarized 129Xe gas, it enables regional quantification of gas diffusion inside the airways, alveolar sacs, interstitial blood-air membrane, and the red blood cells. It is often described as the only 4-dimensional pulmonary function test, as it can generate functional images with high spatio-temporal resolution and no radiation. Consequently, it has gained considerable interest from both the lung and MRI communities, and has recently obtained approval by the U.S. Food and Drug Administration as a clinical tool. Despite its power, there are many areas in which hyperpolarized 129Xe MRI can be improved. One of its biggest technical challenges is that its magnetization suffers non-recoverable decay with each excitation. This constrains the sequences to settings that are rapid and efficient in their use of magnetization. Furthermore, its magnetization decay is highly susceptible to regional changes in flip angle, which is problematic when imaging the large thoracic-cavity field-of-view, as inhomogeneities in B1 field strength can be severe. From a clinical standpoint, 129Xe MRI is relatively novel, so the true definitions of healthy metrics are yet to be rigorously explored, especially with regard to demographics such as age. The goal of this dissertation is to meet these needs and deliver a series of technical means that improve hyperpolarized 129Xe MRI and its impact in the clinic. Firstly, we propose a keyhole reconstruction pipeline for 2D spiral MRI that supports the calculation of accurate flip angle maps with minimal additional data cost, and subsequently, analytical B1-inhomogeneity correction. Next, we propose a novel compressed sensing reconstruction framework that incorporates hyperpolarized decay directly into the forward model, subsequently ena (open full item for complete abstract)

    Committee: Laura Walkup Ph.D. (Committee Chair); Mary Kate Manhard Ph.D. (Committee Member); Thomas Talavage Ph.D. (Committee Member); Zackary Cleveland Ph.D. (Committee Member) Subjects: Radiology
  • 6. Agarwal, Tushar Improving Deep Representations by Incorporating Domain Knowledge and Modularization for Synthetic Aperture Radar and Physiological Data

    Doctor of Philosophy, The Ohio State University, 2022, Electrical and Computer Engineering

    Machine Learning (ML) using Artificial Neural Networks (ANNs), referred to as Deep Learning (DL), is a very popular and powerful method of statistical inference. A primary advantage of deep learning has been the automatic learning of informative features (which encode the data and are referred to as deep representations henceforth) based on gradient-descent optimization of an objective function. While DL is applicable to problem domains where hand-crafted features are not readily available, its performance is critically dependent on other factors like dataset size and model architecture. Despite recent advances in the field, the question of how to modify the DL framework to incorporate domain knowledge or to disentangle factors of variation warrants more research. Until recently, most popular works in the DL literature have primarily employed the inductive bias of architectures (e.g., translational invariance in convolutional neural nets) and relied on the availability of large labeled datasets for improved representation learning. Unfortunately, curating such large datasets is costly and not practical for many application areas. In this dissertation, we study methods to improve learned representations by incorporating domain knowledge into the learning process and through disentangling factors of variation. First, we present a sparse-modeling based data augmentation method for tomographic images and use it to incorporate domain knowledge of Synthetic Aperture Radar (SAR) target phenomenology into deep representations. We validate the improvements in learned representations by using them for a benchmark classification problem of Automatic Target Recognition (ATR), where we establish a new state-of-the-art on subsampled datasets. Second, we propose a DL-based hierarchical modeling strategy for a physiological signal generation process, which in turn can be used for data augmentation. 
Based on the physiology of cardiovascular system function, we propose a (open full item for complete abstract)

    Committee: Emre Ertin (Advisor); Lee Potter (Committee Member); Kiryung Lee (Committee Member) Subjects: Artificial Intelligence; Biomedical Engineering; Computer Engineering; Computer Science; Electrical Engineering; Health Care; Remote Sensing; Scientific Imaging; Statistics
  • 7. Ma, Zizhong Bi-directional Sampling in Partial Fourier Reconstruction

    Master of Science, The Ohio State University, 2022, Electrical and Computer Engineering

    This thesis explores a hypothesis for magnetic resonance image (MRI) formation: a new acquisition pattern, bi-directional sampling, can yield improved imaging compared to traditional partial Fourier (PF) acquisition at the same sampling rate. Two image reconstruction techniques are applied to evaluate the acquisition methods: compressed sensing with L1 wavelet regularization, and an unrolled method with a deep-net prior. Three metrics are used to evaluate image quality: normalized mean squared error (NMSE), peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM). Additionally, residual maps are applied to evaluate the reconstruction. The results show that the traditional PF sampling method works better than the bi-directional PF sampling method.

    Committee: Rizwan Ahmad (Committee Member); Lee Potter (Advisor) Subjects: Electrical Engineering
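Of the three image-quality metrics named above, NMSE and PSNR have simple closed forms; SSIM is more involved and is typically taken from a library such as scikit-image. A minimal NumPy sketch, noting that the choice of peak value in PSNR is a convention that varies between papers:

```python
import numpy as np

def nmse(ref, est):
    """Normalized mean squared error: ||est - ref||^2 / ||ref||^2."""
    return np.sum((est - ref) ** 2) / np.sum(ref ** 2)

def psnr(ref, est, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to ref's max magnitude."""
    peak = np.max(np.abs(ref)) if peak is None else peak
    mse = np.mean((est - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform image with a constant 0.1 error everywhere
ref = np.ones((8, 8))
est = ref + 0.1
print(round(nmse(ref, est), 4))   # 0.01
print(round(psnr(ref, est), 1))   # 20.0
```

Lower NMSE and higher PSNR indicate a better reconstruction; unlike these pixelwise metrics, SSIM also accounts for local structure.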
  • 8. Pruitt, Aaron Pushbutton 4D Flow Imaging

    Doctor of Philosophy, The Ohio State University, 2021, Biomedical Engineering

    Cardiovascular disease (CVD) is the leading cause of mortality in the U.S. and worldwide. Over the past several decades, the healthcare costs associated with CVD have steadily risen to more than 200 billion dollars per year and are expected to rise further with the aging population. Cardiovascular MRI (CMR) is a well-established imaging technique that provides the most comprehensive evaluation of the cardiovascular system. CMR is considered the gold standard for evaluating ventricular function and myocardial viability. Despite the growing evidence of its advantages over other imaging modalities and its potential as a “one-stop-shop” diagnostic tool, the role of CMR in clinical cardiology remains limited. One major impediment to its wider usage is the inefficient acquisition that makes CMR exams excessively long, often lasting for more than an hour; this diminishes its efficiency and cost-effectiveness relative to other imaging modalities. The current paradigm offers either a prolonged segmented acquisition that requires regular cardiac rhythm and multiple breath-holds or a fallback option of real-time, free-breathing acquisition with degraded spatial and temporal resolutions. Recently, 3D imaging has gained significant interest due to its volumetric coverage and isotropic resolution. In particular, 4D flow imaging has emerged as a powerful tool that provides temporally and spatially resolved velocity maps of the blood in the heart and great vessels. A major technical limitation of 4D flow imaging is the long acquisition, which makes the images susceptible to motion artifacts. In this work, we present a framework that provides whole-heart coverage and enables a rapid, quantitative assessment of hemodynamics. In addition, the method employs self-gating and thus extracts and compensates for the physiological motions using the information in the MRI data itself, obviating the need for electrocardiogram or respiratory gating. 
Novel extensions of the method, whe (open full item for complete abstract)

    Committee: Rizwan Ahmad (Advisor); Rengasayee Veeraraghavan (Committee Member); Orlando Simonetti (Committee Member); Jun Liu (Committee Member) Subjects: Biomedical Engineering; Medical Imaging
  • 9. McCamey, Morgan Deep Learning for Compressive SAR Imaging with Train-Test Discrepancy

    Master of Science in Computer Engineering (MSCE), Wright State University, 2021, Computer Engineering

    We consider the problem of compressive synthetic aperture radar (SAR) imaging with the goal of reconstructing SAR imagery in the presence of undersampled phase history. While this problem is typically considered in compressive sensing (CS) literature, we consider a variety of deep learning approaches where a deep neural network (DNN) is trained to form SAR imagery from limited data. At the cost of computationally intensive offline training, on-line test-time DNN-SAR has demonstrated orders of magnitude faster reconstruction than standard CS algorithms. A limitation of the DNN approach is that any change to the operating conditions necessitates a costly retraining procedure. In this work, we consider development of DNN methods that are robust to discrepancies between training and testing conditions. We examine several approaches to this problem, including using input-layer dropout, augmented data support indicators, and DNN-based robust approximate message passing.

    Committee: Joshua Ash Ph.D. (Advisor); Tanvi Banerjee Ph.D. (Committee Member); Mateen Rizki Ph.D. (Committee Member) Subjects: Computer Engineering; Computer Science; Electrical Engineering
  • 10. Grunden, Beverly On the Characteristics of a Data-driven Multi-scale Frame Convergence Algorithm

    Doctor of Philosophy (PhD), Wright State University, 2021, Interdisciplinary Applied Science and Mathematics PhD

    In recent years, data-driven representation methods have been introduced to improve compressed sensing image reconstruction. This research explores a recently proposed algorithm that utilizes a data-driven multi-scale Parseval frame for image compression. Because a sensing matrix by itself may be insufficient to obtain a sparse representation for an image, a frame is combined with the compressed sensing matrix to increase flexibility in obtaining a sparse representation. The two-step algorithm optimizes the representation by alternating between adjusting a sparse coefficient vector and tuning a small filterbank which determines the frame. The structure of the frame and its relationship with the underlying filterbank were examined. Numerical experiments to characterize the algorithm include a search for the appropriate regularization parameters that control emphasis between the two terms of the objective function, examination of the effect of image size, a parameter sweep of the relaxation factor of the Weak Matching Pursuit function in the first step of the algorithm, and the relaxation of the Parseval constraint in the second step. Performance metrics used to assess the numerical results include execution time and number of loops to reach convergence, sparsity of the representation, and two image quality measures – peak signal to noise ratio (PSNR) and Structural Similarity (SSIM). The experiments indicated the algorithm takes a very long time to reach convergence, even for images of moderate size, and that reconstructions will result in greater accuracy on image patches with a small number of pixels (fewer than 100). It was also found that algorithm performance varies depending on the image format used to specify image brightness of the pixels. Finally, the Parseval constraint could be removed from the algorithm with improvement in execution time and sparsity, but without loss of accuracy.

    Committee: Travis J. Bemrose Ph.D. (Committee Co-Chair); Jason Deibel Ph.D. (Committee Co-Chair); Qingbo Huang Ph.D. (Committee Member); Steen Pedersen Ph.D. (Committee Member) Subjects: Applied Mathematics; Mathematics
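The Parseval property referenced above means the frame's analysis operator A satisfies AᵀA = I, so any signal is recovered exactly from its redundant coefficients by the transpose. A minimal check using the classic three-vector “Mercedes-Benz” frame in R², not the data-driven multi-scale frame studied in the dissertation:

```python
import numpy as np

# Mercedes-Benz frame: 3 evenly spaced unit vectors in R^2, scaled by
# sqrt(2/3) so the frame is Parseval (analysis operator A has A.T @ A == I).
A = np.sqrt(2 / 3) * np.array([
    [0.0, 1.0],
    [-np.sqrt(3) / 2, -0.5],
    [ np.sqrt(3) / 2, -0.5],
])

# Parseval check: A.T @ A is the 2x2 identity
print(np.allclose(A.T @ A, np.eye(2)))   # True

# Perfect reconstruction from the redundant (3 > 2) coefficient vector
x = np.array([1.0, -2.0])
coeffs = A @ x
print(np.allclose(A.T @ coeffs, x))      # True
```

The redundancy (three coefficients for a two-dimensional signal) is what gives the algorithm extra flexibility in finding a sparse representation; relaxing the Parseval constraint, as the experiments above explore, trades this exact-reconstruction guarantee for speed and sparsity.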
  • 11. Shabara, Yahia Establishing Large-Scale MIMO Communication: Coding for Channel Estimation

    Doctor of Philosophy, The Ohio State University, 2021, Electrical and Computer Engineering

    The surge in mobile broadband data demands is expected to surpass the available spectrum capacity below 6 GHz. This expectation has prompted the exploration of millimeter-wave (mmWave) frequency bands as a candidate technology for next-generation wireless networks, like 5G-NR and WiFi ad/ay. However, numerous challenges to deploying mmWave communication systems, including channel estimation, need to be met before practical deployments are possible. The channel estimation problem is particularly complex due to the large antenna arrays, i.e., large-MIMO, used in mmWave transceivers. Large-MIMO antennas offer significant performance gains in terms of improved spectral efficiency, superior spatial multiplexing capabilities, as well as the ability to deliver high transmit signal power, which is crucial for compensating for the severe attenuation of high-frequency signals. However, large-MIMO channel estimation is complex since it entails the discovery of large-sized channel matrices, which is a daunting task and may necessitate a large number of measurements. Channel estimation is especially challenging for "initial link establishment", where limited prior knowledge about the channel is available. Reducing the number of necessary measurements thus holds the key to faster link establishment. For sparse MIMO channels, such reduction is possible due to the prior knowledge that the channel can be represented in a domain in which most of its components are negligibly small. The problem of "Fast Link Establishment" is the focus of this dissertation. In particular, we focus on the development and evaluation of sparse channel estimation algorithms that only require a small number of measurements. 
We divide this dissertation into three research objectives, as follows: First: We seek to develop a reliable channel estimation framework that: (1) requires a limited number of measurements (compared to the channel dimensions), and (2) operates using energy-efficient transcei (open full item for complete abstract)

    Committee: Eylem Ekici (Advisor); C. Emre Koksal (Advisor); Ness Shroff (Committee Member) Subjects: Communication; Computer Science; Electrical Engineering
  • 12. Dong, Huiming In Vivo Aortic MR Elastography: Technical Development and Application in Abdominal Aortic Aneurysm

    Doctor of Philosophy, The Ohio State University, 2020, Biomedical Engineering

    Abdominal aortic aneurysm (AAA) is a vascular disorder in which an abnormal and non-reversible dilation develops within the infrarenal aorta. Most AAAs remain asymptomatic until catastrophic rupture. The overall mortality rate resulting from AAA rupture is between 65% and 90%. AAA rupture claims about 15,000 lives in the United States per year, making it the 15th leading cause of death in the country. Recent advances in cardiovascular medicine have not lowered the AAA mortality rate globally. Prevention of rupture is a main goal of AAA management. Currently, AAA size has been used to evaluate its rupture potential. Aneurysm repair is usually recommended when the maximum diameter exceeds 5.5 cm. Multiple studies have suggested that this criterion may not be reliable, especially for small AAAs (<5.5 cm). Recent studies have demonstrated that biomechanical properties such as AAA stiffness play an important role in determining aneurysm expansion and rupture. The stiffness of an aneurysm is associated with its extracellular matrix (ECM) components and the way in which they are organized. Extensive ECM remodeling causes loss of wall strength and then eventual wall rupture. The substantial remodeling of elastin and collagen, as well as mineralization, can be assessed by studying the variation of AAA stiffness, making aortic stiffness a potential biomarker for evaluating the progression of an AAA. Ex vivo mechanical testing is an excellent research tool, but it is not a viable technique for stiffness measurement in a clinical setting. Currently, few techniques are available to measure in vivo aortic stiffness. Pulse wave velocity-derived stiffness, despite its wide acceptance, has its own technical limitations. Ultrasound-based elastography largely relies on semi-quantitative estimation. Magnetic resonance elastography (MRE) is a non-invasive phase-contrast MR technique in which the shear modulus of soft tissues can be spatially estimated. 
In our pilot studies, the initi (open full item for complete abstract)

    Committee: Arunark Kolipaka PhD (Advisor); Rizwan Ahmad PhD (Committee Member); Gunjan Agarwal PhD (Committee Member); Michael Go MD (Committee Member) Subjects: Biomechanics; Biomedical Engineering; Health Care; Medical Imaging; Radiology
  • 13. Petrack, Alec Single-Pixel Camera Based Spatial Frequency Domain Imaging for Non-Contact Tissue Characterization

    Master of Science in Biomedical Engineering (MSBME), Wright State University, 2020, Biomedical Engineering

    Optical imaging has demonstrated potential as a medical imaging modality for measuring tissue functionality. Recently, interest in fluorescence-guided surgery has emerged from improvements in optical imaging that have allowed real-time feedback. Of the optical imaging modalities, spatial frequency domain imaging (SFDI) has gained a lot of interest. Unlike spectroscopic techniques that measure bulk tissue properties, such as functional near-infrared spectroscopy (fNIRS) and frequency domain spectroscopy, SFDI quantifies tissue functionality locally and over a wide field, making it practical for clinical applications. Unfortunately, traditional SFDI systems use multi-pixel detectors, which may not exhibit ideal spectral characteristics, may have limited sensitivity, or may be expensive or bulky. On the other hand, avalanche photodiodes (APDs) and single-photon counting modules (SPCMs) are much more sensitive over the spectrum ideal for optical imaging, inexpensive, and compact. Traditionally, an array of photodiodes is required to capture an image, but with the advent of single-pixel cameras, entire images can be captured with a single photodiode. In this thesis, a novel single-pixel camera (SPC) is used to capture an image of the light field projected by an SFDI system to explore its feasibility as a detection method relative to a traditional charge-coupled device (CCD) or scientific complementary metal-oxide semiconductor (sCMOS) camera. To determine the feasibility of single-pixel SFDI, both sCMOS and SPC SFDI implementations were built to measure the optical properties of a brain-tissue-simulating phantom. In the results chapter, the mean optical scattering and absorption properties are reported for regions of high and low optical absorption, indicating single-pixel camera spatial frequency domain imaging (SPC SFDI) is viable for certain applications. 
In Chapter 1, I provide the motivation and significance of single pixel spatial frequency do (open full item for complete abstract)

    Committee: Ulas Sunar Ph.D. (Advisor); Tarun Goswami D.Sc. (Committee Member); Josh Ash Ph.D. (Committee Member) Subjects: Biomedical Engineering
  • 14. Lin, Ethan Compressed Sensing for Electronic Radio Frequency Receiver: Detection, Sensitivity, and Implementation

    Doctor of Philosophy (PhD), Wright State University, 2016, Engineering PhD

    The current state-of-the-art for digital receiver bandwidth coverage is now reaching multi-GHz. The conventional wideband digital receiver design is based on Nyquist information theory, and its bandwidth coverage is limited by the Nyquist sampling rate. Therefore, receiver performance depends heavily on high-speed analog-to-digital converter (ADC) technology and computation hardware such as FPGAs. Because it has been proved that, under certain conditions, a Nyquist waveform can be restored from a reduced sampling rate, the compressed sensing (CS) technique has become an attractive solution for wideband digital receiver development. In this dissertation, a performance analysis of compressed sensing in receiver applications is conducted. The compressed sensing receiver uses two modulation and sampling schemes: 1) Pseudo Random Code (PRC), a uniform sampling approach, and 2) a proposed non-uniform sampling (NUS) approach. Three algorithms are used to process the compressed signals: 1) Orthogonal Matching Pursuit (OMP), 2) Parameter Estimation (PE), and 3) Nesterov's algorithm (NESTA). Signal detection thresholds for the compressed sensing receivers are determined from the additive white Gaussian noise (AWGN) distribution through its probability density function (PDF), using the best-fitting analog function for a false alarm rate of 10⁻⁷. Remedy algorithms are developed to solve the frequency-misread problem caused by CS modulations. Signal detection and sensitivity of the compressed sensing receivers are measured and presented. GPU-accelerated parallel computing is adopted to process the compressed signals. Computing results of OMP and PE from an NVIDIA Tesla K40 GPU are presented.

    Committee: Henry Chen Ph.D. (Advisor); Marty Emmert Ph.D. (Committee Member); Saiyu Ren Ph.D. (Committee Member); Meilin Liu Ph.D. (Committee Member); Wen-Ben Jone Ph.D. (Committee Member) Subjects: Electrical Engineering
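For intuition on the threshold-setting step, a common textbook simplification models each detection statistic as the magnitude of complex AWGN, which is Rayleigh distributed and gives the closed form T = σ·sqrt(2·ln(1/Pfa)). The dissertation instead fits the noise PDF empirically, so the NumPy sketch below is only illustrative; its Monte Carlo check uses a looser rate of 10⁻³, since verifying 10⁻⁷ directly would require on the order of 10⁹ samples.

```python
import numpy as np

def rayleigh_threshold(sigma, pfa):
    """Threshold T so that P(|noise| > T) = pfa, when the detection statistic
    is the magnitude of complex AWGN with per-component std sigma (Rayleigh)."""
    return sigma * np.sqrt(2.0 * np.log(1.0 / pfa))

sigma = 1.0
T = rayleigh_threshold(sigma, 1e-7)      # threshold for the 10^-7 design rate

# Empirical sanity check at pfa = 1e-3 with one million noise samples
rng = np.random.default_rng(1)
T3 = rayleigh_threshold(sigma, 1e-3)
noise = rng.normal(0.0, sigma, (10**6, 2))
mag = np.hypot(noise[:, 0], noise[:, 1])
print(abs((mag > T3).mean() - 1e-3) < 5e-4)   # True
```

Lowering the false-alarm rate raises the threshold only logarithmically, which is why very stringent rates like 10⁻⁷ remain practical.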
  • 15. Ting, Samuel An Efficient Framework for Compressed Sensing Reconstruction of Highly Accelerated Dynamic Cardiac MRI

    Doctor of Philosophy, The Ohio State University, 2016, Biomedical Engineering

    The research presented in this work seeks to develop, validate, and deploy practical techniques for improving diagnosis of cardiovascular disease. In the philosophy of biomedical engineering, we seek to identify an existing medical problem having significant societal and economic effects and address this problem using engineering approaches. Cardiovascular disease is the leading cause of mortality in the United States, accounting for more deaths than any other major cause of death in every year since 1900 with the exception of the year 1918. Cardiovascular disease is estimated to account for almost one-third of all deaths in the United States, with more than 2150 deaths each day, or roughly 1 death every 40 seconds. In the past several decades, a growing array of imaging modalities has proven useful in aiding the diagnosis and evaluation of cardiovascular disease, including computed tomography, single photon emission computed tomography, and echocardiography. In particular, cardiac magnetic resonance imaging is an excellent diagnostic tool that can provide within a single exam a high-quality evaluation of cardiac function, blood flow, perfusion, viability, and edema without the use of ionizing radiation. The scope of this work focuses on the application of engineering techniques for improving imaging using cardiac magnetic resonance, with the goal of improving the utility of this powerful imaging modality. Dynamic cine imaging, or the capturing of movies of a single slice or volume within the heart or great vessel region, is used in nearly every cardiac magnetic resonance imaging exam, and adequate evaluation of cardiac function and morphology for diagnosis and evaluation of cardiovascular disease depends heavily on both the spatial and temporal resolution as well as the image quality of the reconstructed cine images. 
This work focuses primarily on image reconstruction techniques utilized in cine imaging; however, the techniques discussed are also relevant t (open full item for complete abstract)

    Committee: Orlando P. Simonetti PhD (Advisor); Lee C. Potter PhD (Committee Member); Rizwan Ahmad PhD (Committee Member); Jun Liu PhD (Committee Member) Subjects: Applied Mathematics; Electrical Engineering; Health; Health Care; Medical Imaging; Medicine; Radiology; Scientific Imaging
  • 16. Cowen, Benjamin Bregman Operator Splitting with Variable Stepsize for Total Generalized Variation Based Multi-Channel MRI Reconstruction

    Master of Sciences, Case Western Reserve University, 2015, Applied Mathematics

    This paper presents a fast algorithm for total generalized variation (TGV) based reconstruction of magnetic resonance images collected by a technique known as partial parallel imaging (PPI). TGV is a generalization of the commonly employed total variation (TV) regularizer. TV reconstructs piecewise-constant images and is known to produce oil-painting artifacts, whereas TGV reconstructs images with piecewise-polynomial intensities and largely avoids this issue. The proposed algorithm combines the Bregman Operator Splitting with Variable Stepsize (BOSVS) approach derived by Chen, Hager, et al. [8] with the closed-form expressions, derived by Guo, Qin, and Yin [13], for the TGV subproblem that arises in the alternating direction method of multipliers. The ill-conditioned inversion matrix that arises in PPI is approximated according to a stepsize rule similar to that in BOSVS: the rule starts with a Barzilai-Borwein step, then uses a line search to ensure convergence and efficiency. The proposed regularizer is shown to achieve better results than TV, especially for reconstructing smooth details, at sampling rates as low as 7.87%.

    Committee: Weihong Guo PhD (Advisor); Steven Izen PhD (Committee Member); Julia Dobrosotskaya PhD (Committee Member) Subjects: Applied Mathematics
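    The Barzilai-Borwein step that seeds a BOSVS-style stepsize rule can be sketched on a generic least-squares problem. This is an illustrative toy, not the thesis's PPI reconstruction: the problem sizes are made up, and the convergence-safeguarding line search that BOSVS adds is omitted.

```python
import numpy as np

def bb_gradient_descent(A, b, iters=50):
    """Minimize 0.5*||Ax - b||^2 using Barzilai-Borwein (BB1) stepsizes."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)          # gradient at current iterate
    alpha = 1e-3                   # initial stepsize before BB kicks in
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        # BB1 rule: alpha = <s, s> / <s, y>, a secant approximation
        # of the inverse curvature along the last step.
        denom = s @ y
        alpha = (s @ s) / denom if denom > 1e-12 else 1e-3
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
x_hat = bb_gradient_descent(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```

    In BOSVS this BB value is only a starting guess; a line search then shrinks it as needed to guarantee convergence despite the ill-conditioned PPI operator.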
  • 17. Chaturvedi, Amal Wideband Signal Delay and Direction of Arrival Estimation using sub-Nyquist Sampling

    PhD, University of Cincinnati, 2014, Engineering and Applied Science: Electrical Engineering

    This work is dedicated to estimating the time delay and direction of arrival (DOA) of wideband signals impinging on a sensor array using sub-Nyquist sampling. Compressed Sensing, or Compressive Sampling (CS), has recently emerged as a breakthrough technique. The strength of CS lies in solving an underdetermined problem with very high probability and almost no error, provided the solution is sparse. The problem of estimating the DOA of sources has long been a focus of researchers worldwide. Most existing DOA estimation methods are designed for narrowband signals. Several methods have been proposed for wideband signals, where the wideband signal is usually decomposed into multiple frequency bins and narrowband techniques are applied to each bin. In a sensor array, the time delay between two sensors depends on the position of the sources, so estimating the time delay can be used to find the direction of arrival. In this research, I estimate the time delay between sensors using the cross-correlation of the signals received at the sensor array, and from it the DOA. The wider the bandwidth of a signal, the sharper the peak of its autocorrelation; thus, it is safe to say that the autocorrelation of a wideband signal is sparse. This sparsity, together with a CS reconstruction algorithm, is used in this dissertation to solve the underdetermined system of equations and estimate the autocorrelation, and hence the DOA, of the signals. For sub-Nyquist sampling, the Modulated Wideband Converter architecture is used. We show that the correlation of sub-Nyquist samples can be used to reconstruct the sparse correlation of Nyquist samples; the peaks in the reconstructed Nyquist correlation are then used to estimate the time delay and hence the DOA. Another very important area of research involving time delay estimation is radar. 
In radar, the receiver rece (open full item for complete abstract)

    Committee: H. Howard Fan Ph.D. (Committee Chair); James Caffery Ph.D. (Committee Member); Donald French Ph.D. (Committee Member); William Wee Ph.D. (Committee Member); Xuefu Zhou Ph.D. (Committee Member) Subjects: Electrical Engineering
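    The core idea of mapping a cross-correlation peak to a delay, and the delay to a DOA, can be sketched for a two-sensor array at the full Nyquist rate. This is a simplified illustration of the correlation-peak principle, not the dissertation's sub-Nyquist pipeline; the sample rate, spacing, and propagation speed are arbitrary demo values.

```python
import numpy as np

fs = 1e6            # sample rate (Hz), illustrative
c = 343.0           # propagation speed (m/s), illustrative
d = 0.05            # sensor spacing (m), illustrative
true_delay = 20     # inter-sensor delay in samples

# Wideband (white) source: its autocorrelation is a sharp spike.
rng = np.random.default_rng(1)
s = rng.standard_normal(4096)
x1 = s
x2 = np.roll(s, true_delay)              # delayed copy at sensor 2

# The cross-correlation peaks at the inter-sensor lag.
corr = np.correlate(x2, x1, mode="full")
lag = int(np.argmax(corr)) - (len(x1) - 1)

# Far-field geometry maps delay tau to DOA: sin(theta) = c*tau/d.
tau = lag / fs
theta = np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))
print(lag, theta)
```

    The dissertation's contribution is recovering this same sparse correlation from sub-Nyquist (Modulated Wideband Converter) samples via CS, rather than from Nyquist-rate data as above.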
  • 18. Pierre, Eric Reliable Use of Acquired and Simulated Signal Databases to Reduce MRI Acquisition Time

    Doctor of Philosophy, Case Western Reserve University, 2014, Biomedical Engineering

    Gains in MRI acquisition time can be reinvested in improved patient comfort, higher resolution, or reduced opportunity cost for hospitals. In general, the most drastic increase in imaging speed is achieved via signal undersampling, compensated by prior information during image reconstruction. Two successful undersampled acquisition techniques are Parallel Imaging (PI) and Magnetic Resonance Fingerprinting (MRF); both could benefit from optimized exploitation of prior information. We first improved PI reconstruction quality by exploiting local anatomical correlation across a large collection of subject-independent images, while developing a robust framework to avoid prior-information bias. Anatomical correlation from same-subject images was similarly exploited for PI reconstruction in dynamic cardiac MRI. Finally, in the case of MRF, a robust iterative framework was developed to exploit temporal template dictionaries to their full extent, reducing acquisition time for in vivo quantitative imaging by a factor of 3.

    Committee: Mark Griswold Ph.D. (Committee Chair); Jeffrey Duerk Ph.D. (Committee Member); Jeffrey Sunshine MD/Ph.D. (Committee Member); David Wilson Ph.D. (Committee Member) Subjects: Medical Imaging
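    The dictionary-matching step at the heart of MRF can be sketched as a maximum normalized inner product between a measured signal evolution and precomputed templates. This is a toy stand-in, not the dissertation's iterative framework: the exponential "fingerprints", grid, and noise level below are invented, whereas real MRF templates come from Bloch simulation.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 2.0, 200)                   # acquisition time points (s)
T1_grid = np.linspace(0.2, 2.0, 100)             # candidate T1 values (s)

# Dictionary: one simulated fingerprint per candidate T1, normalized so
# matching reduces to a maximum inner product (matched filter).
D = np.exp(-t[None, :] / T1_grid[:, None])
D /= np.linalg.norm(D, axis=1, keepdims=True)

true_T1 = 1.1
sig = np.exp(-t / true_T1) + 0.01 * rng.standard_normal(t.size)

best = int(np.argmax(D @ sig))                   # best-matching template
print(T1_grid[best])
```

    An iterative framework like the one described above revisits this matching inside a reconstruction loop, rather than applying it once to aliased data.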
  • 19. O'Donnell, Andrew Sparsity and Compressed Sensing for Electromagnetic Scattering and Radiation Applications

    Doctor of Philosophy, The Ohio State University, 2014, Electrical and Computer Engineering

    Real-world electromagnetics problems often involve analysis of electrically large structures. Accurate knowledge of the radar signature of targets is needed for many applications within the defense community, including target recognition and systems analysis. Finding a complete radar signature for a large target requires a large amount of data over frequency and aspect angle to satisfy the Nyquist sampling criterion. Fortunately, the scattering from electrically large targets often comes from a small set of localized target features. This characteristic invites the use of sparsity and Compressed Sensing to reduce the amount of data needed to characterize a target. State-of-the-art models based on high-frequency asymptotic physics use scattering centers that scale as half-integer powers of frequency, but many scattering features on real-world targets do not follow this model. A more general model is proposed here that combines physical basis functions with a polynomial basis, resulting in a robust representation that can compress scattering centers with various types of frequency dependence over very wide bandwidths. This mixed basis is extended to include angular variation, allowing simultaneous radar-signature compression in frequency and angle. Additionally, the model resolves two issues that plague automated scattering-center compression algorithms, namely grid mismatch and merged scattering centers caused by resolution limits. Because we have a sparse representation of the radar signature, Compressed Sensing theory can be applied to acquire the scattering-center representation from minimal sampled data. It is shown that the mixed basis can acquire the scattering-center representation through Compressed Sensing with significantly fewer samples than other scattering-center models. 
In addition to electromagnetic scattering applications, this dissertation investigates sparsity and Compressed Sensing for large ape (open full item for complete abstract)

    Committee: Robert Burkholder (Advisor); Joel Johnson (Committee Member); Fernando Teixeira (Committee Member) Subjects: Electrical Engineering
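    The "recover a sparse representation from few samples" step that CS contributes here can be sketched with a generic greedy solver. Orthogonal Matching Pursuit on a random Gaussian sensing matrix is a standard textbook illustration, not the dissertation's mixed physical/polynomial basis; all dimensions are invented for the demo.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m, k = 256, 64, 5                     # ambient dim, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(Phi, Phi @ x_true, k)
print(np.linalg.norm(x_hat - x_true))
```

    With m = 64 measurements of a 5-sparse, 256-dimensional vector, recovery is exact with overwhelming probability, which is the sample-count saving the abstract describes.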
  • 20. Chaturvedi, Amal New and Improved Compressive Sampling Schemes for Medical Imaging

    MS, University of Cincinnati, 2012, Engineering and Applied Science: Electrical Engineering

    Compressed Sensing reconstructs a signal or image from significantly fewer samples than the Nyquist criterion requires by exploiting the sparsity present in the signal or image. Medical imaging techniques such as MRI (Magnetic Resonance Imaging), MRA (Magnetic Resonance Angiography), PET (Positron Emission Tomography), and MRSI (Magnetic Resonance Spectroscopic Imaging) are popular and powerful medical tools used throughout the globe. The drawback of these important tools is their very slow data acquisition. On the other hand, natural images are sparse in some transform domain: magnetic resonance angiograms are sparse in the image domain itself, while more complex images, such as brain MRI, are sparse in a transform domain such as the wavelet domain. Compressed Sensing, by exploiting this property of medical images, could significantly change how these devices scan, speeding up the acquisition process by a large margin. Using the inherent sparsity of medical images, Compressed Sensing undersamples k-space, acquiring only a small amount of data, and reconstructs the original image using nonlinear optimization. In this thesis, we have worked on the sampling schemes, or patterns, used to undersample the k-space (Fourier space) of different medical imaging techniques. Our sampling scheme gives better output than the ones proposed by Lustig [1, 2]. For a fair comparison, the same amount of data was acquired in each case and the results were compared.

    Committee: H. Howard Fan PhD (Committee Chair); T. Douglas Mast PhD (Committee Member); William Wee PhD (Committee Member) Subjects: Electrical Engineering
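    The undersampling-and-reconstruction pipeline described above can be sketched end to end: a variable-density random mask keeps more of the k-space center, and iterative soft thresholding (ISTA) exploits image-domain sparsity, as the abstract notes angiograms exhibit. The phantom, mask density, and threshold below are made-up demo values, not the thesis's sampling schemes.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
img = np.zeros((n, n))
img[rng.integers(0, n, 30), rng.integers(0, n, 30)] = 1.0   # sparse phantom

# Variable-density mask: sampling probability decays away from the
# k-space center, where most image energy lives.
ky, kx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2,
                     indexing="ij")
r = np.hypot(kx, ky) / (n / 2)
prob = np.clip(1.0 - 0.9 * r, 0.05, 1.0)
mask = np.fft.ifftshift(rng.random((n, n)) < prob)

y = mask * np.fft.fft2(img, norm="ortho")       # undersampled k-space data

# ISTA: gradient step on data fidelity, then soft thresholding to
# enforce image-domain sparsity.
x = np.zeros((n, n))
lam = 0.05
for _ in range(100):
    resid = y - mask * np.fft.fft2(x, norm="ortho")
    x = x + np.fft.ifft2(mask * resid, norm="ortho").real
    x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rel_err = np.linalg.norm(x - img) / np.linalg.norm(img)
print(rel_err)
```

    Comparing sampling schemes, as the thesis does, amounts to swapping in different `mask` patterns at a fixed sample count and comparing the resulting reconstruction errors.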