Search Results

(Total results 14)
  • 1. Nair, Srijith Robust Blind Image Denoising via Instance Normalization

    Master of Science, The Ohio State University, 2024, Electrical and Computer Engineering

    Image denoising is a fundamental problem in image processing in which a high-fidelity image is recovered from a noise-corrupted version. Denoising is fundamental because, from the Bayesian perspective, denoisers are believed to also encode information about the prior probability distribution of images. This, in turn, makes denoisers a widely applicable tool in many image inverse problems such as compressive sensing, deblurring, in-painting, and super-resolution. As a result, various algorithmic approaches to denoising have been studied over the past decades. However, data-driven denoising methods, which learn to denoise images from large image datasets using deep neural networks, have demonstrated far superior performance compared to classical algorithmic methods while having much faster inference times. Data-driven methods can be broadly classified into two categories: blind and non-blind. While non-blind methods require knowledge of the noise level contained in the image, blind methods, which require no such information, are more practical. However, the performance of many recent state-of-the-art blind denoisers depends heavily on the noise levels used during training. More recent work has explored inducing scale- and normalization-equivariance properties in denoisers to make them more robust to changes in noise level from training to test data. In our work we extend this idea, introducing a method to make any given denoiser normalization-equivariant using a simple instance-normalization technique, which improves the noise-level robustness of the denoiser by a significant margin with minimal change to the underlying architecture.
In this thesis, we theoretically formulate our idea from the perspective of minimizers of the Wasserstein-1 distance between empirical distributions of training and test data, and propose a more practically feasible 2-pixel approximation that yi (open full item for complete abstract)
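The instance-normalization idea described above can be sketched in a few lines: shift and scale the input to zero mean and unit deviation, run the underlying denoiser, then undo the normalization. A minimal illustration (the wrapper and the toy `box_denoiser` below are hypothetical stand-ins, not the thesis's implementation):

```python
import statistics

def normalization_equivariant(denoiser):
    """Wrap a denoiser so it becomes equivariant to affine intensity
    changes: normalize the input with its own (instance) statistics,
    denoise, then denormalize. A simplified instance-norm style sketch."""
    def wrapped(pixels):
        mu = statistics.fmean(pixels)
        sigma = statistics.pstdev(pixels) or 1.0  # guard constant inputs
        normalized = [(p - mu) / sigma for p in pixels]
        denoised = denoiser(normalized)
        return [sigma * d + mu for d in denoised]
    return wrapped

def box_denoiser(pixels):
    """Toy 1-D moving-average denoiser standing in for a neural network."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

robust = normalization_equivariant(box_denoiser)
noisy = [10.0, 12.0, 9.0, 11.0, 10.5]
print(robust(noisy))
```

Because normalization and denormalization commute with affine intensity changes, the wrapped denoiser satisfies D(a*x + b) = a*D(x) + b for any gain a > 0 and offset b, which is the normalization-equivariance property the abstract refers to.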

    Committee: Philip Schniter (Advisor); Lee Potter (Committee Member) Subjects: Electrical Engineering
  • 2. Zhang, Chen Poisson Noise Parameter Estimation and Color Image Denoising for Real Camera Hardware

    Doctor of Philosophy (Ph.D.), University of Dayton, 2019, Electrical and Computer Engineering

    Noise is present in all images captured by real-world image sensors. The distribution of real camera sensor data is well approximated by a Poisson distribution, and estimation of the light-intensity signal from Poisson count data plays a prominent role in digital imaging. Multi-scale Poisson image denoising techniques process Haar frame and wavelet coefficients, enabled by Skellam distribution analysis. Previous work solved for the minimum risk shrinkage operator (MRSO), which produces denoised wavelet coefficients with the best achievable mean squared error (MSE) for grayscale images. We extend the idea of MRSO to denoise color sensor data in a color-opponent space, improving the quality of denoised color images. In addition, a stable representation of color uses ratios, which we denote chromaticities. We therefore propose a new Bayes estimator for color image denoising in log-chromaticity coordinates. Using full-resolution real R/G/B camera images, we verified that the proposed denoising is more stable than state-of-the-art color denoising techniques, yielding higher-quality results. Furthermore, the noise parameters that characterize the level of noise in an image or video frame are required for effective denoising. We develop a novel technique to estimate the noise parameters of natural scenes by exploiting global joint statistics across multiple video frames, which can be interpreted as a binomial random variable that is insensitive to textures and scene content. We verify experimentally that the proposed noise parameter estimation method recovers noise parameters more accurately than state-of-the-art noise parameter estimation techniques.
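The Skellam connection mentioned above follows from a simple fact: an (unnormalized) Haar detail coefficient of two Poisson-distributed pixels is a difference of independent Poisson counts, which is Skellam-distributed with mean lam1 - lam2 and variance lam1 + lam2. A quick Monte Carlo check of those two moments (an illustrative sketch using a textbook Poisson sampler, not the thesis's calibration procedure):

```python
import math
import random

random.seed(0)

def poisson(lam):
    # Knuth's multiplicative method for sampling Poisson(lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam1, lam2 = 20.0, 12.0
n = 50_000
# Difference of two neighboring Poisson pixel counts (a Haar detail, up to scale):
diffs = [poisson(lam1) - poisson(lam2) for _ in range(n)]
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / n
print(mean, var)  # Skellam predicts mean 8.0 and variance 32.0
```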

    Committee: Keigo Hirakawa (Advisor); Russell Hardie (Committee Member); Raul Ordonez (Committee Member); Ryan Kappedal (Committee Member) Subjects: Electrical Engineering
  • 3. Miller, Sarah Statistical Approaches to Color Image Denoising and Enhancement

    Doctor of Philosophy (Ph.D.), University of Dayton, 2023, Electrical Engineering

    This dissertation comprises two novel contributions. First, we propose a novel technique to determine the noise-free color at each pixel by estimating the ratio of the red, green, and blue (RGB) pixel values from their noisy versions. To model the spatial statistics of the proportions of primary colors such as the RGB components, known to correspond to the human perception of color, we interpret the simplex representation of color as an Aitchison geometry. Specifically, we develop a minimum mean square error (MMSE) estimator of log-color pixel values in the wavelet representation, with a Poisson pixel-domain likelihood function. We contrast this with most existing denoising techniques, which are predominantly designed for single-channel/grayscale images and are then applied to YCbCr channels independently, without regard for RGB proportionality. In the extremely low-photon regime, we verify experimentally that the proposed method yields state-of-the-art color denoising performance. Second, we propose a novel image enhancement algorithm to assist with automating the quantification and characterization of fiber-reinforced composite materials. This Aitchison- and Noise2Noise-based enhancement algorithm allows faster and more accurate classification of composite materials that are frequently used in aerospace systems. The enhancement algorithm is applied to X-ray/CT scans of composite materials, and the resulting denoised frames are classified using DRAGONFLY technology. The enhanced images achieve superior classification accuracy compared to unprocessed images.

    Committee: Keigo Hirakawa (Advisor); Raul Ordonez (Committee Member); Kenneth Barnard (Committee Member); Stanley Chan (Committee Member); Bradley Ratliff (Committee Member) Subjects: Electrical Engineering; Engineering; Statistics
  • 4. Shah, Jaimin Underwater Document Recognition

    Master of Computer Science (M.C.S.), University of Dayton, 2021, Computer Science

    In this thesis, we propose image quality assessment and comparison metrics for image denoising algorithms. Image denoising plays a significant role in many image-related applications. Motivated by this, we develop quality assessment and comparison metrics specifically targeting image denoising algorithms. We prepared a dataset of text-document images with appropriate added noise to meet the needs of this project. Images are denoised using different algorithms and then fed into an OCR engine to obtain text; we compare this text with the text obtained from the noise-free ground-truth images to assess the quality of the images denoised by the different algorithms.
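The OCR-based comparison described above amounts to measuring how close the recognized text is to the ground-truth text, for example with a character error rate built on edit distance. A minimal sketch (the sample strings below are invented; the thesis's actual OCR engine and dataset are not reproduced here):

```python
def levenshtein(a, b):
    """Edit distance between strings a and b via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def char_error_rate(ocr_text, truth):
    """Fraction of ground-truth characters the OCR output got wrong."""
    return levenshtein(ocr_text, truth) / max(1, len(truth))

# Hypothetical OCR outputs after two different denoising algorithms:
truth = "image denoising plays a significant role"
print(char_error_rate("image denoislng plays a signlficant role", truth))
print(char_error_rate("inage dano1sing p1ays a s1gnifcant ro1e", truth))
```

A lower character error rate indicates that the denoiser preserved the document's legibility better, which is the quantity the proposed metric targets.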

    Committee: Tam Nguyen (Advisor); Yao Zhongmei (Committee Member); Luan Nguyen (Committee Member) Subjects: Computer Science
  • 5. Zhang, Jiachao Image denoising for real image sensors

    Master of Science (M.S.), University of Dayton, 2015, Electrical Engineering

    This thesis describes a study comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. Quantile analysis in the pixel, wavelet, and variance-stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed in this work to correct this mismatch in tail behavior. Because noise model mismatch results in image denoising that undersmoothes real sensor data, we offer a new Poisson mixture image denoising scheme to overcome the problem. Experiments with real sensor data verify that the undersmoothing is effectively mitigated.
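The motivation for a Poisson mixture is that mixing Poisson components produces overdispersion: the variance exceeds the mean, so the tails are heavier than any single Poisson of the same mean can provide. This can be checked in closed form (the component weights and rates below are arbitrary illustrative values, not fitted sensor parameters):

```python
# Two-component Poisson mixture: with probability w1 a pixel's count is
# drawn from Poisson(l1), otherwise from Poisson(l2).
w1, w2 = 0.9, 0.1
l1, l2 = 5.0, 50.0

mean = w1 * l1 + w2 * l2
# E[X^2] for Poisson(l) is l + l^2, so mix the second moments:
second = w1 * (l1 + l1 ** 2) + w2 * (l2 + l2 ** 2)
var = second - mean ** 2

print(mean, var)  # 9.5 vs. 191.75: far more spread than a Poisson(9.5)
```

The excess over the Poisson variance is exactly w1*w2*(l1 - l2)^2, which is the extra tail mass a single Poisson (variance equal to its mean) cannot reproduce.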

    Committee: Keigo Hirakawa (Advisor) Subjects: Electrical Engineering
  • 6. Zhang, Chen Blind Full Reference Quality Assessment of Poisson Image Denoising

    Master of Science (M.S.), University of Dayton, 2014, Electrical Engineering

    The distribution of real camera sensor data is well approximated by a Poisson distribution, and estimation of the light-intensity signal from Poisson count data plays a prominent role in digital imaging. It is highly desirable for imaging devices to be able to assess the performance of Poisson image restoration. Drawing on a new category of image quality assessment called corrupted-reference image quality assessment (CR-QA), we develop a computational technique for predicting the quality score of the popular structural similarity index (SSIM) without direct access to the ideal reference image. We verified via simulation that the CR-SSIM scores indeed agree with the full-reference scores, and visually optimal denoising experiments performed on real camera sensor data give credibility to the impact CR-QA can have on real imaging systems.
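For context, the full-reference SSIM score that CR-QA aims to predict compares two images through their means, variances, and covariance. A single-window (global) version is easy to state; real SSIM averages this statistic over local sliding windows, and the pixel values below are invented:

```python
def ssim(x, y, L=255):
    """Global (single-window) structural similarity between two pixel lists.
    c1 and c2 are the standard stabilizing constants for dynamic range L."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

ref = [50.0, 80.0, 120.0, 200.0]
print(ssim(ref, ref))                       # identical images score 1.0
print(ssim(ref, [v + 30.0 for v in ref]))   # luminance shift lowers the score
```

CR-QA's contribution is predicting this score when `ref` is unavailable and only the corrupted measurement is at hand.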

    Committee: Keigo Hirakawa (Advisor); Russell Hardie (Committee Member); Raul Ordonez (Committee Member) Subjects: Electrical Engineering
  • 7. Li, Meng Low-Observable Object Detection and Tracking Using Advanced Image Processing Techniques

    Master of Science, University of Toledo, 2014, Engineering (Computer Science)

    Over the past few years, digital image processing has been widely studied and applied in various fields. Digital image processing uses computer algorithms to process digital images. As a subcategory of digital signal processing, it has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and avoids problems such as the build-up of noise and signal distortion during processing. In this thesis, we introduce three important algorithms for digital images: image denoising, image enhancement, and target detection and tracking. The proposed Genetic Algorithm (GA) can detect and track dim, low-observable point targets, mainly for remote monitoring applications. As a first step toward detecting and tracking objects more effectively, the input image is first denoised and enhanced. We use a Total Variation (TV) technique to remove noise and improve the signal-to-noise ratio (SNR) of the input image. To further enhance the image for outdoor applications, a foggy-image enhancement technique is introduced, which significantly benefits traffic and outdoor visual systems. To overcome the shortcomings of existing foggy-image enhancement algorithms, we have developed a method that combines Principal Component Analysis (PCA), Multi-Scale Retinex (MSR), and Global Histogram Equalization (GHE). First, a PCA transform is applied to the foggy image to split it into a luminance component and two chrominance components. Second, the luminance and chrominance components are individually enhanced by MSR and GHE, respectively. Finally, an inverse PCA is applied to combine the results of the three channels into a new RGB image. To detect and track low-observable targets in a digital image sequence,
an encoding schem (open full item for complete abstract)
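Of the three components in the fog-enhancement chain above, global histogram equalization is the simplest to sketch: stretch the cumulative distribution of gray levels so a low-contrast (foggy) image uses the full dynamic range. A minimal 8-bit grayscale version (PCA decorrelation and multi-scale retinex are omitted; the input values are invented):

```python
def equalize(pixels, levels=256):
    """Global histogram equalization of integer gray levels in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:          # constant image: nothing to stretch
        return list(pixels)
    # Map each level through the normalized CDF onto the full range.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

foggy = [100, 110, 105, 120, 115, 100, 110, 105]  # low-contrast gray values
print(equalize(foggy))
```

After equalization the darkest present level maps to 0 and the brightest to 255, while the relative ordering of gray levels is preserved, which is why GHE restores contrast without inverting structure.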

    Committee: Ezzatollah Salari (Committee Chair); Junghwan Kim (Committee Member); Jackson Carvalho (Committee Member) Subjects: Computer Science
  • 8. Reehorst, Edward Machine Learning for Image Inverse Problems and Novelty Detection

    Doctor of Philosophy, The Ohio State University, 2022, Electrical and Computer Engineering

    This dissertation addresses two separate engineering challenges: image inverse problems and novelty detection. First, we address image inverse problems. We review Plug-and-Play (PnP) algorithms, in which a proximal operator is replaced by a call to an arbitrary denoising algorithm. We apply PnP algorithms to compressive Magnetic Resonance Imaging (MRI). MRI is a non-invasive diagnostic tool that provides excellent soft-tissue contrast without the use of ionizing radiation. However, compared to other clinical imaging modalities (e.g., CT or ultrasound), the data acquisition process for MRI is inherently slow, which motivates undersampling and thus drives the need for accurate, efficient reconstruction methods from undersampled datasets. We apply the PnP-ADMM algorithm to cardiac and knee MRI data. For these algorithms, we developed learned denoisers that can process complex-valued MRI images. Our algorithms achieve state-of-the-art performance on both the cardiac and knee datasets. Regularization by Denoising (RED), as proposed by Romano, Elad, and Milanfar, is a powerful image-recovery framework that aims to minimize an explicit regularization objective constructed from a plug-in image-denoising function. Experimental evidence suggests that RED algorithms are state-of-the-art. We claim, however, that explicit regularization does not explain the RED algorithms. In particular, we show that many of the expressions in the paper by Romano et al. hold only when the denoiser has a symmetric Jacobian, and we demonstrate that such symmetry does not occur with practical denoisers such as non-local means, BM3D, TNRD, and DnCNN. To explain the RED algorithms, we propose a new framework called Score-Matching by Denoising (SMD), which aims to match a "score" (i.e., the gradient of a log-prior). Novelty detection is the ability of a machine learning system to detect signals that are significantly different from samples seen during training.
Detecting novelties is (open full item for complete abstract)
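The Plug-and-Play pattern described above can be illustrated with a toy ADMM loop: the proximal step for a simple denoising data term has a closed form, while the regularization "prox" is replaced by an arbitrary denoiser (here a 3-tap moving average standing in for BM3D or a learned network). This is a schematic sketch, not the dissertation's MRI pipeline:

```python
import random

def data_prox(z, y, rho):
    # Closed-form prox of f(x) = 0.5 * ||x - y||^2 with penalty rho:
    # argmin_x f(x) + (rho / 2) * ||x - z||^2.
    return [(yi + rho * zi) / (1.0 + rho) for yi, zi in zip(y, z)]

def denoise(z):
    # Plug-in denoiser: 3-tap moving average (stand-in for BM3D/DnCNN).
    out = []
    for i in range(len(z)):
        window = z[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

def pnp_admm(y, rho=1.0, iters=30):
    x, v, u = list(y), list(y), [0.0] * len(y)
    for _ in range(iters):
        x = data_prox([vi - ui for vi, ui in zip(v, u)], y, rho)  # data step
        v = denoise([xi + ui for xi, ui in zip(x, u)])            # PnP step
        u = [ui + xi - vi for ui, xi, vi in zip(u, x, v)]         # dual update
    return x

random.seed(1)
truth = [1.0] * 10 + [4.0] * 10           # piecewise-constant test signal
noisy = [t + random.gauss(0.0, 1.0) for t in truth]
recon = pnp_admm(noisy)

def mse(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

print(mse(noisy, truth), mse(recon, truth))
```

The point of the pattern is that only `denoise` needs to change to swap in a learned complex-valued MRI denoiser; the ADMM scaffolding stays the same.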

    Committee: Philip Schniter (Advisor); Rizwan Ahmad (Committee Member); Lee Potter (Committee Member) Subjects: Electrical Engineering
  • 9. Idoughi, Achour A Wavelet Based Method for ToF Camera Depth Images Denoising

    Master of Science (M.S.), University of Dayton, 2022, Electrical Engineering

    This work addresses the problem of shot noise in Time-of-Flight (ToF) camera depth sensors, which is caused by the random nature of photon emission and detection. We derive a Bayesian denoising technique based on Maximum A Posteriori (MAP) estimation, implemented in the wavelet domain, which denoises 2D depth images acquired by ToF cameras. We also propose a new noise model describing the photon noise present in raw ToF data, and demonstrate that the raw data captured by ToF camera depth sensors follow a Skellam distribution. We test the resulting denoising technique at the millimeter level with real sensor data and verify that it outperforms other denoising methods described in the literature.

    Committee: Keigo Hirakawa (Advisor) Subjects: Electrical Engineering
  • 10. Mukherjee, Rohit Improving Satellite Data Quality and Availability: A Deep Learning Approach

    Doctor of Philosophy, The Ohio State University, 2020, Geography

    Remote sensing offers a unique perspective of our Earth and is crucial for managing its resources. Currently, no single satellite data product is suitable for all applications. Satellite data are limited by their spatial, spectral, and temporal resolution. Additionally, satellite images can be affected by sensor noise and cloud cover. One solution to these limitations is to combine existing satellite products so as to minimize the drawbacks of any single dataset. In this dissertation, we improve the spatial and temporal resolution of satellite data products, minimize sensor noise, and remove cloud cover from satellite images by combining data from multiple satellite sensors using deep learning methods. Deep learning has been successful in natural-image super-resolution, denoising, and translation, and these methods perform efficiently given sufficiently large datasets and computational resources. Publicly available satellite datasets and recent computational advancements therefore provide an ideal opportunity to apply deep learning to our tasks. In our first study, we downscale low-resolution optical and thermal spectral bands of MODIS to match the higher-resolution NIR and red bands. Information extraction from satellite data often requires the combined use of multiple spectral bands. Usually, the low-resolution bands are downscaled using naive interpolation methods, or the high-resolution bands are upscaled, to create spectral indices. We train a deep learning model for downscaling MODIS spectral bands to create a spatially consistent MODIS dataset. Our model is compared to a state-of-the-art satellite image downscaling method and a deep learning image super-resolution method. Additionally, we investigate the value of natural-image priors for downscaling satellite images. Next, we increase the effective spatial resolution of MODIS spectral bands and denoise them with the help of Landsat 8 images.
MODIS and Landsat 8 have similar measurement principles and (open full item for complete abstract)

    Committee: Desheng Liu Dr (Advisor); Alvaro Montenegro Dr (Committee Member); Srinivasan Parthasarathy Dr (Committee Member); Rongjun Qin Dr (Committee Member) Subjects: Geographic Information Science; Geography; Remote Sensing
  • 11. Miller, Sarah Multi-Resolution Aitchison Geometry Image Denoising for Low-Light Photography

    Master of Science in Electrical Engineering, University of Dayton, 2020, Electrical and Computer Engineering

    In the low-photon imaging regime, noise in image sensors is dominated by shot noise, best modeled statistically as Poisson. In this work, we show that the Poisson likelihood function is very well matched to Bayesian estimation of the "difference of log of contrast of pixel intensities". More specifically, our work is rooted in statistical compositional data analysis, whereby we reinterpret the Aitchison geometry as a multiresolution analysis in the log-pixel domain. We demonstrate that the difference-log-contrast has wavelet-like properties that correspond well with the human visual system while being robust to illumination variations. We derive a denoising technique based on an approximate conjugate prior for the latent Aitchison variable that gives rise to an explicit minimum mean squared error estimator. The resulting denoising technique preserves image contrast details that are arguably more meaningful to human vision than the pixel intensity values themselves.
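One useful property behind the difference-of-log-contrast representation is illumination invariance: a global illumination gain multiplies every pixel, so it cancels in any log-ratio of neighboring pixel intensities. A tiny illustration (the pixel values are invented; this is not the thesis's multiresolution construction):

```python
import math

def log_contrast(pixels):
    """Difference of logs of adjacent pixels, i.e. log of their ratio."""
    return [math.log(pixels[i + 1]) - math.log(pixels[i])
            for i in range(len(pixels) - 1)]

scene = [20.0, 40.0, 35.0, 70.0]
brighter = [3.0 * p for p in scene]   # same scene under 3x illumination

print(log_contrast(scene))
print(log_contrast(brighter))         # identical: the gain cancels
```

Because the representation depends only on intensity ratios, it is stable under illumination changes while still encoding the local contrast structure that matters to human vision.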

    Committee: Keigo Hirakawa Ph.D. (Advisor); Brad Ratliff Ph.D. (Committee Member); Vijayan Asari Ph.D. (Committee Member) Subjects: Electrical Engineering
  • 12. Balster, Eric Video compression and rate control methods based on the wavelet transform

    Doctor of Philosophy, The Ohio State University, 2004, Electrical Engineering

    Wavelet-based image and video compression techniques have become popular areas of research. In March 2000, the Joint Photographic Experts Group (JPEG) released JPEG2000, a wavelet-based image compression standard predicted to completely replace the original JPEG standard. In the video compression field, a technique called 3D wavelet compression shows promise. Thus, wavelet-based compression techniques have received increasing attention from the research community. This dissertation further investigates the wavelet transform in the compression of image and video signals, along with a rate-control method for real-time transfer of wavelet-based compressed video. A pre-processing algorithm based on the wavelet transform is developed for the removal of noise in images prior to compression. The intelligent removal of noise reduces the entropy of the original signal, aiding compressibility. The proposed wavelet-based denoising method shows a computational speedup of at least an order of magnitude over previously established image denoising methods and a higher peak signal-to-noise ratio (PSNR). A video denoising algorithm is also included, which eliminates both intra- and inter-frame noise. The inter-frame noise removal technique estimates the amount of motion in the image sequence. Using motion and noise-level estimates, a video denoising technique is established that is robust to various levels of noise corruption and motion. A virtual-object video compression method is also included. Object-based compression methods have come to the forefront of the research community with the adoption of the MPEG-4 (Moving Picture Experts Group) standard. Object-based compression methods promise higher compression ratios without further cost in reconstructed quality. Results show that virtual-object compression outperforms 3D wavelet compression, with an increase in compression ratio and higher PSNR.
Finally, a rate-control method (open full item for complete abstract)
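The pre-processing idea, shrinking wavelet coefficients so that noise (and hence entropy) drops before the encoder runs, can be sketched with one level of a Haar transform plus soft thresholding. This is a schematic stand-in for the dissertation's actual transform and threshold selection:

```python
def haar_step(x):
    """One level of the orthonormal Haar wavelet transform (even length)."""
    s = 2 ** 0.5
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def inv_haar_step(approx, detail):
    """Exact inverse of haar_step."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def soft(d, t):
    """Soft threshold: shrink |d| by t, zeroing small coefficients."""
    return max(abs(d) - t, 0.0) * (1.0 if d >= 0 else -1.0)

def preprocess(x, t=0.8):
    # Denoise before compression: shrink details, keep the approximation.
    a, d = haar_step(x)
    return inv_haar_step(a, [soft(di, t) for di in d])

noisy = [2.0, 2.3, 1.8, 2.1, 5.9, 6.2, 6.1, 5.8]
print(preprocess(noisy))
```

Zeroed or shrunken detail coefficients cost fewer bits to encode, which is the entropy-reduction argument the abstract makes.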

    Committee: Yuan Zheng (Advisor)
  • 13. Jin, Xiaodan Poisson Approximation to Image Sensor Noise

    Master of Science (M.S.), University of Dayton, 2010, Electrical Engineering

    Noise is present in all images captured by image sensors. Because the photon emission and photoelectric effects on which image sensors fundamentally rely are quantum-mechanical and random, noise is a "necessary evil" of image sensors that will continue to require our attention. The goal of this research is to provide a comprehensive characterization of random noise in ways that enhance post-capture signal processing. We derive a Poisson approximation to model the measurement noise that results from photon arrival and photon recapture. A novel methodology to learn the parameters that describe the noise is developed. We conclude by presenting preliminary evidence that accurate noise modeling improves image denoising, especially in low-photon-count/high-noise regimes.
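The central property of the Poisson model is that the noise variance equals the mean count, so noise is signal-dependent: brighter pixels are noisier in absolute terms. A quick simulation of flat patches at three brightness levels (a textbook sampler; the thesis's learned parameter estimation is not reproduced here):

```python
import math
import random

random.seed(7)

def poisson(lam):
    # Knuth's multiplicative method for sampling Poisson(lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def patch_stats(lam, n=20_000):
    """Empirical mean and variance of a simulated flat patch."""
    samples = [poisson(lam) for _ in range(n)]
    m = sum(samples) / n
    v = sum((s - m) ** 2 for s in samples) / n
    return m, v

for lam in (5.0, 20.0, 60.0):
    m, v = patch_stats(lam)
    print(lam, round(m, 1), round(v, 1))  # mean and variance both track lam
```

The mean-variance pairs fall on the line var = mean, which is the signature that noise-parameter estimation methods exploit when fitting a sensor's noise model from flat regions.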

    Committee: Keigo Hirakawa PhD (Committee Chair); Vijayan Asari PhD (Committee Member); Raul Ordonez PhD (Committee Member) Subjects: Electrical Engineering
  • 14. Qin, Jing Prior Information Guided Image Processing and Compressive Sensing

    Doctor of Philosophy, Case Western Reserve University, 2013, Applied Mathematics

    Signal/image processing and reconstruction based on mathematical modeling and computational techniques are well developed and still attract much attention due to their broad applications. Building mathematical models becomes challenging when the given data lacks certainty. Prior information, including geometric priors, high-frequency priors, spatially variant intensity variations, and image regularities, helps establish mathematical models by providing a more accurate description of the underlying signal/image. We explore applications of extracted prior information in two directions: integrating prior information into image denoising within the nonlocal means (NL-means) framework, and enhancing compressive sensing signal/image reconstruction under the guidance of prior information. The first topic is geometry-based image denoising, where we develop a segmentation-boosted image denoising scheme that balances the removal of excessive noise against the preservation of fine features. By virtue of segmentation algorithms and more general geometry extraction schemes, we obtain phase or geometric prior information. Based on the NL-means method, we introduce a mutual position function to ensure that averaging is taken only over pixels in the same image phase. To further improve performance, we provide selection schemes for the convolution kernel and the weight function. To address unreliable segmentation in the presence of excessive noise, the phase prior is relaxed to a more general geometric prior. The second topic is prior-information-guided compressive sensing signal/image reconstruction. For 1D signal reconstruction, we extract high-frequency subbands as priors to boost the subsequent reconstruction.
    In the 2D image reconstruction realm, we propose a novel two-stage intensity-variation-prior-guided image reconstruction method using pixel-to-pixel varying weights ass (open full item for complete abstract)
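The mutual position function described above can be read as a gate on the NL-means weights: a pixel pair from different segmentation phases gets weight zero, so averaging never mixes phases. A toy 1-D sketch under that reading (simplified; the dissertation's kernel and weight selection schemes are not reproduced):

```python
import math

def nl_means_1d(signal, labels, h=0.5, half=1):
    """Toy 1-D non-local means where labels[i] is pixel i's segmentation
    phase and the mutual position function is 1 iff two pixels share a
    phase (0 otherwise)."""
    n = len(signal)

    def patch(i):
        # Patch of radius `half` around pixel i, clamped at the borders.
        return [signal[min(max(i + k, 0), n - 1)]
                for k in range(-half, half + 1)]

    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(n):
            if labels[j] != labels[i]:
                continue  # mutual position function: never average across phases
            d2 = sum((a - b) ** 2 for a, b in zip(patch(i), patch(j)))
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

signal = [1.1, 0.9, 1.05, 4.0, 3.9, 4.1]   # two phases with a sharp edge
labels = [0, 0, 0, 1, 1, 1]                 # segmentation prior
print(nl_means_1d(signal, labels))
```

Each output is a convex combination of same-phase pixels only, so noise is averaged down within each phase while the edge between phases is left intact, which is the feature-preservation behavior the scheme targets.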

    Committee: Weihong Guo (Advisor); Daniela Calvetti (Committee Member); Erkki Somersalo (Committee Member); David Wilson (Committee Member) Subjects: Applied Mathematics