Search Results

(Total results 8)


  • 1. Kong, Lingchao Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks

    PhD, University of Cincinnati, 2019, Engineering and Applied Science: Computer Science and Engineering

    Wireless camera networks are ubiquitously deployed in various distributed sensing applications. The basic functions of each sensor node include video capture, video encoding or local video processing, and data transmission. The process of video analysis is implemented either in the central server or in the sensor node. Automatic video analysis can efficiently extract useful information from huge amounts of video without human intervention. Object detection is the first and most essential step of automatic video analysis. Thanks to the abundant information provided by cameras and the development of computer vision techniques, automatic video analysis is finding ever wider application in wireless distributed systems. However, traditional network quality measures, such as QoS and QoE, do not necessarily reflect the quality of automatic video analysis in wireless camera networks. The overall goal of this dissertation is to propose new quality measures that reflect the quality of automatic video analysis in wireless camera networks and to design efficient video processing and encoding schemes for wireless cameras that boost the quality of automatic video analysis. The impact of lossy compression on object detection is systematically investigated. It has been found that current standardized video encoding schemes cause temporal-domain fluctuation for encoded blocks in stable background areas and spatial texture degradation for encoded blocks in dynamic foreground areas of a raw video, both of which degrade the accuracy of object detection. Two measures, the sum-of-absolute frame difference (SFD) and the degradation of texture (TXD), are introduced to depict the temporal-domain fluctuation and the spatial texture degradation in an encoded video, respectively. A model of object detection quality on compressed videos is established based on these two measures.
Then we have proposed an efficient video encoding framework for boosting the accuracy of object detection for dist (open full item for complete abstract)
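    The SFD measure mentioned in the abstract can be illustrated as a simple frame-difference computation. This is a minimal pure-Python sketch over hypothetical 2x2 "frames"; the dissertation's actual definition may normalize the sum or restrict it to background blocks.

```python
def sum_abs_frame_difference(frame_a, frame_b):
    """Sum of absolute pixel differences between two consecutive frames.

    Over a stable background region, a large value signals the
    temporal-domain fluctuation that lossy encoding can introduce.
    """
    return sum(abs(a - b)
               for row_a, row_b in zip(frame_a, frame_b)
               for a, b in zip(row_a, row_b))

# Two toy 2x2 frames: a static background with one flickering pixel.
prev_frame = [[10, 10], [10, 10]]
curr_frame = [[10, 12], [10, 10]]
print(sum_abs_frame_difference(prev_frame, curr_frame))  # 2
```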

    Committee: Rui Dai Ph.D. (Committee Chair); Dharma Agrawal D.Sc. (Committee Member); H. Howard Fan Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member); Julian Wang Ph.D. (Committee Member) Subjects: Computer Science
  • 2. Hettiarachchi, Don Lahiru Nirmal An Accelerated General Purpose No-Reference Image Quality Assessment Metric and an Image Fusion Technique

    Master of Science (M.S.), University of Dayton, 2016, Electrical Engineering

    This study suggests improvements and an extension for the No-Reference Image Enhancement Quality Metric And Fusion (NRIE-QMF) technique, which measures a perceptual quality score. To measure the quality score, the NRIE-QMF metric uses image statistics based on brightness, contrast, and noise content. The NRIE-QMF takes several image inputs from various image enhancement methods (GHE, CLAHE, and LTSN) and calculates a score value for each pixel based on the local neighborhood statistics. The respective pixel scores of each enhanced image are then weighted and fused to create a combined image. The NRIE-QMF metric is analyzed for execution time using the MATLAB profiler, and a few modification and optimization steps are carried out to increase the execution speed while maintaining good output. Secondly, enhanced images are scored using the proposed metric, and the score matrices are thresholded against the original image's score matrix to avoid the over-amplification caused by some enhancement methods. Finally, it is shown that the proposed metric achieves an 85.8% speed increase over the NRIE-QMF method and generates a combined output image with superior visual quality. Moreover, the quality score of the new combined image is higher than those of the enhanced images used for fusion, demonstrating the superiority of the proposed method's fusion technique.
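    The pixel-wise weighted fusion step described above can be sketched as follows. This is a hypothetical toy example (1x2 "images" with made-up per-pixel scores); in the actual NRIE-QMF technique the weights come from local neighborhood statistics on brightness, contrast, and noise.

```python
def fuse_images(images, score_maps):
    """Pixel-wise weighted fusion: each output pixel is the score-weighted
    average of the corresponding pixels across the enhanced input images,
    with the weights normalized per pixel."""
    h, w = len(images[0]), len(images[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = sum(s[y][x] for s in score_maps)
            fused[y][x] = sum(img[y][x] * s[y][x] / total
                              for img, s in zip(images, score_maps))
    return fused

# Two hypothetical 1x2 enhanced images with per-pixel quality scores.
imgs = [[[100, 200]], [[50, 150]]]
scores = [[[1.0, 3.0]], [[1.0, 1.0]]]
print(fuse_images(imgs, scores))  # [[75.0, 187.5]]
```

    The second pixel leans toward the first image because its score there (3.0) dominates the normalized weights.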

    Committee: Eric Balster (Committee Chair); Keigo Hirakawa (Committee Member); Frank Scarpino (Committee Member) Subjects: Electrical Engineering
  • 3. Gao, Zhigang Image/video compression and quality assessment based on wavelet transform

    Doctor of Philosophy, The Ohio State University, 2007, Electrical Engineering

    Because of the tension between the vast data size of raw digital images and videos and the limited transmission bandwidth and storage space, it is essential to develop compression methodologies with high compression ratios and good reconstructed quality. It is also important to develop quality metrics that are consistent with human vision and easy to calculate. The spatial-frequency localization and multi-resolution capabilities of the wavelet transform make it a natural means of signal representation. This work investigates the advantages of the wavelet transform and focuses on the following research topics: 1) an image quality metric that assesses the quality of an image in the wavelet domain; 2) a quality-constrained compression algorithm that compresses an image to a desired visual quality; 3) an innovative DWT-based temporal filtering scheme that achieves a high compression ratio and reduces the ghost effect without motion estimation; 4) a virtual sub-object video coding scheme that is suitable for applications with static backgrounds.
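    For readers unfamiliar with the transform underlying all four topics, here is a single level of the 1-D Haar DWT, the simplest wavelet: pairwise averages give the low-frequency approximation and pairwise differences give the high-frequency detail. This is an illustrative sketch only, not the dissertation's codec.

```python
import math

def haar_dwt_1d(signal):
    """One level of the Haar DWT: pairwise sums (approximation) and
    differences (detail), scaled by 1/sqrt(2) to preserve energy."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

a, d = haar_dwt_1d([4, 2, 6, 6])
print(a)  # approximation coefficients (smooth content)
print(d)  # detail coefficients: the second pair is constant, so its detail is 0
```

    Because the transform is orthonormal, the energy (sum of squares) of the coefficients equals that of the input, which is what makes quality assessment in the wavelet domain meaningful.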

    Committee: Yuan Zheng (Advisor) Subjects:
  • 4. Ikusan, Ademola Quality Aware Video Processing for Deep Learning Based Analytics Tasks

    PhD, University of Cincinnati, 2022, Engineering and Applied Science: Computer Science and Engineering

    Wireless video systems have been heavily deployed for various distributed sensing purposes, such as detecting, recognizing, and tracking objects and understanding their behaviors; these capabilities have been integrated into surveillance systems that provide real-time analysis results to the human operators who make final decisions. As technology advances, deep neural networks have become a backbone technology supporting modern intelligent mobile applications, with the ability to perform highly accurate and reliable inference tasks. Wireless cameras play a big role in video surveillance capacity; each node is equipped with an embedded camera and supports video capture, video encoding or local video processing, and data transmission. The process of video analysis is implemented either in the central server or in the sensor node, depending on computational capability, energy supply, and the purpose of the application. We have studied two paradigms used in video analytics. In the first, visual information is extracted at a wireless camera node and then sent to a central processing hub for processing; this is commonly known as the extract-compress-analyze paradigm and is the traditional strategy. In the second, a wireless camera extracts visual information and partially analyzes it before compressing and sending it to the processing hub for final computation; this is commonly known as the extract-analyze-compress paradigm and is a more recent strategy. Firstly, we studied the extract-compress-analyze strategy. We investigated the impact of distortions such as noise and blur on visual information generated from wireless cameras. Studies show that distortions such as noise, blur, and bad lighting are introduced to visual information at the point of generation, which negatively impacts computer vision applications such as object detection and classification. 
Based on (open full item for complete abstract)

    Committee: Rui Dai Ph.D. (Committee Member); Gowtham Atluri Ph.D. (Committee Member); Heng Wei Ph.D. (Committee Member); Boyang Wang (Committee Member); Anca Ralescu Ph.D. (Committee Member) Subjects: Computer Science
  • 5. Shah, Jaimin Underwater Document Recognition

    Master of Computer Science (M.C.S.), University of Dayton, 2021, Computer Science

    In this thesis, we propose image quality assessment and comparison metrics for image denoising algorithms. It is well known that image denoising plays a significant role in various image-related applications. Motivated by this, we attempt to develop image quality assessment and comparison metrics specifically targeting image denoising algorithms. We have prepared a dataset containing images of text documents with appropriate noise, specifically to meet the needs of this project. Images are denoised using different algorithms and then fed into an OCR engine to obtain text; we then compare that text with the text obtained from the ground-truth images, which have no added noise, to assess the quality of the images denoised by each algorithm.
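    One common way to score the OCR comparison described above is a character error rate based on the Levenshtein edit distance between the two text outputs. The strings below are hypothetical, and the thesis may use a different distance or normalization; this sketch shows the general idea.

```python
def char_error_rate(reference, hypothesis):
    """Levenshtein edit distance between ground-truth OCR text and the
    OCR output of a denoised image, normalized by the reference length.
    Uses the classic two-row dynamic-programming recurrence."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution / match
        prev = curr
    return prev[n] / max(m, 1)

# Hypothetical OCR outputs: ground truth vs. a denoised scan.
print(char_error_rate("quality", "qvality"))  # 1 substitution / 7 chars
```

    A lower error rate for one denoising algorithm's output then indicates better recovered legibility.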

    Committee: Tam Nguyen (Advisor); Yao Zhongmei (Committee Member); Luan Nguyen (Committee Member) Subjects: Computer Science
  • 6. Zhang, Chen Blind Full Reference Quality Assessment of Poisson Image Denoising

    Master of Science (M.S.), University of Dayton, 2014, Electrical Engineering

    The distribution of real camera sensor data is well approximated by the Poisson distribution, and the estimation of the light intensity signal from the Poisson count data plays a prominent role in digital imaging. It is highly desirable for imaging devices to carry the ability to assess the performance of Poisson image restoration. Drawing on a new category of image quality assessment called corrupted reference image quality assessment (CR-QA), we develop a computational technique for predicting the quality score of the popular structural similarity index (SSIM) without having direct access to the ideal reference image. We verified via simulation that the CR-SSIM scores indeed agree with the full-reference scores, and the visually optimal denoising experiments performed on real camera sensor data give credibility to the impact CR-QA has on real imaging systems.
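    For reference, this is a single-window simplification of the SSIM index whose score CR-QA predicts, using the standard stabilizing constants for 8-bit images; the usual full-reference SSIM averages this quantity over small sliding windows rather than computing it globally.

```python
def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM between two equal-length pixel lists.
    c1=(0.01*255)^2 and c2=(0.03*255)^2 are the standard constants."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

print(ssim_global([50, 60, 70], [50, 60, 70]))  # 1.0 for identical signals
```

    CR-QA's contribution, per the abstract, is estimating this score when the clean reference `x` is unavailable and only a corrupted observation exists.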

    Committee: Keigo Hirakawa (Advisor); Russell Hardie (Committee Member); Raul Ordonez (Committee Member) Subjects: Electrical Engineering
  • 7. Cheng, Wu Corrupted Image Quality Assessment

    Master of Science (M.S.), University of Dayton, 2012, Electrical Engineering

    We propose a foundation for assessing visual quality with a "corrupted reference" (CR-QA) - a new quality assessment (QA) paradigm for reasoning about human vision and image restoration problems jointly. The visual quality of a processed image signal is assessed relative to an ideal reference image (not provided) with the help of the observed image. This is in contrast to today's QAs, which are optimized for "post-hoc" usage (process first, assess quality second) and are unequipped to handle the assessment of processed data relative to an ideal reference that exists only in theory and not in practice.

    Committee: Keigo Hirakawa (Committee Chair); K. Asari Vijayan (Committee Member); H. Brian Tsou (Committee Member) Subjects: Engineering; Statistics
  • 8. Ghosh Roy, Gourab A Simple Second Derivative Based Blur Estimation Technique

    Master of Science, The Ohio State University, 2013, Computer Science and Engineering

    Blur detection is a very important problem in image processing. Different sources can lead to blur in images, and much work has been done to develop automated image quality assessment techniques consistent with human ratings. In this work, a no-reference second-derivative-based image metric for blur detection and estimation is proposed. The method works by evaluating the magnitude of the second derivative at the edge points in an image and calculating the proportion of edge points where the magnitude is greater than a certain threshold. Lower values of this proportion (the metric) denote increased levels of blur in the image. Experiments show that this method can successfully differentiate between images with no blur and varying degrees of blur. Comparison with some other state-of-the-art quality assessment techniques on a standard dataset of Gaussian-blur images shows that the proposed method gives moderately high performance in terms of correspondence with human subjective scores. Coupled with the method's simplicity and consequent ease of implementation, this makes it a suitable choice for mobile applications.
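    The metric described above can be sketched in one dimension: evaluate the discrete second derivative at given edge points and report the fraction whose magnitude exceeds a threshold. The scanlines, edge-point indices, and threshold below are hypothetical; the thesis operates on 2-D images with detected edges.

```python
def blur_metric(signal, edge_points, threshold):
    """Fraction of edge points whose second-derivative magnitude exceeds
    the threshold. Lower values indicate stronger blur, since blur
    flattens the curvature of intensity profiles at edges."""
    def second_derivative(i):
        # Central-difference approximation: f(i-1) - 2 f(i) + f(i+1)
        return signal[i - 1] - 2 * signal[i] + signal[i + 1]
    sharp = sum(1 for i in edge_points if abs(second_derivative(i)) > threshold)
    return sharp / len(edge_points)

# A sharp step edge vs. a blurred ramp (toy 1-D "scanlines").
sharp_edge = [0, 0, 0, 10, 10, 10]
blurred_edge = [0, 2, 4, 6, 8, 10]
print(blur_metric(sharp_edge, [2, 3], 5))    # 1.0: both edge points sharp
print(blur_metric(blurred_edge, [2, 3], 5))  # 0.0: curvature flattened by blur
```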

    Committee: Brian Kulis (Advisor); Mikhail Belkin (Committee Member) Subjects: Computer Engineering; Computer Science