Files
34239.pdf (17.91 MB)
Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks
Author
Kong, Lingchao
Permalink:
http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563295836742645
Abstract Details
Year and Degree
2019, PhD, University of Cincinnati, Engineering and Applied Science: Computer Science and Engineering.
Abstract
Wireless camera networks are ubiquitously deployed in distributed sensing applications. The basic functions of each sensor node include video capture, video encoding or local video processing, and data transmission. Video analysis is implemented either at the central server or on the sensor node. Automatic video analysis can efficiently extract useful information from large volumes of video without human intervention, and object detection is its first and most essential step. Thanks to the rich information cameras provide and to advances in computer vision techniques, automatic video analysis is finding ever wider application in wireless distributed systems. However, traditional network quality measures, such as quality of service (QoS) and quality of experience (QoE), do not necessarily reflect the quality of automatic video analysis in wireless camera networks. The overall goal of this dissertation is to propose new quality measures that reflect the quality of automatic video analysis in wireless camera networks, and to design efficient video processing and encoding schemes for wireless cameras that boost that quality.

The impact of lossy compression on object detection is systematically investigated. Current standardized video encoding schemes are found to cause temporal-domain fluctuation in encoded blocks of stable background areas and spatial texture degradation in encoded blocks of dynamic foreground areas of a raw video, both of which degrade the accuracy of object detection. Two measures, the sum-of-absolute frame difference (SFD) and the degradation of texture (TXD), are introduced to characterize the temporal fluctuation and the spatial texture degradation in an encoded video, respectively. A model of object detection quality on compressed videos is established based on these two measures.
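The dissertation body defines SFD precisely; as a rough sketch only, assuming SFD is computed per 16x16 block as the sum of absolute luminance differences between co-located blocks of consecutive frames (the function name and exact block size are illustrative assumptions, not the author's specification):

```python
import numpy as np

def block_sfd(prev_frame, curr_frame, block=16):
    """Illustrative per-block SFD: sum of absolute luminance differences
    between co-located blocks of two consecutive frames. The exact form
    of the measure is an assumption for illustration."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    h, w = diff.shape
    h, w = h - h % block, w - w % block          # crop to whole blocks
    return (diff[:h, :w]
            .reshape(h // block, block, w // block, block)
            .sum(axis=(1, 3)))                   # one SFD value per block
```

Under this reading, a block in a stable background area of a well-encoded video should have an SFD near zero; encoding-induced temporal fluctuation shows up as nonzero SFD there.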
We then propose an efficient video encoding framework that boosts the accuracy of object detection for distributed sensing applications. Guided by the two measures, the framework suppresses unnecessary temporal fluctuation in stable background areas and preserves spatial texture in dynamic foreground areas, and it introduces new mode decision strategies for both intra and inter frames that improve the accuracy of object detection while maintaining acceptable rate-distortion performance.

Video analysis at network edges in a distributed manner can alleviate bandwidth pressure, enable better real-time response, and achieve higher system reliability. We investigate the impact of imaging quality, such as noise and blur, on the performance of distributed in-network video analysis. We propose a no-reference regression model based on a bagging ensemble of regression trees that predicts the accuracy of object detection from observable features of an image. Building on this estimate of detection performance, we propose a quality adjustment framework that provides satisfactory object detection performance on embedded cameras. Its key components are a blind regression model for predicting object detection performance and two classifiers for determining the type of distortion in an image. The proposed framework yields accurate estimates of both image quality and image distortion type with low computational complexity, and it effectively enhances the performance of object detection on embedded cameras.
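The no-reference predictor described above is a bagging ensemble of regression trees. As a minimal sketch of that technique only (the feature set, target, and depth-1 "stump" trees here are simplifying assumptions; the dissertation's actual features and tree configuration are not reproduced):

```python
import numpy as np

def fit_stump(X, y, rng):
    """Fit a depth-1 regression tree (stump) on a bootstrap resample,
    choosing the (feature, threshold) split with least squared error."""
    idx = rng.integers(0, len(X), len(X))        # bootstrap resample
    Xb, yb = X[idx], y[idx]
    best = None
    for j in range(Xb.shape[1]):
        for t in np.quantile(Xb[:, j], [0.25, 0.5, 0.75]):
            left = Xb[:, j] <= t
            if left.all() or not left.any():
                continue
            pred = np.where(left, yb[left].mean(), yb[~left].mean())
            err = float(((yb - pred) ** 2).sum())
            if best is None or err < best[0]:
                best = (err, j, t, yb[left].mean(), yb[~left].mean())
    return best[1:]                              # (feature, threshold, left mean, right mean)

def predict(stumps, x):
    """Bagged prediction: average the outputs of all stumps."""
    return float(np.mean([lo if x[j] <= t else hi for j, t, lo, hi in stumps]))

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))                   # assumed features: [noise level, blur level]
y = 1.0 - 0.5 * X[:, 0] - 0.4 * X[:, 1]          # synthetic "detection accuracy" target
stumps = [fit_stump(X, y, rng) for _ in range(50)]
```

With this synthetic target, the ensemble predicts higher detection accuracy for a low-noise, low-blur image than for a heavily distorted one, which is the behavior the quality adjustment framework relies on.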
Committee
Rui Dai, Ph.D. (Committee Chair)
Dharma Agrawal, D.Sc. (Committee Member)
H. Howard Fan, Ph.D. (Committee Member)
Carla Purdy, Ph.D. (Committee Member)
Julian Wang, Ph.D. (Committee Member)
Pages
117 p.
Subject Headings
Computer Science
Keywords
Image Quality Assessment; Object Detection; Embedded Cameras; Video Encoding; Video Quality Assessment; Video Surveillance
Recommended Citations
APA Style (7th edition)
Kong, L. (2019). Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks [Doctoral dissertation, University of Cincinnati]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563295836742645

MLA Style (8th edition)
Kong, Lingchao. Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks. 2019. University of Cincinnati, Doctoral dissertation. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563295836742645.

Chicago Manual of Style (17th edition)
Kong, Lingchao. "Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks." Doctoral dissertation, University of Cincinnati, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563295836742645
Document number:
ucin1563295836742645
Download Count:
119
Copyright Info
© 2019, some rights reserved.
Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks by Lingchao Kong is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. Based on a work at etd.ohiolink.edu.
This open access ETD is published by University of Cincinnati and OhioLINK.