Search Results


(Total results 31)

Search Report

  • 1. Mathew, Alex Rotation Invariant Histogram Features for Object Detection and Tracking in Aerial Imagery

    Doctor of Philosophy (Ph.D.), University of Dayton, 2014, Electrical Engineering

    Object detection and tracking in imagery captured by aerial systems are becoming increasingly important in computer vision research. In aerial imagery, objects can appear in any orientation, in varying sizes, and under different lighting conditions. Due to the exponential growth in sensor technology, the increasing size and resolution of aerial imagery are becoming a challenge for real-time computation. A rotation-invariant feature extraction technique for detecting and tracking objects in aerial imagery is presented in this dissertation. Rotation invariance in the feature representation is addressed by considering concentric circular regions centered at visually salient locations of the object. The intensity distribution characteristics of the object region are used to represent an object effectively. A set of intensity-based features is derived from intensity histograms of the circular regions; these features are inherently rotation invariant. An integral histogram computation approach is used to compute these features efficiently. To improve the representational strength of the feature set for rotation- and illumination-invariant object detection, a gradient-based feature set is derived from normalized gradient orientation histograms of concentric regions. Rotation invariance is achieved by considering the magnitude of the Discrete Fourier Transform (DFT) of the gradient orientation histograms. A novel computational framework called Integral DFT is presented for fast and efficient extraction of gradient-based features in large imagery. A part-based model, which relies on a representation of an object as an aggregation of significant parts, using the gradient-based features is also presented in this dissertation. Integrating the features of significant parts gives robustness to partial occlusions and slight deformations, thus leading to a better object representation. The effectiveness of the intensity-based feature is demonstrated in tracking objects in Wide Area Motion Imagery (WA (open full item for complete abstract) A minimal sketch of the histogram-DFT idea follows this entry.

    Committee: Vijayan Asari (Committee Chair); Keigo Hirakawa (Committee Member); Raul Ordonez (Committee Member); Youssef Raffoul (Committee Member) Subjects: Computer Engineering; Computer Science; Electrical Engineering; Engineering
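
    The rotation-invariance trick described above can be stated compactly: rotating a patch circularly shifts its gradient orientation histogram, and the magnitude of the DFT is invariant to circular shifts. A minimal numpy sketch of that idea follows; it uses a single square patch as a stand-in for one of the dissertation's concentric circular regions, and the bin count is an illustrative choice, not the dissertation's setting.

        import numpy as np

        def orientation_histogram(gx, gy, n_bins=36):
            """Gradient orientation histogram, weighted by gradient magnitude."""
            mag = np.hypot(gx, gy)
            ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)            # orientation in [0, 2*pi)
            bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
            hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
            return hist / (hist.sum() + 1e-12)                     # normalize (illumination)

        def rotation_invariant_feature(patch, n_bins=36):
            """Rotating the patch circularly shifts the histogram, and the DFT
            magnitude is unchanged by circular shifts, so this descriptor is
            (approximately, up to binning effects) rotation invariant."""
            gy, gx = np.gradient(patch.astype(float))
            return np.abs(np.fft.fft(orientation_histogram(gx, gy, n_bins)))

    The dissertation's Integral DFT makes this computation efficient over many overlapping regions; the sketch computes it directly for a single region.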
  • 2. Sharma, Vinay Simultaneous object detection and segmentation using top-down and bottom-up processing

    Doctor of Philosophy, The Ohio State University, 2008, Computer and Information Science

    This thesis addresses the fundamental tasks of detecting objects in images, recovering their location, and determining their silhouette shape. We focus on object detection techniques that 1) enable simultaneous recovery of object location and object shape, 2) require minimal manual supervision during training, and 3) are capable of consistent performance under varying imaging conditions found in real-world scenarios. The work described here results in the development of a unified method for simultaneously acquiring both the location and the silhouette shape of specific object categories in outdoor scenes. The proposed algorithm integrates top-down and bottom-up processing, and combines cues from these processes in a balanced manner. The framework provides the capability to incorporate both appearance and motion information, making use of low-level contour-based features, mid-level perceptual cues, and higher-level statistical analysis. A novel Markov random field formulation is presented that effectively integrates the various cues from the top-down and bottom-up processes. The algorithm attempts to leverage the natural structure of the world, thereby requiring minimal user supervision during training. Extensive experimental evaluation shows that the approach is applicable to different object categories, and is robust to challenging conditions such as large occlusions and drastic changes in viewpoint. For static camera scenarios, we present a contour-based background-subtraction technique. Utilizing both intensity and gradient information, the algorithm constructs a fuzzy representation of foreground boundaries called a Contour Saliency Map. Combined with a low-level data-driven approach for contour completion and closure, the approach is able to accurately recover object shape. We also present object detection and segmentation approaches that combine information from visible and thermal imagery. For object detection, we present a contour-based fusion algorithm for b (open full item for complete abstract) A toy sketch of the contour-cue fusion follows this entry.

    Committee: James Davis (Advisor) Subjects: Computer Science
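
    The Contour Saliency Map described above combines intensity and gradient information into a fuzzy map of likely foreground boundaries. The following toy sketch is one plausible reading of that combination, not the thesis's exact formulation: both cues are normalized and fused with a fuzzy AND (minimum), so a pixel is salient only when both cues agree. The background model is assumed given.

        import numpy as np

        def contour_saliency_map(frame, background, eps=1e-12):
            """Toy fuzzy foreground-boundary map (a stand-in, not the thesis's CSM)."""
            # Intensity cue: normalized absolute frame-vs-background difference.
            diff = np.abs(frame.astype(float) - background.astype(float))
            intensity_cue = diff / (diff.max() + eps)
            # Gradient cue: normalized gradient magnitude of the difference image.
            gy, gx = np.gradient(diff)
            grad = np.hypot(gx, gy)
            gradient_cue = grad / (grad.max() + eps)
            # Fuzzy AND: salient only where both intensity and gradient cues agree.
            return np.minimum(intensity_cue, gradient_cue)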
  • 3. Wu, Shuang Deep Learning-based Hazardous Materials Detection Algorithm

    Master of Sciences, Case Western Reserve University, 2022, EECS - System and Control Engineering

    Stations, airports, subways, and other transportation hubs require luggage security screening. The general public's luggage is scanned by X-ray equipment and examined by professionally trained staff to determine whether it contains prohibited items. However, with the rapid development of the economy, the flow of people through airports and stations is growing rapidly, and the types of luggage people carry have become diverse. The existing method of relying on security personnel to manually identify prohibited items from X-ray pictures is not suited to the current complex and changing situation, resulting in lower identification accuracy and bringing certain security risks. To solve this problem, and drawing on the characteristics of the X-ray security inspection instrument itself, this paper proposes an automated system for detecting contraband in X-ray scanned images. The system is studied using Deep Learning models along multiple dimensions such as model design, feature extraction, automatic bounding-box regression, and model deployment, using multiple metrics such as Precision, Recall, AP, and mAP for analysis (a sketch of the AP computation follows this entry). The details of the study are as follows: based on the YOLO-V3 object detection algorithm and combined with the task characteristics, this paper proposes a lightweight feature extraction network, a multidimensional feature fusion mechanism, automatic anchor regression, and other structures to design a high-precision X-RayNet algorithm. The method not only dramatically reduces the number of parameters and operations of the YOLO-V3 model but also improves detection accuracy. This paper also proposes a Transformer-based X-ray contraband detection algorithm, which addresses the poor bounding-box fit and insufficient generalization of X-RayNet detection and its need for NMS operations, and uses the Self-Attention mechanism to complete feature extraction and prediction in the object detection process. After experimental analysis, t (open full item for complete abstract)

    Committee: Christos Papachristou (Advisor); Vira Chankong (Committee Member); Michael Fu (Committee Member) Subjects: Computer Engineering; Electrical Engineering
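
    Precision, Recall, AP, and mAP, the metrics named above, are standard detection measures. For reference, here is a minimal numpy sketch of AP from confidence-ranked detections (all-point interpolation, as in the common PASCAL VOC protocol; the thesis's exact evaluation settings may differ).

        import numpy as np

        def average_precision(scores, is_true_positive, n_ground_truth):
            """AP from detections ranked by confidence score."""
            order = np.argsort(-np.asarray(scores, dtype=float))
            tp = np.asarray(is_true_positive, dtype=float)[order]
            cum_tp = np.cumsum(tp)
            cum_fp = np.cumsum(1.0 - tp)
            recall = cum_tp / n_ground_truth
            precision = cum_tp / (cum_tp + cum_fp)
            # Replace each precision by the max to its right (monotone envelope).
            for i in range(len(precision) - 2, -1, -1):
                precision[i] = max(precision[i], precision[i + 1])
            # Integrate precision over the recall steps; mAP = mean over classes.
            r = np.concatenate(([0.0], recall))
            return float(np.sum((r[1:] - r[:-1]) * precision))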
  • 4. Che, Peining ZERO-SHOT OBJECT DETECTION METHOD COMPARISON AND ANALYSIS

    Master of Computer Science, Miami University, 2019, Computer Science and Software Engineering

    Object detection is a popular research domain; however, it is a challenge to annotate imagery for object detection, as each object within an image must be identified. We focus on an approach for object detection that allows classifiers to use less annotated data, called Zero-shot Detection (ZSD). The aim is to build an object detector that is trained on one set of classes (e.g., for which we have annotations) and that also performs well on novel categories of objects that were not annotated in its training set. Several recent studies have tackled Zero-shot Learning (ZSL) for image classification problems, but ZSD is more challenging. This thesis evaluates recently reported ZSD approaches in order to understand and explain where ZSD can be successful in an object detection framework. From our experiments, we show that trained ZSD models vary from run to run and perform differently on the same detection task. We also find that quantitative results cannot be the only measure of a detection model's success, because the visual outputs are also important. A minimal sketch of the embedding-matching idea behind ZSD follows this entry.

    Committee: John Femiani (Advisor); Karen Davis (Committee Member); Vijayalakshmi Ramasamy (Committee Member) Subjects: Computer Science
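
    A common ingredient of the ZSD approaches such a study compares (hedged: the thesis evaluates several published methods, not this exact recipe) is to score a region's visual feature against semantic class embeddings, so classes with embeddings but no annotated boxes can still be recognized. A minimal numpy sketch, with the visual-to-semantic projection assumed already learned:

        import numpy as np

        def classify_region(visual_feature, class_embeddings, projection):
            """Project a region feature into the semantic space and pick the class
            whose embedding is most similar (cosine). Unseen classes take part
            because they have embeddings even without training annotations."""
            z = projection @ visual_feature                 # visual -> semantic space
            z = z / (np.linalg.norm(z) + 1e-12)
            names = list(class_embeddings)
            E = np.stack([class_embeddings[n] for n in names])
            E = E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-12)
            scores = E @ z                                  # cosine similarities
            return names[int(np.argmax(scores))], scores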
  • 5. Schrider, Christina Histogram-based template matching object detection in images with varying brightness and contrast

    Master of Science in Engineering (MSEgr), Wright State University, 2008, Biomedical Engineering

    Our challenge was to develop a semi-automatic target detection algorithm to aid human operators in locating potential targets within images. In contrast to currently available methods, our approach is relatively insensitive to image brightness, image contrast, and object orientation. Working on overlapping image blocks, we used a sliding difference method of histogram matching. Incrementally sliding the histograms of the known object template and the image region of interest (ROI) past each other, the sum of absolute histogram differences was calculated. The minimum of the resultant array was stored in the corresponding spatial position of a response surface matrix. Local minima of the response surface suggest possible target locations. Because the template contrast will rarely match that of the actual image, which can be compromised by illumination conditions, background features, cloud cover, etc., we perform a random contrast manipulation, which we term 'wobble', on the template histogram. Our results have shown improved object detection with the combination of the sliding histogram difference and wobble. A sketch of the sliding-difference and wobble steps follows this entry.

    Committee: Julie Skipper PhD (Advisor); Daniel Repperger PhD (Committee Member); Thomas Hangartner PhD (Committee Member); S. Narayanan PhD (Other); Joseph F. Thomas, Jr. PhD (Other) Subjects: Biomedical Research; Engineering; Scientific Imaging
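
    The matching step above reduces to a small amount of array code. A minimal numpy sketch, assuming intensity histograms of equal length; the shift range and wobble strength are illustrative, and the wobble re-mapping here is a stand-in for the thesis's random contrast manipulation:

        import numpy as np

        def sliding_histogram_difference(template_hist, roi_hist, max_shift=None):
            """Slide the template histogram along the intensity axis (modeling
            brightness offsets) and return the minimum sum of absolute bin
            differences; small values suggest a match."""
            n = len(template_hist)
            if max_shift is None:
                max_shift = n - 1
            best = np.inf
            for s in range(-max_shift, max_shift + 1):
                shifted = np.zeros(n)
                if s >= 0:
                    shifted[s:] = template_hist[:n - s]
                else:
                    shifted[:n + s] = template_hist[-s:]
                best = min(best, float(np.abs(shifted - roi_hist).sum()))
            return best

        def wobble(template_hist, rng, strength=0.2):
            """Random contrast manipulation: remap bins by a random gain."""
            n = len(template_hist)
            gain = 1.0 + rng.uniform(-strength, strength)
            dst = np.clip((np.arange(n) * gain).astype(int), 0, n - 1)
            out = np.zeros(n)
            np.add.at(out, dst, template_hist)              # accumulate remapped mass
            return out

    Evaluating the minimum difference at every block position fills the response surface whose local minima mark candidate targets.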
  • 6. Yoon, Taehun Object Recognition Based on Multi-agent Spatial Reasoning

    Doctor of Philosophy, The Ohio State University, 2008, Geodetic Science and Surveying

    Object recognition is one of the key processes in Photogrammetry for generating maps from sensory information, because it converts 'data' into 'information.' However, as the size of input data has increased, it has also become one of the bottleneck processes in Photogrammetry. Thus many researchers have been working on developing semi-automated or automated object recognition methods to speed up the process. Some of the developed methods have proved feasible in controlled environments, and others have been applicable to real-world applications. However, most conventional object recognition methods still require human operators' intervention to correct errors or clarify ambiguous results. The new object recognition method proposed in this dissertation recognizes multiple types of objects in parallel so that ambiguous results are minimized. Since the 1980s, new paradigms in Computer Science such as parallel computing and agent models have emerged with the progress of computer systems. The proposed method is built on one of these paradigms, the agent model. With built-in knowledge and predefined goals, the agent actively searches for clues to reach its goals. In a multi-agent model, several agents with specific goals and tasks are deployed, and together they work toward the main goal. The proposed system consists of the coordinator agent, the recognition agents, and the data agent. The coordinator agent initiates the other agents, and the data agent handles and processes input data. While the recognition agents aggressively collect regions for the target objects, conflicts sometimes arise between two or more recognition agents. With the proposed conflict resolution scheme, the conflicts can be resolved and, finally, ambiguity can be removed. Experiments on the proposed system were performed with a multi-spectral image and LIDAR data. Results of feature extraction done by the data agent, and object recognition, are presented. The results show t (open full item for complete abstract)

    Committee: Anton Schenk PhD (Advisor); Alan Saalfeld PhD (Committee Member); Alper Yilmaz PhD (Committee Member); Bea Csatho PhD (Committee Member) Subjects: Computer Science
  • 7. Komarabathuni, Ravi Performance Assessment of a 77 GHz Automotive Radar for Various Obstacle Avoidance Application

    Master of Science (MS), Ohio University, 2011, Electrical Engineering (Engineering and Technology)

    Human safety is one of the highest priorities in the automotive industry. The demands made for reliable safety systems have increased tremendously in the past decade. The radar sensors used for safety systems should be capable of detecting not only automobiles but also motorcycles, bicycles, pedestrians, roadside objects, and any other obstacles the vehicle may come in contact with. This thesis investigates several performance aspects and test procedures for a 77 GHz long-range radar sensor with different test target objects. This assessment helps to investigate the potential of these radar sensors for obstacle detection and/or avoidance for smaller objects like bicycles, humans, traffic barrels, 4” poles, and metal sheets, and also for bigger objects like vans, motorcycles, aircraft, etc. For these purposes, different test cases were developed to evaluate the performance. The test cases used to test the 77 GHz radar sensor include: finding maximum range, range accuracy, finding maximum field of view, detection (and separation) of two target objects (similar and different) at different radial distances, and maximum range for detecting an aircraft. Observations were made with the radar sensor mounted on a moving cart, and the measurements were logged. The results from these tests provide insight into the possibilities and limitations of these radar sensors for different applications. The tests were successfully conducted on a flat, open field at Ohio University Airport, Albany, OH.

    Committee: Chris Bartone PhD, P.E. (Advisor); Jeffrey Dill PhD (Committee Member); Bryan Riley PhD, PMP (Committee Member); William Kaufman PhD (Committee Member) Subjects: Automotive Engineering; Electrical Engineering
  • 8. Balasubramaniam, Sowbaranika Optimized Classification in Camera Trap Images: An Approach with Smart Camera Traps, Machine Learning, and Human Inference

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    Motion-activated cameras, commonly known as camera traps, play a crucial role in wildlife monitoring, biodiversity conservation, and basic ecological science, capturing a diverse range of species in their natural habitats. However, the sheer volume of images generated requires manual analysis by ecologists, making this a time-consuming and inefficient process. This is particularly frustrating since many of the images do not contain animals: camera traps are triggered by the motion of vegetation in the wind, or miss the animal due to the slow camera response. This thesis presents an approach aimed at addressing these challenges through the automation of image classification and the development of smart camera traps that run artificial intelligence on edge devices. First, we develop a species classifier pipeline consisting of object detection and a species classifier for a novel camera trap setting. We train a convolutional neural network to identify 45 trained species using 1.2M images, achieving an accuracy of around 89% at the class level without manual intervention. This research demonstrates the combination of deep learning methodologies and human oversight, showcasing their ability to facilitate automatic or semiautomatic, precise, and efficient surveys of small animals in the wild, particularly in novel settings or for novel species. The suggested pipeline achieves 95% accuracy in image classification, with 11% of images requiring manual review at the class level. This pipeline helps to automatically annotate images while maintaining accuracy. The camera trap generates a large number of images, and the time delay between data capture and image processing leads to the loss of critical information. Real-time labeling while capturing images can help mitigate this issue. Following the traditional classifier, we investigate methods for deploying an AI model in a camera trap. Smart Camera Traps would result in real (open full item for complete abstract) A toy sketch of the review-routing pipeline follows this entry.

    Committee: Tanya Berger-Wolf (Advisor); Christopher Stewart (Committee Member); Wei-Lun Chao (Committee Member) Subjects: Computer Engineering
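
    The pipeline described above pairs an automatic classifier with human review of low-confidence predictions. A toy Python sketch under stated assumptions (detector and classifier are placeholder callables; the threshold is illustrative, chosen to trade accuracy against the fraction sent to reviewers):

        REVIEW_THRESHOLD = 0.95        # illustrative; tuned for a target accuracy

        def process_image(image, detector, classifier):
            """Stage 1 finds animal crops; stage 2 names the species; predictions
            below the confidence threshold are routed to a human reviewer."""
            results = []
            for crop in detector(image):                    # may yield nothing (empty image)
                probs = classifier(crop)                    # dict: species -> probability
                species = max(probs, key=probs.get)
                needs_review = probs[species] < REVIEW_THRESHOLD
                results.append((species, probs[species], needs_review))
            return results

    Raising the threshold sends more images to manual review; the abstract's 95% accuracy with 11% reviewed reflects one such operating point.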
  • 9. Tadisetty, Srikanth Deep Learning Methods to Quantify Cell Morphology in 3D Multichannel Microscope Data

    PHD, Kent State University, 2024, College of Arts and Sciences / Department of Computer Science

    Astrocytes are supportive brain cells that undergo morphological changes during multiple disease states. Studies have shown their importance in aspects of CNS physiology, but these mechanisms remain poorly understood. Characterizing these changes in large cell populations is useful for understanding disease states and response to potential therapies. To provide an unbiased and accurate framework for astrocyte analysis, we introduce a novel processing pipeline that applies a Multi-Channel and Multi-Stage Segmentation with Artificial Components Training for Astrocytes (MCMSTA-3D) to capture spatial information from 3D microscopic images of astrocytes. Results indicate that our method outperforms the state-of-the-art segmentation models for astrocyte and nuclei segmentation in addition to providing a way to analyze these segmented fragments of the cell applicable to both 2D and 3D microscopic data.

    Committee: Ruoming Jin (Advisor) Subjects: Biology; Biomedical Research; Computer Science
  • 10. Koliyaee, Vahid Robotic Masonry Arch Construction Considering Object Recognition And Human-Robot Interaction

    MS, Kent State University, 2023, College of Architecture and Environmental Design

    In the midst of a technological revolution, where industries are rapidly evolving to embrace cutting-edge technologies, the AEC (architecture, engineering, and construction) industry faces a persistent challenge: a noticeable disparity between state-of-the-art technologies and their practical implementation. This research takes a deep dive into the transformative potential of human-robot interaction (HRI) within the AEC context, with the primary goal of bridging this gap. The central aim of this research is to investigate the feasibility of empowering robots with advanced object detection capabilities, primarily focusing on the YOLO algorithm, to autonomously recognize and select construction materials, such as bricks, during construction tasks. This innovative approach marks a significant departure from conventional construction practices, where instructions typically flow in one direction, from humans to robots. The introduction of the ArchiTech project, which is built upon ROS and Python, exemplifies an unprecedented shift toward making robots smarter and more responsive to human operators. The integration of robots into human-oriented tasks offers a plethora of benefits that extend beyond the AEC sector. First and foremost, it leads to enhanced precision and quality. Robots, being inherently precise and consistent, can perform repetitive tasks with an unparalleled level of accuracy, which translates into improved product quality and consumer satisfaction. Moreover, robots enhance safety and ergonomics in the workplace: they can undertake tasks that are perilous or physically demanding for humans, improving worker safety and overall workplace ergonomics. This is particularly notable in industries like construction, where robots can be deployed to handle heavy loads and operate in challenging terrains, thereby minimizing the risk of accidents and injuries. Robotic fabrication also holds the promise of boosting efficiency and productiv (open full item for complete abstract)

    Committee: Rui Liu (Advisor); Jong-Hoon Kim (Committee Member); Lameck Onsarigo (Committee Member); Bill Lucak (Committee Member) Subjects: Architectural; Architecture; Computer Science; Robotics
  • 11. Kian, Setareh Enhancing Object Detection Methods by Knowledge Distillation for Automotive Driving in Real-World Settings

    Doctor of Philosophy (Ph.D.), University of Dayton, 2023, Electrical Engineering

    Commercial cameras primarily aim to capture visually appealing images for human viewers, often leading to the loss of critical information during the image generation process. For machine vision applications, however, extracting as much data as possible from an image is crucial for effective operation. In the context of autonomous vehicles, cameras serve as vital vision tools, where the captured data is processed through object detection algorithms such as YOLO, FasterRCNN, RetinaNet, etc. Hence, it becomes essential to have an object detection algorithm capable of leveraging all available information from camera images to perform effectively under challenging conditions, such as low-light scenarios and the detection of small or distant objects. Traditionally, most object detection models have been established and evaluated on common RGB images, which align with human visual perception. However, important details that could be valuable for machine vision tasks often vanish in the image signal processing (ISP) pipeline. To address this limitation, cameras with an RCCB (Red, Clear, Clear, Blue) color format, replacing the green channel with clear, have been introduced in the autonomous driving industry, featuring greater low-light sensitivity and lower noise, which leads to enhanced object detection quality. This research focuses on training cost-effective object detection models using raw images captured with an RCCB color filter array, while requiring a minimal amount of training data and low computational complexity. The author employs a knowledge distillation method through unsupervised learning to transfer the knowledge from high-performance state-of-the-art object detection models, trained on RGGB (Red, Green, Green, Blue) color filter array images, to operate with high accuracy on RCCB raw images. The results of this study demonstrate the effectiveness of the proposed approach in training object detection models s (open full item for complete abstract) A sketch of the distillation step follows this entry.

    Committee: Keigo Hirakawa (Committee Chair); Scott McCloskey (Committee Member); Raul Ordonez (Committee Member); Eric Balster (Committee Member) Subjects: Engineering
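
    The distillation idea above can be sketched in a few lines of PyTorch. This is a sketch under assumptions, not the dissertation's exact losses: the frozen RGB-trained teacher produces pseudo-targets from an ISP-processed view of each frame, the student sees the raw RCCB frame directly, and outputs are matched with a plain MSE for brevity (real detection distillation typically matches per-anchor logits or feature maps).

        import torch
        import torch.nn.functional as F

        def distillation_step(student, teacher, isp, raw_rccb_batch, optimizer):
            """One unsupervised distillation step: no ground-truth boxes needed."""
            with torch.no_grad():
                teacher_out = teacher(isp(raw_rccb_batch))  # pseudo-targets from RGB teacher
            student_out = student(raw_rccb_batch)           # student works on raw RCCB
            loss = F.mse_loss(student_out, teacher_out)     # match the teacher's predictions
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()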
  • 12. Tambolkar, Pooja Machine Learning Techniques for Campus Mobility Analysis

    Master of Science, The Ohio State University, 2023, Mechanical Engineering

    Rapid urbanization has led to increased urban sprawl, exerting pressure on natural resources, the environment, infrastructure, and the dynamics of urban regions. Effective transportation planning and traffic management are crucial for alleviating the strain on road networks, reducing congestion, and thus promoting sustainable mobility in cities. Advancements in big data have revolutionized traffic data by facilitating real-time monitoring and data-driven solutions. Various data sources have the potential to provide useful traffic information. This thesis focuses on analyzing different existing data sources on the OSU campus and developing an end-to-end approach to handle this data. The thesis explores two data sources in particular, surveillance cameras and Wi-Fi hotspots, to derive relevant and usable data for traffic modelling. Object detection and tracking techniques have been implemented to extract the total counts of pedestrians and vehicles moving across campus at peak hours. A reinforcement learning approach has been developed to model the paths taken by pedestrians using the Wi-Fi data. Simulation of Urban Mobility (SUMO) provides a realistic environment for obtaining the optimal path for the pedestrians. By integrating diverse data sources and employing innovative methodologies, this thesis and its outcomes aid traffic management and offer valuable insights for creating smarter, more efficient, and resilient cities.

    Committee: Shawn Midlam-Mohler (Advisor); Punit Tulpule (Committee Member); Sandra Metzler (Committee Member) Subjects: Engineering
  • 13. Ji, Jiajie Object Detection and Classification Based on Point Separation Distance Features of Point Cloud Data

    Master of Science (M.S.), University of Dayton, 2023, Electro-Optics

    Today, with the development of artificial intelligence and autonomous driving in full swing, lidar plays a vital role. As an important sensing and detection component, lidar uses 3D point cloud images as a medium to allow artificial intelligence systems to perceive the outside world and perform reasoning. The processing of point clouds is therefore an important part of the information processing of a lidar system, and it determines the accuracy and feasibility of artificial intelligence judgments. In this thesis, an analysis method based on extracting point-separation distance distribution features of point clouds is used. First, we introduce how a lidar system works, how it collects information, and how it generates a 3D point cloud. Afterward, feature analysis of the point-separation distribution for dimensionality reduction is proposed. We then use the point-separation distribution feature for object classification, object recognition, and segmentation of whether there are vehicles on the road. We also provide deep learning results and analysis based on these features, and on this basis we discuss the significance and practicality of this feature analysis. A minimal pairwise-distance-histogram sketch follows this entry.

    Committee: Edward Watson (Committee Chair); Partha Banerjee (Committee Member); Miranda van Iersel (Committee Member) Subjects: Electrical Engineering; Engineering; Optics
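
    One plausible reading of a 'point separation distance distribution feature' (an assumption, not necessarily the thesis's exact definition) is the histogram of pairwise distances in a point cloud, which is invariant to rotation and translation and has a fixed length regardless of point count. A minimal numpy sketch:

        import numpy as np

        def point_separation_histogram(points, n_bins=64, r_max=None):
            """Normalized histogram of pairwise point-to-point distances.
            points: (N, 3) array; O(N^2) memory, fine for small clouds."""
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            d = d[np.triu_indices(len(points), k=1)]        # each unordered pair once
            if r_max is None:
                r_max = float(d.max())
            hist, _ = np.histogram(d, bins=n_bins, range=(0.0, r_max))
            return hist / hist.sum()                        # distribution over separations

    Such a fixed-length descriptor can feed either classical classifiers or the deep models mentioned in the abstract.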
  • 14. Couture Del Valle, Christopher Optimization of Convolutional Neural Networks for Enhanced Compression Techniques and Computer Vision Applications

    Master of Science in Computer Engineering, University of Dayton, 2022, Electrical and Computer Engineering

    Image compression algorithms are the basis of media transmission and compression in the field of image processing. Decades after their inception, algorithms such as the JPEG image codec continue to be the industry standard. A notable research topic gathering momentum in the field of compression is deep learning (DL). This paper explores the optimization of DL models for ideal image compression and object detection (OD) applications. The DL model to be optimized is based upon an existing compression framework known as the CONNECT model. This framework wraps the traditional JPEG image codec within two convolutional neural networks (CNNs). The first network, ComCNN, focuses on compressing an input image into a compact representation to be fed into the image codec. The second network, RecCNN, focuses on reconstructing the output image from the codec as similarly as possible to the original image. To enhance the performance of the CONNECT model, an optimization software called Optuna wraps the framework. Hyperparameters are selected from each CNN to be evaluated and optimized by Optuna (a sketch of this wrapping follows this entry). Once the CONNECT model produces ideal results, the output images are applied to the YOLOv5 OD network. This paper explores the impact of DL hyperparameters on image quality and compression metrics. In addition, a detection network provides context for the effect of image compression on computer vision applications.

    Committee: Bradley Ratliff (Committee Chair); Eric Balster (Committee Member); Barath Narayanan (Committee Member) Subjects: Computer Engineering
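
    Optuna's wrapping of a training loop is compact. A sketch under assumptions: train_connect_model is a hypothetical stand-in for training the ComCNN/JPEG/RecCNN stack and returning a validation image-quality score; the hyperparameter names and ranges are illustrative, not the paper's.

        import optuna

        def objective(trial):
            # Sample candidate hyperparameters for the two CNNs.
            lr = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
            n_filters = trial.suggest_int("rec_cnn_filters", 32, 128, step=32)
            # Hypothetical trainer: returns e.g. validation PSNR.
            return train_connect_model(lr=lr, n_filters=n_filters)

        study = optuna.create_study(direction="maximize")   # maximize image quality
        study.optimize(objective, n_trials=50)
        print(study.best_params)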
  • 15. Mu, Yunze East, West, South, North, and Center- Live Electronic Music based on Neural Network, Board Game, and Data-driven Instrument

    DMA, University of Cincinnati, 2022, College-Conservatory of Music: Composition

    East, West, South, North, and Center is a piece based on neural network technology. In this piece, performers use a set of tiles from the traditional Chinese board game Mahjong as a data-driven instrument, with a camera recognizing all the mahjong tiles on the table. Beyond recognizing the picture on each mahjong tile, the system can also recognize each tile's position and rotation angle. All these parameters give a lot of potential for a piece with rich timbres and fun interactions. With these technologies, the piece tells a fictional story through my music about a family's life. In this family, every member has their own thoughts. They fight, but that does not change the fact that they love each other, and they try to make amends after every fight. Many of the problems we encounter in life are magically solved while we fix our family problems. As a traditional Chinese saying goes: everything ends in the place where it begins. The inspiration for the title East, West, South, North, and Center comes from the set of five Mahjong tiles called east, west, south, north, and center. These five tiles symbolize how people play the board game: you need four players (east, west, south, and north) sitting around a square table (center). The five elements give the basic figure of a family.

    Committee: Mara (Margaret) Helmuth D.M.A. (Committee Member); Douglas Knehans D.M.A. (Committee Member); Michael Fiday Ph.D. (Committee Member) Subjects: Music
  • 16. Schneider, Bradley Building an Understanding of Human Activities in First Person Video using Fuzzy Inference

    Doctor of Philosophy (PhD), Wright State University, 2022, Computer Science and Engineering PhD

    Activities of Daily Living (ADL's) are the activities that people perform every day in their home as part of their typical routine. The in-home, automated monitoring of ADL's has broad utility for intelligent systems that enable independent living for the elderly and for mentally or physically disabled individuals. With rising interest in electronic health (e-Health) and mobile health (m-Health) technology, opportunities abound for the integration of activity monitoring systems into these newer forms of healthcare. In this dissertation we propose a novel system for describing ADL's based on video collected from a wearable camera. Most in-home activities are naturally defined by interaction with objects. We leverage these object-centric activity definitions to develop a set of rules for a Fuzzy Inference System (FIS) that uses video features and the identification of objects to identify and classify activities. Further, we demonstrate that the use of an FIS enhances the reliability of the system and provides enhanced explainability and interpretability of results over popular machine-learning classifiers, due to the linguistic nature of fuzzy systems. A toy fuzzy-rule sketch follows this entry.

    Committee: Tanvi Banerjee Ph.D. (Advisor); Yong Pei Ph.D. (Committee Member); Michael Riley Ph.D. (Committee Member); Mateen Rizki Ph.D. (Committee Member); Thomas Wischgoll Ph.D. (Committee Member) Subjects: Computer Science
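
    To make the object-centric fuzzy-rule idea concrete, here is a toy, hand-rolled sketch; the membership functions, inputs, and rules are invented for illustration and are not the dissertation's rule base.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

        def activity_memberships(kettle_conf, motion):
            """kettle_conf: detector confidence that a kettle is in view;
            motion: normalized hand-motion energy from the wearable camera.
            Fuzzy AND is the minimum; output is a degree per activity label."""
            kettle_visible = tri(kettle_conf, 0.5, 1.0, 1.5)
            hands_still = tri(motion, -0.5, 0.0, 0.5)
            hands_active = tri(motion, 0.5, 1.0, 1.5)
            return {
                "making tea": min(kettle_visible, hands_active),  # rule 1
                "idle": min(kettle_visible, hands_still),         # rule 2
            }

    The linguistic rules ("IF kettle visible AND hands active THEN making tea") are what make the classification explainable, which is the property the abstract highlights.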
  • 17. Chen, Yulu Spatial Temporal Analysis of Traffic Patterns during the COVID-19 Epidemic by Vehicle Detection using Planet Remote Sensing Satellite Images

    Master of Science, The Ohio State University, 0, Civil Engineering

    The spread of COVID-19 since the end of 2019 has reached epidemic level and become a global public health crisis. During this period, the responses to COVID-19 were highly diverse and decentralized across countries and regions. Understanding the dynamics of human mobility change at high spatial-temporal resolution is critical for assessing the impacts of non-pharmaceutical interventions (such as stay-at-home orders, regional lockdowns, and travel restrictions) during the pandemic. Using high-resolution (HR) satellite image data provided by PlanetScope from November 2019 to September 2020 to capture traffic density trends in vehicle movement patterns for multiple cities around the world, this research further evaluates how mobility patterns changed in response to COVID-19 exposure. By developing a novel method combining radiometric correction, road mask generation, and morphology-based vehicle detection, the study aims to capture traffic patterns from satellite images with a resolution of 3 meters (a toy sketch of the morphology-on-road-mask step follows this entry). Though the effect varies by degree, the results show that traffic volume dropped significantly in multiple cities after COVID-19 exposure. This study strongly supports social distancing as a non-pharmaceutical intervention capable of mitigating COVID-19 transmission. Since the model can surveil traffic patterns at a high spatial-temporal resolution, it can support more timely policy making around social distancing at a small geographic scale and inform future public health or planning decision making.

    Committee: Rongjun Qin (Advisor); Lei Wang (Committee Member); Desheng Liu (Committee Member) Subjects: Geographic Information Science; Geotechnology; Remote Sensing
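
    A toy numpy/scipy sketch of the morphology-on-road-mask step (assumptions: a single normalized image band, a given boolean road mask, and illustrative kernel/threshold values; note that at 3 m resolution a vehicle spans only a pixel or two, so the structuring element is small):

        import numpy as np
        from scipy import ndimage

        def count_vehicles(band, road_mask, kernel=3, thresh=0.15):
            """Detect small bright blobs on roads and count them as vehicles."""
            img = band.astype(float) / (band.max() + 1e-12)   # crude radiometric scaling
            blobs = ndimage.white_tophat(img, size=kernel)    # keep small bright structures
            candidates = (blobs > thresh) & road_mask         # restrict to the road mask
            _, n_vehicles = ndimage.label(candidates)         # connected components
            return n_vehicles

    Repeating the count over image dates yields the traffic-density time series the study compares before and after COVID-19 exposure.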
  • 18. Saeedimoghaddam, Mahmoud Exploring the Effectiveness of the Urban Growth Boundaries in USA using the Multifractal Analysis of the Road Intersection Points, A Case Study of Portland, Oregon

    PhD, University of Cincinnati, 2020, Arts and Sciences: Geography

    Uncontrolled urban sprawl causes environmental damage that may be irreparable. To avoid such losses and to manage scattered urban development, several American cities have implemented an urban growth boundary (UGB) policy. The effectiveness of this policy for Portland, Oregon has been quantitatively analyzed in multiple studies. However, those studies were based on the neighborhood scale and thus suffered from the Modifiable Areal Unit Problem (MAUP). Furthermore, they did not take into account the complex and hierarchical nature of the urban system. Finally, they used temporally aggregated datasets, while using exact temporal snapshots would be more appropriate. In this dissertation, road intersection patterns are used as proxies for urban structure. Eleven exact temporal snapshots of road intersection maps between 1852 and 2019 have been used to avoid the shortcomings of temporally aggregated data. Before 1992, the data are available only in the form of scanned historical maps and need to be transformed into a usable vector-based format. A supervised machine learning framework based on convolutional neural networks has been used to extract road intersection points from historical maps of the Portland metro area automatically. To analyze the effectiveness of Portland's UGB policy, a scale-free methodology based on the temporal changes in multifractal characteristics of the urban structure has been used (a box-counting sketch of multifractal dimensions follows this entry). This method addresses the MAUP and takes the complex and hierarchical nature of the urban system into account. The concept of multifractal characteristics of urban structures is clarified by introducing a model of multifractal simulation of urban structures. Finally, the temporal evolution of the structure of the Portland metro area from multifractal to monofractal shows that the multifractality of Portland's urban structure has indeed been declining since 1979. However, this decline halted in 2010, and during the period 2010-2019, (open full item for complete abstract)

    Committee: Tomasz Stepinski Ph.D. (Committee Chair); Xi Chen Ph.D. (Committee Member); Carla Chifos Ph.D. (Committee Member); Diego Cuadros Ph.D. (Committee Member); Lin Liu Ph.D. (Committee Member); Kevin Raleigh Ph.D. (Committee Member) Subjects: Geographic Information Science
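
    Multifractal analysis of a point pattern such as road intersections is commonly done by box counting. A minimal numpy sketch of the generalized dimensions D_q (an illustrative method, not necessarily the dissertation's exact estimator); coordinates are assumed scaled to the unit square, and a monofractal pattern gives a flat D_q curve while multifractality shows up as D_q varying with q:

        import numpy as np

        def generalized_dimensions(points, qs=(0, 2), scales=(4, 8, 16, 32, 64)):
            """Box-counting estimate of D_q for a 2-D point pattern.
            points: (N, 2) array with coordinates in [0, 1]."""
            log_eps, logs = [], {q: [] for q in qs}
            for n in scales:                       # n x n grid, box size eps = 1/n
                ix = np.clip((points * n).astype(int), 0, n - 1)
                counts = np.bincount(ix[:, 0] * n + ix[:, 1], minlength=n * n)
                p = counts[counts > 0] / len(points)
                log_eps.append(np.log(1.0 / n))
                for q in qs:                       # log partition function / (q - 1)
                    logs[q].append(np.log(np.sum(p ** q)) / (q - 1))
            # D_q is the slope of log Z_q/(q-1) against log eps across scales.
            return {q: float(np.polyfit(log_eps, logs[q], 1)[0]) for q in qs}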
  • 19. Elavarthi, Pradyumna Semantic Segmentation of RGB images for feature extraction in Real Time

    MS, University of Cincinnati, 2019, Engineering and Applied Science: Mechanical Engineering

    Deep learning networks for semantic segmentation are the core of modern computer vision applications involving target identification and scene extraction. Tremendous research in the area, combined with increased computation power and an abundance of labelled datasets, has allowed deep neural networks to provide solutions to long-standing problems. For real-time applications, however, the parameter count and computational efficiency of these algorithms take on particular significance. The novel method presented here tackles the above problems to enable effective real-time segmentation and color extraction for identifying missing persons.

    Committee: Janet Jiaxiang Dong Ph.D. (Committee Chair); Daniel Humpert M.S. (Committee Member); Anca Ralescu Ph.D. (Committee Member) Subjects: Computer Science
  • 20. Kong, Lingchao Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks

    PhD, University of Cincinnati, 2019, Engineering and Applied Science: Computer Science and Engineering

    Wireless camera networks are ubiquitously deployed in various distributed sensing applications. The basic functions of each sensor node include video capture, video encoding or local video processing, and data transmission. The process of video analysis is implemented either in the central server or in the sensor node. Automatic video analysis can efficiently extract useful information from a huge amount of video without human intervention. Object detection is the first and most essential step of automatic video analysis. Thanks to the abundant information provided by cameras and the development of computer vision techniques, automatic video analysis in wireless distributed systems is finding ever wider application. However, traditional network quality measures, such as QoS and QoE, do not necessarily reflect the quality of automatic video analysis in wireless camera networks. The overall goal of this dissertation is to propose new quality measures that reflect the quality of automatic video analysis in wireless camera networks and to design efficient video processing and encoding schemes for wireless cameras that boost the quality of automatic video analysis. The impact of lossy compression on object detection is systematically investigated. It has been found that current standardized video encoding schemes cause temporal-domain fluctuation for encoded blocks in stable background areas and spatial texture degradation for encoded blocks in dynamic foreground areas of a raw video, both of which degrade the accuracy of object detection. Two measures, the sum-of-absolute frame difference (SFD) and the degradation of texture (TXD), are introduced to depict the temporal-domain fluctuation and the spatial texture degradation in an encoded video, respectively (a sketch of an SFD-style computation follows this entry). A model of object detection quality on compressed videos is established based on these two measures. We then propose an efficient video encoding framework for boosting the accuracy of object detection for dist (open full item for complete abstract)

    Committee: Rui Dai Ph.D. (Committee Chair); Dharma Agrawal D.Sc. (Committee Member); H. Howard Fan Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member); Julian Wang Ph.D. (Committee Member) Subjects: Computer Science
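
    The SFD measure lends itself to a short sketch. Assumptions: decoded is a (frames, height, width) grayscale array, the block size is illustrative, and this is a simplified reading of the dissertation's definition, not its exact formula.

        import numpy as np

        def sfd_map(decoded, block=16):
            """Per-block sum of absolute differences between consecutive decoded
            frames. High values on blocks that belong to a stable background
            flag encoder-introduced temporal-domain fluctuation."""
            diff = np.abs(np.diff(decoded.astype(float), axis=0))  # |frame_t - frame_{t-1}|
            t, h, w = diff.shape
            diff = diff[:, :h - h % block, :w - w % block]         # crop to whole blocks
            tiles = diff.reshape(t, h // block, block, w // block, block)
            return tiles.sum(axis=(2, 4))                          # (t, rows, cols) SFD map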