Search Results

(Total results 967)
  • 1. Gurram, Mani Rupak Meta-Learning-Based Model Stacking Framework for Hardware Trojan Detection in FPGA Systems

    Master of Science (MS), Wright State University, 2024, Computer Science

    In today's technological landscape, hardware devices are integral to critical applications such as industrial automation, autonomous vehicles, and medical equipment, relying on advanced platforms like FPGAs for core functionalities. However, the multi-stage manufacturing process, often distributed across various foundries, introduces substantial security risks, notably the potential for hardware Trojan insertion. These malicious modifications compromise the reliability and safety of hardware systems. This research addresses the detection of hardware Trojans through side-channel analysis, utilizing power and electromagnetic signal data, combined with meta-learning techniques, specifically model stacking. By employing diverse base models and a meta-model to consolidate predictions, this non-invasive approach effectively identifies Trojans without requiring direct access to internal circuitry. The methodology demonstrates robust classification capabilities, achieving an accuracy of 88.0%, precision of 81.0%, and recall of 95.0%, even on previously unseen data. The results highlight the superior performance of meta-learning over traditional detection methods, offering an efficient and reliable solution to enhance hardware security.
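The stacking scheme this abstract describes (several base models whose predictions are consolidated by a trained meta-model) can be sketched in a few lines. The toy below is purely illustrative: the synthetic "side-channel" features, the single-feature threshold base learners, and the lookup-table meta-model are stand-ins, not the models used in the thesis.

```python
import random

def make_data(n, seed):
    # Synthetic stand-in for side-channel summaries (e.g. mean and peak
    # power); Trojan-infected samples (label 1) are shifted upward.
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n):
        label = int(rng.random() < 0.5)
        X.append((label + rng.gauss(0, 0.6), label + rng.gauss(0, 0.6)))
        y.append(label)
    return X, y

class ThresholdModel:
    """Deliberately weak base learner: thresholds one feature at the
    midpoint of the two per-class training means."""
    def __init__(self, feature):
        self.feature = feature

    def fit(self, X, y):
        pos = [x[self.feature] for x, t in zip(X, y) if t == 1]
        neg = [x[self.feature] for x, t in zip(X, y) if t == 0]
        self.cut = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        return self

    def predict(self, X):
        return [int(x[self.feature] > self.cut) for x in X]

def stack_predict(base_models, X_train, y_train, X_test):
    # Level 0: fit base models; their predictions become meta-features.
    for m in base_models:
        m.fit(X_train, y_train)
    meta_train = list(zip(*[m.predict(X_train) for m in base_models]))
    meta_test = list(zip(*[m.predict(X_test) for m in base_models]))
    # Level 1: a lookup-table meta-model mapping each combination of
    # base predictions to the majority training label for that combination.
    votes = {}
    for feats, label in zip(meta_train, y_train):
        votes.setdefault(feats, []).append(label)
    table = {k: round(sum(v) / len(v)) for k, v in votes.items()}
    return [table.get(f, 0) for f in meta_test]
```

In practice the base learners would be heterogeneous classifiers over power/EM traces and the meta-model a proper learner fit on out-of-fold predictions; the lookup table merely keeps the sketch dependency-free.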

    Committee: Fathi Amsaad Ph.D. (Advisor); Junjie Zhang Ph.D. (Committee Member); Huaining Cheng Ph.D. (Committee Member); Nitin Pundir Ph.D. (Committee Member); Thomas Wischgoll Ph.D. (Other); Subhashini Ganapathy Ph.D. (Other) Subjects: Computer Engineering; Computer Science; Electrical Engineering
  • 2. Jayarama, Kiran Advanced Digital Wideband Receiver Design: High Dynamic Range and Enhanced Multi-Signal Detection with FPGA-Based Custom FFT and Nyquist Folding

    Doctor of Philosophy (PhD), Wright State University, 2024, Electrical Engineering

    In modern wideband receiver standards, efficient frequency spectrum utilization is essential to meet demands for high data rates, reduced latency, and enhanced connectivity. The Fast Fourier Transform (FFT) stands as a pivotal technology, particularly in radar signal processing, where it supports tasks such as target detection, range estimation, and velocity estimation by analyzing the frequency content of the received radar signals. This dissertation introduces the design of an advanced digital wideband receiver featuring a high dynamic range for multiple signals, with a focus on improved performance, compact size, and reduced power consumption, implemented on an FPGA using custom hardware. Key optimizations include converting floating-point data to 10-bit integers and replacing complex multipliers in the FFT module with simplified operations. The design begins with an FFT implementation using a 12-bit analog-to-digital converter (ADC) operating at a 2 GHz sampling rate, capturing 512 data points. Improvements such as a multiple-input selection block enhance weak signal amplification while preserving dynamic range, and an upgraded square-root approximation using Chebyshev coefficients reduces FFT output errors. These advancements improve weak signal detection accuracy even in the presence of strong signals, minimizing hardware requirements. The implementation utilized the Xilinx UltraScale+ RFSoC 1275 board, which integrates both RF and digital processing components onto a single chip, offering a compact and efficient solution for wideband receiver designs. The FFT module processes sampled data every 256 ns, evaluating frequencies from 64 MHz to 940 MHz. Experimental results demonstrate the lowest detectable signal strength of 500 uVpp with an approximate dynamic range of 60 dB for a single signal. 
For two-tone signals, the achievable instantaneous dynamic range is about 40 dB, with the lowest detectable signal strength in the presence of the s (open full item for complete abstract)
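The square-root simplification described above (computing an FFT magnitude sqrt(I^2 + Q^2) without a hardware square root) can be illustrated with the classic alpha-max-plus-beta-min estimator. The dissertation itself uses Chebyshev coefficients, so the constants below are a well-known stand-in, not the thesis's values.

```python
import math

# Classic alpha-max-plus-beta-min constants (worst-case error about 3.96%);
# a stand-in for the thesis's Chebyshev-coefficient square-root approximation.
ALPHA, BETA = 0.96043387, 0.39782473

def mag_approx(i, q):
    """Approximate sqrt(i^2 + q^2) with one compare, two multiplies and an
    add, the kind of simplification FFT magnitude stages use on FPGAs."""
    a, b = abs(i), abs(q)
    return ALPHA * max(a, b) + BETA * min(a, b)

def max_error_on_unit_circle(steps=1000):
    """Worst absolute error against the exact magnitude of 1.0."""
    worst = 0.0
    for k in range(steps):
        theta = math.pi / 2 * k / steps
        worst = max(worst, abs(mag_approx(math.cos(theta), math.sin(theta)) - 1.0))
    return worst
```

In fixed-point hardware the two multiplies are often further reduced to shifts and adds, trading a little accuracy for area and power.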

    Committee: Chein-In Henry Chen Ph.D. (Advisor); Saiyu Ren Ph.D. (Committee Member); Marian Kazimierczuk Ph.D. (Committee Member); Raymond E. Siferd Ph.D. (Committee Member); Yan Zhuang Ph.D. (Committee Member) Subjects: Computer Engineering; Electrical Engineering; Engineering
  • 3. Gaydosh, Theodore An Investigation Into Hyperspectral Imagery Generation

    Master of Science in Computer Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    The lack of hyperspectral imagery (HSI) is an issue for the many researchers and fields that wish to utilize the sheer amount of data an HSI cube contains. Given this, along with the cost and effort associated with gathering HSI, a way to generate such cubes from existing data would be very useful. Other works have generated synthetic images, that is, images that contain the characteristics of an HSI cube but do not actually map to any real-world location. This work attempts to show that it is possible to generate those cubes from easier-to-gather datasets with less data. This is done using a paired image generation deep learning model, a Generative Adversarial Network. The HSI cubes were gathered from USGS's Earth Explorer, and the sensor used was Earth Observing-1's Hyperion. The network was trained on four different input types in four regions and tested on three different regions. The four input types were 5 bands, 10 bands, 10 bands with none drawn from the middle 100 bands, and 20 bands. The results and accuracy of the model were evaluated with various metrics, and a separate model was trained on each input type until those metrics plateaued. A comparison of input versus generated spectra, along with the various metrics, was then used to verify the accuracy on the test dataset. It was found that each model generalized well and that even individual bands of the greater HSI cube were generated quite faithfully to the target.
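One common way to perform the input-versus-generated spectra comparison mentioned above is the Spectral Angle Mapper (SAM). The abstract does not name its metrics, so treat SAM here as an illustrative choice rather than the thesis's actual evaluation:

```python
import math

def spectral_angle(a, b):
    """Spectral Angle Mapper (SAM): the angle (radians) between two
    spectra treated as vectors. 0 means identical spectral shape, and the
    measure is invariant to a uniform illumination scaling of either
    spectrum, which makes it popular for HSI comparison."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (norm_a * norm_b))))
```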

    Committee: Bradley Ratliff (Advisor); Theus Aspiras (Committee Member); Eric Balster (Committee Member) Subjects: Computer Engineering; Computer Science; Remote Sensing
  • 4. Gopalakrishna, Vaishak Enhancing Pedestrian Safety and Traffic Analytics through ONVIF Metadata Processing and V2X Communication

    MS, University of Cincinnati, 2024, Engineering and Applied Science: Computer Engineering

    This thesis advances the Connected and Automated Vehicle Education-in-a-Box (CAVe-In-A-Box) framework by integrating Open Network Video Interface Forum (ONVIF)-compliant Internet Protocol (IP) cameras, significantly enhancing the platform's capacity for real-time pedestrian safety and Vehicle-to-Everything (V2X) communication. Originally developed by the Federal Highway Administration (FHWA) with a focus on Forward-Looking Infrared (FLIR) thermal cameras for pedestrian detection, this research extends the platform by incorporating ONVIF-compliant cameras to improve system interoperability and adaptability for broader traffic management needs. Working collaboratively with S. Bonthu[1], who contributed to the initial hardware setup of CAVe-In-A-Box, this thesis builds upon his groundwork by developing the software integration required to leverage ONVIF-compliant camera streams. This integration allows for real-time data transmission, enhancing pedestrian safety through V2X communication, particularly in urban environments. The integration of ONVIF cameras introduces a standardized, widely supported technology that offers greater flexibility compared to the FLIR-based system. This enhancement positions the platform to accommodate a wider array of traffic safety applications, particularly in complex urban settings. By incorporating ONVIF technology, the system provides a robust foundation for future V2X deployments, complementing the existing infrastructure while expanding its technological reach. This research establishes CAVe-In-A-Box as a more versatile and scalable tool for traffic engineers, capable of supporting real-time safety alerts and improved V2X communication. The work contributes to enhancing pedestrian safety and demonstrates the platform's potential for broader future traffic management and safety applications.

    Committee: Arthur Helmicki Ph.D. (Committee Chair); Nan Niu Ph.D. (Committee Member); Victor Hunt Ph.D. (Committee Member) Subjects: Computer Engineering
  • 5. Shinde, Gaurav Sanjay Efficient Synchronization and Input Queue Optimization in Parallel Discrete Event Simulation

    MS, University of Cincinnati, 2024, Engineering and Applied Science: Computer Science

    WARPED/WARPED2 is a Time Warp synchronized parallel and distributed simulation kernel that also provides an API for building discrete event simulation models. These kernels are written in C++ and are developed and maintained to support the development and evaluation of optimizations and various alternate sub-algorithms within the Time Warp paradigm. The original WARPED series simulation kernel was developed to support distributed simulation on clusters of single-core compute elements using heavyweight MPI processes. As multi-core processing became widespread, the WARPED2 series was developed to efficiently support threaded MPI processes on a multi-core cluster. The current public version of WARPED2 is the v1.x series. While the v1.x series of WARPED2 targeted efficient execution on multi-core processors and clusters composed of multi-core processors, several of the key design decisions made for some of its data structures, sub-algorithms, and synchronization were made around older technologies and need to be revisited. This thesis explores an evolution of the WARPED2 v1.x series design as the basis for a new WARPED2 v2.x series design. The original motivation for the v2.x series design was (i) the integration of read and read-write locks into the C++ standard and (ii) the belief that a restructuring of the design and implementation of WARPED2 that focuses on an aggressive use of read locks could substantially improve overall performance. In particular, the major design departures from the v1.x series are: (i) the reorganization of the manager thread into a collection of concurrently executing threads; (ii) the use of a synchronous GVT algorithm; (iii) the introduction of read locks, the use of barrier synchronizations, and the unlocked access of some shared variables where safety is provided by access on different sides of barrier boundaries; and (iv) the use of synchronous MPI Recv and asynchrono (open full item for complete abstract)
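Design departure (iii), unlocked access to shared variables guarded only by barrier boundaries, combined with a synchronous GVT (Global Virtual Time) computation, can be sketched as follows. This is a toy illustration of the pattern, not WARPED2 code:

```python
import threading

def synchronous_gvt(local_virtual_times):
    """Toy synchronous GVT round: each worker publishes its local minimum
    virtual time, waits at a barrier, then reads the global minimum.
    The shared `reports` list is written only before the barrier and read
    only after it, so neither side needs a lock."""
    n = len(local_virtual_times)
    reports = [None] * n
    results = [None] * n
    barrier = threading.Barrier(n)

    def worker(rank):
        reports[rank] = local_virtual_times[rank]  # write side of the barrier
        barrier.wait()
        results[rank] = min(reports)               # read side: lock-free

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results[0]
```

In the real kernel each worker's local minimum would be the minimum timestamp over its pending and in-transit events, and the barrier would also separate fossil collection from event processing.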

    Committee: Philip Wilsey Ph.D. (Committee Chair); Sounak Gupta Ph.D. (Committee Member); Seokki Lee Ph.D. (Committee Member) Subjects: Computer Engineering
  • 6. Singh, Rohit Optimizing Topological Spaces for Scalable Persistent Homology Computations

    PhD, University of Cincinnati, 2024, Engineering and Applied Science: Computer Science and Engineering

    Topological Data Analysis (TDA) explores the topological properties of data treated as a topological space. Persistent Homology (PH) is a component of TDA that computes the topological features (connected components, loops, voids, and their higher-dimensional counterparts) found in data. Unfortunately, computing PH in dimensional spaces above R^7 is impractical due to the size and computational cost of generating the filtration (filtered complexes) of the data. While there are many types of complexes, they are typically classified into two broad categories, namely: (i) the purely combinatorial Abstract complexes, and (ii) Geometric complexes that have a realization/embedding in a given space. In general, Abstract complexes are relatively easy to generate but suffer from exponential memory growth, making them memory inefficient in higher dimensions and for big data. In contrast, Geometric complexes generally have a significantly more compact representation but their construction incurs significant computational costs that become impractical in higher dimensions. This dissertation works to address the computational and memory costs experienced in the construction of filtered complexes used by TDA algorithms. In particular, this study will examine two main approaches to build a filtered complex of data, namely: (i) the construction of approximate geometric sparsified simplicial complexes for high dimensional and big data, and (ii) the construction and use of approximate Polytopal Complexes where convex polytopes represent cells in the different dimensions of the data. The overall objective of this work is to investigate techniques for quickly constructing geometric complexes that are memory efficient and suitable for use with higher dimensional big data than currently possible. Although several techniques for the sparsification of simplicial complexes are well known, this work contributes a solution based on β-skeletons (open full item for complete abstract)
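The β-skeleton sparsification mentioned at the end has a compact special case, the Gabriel graph (β = 1): an edge survives only if the circle whose diameter is that edge contains no other point. The brute-force test below is for illustration and is not the dissertation's construction:

```python
from itertools import combinations

def gabriel_edges(points):
    """Edges of the Gabriel graph, the beta-skeleton with beta = 1:
    keep (p, q) iff no third point lies strictly inside the circle whose
    diameter is the segment pq. Brute-force O(n^3), for illustration only."""
    edges = []
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
        r2 = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 4
        if all((s[0] - cx) ** 2 + (s[1] - cy) ** 2 >= r2
               for k, s in enumerate(points) if k not in (i, j)):
            edges.append((i, j))
    return edges
```

Larger β values shrink the forbidden region and keep more edges, which is the dial such skeletons offer between sparsity and fidelity of the resulting complex.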

    Committee: Philip Wilsey Ph.D. (Committee Chair); Vikram Ravindra Ph.D. (Committee Member); Bledar Konomi Ph.D. (Committee Member); Ali Minai Ph.D. (Committee Member); Badri Vellambi Ravisankar Ph.D. (Committee Member) Subjects: Computer Engineering
  • 7. Schafer, Austin Enhancing Vehicle Detection in Low-Light Imagery Using Polarimetric Data

    Master of Science (M.S.), University of Dayton, 2024, Electrical Engineering

    RGB imagery provides detail which is usually sufficient to perform computer vision tasks. However, images taken in low light appear vastly different from well-lit imagery due to the diversity in light intensity. Polarimetric data provides additional detail that focuses on the orientation of the light rather than its intensity. Scaling our classic RGB images using polarimetric data can maintain the RGB image type while also enhancing image contrast. This makes transfer learning with pre-trained RGB models more feasible. Our work focuses on developing a large dataset of paired polarimetric RGB images in a highly controlled laboratory environment. Then, we perform transfer learning on a pre-trained image segmentation model with each of our image product types. Finally, we compare these results in both well-lit and low-light scenarios to see how our polarimetrically enhanced RGB images stack up against regular RGB images.
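One simple way to realize the "scale RGB by polarimetric data" idea is to boost each pixel by its degree of linear polarization (DoLP) computed from the Stokes parameters. The exact scaling rule below is an illustrative assumption, not the thesis's formula:

```python
import math

def dolp(s0, s1, s2):
    """Degree of linear polarization from the first three Stokes parameters."""
    return math.sqrt(s1 * s1 + s2 * s2) / s0 if s0 > 0 else 0.0

def polarimetric_scale(rgb, s0, s1, s2, gain=1.0):
    """Hypothetical enhancement: scale an RGB pixel by (1 + gain * DoLP)
    and clip to [0, 255]. The output is still an ordinary RGB product, so
    pre-trained RGB models remain directly applicable."""
    scale = 1.0 + gain * dolp(s0, s1, s2)
    return tuple(min(255, int(round(c * scale))) for c in rgb)
```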

    Committee: Bradley Ratliff (Committee Chair); Amy Neidhard-Doll (Committee Member); Eric Balster (Committee Member) Subjects: Computer Engineering; Electrical Engineering; Engineering; Optics; Remote Sensing; Scientific Imaging; Statistics
  • 8. Javed, Nur Uddin GPS Denied Vehicle Localization

    Master of Science, The Ohio State University, 2024, Electrical and Computer Engineering

    Automated driving needs lane-level accurate localization. However, automated driving systems face significant challenges in environments where GPS signals are unavailable or compromised. Several techniques have been introduced over time to address this issue; however, each technique presents its own set of challenges. To address the lane-level localization challenges, this study proposes a kinematic dead reckoning system utilizing vehicle onboard sensor data, which is crucial for vehicle operation itself. Onboard sensors provide data such as steering angle, steering rate, yaw rate, and wheel speed through the vehicle's Controller Area Network (CAN). However, dead reckoning is susceptible to drift over time, compromising localization accuracy. To mitigate this drift, an innovative arc-length-based map matching method is introduced, which leverages a digital 2D map of road and lane geometry to correct the dead reckoning estimates. The proposed methodology enhances vehicle localization by combining the temporal prediction of a kinematic model with spatial information from static map data, effectively correcting drift without GPS support. This approach was tested in multiple safety-critical scenarios suggested by NHTSA across distinct road geometries, speeds, and maneuvers, demonstrating consistent localization accuracy. The overall results showed reliable drift correction for all tested scenarios. Furthermore, we evaluated the outage performance for each scenario at different times during the scenario test, revealing a bounded error in the localization method. The proposed method also calculates a confidence interval to identify overestimation and underestimation. This novel arc-length-based map matching ensures continuous and dependable navigation for automated vehicles in GPS-denied situations, significantly enhancing safety and operational reliability.
The findings of this study highlight a scalable and effective solution to maintain automated vehicle localizati (open full item for complete abstract)
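The two halves of the pipeline, a kinematic dead-reckoning step fed by CAN-bus signals and a map-matching correction against lane geometry, can be sketched minimally. The nearest-vertex snap below stands in for the thesis's arc-length-based matching and is only illustrative:

```python
import math

def dead_reckon(x, y, heading, speed, yaw_rate, dt):
    """One step of kinematic dead reckoning from CAN-style inputs
    (wheel-speed and yaw-rate measurements)."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

def snap_to_lane(x, y, centerline):
    """Crude map-matching stand-in: snap the drifting estimate to the
    nearest vertex of a lane-centerline polyline. (The thesis matches by
    arc length along the lane geometry; nearest-vertex keeps this short.)"""
    return min(centerline, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
```

Periodically applying the snap bounds the accumulated dead-reckoning drift, which is exactly the bounded-error behavior the abstract reports.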

    Committee: Qadeer Ahmed (Advisor); Lisa Fiorentini (Committee Member) Subjects: Computer Engineering; Computer Science; Electrical Engineering; Engineering; Transportation
  • 9. Elleman, Benjamin Towards a Superior Malware Analysis Environment: Bare-Metal Process-Tracing with IOTA

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    Malware is the most persistent security threat to the modern information ecosystem. To mitigate the damage that malware inflicts, the most effective resource is a thorough understanding of its exact mechanisms and operations. For this reason, malware analysis safeguards the entire computing industry. However, the overall effectiveness and efficiency of this sector are increasingly being undermined by a combination of detrimental internal practices and external forces. The cybersecurity field must adapt and evolve in order to counteract and mitigate the impact of these threats. In this paper, I propose a new and superior model for the analysis of Linux and IoT malware with the introduction of IOTA (Invisible Observation & Threat Analysis): A bare-metal process-tracing framework. By merging the superior quality of data that bare-metal analysis supplies with the superior quantity of data provided by process-tracing, IOTA is able to effortlessly accomplish what traditional malware analysis sandbox environments cannot: undetectable malware analysis. Notably, the unification of these two technologies facilitates a novel strategy of intentionally and iteratively triggering anti-analysis code segments, allowing IOTA to automatically reverse-engineer malware samples. Additionally, this unique approach provides data that can be used to harden devices and traditional malware analysis environments without reverse-engineering entire malware samples. All of these capabilities are supplemented by the regular suite of tools found in traditional sandbox environments. Moreover, IOTA is designed to be fully modular, allowing for the modification of existing code and the addition of new routines with ease. IOTA presents a novel malware analysis paradigm with less overhead, more accurate data collection, less susceptibility to sandbox evasion techniques, and leverages these anti-analysis routines to its advantage. 
To demonstrate the efficacy of bare-metal process-tracing, the vulner (open full item for complete abstract)

    Committee: Carter Yagemann (Advisor) Subjects: Computer Engineering; Computer Science; Information Science
  • 10. Hong, Chaeun Data Preprocessing Pipeline for UAS Imagery in Agriculture

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    By leveraging UAS imagery, ML-based approaches can support a variety of critical agricultural applications, including crop health monitoring for determining optimum in-season fertilization treatments, plant disease detection to inform fungicide treatments, and weed pressure assessment to guide herbicide applications. While commercially available approaches sample roughly one (1) image per acre, the approach taken here enables the acquisition of high-quality images with continuous coverage of agricultural fields, thereby ensuring comprehensive monitoring and analysis. These applications and methodologies facilitate data-driven insights that promote more sustainable and cost-effective agricultural management. However, the development of agricultural data pipelines from data collection to ML-based analytics is impeded by the scarcity of standardized datasets and tailored preprocessing methodologies. This thesis addresses these challenges by proposing a systematic data preprocessing pipeline focused on three critical areas: georeferencing UAS imagery, UAS image sampling for model training, and canopy coverage calculation with crop segmentation. The thesis introduces practical georeferencing methods designed to address the limitations of orthomosaic-based approaches. Direct georeferencing significantly reduces processing time, making it suitable for large-plot research, while the additional step of image registration improves spatial accuracy, meeting the geospatial accuracy requirements for using UAS imagery in small-plot research. The use of a Dice coefficient-based error detection mechanism ensures robust geospatial accuracy, facilitating precise data alignment for ML model training. Further, the thesis presents two distinct UAS image sampling methods. The first method is an uncertainty-aware sampling framework to optimize Vision Transformer (ViT) model training for agricultural imagery. 
By employing Monte Carlo dropout to estimate predictive uncertainty, the propos (open full item for complete abstract)
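The Dice coefficient used for error detection in the pipeline above is a standard overlap measure between binary masks and is easy to state concretely:

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) between two binary masks
    given as flat 0/1 lists; 1.0 means perfect overlap. A low score on a
    registered image pair would flag a likely georeferencing error."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```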

    Committee: Raghu Machiraju (Advisor); Hari Subramoni (Committee Member) Subjects: Agriculture; Computer Engineering; Computer Science
  • 11. Baietto, Anthony Data-Aware Tuning of Deep Learning Models

    Doctor of Philosophy, The Ohio State University, 2024, Computer Science and Engineering

    Artificial Neural Networks (ANNs) are a powerful branch of Artificial Intelligence (AI) that utilize interconnected layers of neurons to expertly approximate complex non-linear functions. ANNs have revolutionized the domains of computer vision, natural language processing, and autonomous systems where traditional convex optimization algorithms would fail. However, the design and training of ANNs is non-trivial and can pose a significant obstacle in their deployment. With the rise in complexity of AI applications, conventional wisdom points towards the need for larger models and datasets which adversely affects model sustainability given the end of Moore's Law and the rapid growth of hardware capabilities. I propose a novel data-aware tuning of deep learning models that leverages extant problem information to more intelligently and efficiently implement neural networks. My goal is to recognize the tremendous shift in computing towards AI and to help bring existing systems into this new era of computing. To that end, I seek to develop new methods for ANN creation backed by learned domain expertise. This dissertation will focus on the following five topics encompassing design, implementation, and execution of neural networks: §1. Incorporation of existing problem-specific information into neural network architecture choices. §2. Emphasis on maintaining low SWaP (size, weight, and power) solutions without sacrificing performance. §3. Supplying self-correcting abilities via augmentative training data. §4. Providing equivalent robustness to tried and true existing solutions. §5. Ensuring flexibility for deployment to the latest hardware including neuromorphic processors.

    Committee: Christopher Stewart (Advisor); Mi Zhang (Committee Member); Radu Teodorescu (Committee Member); Trevor Bihl (Advisor) Subjects: Computer Engineering; Computer Science
  • 12. Wurster, Skylar Adaptive Neural Networks for Large Scale Scientific Data Visualization

    Doctor of Philosophy, The Ohio State University, 2024, Computer Science and Engineering

    Scientists use supercomputers to simulate physical phenomena in domains such as meteorology, cosmology, and fluid dynamics to name a few. After simulation, visualization is a common tool for scientists to explore their data and gain insights toward their research. As supercomputers continue to become more powerful, reaching exa-scale (1 exaFLOPS) early this decade, scientists are afforded the ability to increase the amount of data generated by increasing the resolution, number of timesteps generated, or number of simulation parameters experimented with. However, managing the sheer amount of data generated is often a challenge, as storage and bandwidth have not kept up with compute rates. Thus, an important research direction for scientific visualization is the management and efficient visualization of large scale data. In this dissertation, we show that adaptive versions of modern machine learning and neural networks offer competitive trade-offs for managing and visualizing large scale scientific data. To this end, we propose two novel improvements to existing non-adaptive neural networks for scientific data: (1) hierarchical super resolution networks, and (2) adaptive implicit neural representations. The first direction we explore is adding adaptability to super resolution neural networks. Super resolution neural networks take in some low resolution data and return a high resolution predicted output. The networks often increase resolution by a constant scale factor, such as 2x or 4x, and are trained with a set of curated high resolution data. In the context of scientific data, once a network is trained, the simulation can save low resolution output, which reduces the amount of data that must be saved, reducing the effect of the I/O bottleneck. For post-hoc analysis and visualization, the trained super resolution neural network can upscale the saved low resolution data to the original high resolution. 
To adapt to the sparsity of features in some sc (open full item for complete abstract)

    Committee: Han-Wei Shen (Advisor); Hanqi Guo (Committee Member); Wei-Lun Chao (Committee Member) Subjects: Computer Engineering; Computer Science
  • 13. Rahaei, Arefeh DESIGN AND ANALYSIS OF A CHAOS-BASED LIGHTWEIGHT CRYPTOSYSTEM

    MS, Kent State University, 2024, College of Arts and Sciences / Department of Computer Science

    Cryptography, derived from the Greek word meaning "to hide information," involves techniques for converting readable plaintext into unreadable ciphertext through a process called encryption. Cryptography algorithms are broadly categorized into two types: symmetric key cryptography and asymmetric key cryptography. Symmetric key cryptography is further divided into block ciphers and stream ciphers. Block ciphers, based on their structure, can be classified into two main categories: Substitution-Permutation Networks (SPN) and Feistel Networks (FN). This research focuses on SPN-based block ciphers. In 1949[1], Claude Shannon introduced two fundamental operations required for a robust cryptosystem: substitution and permutation. Substitution, the core component of SPN-based cryptography, is implemented through substitution boxes (S-Boxes), where each element in the plaintext is mapped to another element to achieve nonlinearity and provide the confusion property crucial for security. With the rise of constrained devices, such as the Internet of Things (IoT), there is an increasing demand for lightweight symmetric-key algorithms. However, in many cases, the S-Box contributes the most to the hardware complexity and computational load compared to other linear components. This research addresses this challenge by designing and optimizing a lightweight cryptosystem suitable for resource-limited environments. The thesis makes two key contributions to the field of lightweight cryptography. The first contribution is the development of chaos-based S-Boxes tailored for devices with restricted computational capabilities. By leveraging chaotic maps, the proposed S-Boxes achieve a high degree of nonlinearity and security while maintaining a minimal computational and hardware footprint, making them ideal for IoT and other constrained devices. 
These chaos-based S-Boxes introduce dynamic, unpredictable substitution patterns that enhance resistance to cryptanalysis techniques such as l (open full item for complete abstract)
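The chaos-based S-Box idea above can be made concrete with the logistic map, a workhorse of chaotic cryptography: iterate the map in its chaotic regime and turn the trajectory into a permutation. The parameters and the ranking construction below are illustrative, not the thesis's actual design:

```python
def chaos_sbox(r=3.99, x0=0.7, size=256):
    """Sketch of a chaos-based S-Box: iterate the logistic map
    x <- r * x * (1 - x) in its chaotic regime, then use the ranking of
    the trajectory values as a permutation of 0..size-1. Sorting an
    enumeration guarantees the table is bijective (a valid S-Box)."""
    xs, x = [], x0
    for _ in range(size):
        x = r * x * (1 - x)
        xs.append(x)
    # argsort of the chaotic samples: a bijective substitution table
    return [i for i, _ in sorted(enumerate(xs), key=lambda t: t[1])]
```

Because the map's sensitivity to r and x0 makes the permutation key-dependent, such constructions aim for high nonlinearity at very low hardware cost; real designs would additionally be screened for cryptographic criteria such as nonlinearity and differential uniformity.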

    Committee: Maha Allouzi Dr (Advisor); Younghun Chae Dr (Committee Member); Lei Xu Dr (Committee Member) Subjects: Computer Engineering; Computer Science
  • 14. Khan, Mahfizur Rahman Distributed UAV-Based Wireless Communications Using Multi-Agent Deep Reinforcement Learning

    Master of Science, Miami University, 2024, Electrical and Computer Engineering

    In this thesis, a thorough investigation into the optimization of user connectivity in ad hoc communication networks using robust policy creation and intelligent UAV location in stochastic environments is presented. In order to handle the dynamic and decentralized character of ad hoc networks, we identified the optimal UAV positions by applying a multi-agent deep Q-learning technique. To train stochastic environment-adaptive policies, a novel simple algorithm was devised with an emphasis on the usefulness of these policies under different scenarios. Through an empirical investigation, the study offered information on the generalizability and adaptability of learnt behaviors by examining how well policies based on one distribution of settings performed when applied to different, unseen distributions. In this thesis, we also explored the resilience of UAV networks against jamming attempts and proposed a method for unaffected UAVs to self-adjust their placements. This approach ensured optimal user coverage even in adversarial situations. By demonstrating the potential of machine learning techniques to maximize network performance and enhance user connectivity in the face of environmental uncertainties and security risks, these contributions will collectively advance the field of UAV-assisted communication.

    Committee: Dr. Bryan Van Scoy (Advisor); Dr. Mark Scott (Committee Member); Dr. Veena Chidurala (Committee Member) Subjects: Computer Engineering; Electrical Engineering
  • 15. Kumar, Prashant Development and Preliminary Calibration of a Campus Mobility Digital Twin

    Master of Science, The Ohio State University, 2024, Electrical and Computer Engineering

    This thesis addresses the growing challenges of traffic and parking management at Ohio State University's Columbus campus due to increased student enrollment as well as the interdisciplinary research center on west campus. By leveraging traffic data from CampusParc and the Campus Area Bus Service (CABS), a detailed traffic simulation model was developed using the Simulation of Urban Mobility (SUMO) platform, enhanced with the Traffic Control Interface (TraCI). The model is designed to replicate and analyze real-world traffic patterns on campus, with a specific focus on high-density road sections and peak congestion areas. The research objectives include the collection and analysis of traffic data, the creation of a high-fidelity traffic simulation, and its integration into a digital twin of the campus. The model's accuracy was validated by comparing simulated vehicle counts and parking occupancy levels with actual data, showing minimal error and confirming the model's reliability. This study also explores various mobility solutions such as optimized bus routes, smart traffic signals, and shared mobility services, assessing their impact on campus traffic flow. The integration of the simulation model into the digital twin framework provides a comprehensive view of campus mobility, aiding in effective traffic management and planning. The results of this research demonstrate the potential of the developed tools and methodologies to enhance campus transportation efficiency and sustainability. This work not only contributes to the immediate needs of OSU but also offers a scalable approach for urban mobility management in similar environments.

    Committee: Shawn Midlam-Mohler (Advisor) Subjects: Computer Engineering; Electrical Engineering
  • 16. Jones, Ryan OFE-EBS — An Optical Flow Equation-Inspired Event-Based Sensor for Low-Earth Orbit Ground Moving Target Indication

    Master of Science in Computer Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    Event-based sensors (EBS) report pixel-asynchronous changes in scene intensity called events. The resulting sparse event stream is well suited to many computer vision tasks, particularly object tracking. However, relative motion between the sensor and scene will generate extraneous events caused by the translation of static scene features across the sensor. We present OFE-EBS, an optical flow equation-inspired event-based sensor for low-Earth orbit (LEO) ground moving target indication (GMTI). Owing to the predictable velocity of a satellite platform in LEO, we augment the EBS pixel with additional cross-row subtraction hardware to remove static background features. Pixel adaptivity is modified to ensure dynamic foreground features generate fewer events, further reducing the event rate. Finally, using our analytical sensor model, we show that OFE-EBS outperforms conventional EBS in spatial resolution and event rate, considering the effects of pixel nonuniformity.
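    The cross-row subtraction idea can be sketched in software. This is an illustrative model only, assuming the platform-induced static-scene motion is a known integer row shift per frame and that EBS pixels threshold log-intensity changes; the thesis implements this in pixel hardware, and the function name and threshold are invented here.

    ```python
    import numpy as np

    def ofe_ebs_events(prev_frame, curr_frame, row_shift, threshold=0.15):
        """Sketch of OFE-EBS-style background suppression.

        Static scene features translate by a known, platform-induced number
        of rows per frame in LEO; subtracting the row-shifted previous frame
        cancels them, so events fire mainly for moving foreground targets.
        """
        # Predict where static scene content should be in the current frame.
        predicted = np.roll(prev_frame, row_shift, axis=0)
        # EBS pixels respond to changes in log intensity.
        diff = np.log1p(curr_frame) - np.log1p(predicted)
        events = np.zeros_like(diff, dtype=np.int8)
        events[diff > threshold] = 1    # ON events
        events[diff < -threshold] = -1  # OFF events
        return events
    ```

    With a purely platform-induced shift, the predicted and current frames match exactly and no events fire; a moving target breaks the prediction and generates events only at its location.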

    Committee: Keigo Hirakawa (Committee Chair); Partha Banerjee (Committee Member); Bradley Ratliff (Committee Member) Subjects: Computer Engineering; Electrical Engineering
  • 17. Mallik, Anurag Deep Vision Based Driving Behavior Analysis System for Roadside Restricted Area Traffic Control

    Master of Science in Computer Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    Managing the behavior of drivers near roadside restricted areas, such as work zones, accident zones, or natural disaster zones, is necessary for safety. Steering vehicles clear of the blocked region protects both drivers and the people in that area. Vehicles need to be diverted to a different lane away from the restricted area for smooth traffic flow. A computer vision-based autonomous system could automatically monitor vehicle movements and predict their pathways from the direction and speed of the vehicles. This would allow appropriate signals to be given to drivers to change lanes. This thesis proposes the development of an artificial intelligence-based learning system for detecting and tracking vehicles on the road and predicting their future locations in real-time videos captured by a stationary camera. Videos captured in outdoor environments are subject to several challenges due to varying lighting conditions and changes in orientation, viewing angle, and object size. Surrounding objects like trees, buildings, or other vehicles can obscure a vehicle completely or partially, making reliable detection and tracking difficult. Stationary cameras may also capture background regions like buildings, trees, parking lots, etc. Detection of vehicles can also be difficult due to their darker texture under non-uniform lighting conditions. In this thesis research, a YOLO_v8 neural network model is employed to detect the vehicles in the video frames in real time. The neural network model requires an extensive annotated dataset of vehicles in roadside environments. A new annotated dataset named Dayton Annotated Vehicle Image Set (DAVIS) suited for US road conditions is built to train the vehicle detection model. 
    An adaptive image enhancement technique, namely Contrast Limited Adaptive Histogram Equalization (CLAHE), is used in the moving object regions (open full item for complete abstract)
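    The CLAHE step mentioned above can be illustrated with a simplified sketch. This is not the thesis implementation (which would typically use an optimized library routine): it performs per-tile clipped histogram equalization and omits the bilinear interpolation between tile mappings that full CLAHE uses; the clip limit and tile grid are assumed values.

    ```python
    import numpy as np

    def clahe_simplified(img, clip_limit=40, tiles=(8, 8)):
        """Simplified CLAHE sketch: per-tile clipped histogram equalization.

        img: 2-D uint8 grayscale image whose dimensions divide evenly by the
        tile grid. Full CLAHE also interpolates between neighboring tile
        mappings to avoid block artifacts; that step is omitted here.
        """
        h, w = img.shape
        th, tw = h // tiles[0], w // tiles[1]
        out = np.empty_like(img)
        for i in range(tiles[0]):
            for j in range(tiles[1]):
                tile = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
                hist, _ = np.histogram(tile, bins=256, range=(0, 256))
                # Clip the histogram and redistribute the excess uniformly,
                # which limits contrast amplification in near-uniform tiles.
                excess = np.maximum(hist - clip_limit, 0).sum()
                hist = np.minimum(hist, clip_limit) + excess // 256
                cdf = hist.cumsum()
                lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
                out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = lut[tile]
        return out
    ```

    Restricting the equalization to moving-object regions, as the abstract describes, would mean applying a mapping like this only inside detected foreground masks rather than over the whole frame.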

    Committee: Vijayan K. Asari (Committee Chair); Theus Aspiras (Committee Member); Eric J. Balster (Committee Member) Subjects: Computer Engineering
  • 18. Essig, David Comparison of a Transformer-Based Single-Image Super-Resolution Model for the CONNECT Compression Framework

    Master of Science in Computer Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    Single-image super-resolution (SISR) is the task of increasing an image's resolution using a single lower-resolution image. This task has been applied in many areas to capture finer details in medical imagery, images with distant objects, and compressed images. Compressing images can save computational resources and bandwidth. Deep Learning (DL) techniques for image compression and SISR have become abundant as such methods have yielded promising results, such as in the Convolutional Neural Network for Enhanced Compression Techniques (CONNECT) compression framework [1], [2] and the SwinIR [3], multi-scale attention network [4], and Real-ESRGAN [5] super-resolution models. In this thesis, these super-resolution models are analyzed and compared with each other using previous work and direct testing on the Set14 dataset, and one is selected for use on the back end of CONNECT to form an alternative compression framework. This framework could yield higher compression ratios while maintaining or improving reconstructed image quality. This thesis attempts to improve the existing CONNECT compression framework by analyzing and selecting a DL-based super-resolution model to reconstruct the compressed images after they have been fed through CONNECT. Varying compression methods are then compared using widely used image quality metrics and the compression ratio metric.
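    The comparison described above rests on standard image-quality and compression metrics. As an illustrative sketch (the abstract does not list its exact metrics, though PSNR and compression ratio are the conventional choices), the two measures can be computed as:

    ```python
    import numpy as np

    def psnr(ref, test):
        """Peak signal-to-noise ratio in dB for 8-bit images (higher is better)."""
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    def compression_ratio(original_bytes, compressed_bytes):
        """Ratio of original to compressed size (higher means more compression)."""
        return original_bytes / compressed_bytes
    ```

    A super-resolution back end is judged favorable when it raises the achievable compression ratio without lowering the PSNR (or a perceptual metric such as SSIM) of the reconstructed image.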

    Committee: Bradley Ratliff (Committee Chair); Barath Narayanan (Committee Member); Russell Hardie (Committee Member) Subjects: Computer Engineering; Computer Science
  • 19. Wang, Wenzhuo Reverse Engineering of “Magic Box”: A Device for Screen Projection to CarPlay-Enabled Vehicles

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    With the rise of car infotainment systems, the integration of smartphones with in-car displays has become increasingly prevalent. CarPlay, one of the most popular such systems, is highly favored by users and equipped in many vehicles. The Magic Brand Magic Box is an innovative Android-based device designed to interface with a car's CarPlay-enabled USB port, enabling the projection of its own user interface onto the car's display. However, this capability raises significant safety concerns, as it allows activities typically restricted while driving, such as watching videos on car screens. This thesis aims to reverse engineer the Magic Box to understand the mechanisms by which it communicates through the CarPlay interface. By analyzing the device's hardware and software, as well as referencing partial CarPlay protocol documents found online, we seek to uncover the principles behind its functionality and explore potential vulnerabilities in the Apple CarPlay system that may have been exploited. We aim to provide a detailed insight into the process of Android reverse engineering, offering valuable knowledge for researchers and practitioners interested in similar endeavors.

    Committee: Keith Redmill (Advisor); Zhiqiang Lin (Advisor) Subjects: Computer Engineering; Computer Science
  • 20. Balasubramaniam, Sowbaranika Optimized Classification in Camera Trap Images: An Approach with Smart Camera Traps, Machine Learning, and Human Inference

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    Motion-activated cameras, commonly known as camera traps, play a crucial role in wildlife monitoring, biodiversity conservation, and basic ecological science, capturing a diverse range of species in their natural habitats. However, the sheer volume of images generated requires manual analysis by ecologists, making this a time-consuming and inefficient process. This is particularly frustrating since many of the images contain no animals: camera traps are triggered by wind-blown vegetation, or the slow camera response misses the animal. This thesis presents an approach aimed at addressing these challenges through the automation of image classification and the development of smart camera traps that leverage artificial intelligence implementations in edge devices. First, we develop a species classifier pipeline consisting of object detection and a species classifier for a novel camera trap setting. We train a convolutional neural network to identify 45 trained species using 1.2M images, achieving an accuracy of around 89% at the class level without manual intervention. This research demonstrates the combination of deep learning methodologies and human oversight, showcasing their ability to facilitate automatic or semiautomatic, precise, and efficient surveys of small animals in the wild, particularly in novel settings or for novel species. The suggested pipeline helps achieve 95% accuracy in image classification, resulting in 11% of images that require manual review at the class level. This pipeline helps to automatically annotate images while maintaining accuracy. Camera traps generate large numbers of images, and the time delay between data capture and image processing leads to the loss of critical information. Real-time labeling while capturing images can help mitigate this issue. Following the traditional classifier, we investigate methods for deploying an AI model in a camera trap. 
Smart Camera Traps would result in real (open full item for complete abstract)
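    The human-in-the-loop triage described above (95% accuracy with 11% of images routed to manual review) can be sketched as a confidence threshold on classifier outputs. This is an illustrative mechanism, not the thesis's exact rule; the function name and threshold value are assumptions.

    ```python
    import numpy as np

    def triage(probs, threshold=0.9):
        """Split classifier outputs into auto-accepted labels and a review queue.

        probs: (N, C) array of per-class softmax scores from the species
        classifier. Predictions whose top score clears the threshold are
        accepted automatically; the rest are flagged for manual review.
        """
        conf = probs.max(axis=1)
        labels = probs.argmax(axis=1)
        auto = conf >= threshold
        # Accepted labels, and indices of images a human should check.
        return labels[auto], np.flatnonzero(~auto)
    ```

    Raising the threshold trades a larger manual-review fraction for higher accuracy on the auto-accepted set, which is the knob behind figures like "95% accuracy with 11% reviewed."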

    Committee: Tanya Berger-Wolf (Advisor); Christopher Stewart (Committee Member); Wei-Lun Chao (Committee Member) Subjects: Computer Engineering