Search Results

(Total results 954)


  • 1. Rahaei, Arefeh DESIGN AND ANALYSIS OF A CHAOS-BASED LIGHTWEIGHT CRYPTOSYSTEM

    MS, Kent State University, 2024, College of Arts and Sciences / Department of Computer Science

    Cryptography, derived from the Greek word meaning "to hide information," involves techniques for converting readable plaintext into unreadable ciphertext through a process called encryption. Cryptography algorithms are broadly categorized into two types: symmetric key cryptography and asymmetric key cryptography. Symmetric key cryptography is further divided into block ciphers and stream ciphers. Block ciphers, based on their structure, can be classified into two main categories: Substitution-Permutation Networks (SPN) and Feistel Networks (FN). This research focuses on SPN-based block ciphers. In 1949[1], Claude Shannon introduced two fundamental operations required for a robust cryptosystem: substitution and permutation. Substitution, the core component of SPN-based cryptography, is implemented through substitution boxes (S-Boxes), where each element in the plaintext is mapped to another element to achieve nonlinearity and provide the confusion property crucial for security. With the rise of constrained devices, such as the Internet of Things (IoT), there is an increasing demand for lightweight symmetric-key algorithms. However, in many cases, the S-Box contributes the most to the hardware complexity and computational load compared to other linear components. This research addresses this challenge by designing and optimizing a lightweight cryptosystem suitable for resource-limited environments. The thesis makes two key contributions to the field of lightweight cryptography. The first contribution is the development of chaos-based S-Boxes tailored for devices with restricted computational capabilities. By leveraging chaotic maps, the proposed S-Boxes achieve a high degree of nonlinearity and security while maintaining a minimal computational and hardware footprint, making them ideal for IoT and other constrained devices. These chaos-based S-Boxes introduce dynamic, unpredictable substitution patterns that enhance resistance to cryptanalysis techniques such as l (open full item for complete abstract)

    Committee: Maha Allouzi Dr (Advisor); Younghun Chae Dr (Committee Member); Lei Xu Dr (Committee Member) Subjects: Computer Engineering; Computer Science
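
    The chaotic-map construction is only summarized above; as a rough, hypothetical illustration of the general idea, the following Python sketch derives a bijective 8-bit S-Box by ranking iterates of a logistic map. The map parameters, burn-in, and ranking scheme are assumptions for illustration, not the design proposed in the thesis.

        # Illustrative sketch only: build a bijective 8-bit S-Box by ranking
        # iterates of a chaotic logistic map. Parameters are assumptions, not
        # the construction used in the thesis.
        def logistic_sbox(x0=0.7131, r=3.9999, size=256, burn_in=1000):
            x = x0
            for _ in range(burn_in):          # discard transient iterates
                x = r * x * (1.0 - x)
            samples = []
            for _ in range(size):
                x = r * x * (1.0 - x)
                samples.append(x)
            # Rank the chaotic samples; the permutation of ranks is the S-Box.
            order = sorted(range(size), key=lambda i: samples[i])
            sbox = [0] * size
            for rank, idx in enumerate(order):
                sbox[idx] = rank
            return sbox

        if __name__ == "__main__":
            sbox = logistic_sbox()
            assert sorted(sbox) == list(range(256))   # bijective substitution
            print(sbox[:16])
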
  • 2. Khan, Mahfizur Rahman Distributed UAV-Based Wireless Communications Using Multi-Agent Deep Reinforcement Learning

    Master of Science, Miami University, 2024, Electrical and Computer Engineering

    In this thesis, a thorough investigation into optimizing user connectivity in ad hoc communication networks through robust policy creation and intelligent UAV placement in stochastic environments is presented. To handle the dynamic and decentralized character of ad hoc networks, we identified optimal UAV positions by applying a multi-agent deep Q-learning technique. To train policies that adapt to stochastic environments, a simple novel algorithm was devised, with an emphasis on the usefulness of these policies under different scenarios. Through an empirical investigation, the study offered insight into the generalizability and adaptability of learnt behaviors by examining how well policies based on one distribution of settings performed when applied to different, unseen distributions. In this thesis, we also explored the resilience of UAV networks against jamming attempts and proposed a method for unaffected UAVs to self-adjust their placements. This approach ensured optimal user coverage even in adversarial situations. By demonstrating the potential of machine learning techniques to maximize network performance and enhance user connectivity in the face of environmental uncertainties and security risks, these contributions will collectively advance the field of UAV-assisted communication.

    Committee: Dr. Bryan Van Scoy (Advisor); Dr. Mark Scott (Committee Member); Dr. Veena Chidurala (Committee Member) Subjects: Computer Engineering; Electrical Engineering
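
    The thesis abstract above describes multi-agent deep Q-learning for UAV placement; as a much-simplified, hypothetical stand-in, the following Python sketch trains a single tabular Q-learning agent to move a UAV on a small grid toward a position that covers more users. The grid size, user layout, coverage radius, and hyperparameters are assumptions, not the setup used in the thesis.

        # Single-agent, tabular stand-in for the deep Q-learning placement idea.
        import random

        GRID = 8
        USERS = [(1, 2), (2, 2), (6, 5), (6, 6), (7, 5)]          # hypothetical users
        ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]      # move or hover

        def coverage(pos, radius=2):
            return sum(abs(pos[0] - u[0]) + abs(pos[1] - u[1]) <= radius for u in USERS)

        def step(pos, a):
            nxt = (min(max(pos[0] + a[0], 0), GRID - 1),
                   min(max(pos[1] + a[1], 0), GRID - 1))
            return nxt, coverage(nxt)                             # reward = users covered

        Q = {}                                                    # (state, action) -> value

        def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, horizon=30):
            for _ in range(episodes):
                pos = (random.randrange(GRID), random.randrange(GRID))
                for _ in range(horizon):
                    if random.random() < eps:
                        a = random.randrange(len(ACTIONS))
                    else:
                        a = max(range(len(ACTIONS)), key=lambda i: Q.get((pos, i), 0.0))
                    nxt, r = step(pos, ACTIONS[a])
                    best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
                    q = Q.get((pos, a), 0.0)
                    Q[(pos, a)] = q + alpha * (r + gamma * best_next - q)   # Q-learning update
                    pos = nxt

        train()
        pos = (0, 0)
        for _ in range(20):                                       # greedy rollout
            a = max(range(len(ACTIONS)), key=lambda i: Q.get((pos, i), 0.0))
            pos, _ = step(pos, ACTIONS[a])
        print("final UAV cell:", pos, "users covered:", coverage(pos))
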
  • 3. Jones, Ryan OFE-EBS — An Optical Flow Equation-Inspired Event-Based Sensor for Low-Earth Orbit Ground Moving Target Indication

    Master of Science in Computer Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    Event-based sensors (EBS) report pixel-asynchronous changes in scene intensity called events. Hence, their sparse event streams are well suited for many computer vision tasks, particularly object tracking. However, relative motion between the sensor and scene will generate extraneous events caused by the translation of static scene features across the sensor. We present OFE-EBS, an optical flow equation-inspired event-based sensor for low-Earth orbit (LEO) ground moving target indication (GMTI). Owing to the predictable velocity of a satellite platform in LEO, we augment the EBS pixel with additional cross-row subtraction hardware to remove static background features. Pixel adaptivity is modified to ensure dynamic foreground features generate fewer events, further reducing event rate. Finally, using our analytical sensor model, we show that OFE-EBS outperforms conventional EBS in spatial resolution and event rate, considering the effects of pixel nonuniformity.

    Committee: Keigo Hirakawa (Committee Chair); Partha Banerjee (Committee Member); Bradley Ratliff (Committee Member) Subjects: Computer Engineering; Electrical Engineering
  • 4. Mallik, Anurag Deep Vision Based Driving Behavior Analysis System for Roadside Restricted Area Traffic Control

    Master of Science in Computer Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    Administering the behavior of drivers near roadside restricted areas, such as work zones, accident zones, or natural calamity zones, is necessary for safety. It helps steer vehicles clear of the blocked region and ensures the safety of both drivers and people in that area. Vehicles need to be diverted to a different lane away from the restricted area for smooth traffic flow. A computer vision-based autonomous system could automatically monitor the movements of vehicles and predict their pathways based on their direction and speed, helping to provide appropriate lane-change signals to drivers. This thesis proposes the development of an artificial intelligence-based learning system for detecting and tracking vehicles on the road and predicting their future locations in real-time videos captured by a stationary camera. Videos captured in outdoor environments are subject to several challenges due to varying lighting conditions and changes in orientation, viewing angle, and object size. Surrounding objects like trees, buildings, or other vehicles can obscure a vehicle completely or partially, making reliable detection and tracking difficult. Stationary cameras may also capture background regions such as buildings, trees, and parking lots. Sometimes, detecting vehicles becomes difficult due to their darker texture in non-uniform lighting conditions. In this thesis research, a YOLO_v8 neural network model is employed to detect vehicles in the video frames in real time. The neural network model needs an extensive annotated dataset of vehicles in roadside environments. A new annotated dataset named Dayton Annotated Vehicle Image Set (DAVIS), suited for US road conditions, is built to train the vehicle detection model. An adaptive image enhancement technique, namely Contrast Limited Adaptive Histogram Equalization (CLAHE), is used in the moving object regions (open full item for complete abstract)

    Committee: Vijayan K. Asari (Committee Chair); Theus Aspiras (Committee Member); Eric J. Balster (Committee Member) Subjects: Computer Engineering
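
    As a hedged sketch of the preprocessing-plus-detection step described above, the following Python snippet applies OpenCV's CLAHE to a frame and runs a YOLOv8 detector on the enhanced image, assuming the opencv-python and ultralytics packages. The COCO-pretrained weights, class filter, and input file name are stand-ins; the thesis's DAVIS-trained model is not reproduced here.

        import cv2
        from ultralytics import YOLO

        def enhance_clahe(frame_bgr, clip_limit=2.0, tile=(8, 8)):
            # Apply CLAHE to the luminance channel only, preserving color.
            lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
            l, a, b = cv2.split(lab)
            clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
            return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

        model = YOLO("yolov8n.pt")              # COCO-pretrained stand-in model
        VEHICLES = {"car", "truck", "bus", "motorcycle"}

        frame = cv2.imread("frame.jpg")         # hypothetical input frame
        result = model(enhance_clahe(frame))[0]
        for box in result.boxes:
            name = model.names[int(box.cls)]
            if name in VEHICLES:
                x1, y1, x2, y2 = box.xyxy[0].tolist()
                print(f"{name}: ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f})")
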
  • 5. Essig, David Comparison of a Transformer-Based Single-Image Super-Resolution Model for the CONNECT Compression Framework

    Master of Science in Computer Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    Single-image super-resolution (SISR) is the task of increasing an image's resolution using one lower-resolution image. This task has been used in many areas of life to capture finer details in medical imagery, images with distant objects, and compressed images. Compressing images can save computational resources and bandwidth. Deep Learning (DL) techniques for image compression and SISR have become abundant as such methods have yielded promising results, such as in the Convolutional Neural Network for Enhanced Compression Techniques (CONNECT) compression framework [1] [2] and the SwinIR [3], multi-scale attention network [4], and Real-ESRGAN [5] super-resolution models. In this thesis, these super-resolution models are analyzed and compared using previous work and direct testing on the Set14 dataset, with one selected for use on the back end of CONNECT as an alternative compression framework. This framework could yield higher compression ratios while maintaining or improving reconstructed image quality. This thesis attempts to improve the existing CONNECT compression framework by analyzing and selecting a DL-based super-resolution model to reconstruct the compressed images after they have been fed through CONNECT. Varying compression methods are then compared using widely used image quality metrics and the compression ratio metric.

    Committee: Bradley Ratliff (Committee Chair); Barath Narayanan (Committee Member); Russell Hardie (Committee Member) Subjects: Computer Engineering; Computer Science
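
    The comparison above relies on widely used image quality metrics; as a hedged illustration, the following Python snippet scores a reconstructed image against its reference with PSNR and SSIM from scikit-image and computes a simple compression ratio. The file names and byte counts are hypothetical, and this is not the CONNECT pipeline itself.

        from skimage import io, img_as_float
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        ref = img_as_float(io.imread("reference.png"))        # hypothetical RGB images
        rec = img_as_float(io.imread("reconstructed.png"))

        psnr = peak_signal_noise_ratio(ref, rec, data_range=1.0)
        ssim = structural_similarity(ref, rec, channel_axis=-1, data_range=1.0)

        original_bytes, compressed_bytes = 786_432, 98_304    # assumed sizes
        compression_ratio = original_bytes / compressed_bytes

        print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}, CR: {compression_ratio:.1f}x")
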
  • 6. Wang, Wenzhuo Reverse Engineering of “Magic Box”: A Device for Screen Projection to CarPlay-Enabled Vehicles

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    With the rise of car infotainment systems, the integration of smartphones with in-car displays has become increasingly prevalent. CarPlay, as one of the popular systems, is highly favored by users and is equipped in many vehicles. The Magic Brand Magic Box is an innovative Android-based device designed to interface with a car's CarPlay-enabled USB port, enabling the projection of its own user interface onto the car's display. However, this capability raises significant safety concerns, as it allows activities typically restricted while driving, such as watching videos on car screens. This thesis aims to reverse engineer the Magic Box to understand the mechanisms by which it communicates through the CarPlay interface. By analyzing the device's hardware and software, as well as referencing partial CarPlay protocol documents found online, we seek to uncover the principles behind its functionality and explore potential vulnerabilities in the Apple CarPlay system that may have been exploited. We aim to provide a detailed insight into the process of Android reverse engineering, offering valuable knowledge for researchers and practitioners interested in similar endeavors.

    Committee: Keith Redmill (Advisor); Zhiqiang Lin (Advisor) Subjects: Computer Engineering; Computer Science
  • 7. Balasubramaniam, Sowbaranika Optimized Classification in Camera Trap Images: An Approach with Smart Camera Traps, Machine Learning, and Human Inference

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    Motion-activated cameras, commonly known as camera traps, play a crucial role in wildlife monitoring, biodiversity conservation, and basic ecological science, capturing a diverse range of species in their natural habitats. However, the sheer volume of images generated requires manual analysis by ecologists, making this a time-consuming and inefficient process. This is particularly frustrating since many of the images do not contain animals because camera traps are triggered by the motion of vegetation due to wind or miss the animal due to the slow camera response. This thesis presents an approach aimed at addressing these challenges through the automation of image classification and the development of smart camera traps that leverage artificial intelligence implementations in edge devices. First, we develop a species classifier pipeline consisting of object detection and a species classifier for a novel camera trap setting. We train a convolutional neural network to identify 45 trained species using 1.2M images, achieving an accuracy of around 89% at the class level without manual intervention. This research demonstrates the combination of deep learning methodologies and human oversight, showcasing their ability to facilitate automatic or semiautomatic, precise, and efficient surveys of small animals in the wild, particularly in novel settings or for novel species. The suggested pipeline helps achieve 95% accuracy in image classification, resulting in 11% of images that require manual review at the class level. This pipeline helps to automatically annotate the image while maintaining accuracy. The camera trap generates a large number of images. The time delay between data capture and image processing leads to the loss of critical information. Real-time labeling while capturing images can help mitigate this issue. Following the traditional classifier, we investigate methods for deploying an AI model in a camera trap. Smart Camera Traps would result in real (open full item for complete abstract)

    Committee: Tanya Berger-Wolf (Advisor); Christopher Stewart (Committee Member); Wei-Lun Chao (Committee Member) Subjects: Computer Engineering
  • 8. Jangid, Mohit Exploring Potential and Challenges of Symbolic Formal Verification in Security and Privacy

    Doctor of Philosophy, The Ohio State University, 2024, Computer Science and Engineering

    Software and protocol development has followed the design-develop-break-patch cycle for many years. One resolution to mitigate such a persistent cycle is to build the systems with formal analysis following the "analysis-prior-to-development" philosophy. At present, state space explosion and the limited expressibility of formal model languages limit the scalability and efficiency of this approach. Expanding the scope of formal methods to broader cases requires augmented modeling and a deeper understanding of the underlying operating mechanisms. In particular, by modeling with a precise system environment and refined adversary capabilities, I wish to expand the boundaries of formal methods, exposing limiting root causes and opening novel paths for improvement. For example, considering how concurrent execution influences the processes; modeling a granular access control for user and adversary groups; incorporating human interactions; allowing adversaries to control program execution at the instruction level; and trading off between literal cryptographic accuracy and modeled theory imprecisions augments the formal modeling to reason about unconventional properties. Apart from raising security assurance, such comprehensive coverage of the system environment and precise adversary capability expand the utility of formal methods to large systems and facilitate the derivation of unconventional properties. Additionally, such design provides further feedback to formal tool development to design targeted building blocks that improve the efficiency, scalability, and expressibility of formal modeling. In this thesis, I first present an enhanced and generic formal analysis of the trusted execution environment (TEE) technology -- Software Guard Extensions -- built by Intel. In particular, I made a first attempt toward extending formal verification to program logic for SGX enclaves with the powerful SGX threat model. In this effort, I derived state continuity properties with r (open full item for complete abstract)

    Committee: Zhiqiang Lin (Advisor); Feng Qin (Committee Member); Carter Yagemann (Committee Member); Srinivasan Parthasarathy (Committee Member); David Landsbergen (Other) Subjects: Computer Engineering; Computer Science
  • 9. Zhou, Qinghua High Performance Communication Middleware with On-the-fly GPU-based Compression for HPC and Deep Learning Applications

    Doctor of Philosophy, The Ohio State University, 2024, Computer Science and Engineering

    General-purpose accelerators such as graphics processing unit (GPU), field-programmable gate array (FPGA), and tensor processing unit (TPU) are increasingly utilized to improve the performance of modern High-Performance Computing (HPC) and Cloud systems. GPUs, in particular, have emerged as a popular hardware choice due to their ability to handle massive parallelism and high-bandwidth memory. They have become a driving force behind rapid advancements in HPC and ML applications, particularly Deep Learning. GPUs significantly improve computational efficiency and overall performance and are ideal for handling computationally intensive workloads related to scientific simulations, data analysis, and neural network training. To handle growing data and models, HPC and Deep Learning applications need multiple nodes for faster computation. Interconnects like Ethernet and InfiniBand are key to node communication and data sharing. A slow interconnect between nodes can be a bottleneck in these applications compared to intra-node interconnects such as PCIe and NVLINK. Large data sets and training large deep-learning models increase the need for data transfer between nodes, causing significant delays and reducing performance. The Message Passing Interface (MPI)—considered the de facto parallel programming model—provides a set of communication primitives to support parallel and distributed execution of user applications on HPC systems. With the support of passing GPU buffers to MPI primitives directly, the state-of-the-art MPI libraries significantly improve performance for GPU-accelerated applications. However, the inter-node communication bandwidth of the state-of-the-art MPI libraries has saturated the bandwidth of the InfiniBand network for large GPU-resident data. In this dissertation, we take advantage of GPU-based compression techniques with GPU computing resources to reduce the data size being transferred through the network with limited bandwidth on modern heterogeneous sy (open full item for complete abstract)

    Committee: Dhabaleswar Kumar Panda (Advisor); Hari Subramoni (Advisor); Radu Teodorescu (Committee Member); Christopher Stewart (Committee Member) Subjects: Computer Engineering; Computer Science
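
    The dissertation's on-the-fly GPU-based compression is not shown here; as a rough, CPU-side stand-in for the general idea of compressing message payloads inside the communication path, the following Python sketch uses mpi4py with zlib. The payload, compression level, and the suggested launch command are assumptions.

        # Run with something like: mpirun -np 2 python compress_send.py
        import zlib
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        if rank == 0:
            data = np.zeros(1_000_000, dtype=np.float32)       # highly compressible payload
            packed = zlib.compress(data.tobytes(), level=1)    # compress before sending
            comm.send(len(packed), dest=1, tag=0)
            comm.Send([np.frombuffer(packed, dtype=np.uint8), MPI.BYTE], dest=1, tag=1)
        elif rank == 1:
            nbytes = comm.recv(source=0, tag=0)
            buf = np.empty(nbytes, dtype=np.uint8)
            comm.Recv([buf, MPI.BYTE], source=0, tag=1)
            data = np.frombuffer(zlib.decompress(buf.tobytes()), dtype=np.float32)
            print("received", data.size, "floats from", nbytes, "compressed bytes")
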
  • 10. Shi, Neng Deep Surrogate Models for Parameter Space Exploration of Ensemble Simulations

    Doctor of Philosophy, The Ohio State University, 2024, Computer Science and Engineering

    Recently, ensemble simulations have been frequently used in various scientific domains, including cosmology, oceanography, and fluid dynamics. To model scientific phenomena with a wide range of possible outcomes, scientists typically run simulations using different input simulation parameters to produce different simulation outputs. Parameter space exploration plays a significant role in analyzing the impact of various simulation parameters. Scientific visualization plays a crucial role in enabling scientists to explore the parameter space effectively and intuitively. By visualizing the ensemble runs, scientists can analyze the similarities and differences between different simulations and gain insights into how various parameter settings influence the outcomes. However, it is important to note that exploring the parameter space thoroughly often necessitates conducting a batch of computationally expensive simulations. This dissertation concentrates on developing visualization surrogate models to effectively explore the immense simulation parameter space. Visualization surrogate models allow scientists to preview simulation outputs based on given input parameters without the necessity of running the actual simulations. There are two critical aspects to designing surrogate models: (1) devising efficient surrogate models tailored to the specific properties of the problem; (2) conducting a comprehensive analysis based on the trained surrogate model. For the first aspect, given the inherent complexity and non-linearity of the mappings between simulation parameters and outputs, we adopt emerging deep learning techniques to design our surrogate models. Specifically, we present three deep visualization surrogate models tailored for ensemble simulations that are spatially represented in different formats. First, for simulations conducted on regular grids, we propose VDL-Surrogate, a neural network-based surrogate model that leverages view-dependent latent re (open full item for complete abstract)

    Committee: Han-Wei Shen (Advisor); Wei-Lun Chao (Committee Member); Hanqi Guo (Committee Member) Subjects: Computer Engineering; Computer Science
  • 11. Alatoun, Khitam Efficient Techniques for SoC Security Validation and Monitoring

    PhD, University of Cincinnati, 2024, Engineering and Applied Science: Computer Science and Engineering

    Embedded systems are commonly described as the powerhouses that drive a wide range of modern devices. The landscape of embedded systems has undergone significant transformation since the advent of System-on-Chip (SoC) technology. Throughout the life cycle of modern SoC designs, various hardware security threats can be introduced. These security vulnerabilities, when exercised during SoC operation, can lead to denial of service or disclosure of protected data. Hence, it is essential to undertake security validation before and after SoC fabrication and make provisions for continuous security assessment during operation. Assertion-based formal methods have been used to verify hardware designs. These methods use temporal logic to mathematically verify design properties. However, various security policies such as confidentiality, integrity, timing side channels, and power side channels cannot be verified directly with traditional formal techniques. Additionally, current specification languages primarily focus on functional correctness and do not provide formal assurances for a wide range of security policies. Furthermore, critical design policies not only necessitate robust verification methods but also require continuous assessment during operation. Assertion-based runtime monitoring is a commonly used method to convert assertions into synthesizable hardware monitors. These monitors can be embedded into the design to detect property violations during runtime. However, an efficient method is required to generate optimized hardware runtime monitors from a security specification language. To design a trusted SoC, it is important to provide a methodology to specify, verify and monitor security policies. This research intends to develop tools that help to identify potential security vulnerabilities during SoC design and operation. We developed several methods to verify information flow, timing side-channel, and power side-channel properties; they collabora (open full item for complete abstract)

    Committee: Ranganadha Vemuri Ph.D. (Committee Chair); John Emmert Ph.D. (Committee Member); Wen-Ben Jone Ph.D. (Committee Member); Rashmi Jha Ph.D. (Committee Member); Mike Borowczak Ph.D M.A B.A. (Committee Member) Subjects: Computer Engineering
  • 12. Kil, Jihyung A Closer Look at the Triad in Data-Driven Vision and Language: Curation, Representation, and Learning

    Doctor of Philosophy, The Ohio State University, 2024, Computer Science and Engineering

    Building advanced Vision and Language (V&L) systems can offer significant societal benefits. For instance, V&L systems with visual question answering capabilities enable visually impaired individuals to perform daily tasks more independently; multimodal web agents streamline our daily activities, such as booking flights or shopping online; embodied robots enhance the efficiency and automation of manufacturing systems. However, developing such sophisticated V&L models is challenging due to the need for an integrated understanding of visual and linguistic information. This integration is particularly complex as it requires models not only to recognize and interpret detailed visual cues but also to understand and generate contextually relevant text. At its core, data plays an essential role in learning such integrated understanding. The effectiveness of V&L systems relies on how well data is curated, represented, and utilized for learning. In this dissertation, we thus aim to advance V&L systems through the lens of data. First, we discuss “data curation” to enrich training materials and benchmarks for V&L models. Second, we delve into “data representation” to encode visual and linguistic information from data into meaningful representations. Third, we explore “data learning” to enable models to acquire V&L knowledge from data. In short, we investigate three different aspects (i.e., curation, representation, and learning) of data to improve V&L understanding. We believe this comprehensive study greatly contributes to the development of advanced V&L models, ultimately providing substantial benefits to our society.

    Committee: Wei-Lun Chao Dr. (Advisor); Yu Su Dr. (Committee Member); Andrew Perrault Dr. (Committee Member) Subjects: Computer Engineering; Computer Science
  • 13. Kang, Sunjung Enhancing Remote Tracking Systems in IoT Networks: Strategies for Balancing Timely Updates and Resource Constraints

    Doctor of Philosophy, The Ohio State University, 2024, Electrical and Computer Engineering

    Remote tracking systems have long been a critical component in various fields, and their integration into the Internet of Things (IoT) networks has further expanded their applicability and importance. These systems are employed in diverse applications such as healthcare monitoring, smart cities, and industrial automation, where real-time tracking and management of dynamic environments are essential. However, they face inherent challenges in balancing the need for timely updates with other critical performance metrics such as estimation accuracy and communication efficiency. The constraints imposed by limited bandwidth, energy resources, and the need for coordination among multiple devices further complicate the optimization of these systems. This dissertation explores enhanced strategies to optimize the performance of remote tracking systems, focusing on enhancing Age of Information (AoI) and remote estimation. The first part of this research investigates remote estimation within shared communication channels, where multiple dynamic sources need to be tracked by an estimator. We explore and evaluate both decentralized (transmitter-driven) and centralized (receiver-driven) update policies. Our study reveals that decentralized policies are particularly effective when error costs are the primary concern, as they utilize actual error data for updates. Conversely, centralized policies are more suited for high communication cost environments, as they efficiently manage updates to minimize simultaneous transmissions. Moreover, in situations where system parameters are unknown, we employ the stochastic Multi-Armed Bandit (MAB) framework to adaptively learn the optimal update strategy, ensuring robust performance under uncertainty. The second focus of this dissertation addresses IoT networks with numerous low-complexity devices, which face significant computational and communication constraints. The primary goal here is to minimize estimation errors when these devices (open full item for complete abstract)

    Committee: Atilla Eryilmaz (Advisor); Jia Liu (Committee Member); Ness Shroff (Advisor) Subjects: Computer Engineering; Electrical Engineering
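
    The abstract above mentions using a stochastic multi-armed bandit to learn an update strategy when system parameters are unknown; the following Python sketch shows a generic epsilon-greedy bandit choosing among a few candidate update policies. The arms, reward model, and epsilon value are assumptions for illustration, not the dissertation's formulation.

        import random

        class EpsilonGreedyBandit:
            def __init__(self, n_arms, eps=0.1):
                self.eps = eps
                self.counts = [0] * n_arms
                self.values = [0.0] * n_arms       # running mean reward per arm

            def select(self):
                if random.random() < self.eps:
                    return random.randrange(len(self.counts))
                return max(range(len(self.counts)), key=lambda a: self.values[a])

            def update(self, arm, reward):
                self.counts[arm] += 1
                self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

        true_means = [0.55, 0.70, 0.40]            # hypothetical mean rewards of 3 policies
        bandit = EpsilonGreedyBandit(n_arms=3)
        for _ in range(5000):
            arm = bandit.select()
            bandit.update(arm, random.gauss(true_means[arm], 0.1))
        print("estimated values:", [round(v, 3) for v in bandit.values])
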
  • 14. Veeraswamy Premkumar, Gowtham Raj Centralized Deep Reinforcement Learning and Optimization in UAV Communication Networks Towards Enhanced User Coverage

    Master of Science, Miami University, 2024, Electrical and Computer Engineering

    In wireless communications, traditional base stations provide network connectivity to users. Static base stations, however, require significant time and money to construct and are therefore not suitable for remote areas and disaster scenarios. An alternative uses mobile base stations attached to UAVs. Such UAV-based communication networks can be rapidly deployed and adapt to their environment. The goal of this research is to position the UAVs to maximize user coverage. One approach treats UAVs as independent agents and uses multi-agent reinforcement learning to design policies that move the UAVs to positions that increase coverage; each UAV, however, must train its own policy and optimality is not guaranteed. Instead, we consider two centralized approaches to place the UAVs. The first uses centralized reinforcement learning to design a joint policy over all UAVs, but training the policy is not computationally tractable for large problems. The second approach uses mixed-integer optimization to find the UAV positions that maximize user coverage. While this yields the optimal solution, the computational time does not scale well with the problem size. Therefore, we first group users into clusters and then optimize UAV positions with respect to the clusters. The number of clusters trades off computational time with optimality.

    Committee: Bryan Van Scoy (Advisor); Gokhan Sahin (Committee Member); Veena Chidurala (Committee Member) Subjects: Computer Engineering; Computer Science; Electrical Engineering
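
    The clustering step described above (group users first, then optimize UAV positions per cluster) can be illustrated with a hedged Python sketch that runs k-means over synthetic user locations and places one UAV at each centroid. The user layout, number of clusters, and coverage radius are assumptions; the thesis's mixed-integer optimization is not shown.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        users = np.vstack([rng.normal((2, 2), 0.5, (30, 2)),      # synthetic user positions
                           rng.normal((8, 7), 0.7, (40, 2)),
                           rng.normal((5, 9), 0.4, (20, 2))])

        k = 3                                                     # one UAV per cluster
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(users)
        uav_positions = km.cluster_centers_

        radius = 1.5                                              # hypothetical coverage radius
        dists = np.linalg.norm(users[:, None, :] - uav_positions[None, :, :], axis=2)
        covered = int((dists.min(axis=1) <= radius).sum())
        print("UAVs at", np.round(uav_positions, 2).tolist(), "covering", covered, "of", len(users), "users")
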
  • 15. Zhang, Zichen Design of Unmanned Aerial Systems for Precision Agriculture using Machine Learning Techniques

    Doctor of Philosophy, The Ohio State University, 2024, Computer Science and Engineering

    Precision Agriculture (PA) is a new agricultural concept that fully integrates information technology and agricultural production. Supported by information technology, PA is able to quantitatively implement a complete set of modern agricultural operation and management systems based on spatial variation, positioning, and timing. The goal of PA is to better utilize the potential of farmlands and optimize the input resources by dividing a whole farm field into management zones and treating each zone according to its actual condition. Compared to traditional agriculture, PA focuses on using high-tech investment and scientific management in exchange for the largest conservation of natural resources and the largest agricultural output. Thus, instead of crop yield, PA emphasizes efficiency. Unmanned aerial systems (UAS) are increasingly used in precision agriculture to collect crop health related data. UAS can capture data more often and more cost-effectively than sending human scouts into the crop field. Deploying UAS to collect aerial images allows farmers and researchers to make management-zone level decisions based on the collected crop health data, which meets the demands of PA to treat each part of the crop field accordingly. However, to fully integrate a UAS into PA and enjoy the advantages, we have to address a few questions. First, when flying UAS in large crop fields, flight time, and hence the data collection mission, is limited by battery life. Second, monitoring the health condition of a crop field means frequent flight missions, weekly or even daily, and it is hard to extract the health data we need from all of the collected aerial images. Third, once a monitoring system with machine learning models is set up for a specific crop field, how can it be migrated to a new field while maintaining decent accuracy? Last but not least, before settling upon a model, domain experts repeatedly train and test models over a wide range of (open full item for complete abstract)

    Committee: Christopher Stewart (Advisor); Sami Khanal (Advisor); Wei-Lun Chao (Committee Member); Darren Drewry (Committee Member) Subjects: Computer Engineering; Computer Science
  • 16. Kumar, Venkataramani Intelligent Channel Estimation and Sensing in Next-Generation Wireless Networks

    Doctor of Philosophy (Ph.D.), University of Dayton, 2024, Electrical and Computer Engineering

    The Internet of Things (IoT), an all-pervasive technology, is expected to reach 41 billion devices by 2027. Such a revolutionary technology is utilized in a plethora of applications such as health and agriculture, and IoT offers numerous advantages in terms of computing and intelligence. This growth of IoT devices leads to the proliferation of wireless technologies to cater to the growing demands of users. Such proliferation of wireless technologies poses multiple challenges such as higher interference, limited spectrum resources, compatibility issues between different standards, and higher power consumption. The existing approaches as well as their limitations are surveyed, and end-to-end deep learning-based frameworks are introduced to alleviate the challenges described above. The proposed framework is validated and evaluated on open-source and real-time data, respectively.

    Committee: Bradley Ratliff (Committee Chair); Ying-Ju Chen (Committee Member); Dong Cao (Committee Member); Feng Ye (Committee Member) Subjects: Computer Engineering; Information Science; Information Technology; Mathematics
  • 17. Pederson, Lily Machine Learning and Classifying Phishing Emails

    Bachelor of Arts, Wittenberg University, 2024, Computer Science

    The question investigated in this thesis is whether or not machine learning can correctly classify and separate phishing emails from regular emails. Various methods of data processing were used to clean the Enron corpus, including regular expressions, lemmatization, and stop-word removal. Feature extraction with term frequency-inverse document frequency (TF-IDF) followed by KMeans clustering yielded successful results.

    Committee: Sunday Ngwobia (Advisor); Richard Phillips (Committee Member); Alyssa Hoofnagle (Committee Member) Subjects: Computer Engineering; Computer Science
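
    The TF-IDF plus KMeans pipeline named in the abstract can be sketched in a few lines of Python with scikit-learn; the toy emails below are invented, and the Enron corpus and the thesis's exact cleaning steps (regular expressions, lemmatization, stop-word removal) are not reproduced, with built-in stop-word removal standing in for that preprocessing.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        emails = [
            "verify your account now click this link to claim your prize",
            "urgent your password expires confirm your login credentials today",
            "meeting moved to 3pm let me know if the agenda works for you",
            "attached is the quarterly report for review before friday",
        ]

        tfidf = TfidfVectorizer(stop_words="english")        # stop-word removal
        X = tfidf.fit_transform(emails)                      # TF-IDF feature matrix

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        for email, label in zip(emails, km.labels_):
            print(label, email[:45])
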
  • 18. Yang, Junjie Theory and Application of Meta Learning Techniques

    Doctor of Philosophy, The Ohio State University, 2024, Electrical and Computer Engineering

    Meta learning has gained increasing popularity due to its rapid adaptation capability with limited data. Despite its tremendous success in image classification and reinforcement learning, its applications to other fields are not fully exploited, and its theoretical foundations are not yet fully understood. In this thesis, we explore the applications of meta learning to learning to optimize (L2O) and ControlNet settings. Specifically, in L2O, a special case of meta learning, we examine its generalization ability and propose additional meta learning techniques for model training. Furthermore, we extend the formulation of meta learning into a bilevel framework and investigate its theoretical properties using newly proposed bilevel algorithms. Finally, we delve into a specific case of bilevel optimization, namely, adversarial training. More specifically, this thesis makes the following contributions. Firstly, we explore the generalization of learning to optimize (L2O) by introducing generalizable regularizers. We establish an implicit connection between local entropy and the Hessian, unifying their roles in designing generalizable optimizers as equivalent metrics of landscape flatness. We incorporate these metrics as flatness-aware regularizers into the L2O framework to meta-train optimizers for improved generalization. Extensive experiments validate the effectiveness of our proposals with substantially improved generalization on multiple sophisticated L2O models and diverse tasks. Secondly, we address the problem of testing data distribution deviation by meta-training an L2O optimizer that can perform fast test-time self-adaptation to out-of-distribution tasks in only a few steps. We characterize the generalization of L2O and show that our proposed framework (M-L2O) facilitates rapid task adaptation by locating well-adapted initial points for the optimizer weight. Empirical results on tasks like LASSO, Quadratic, and Rosenbrock demonstrate that M-L2O converges si (open full item for complete abstract)

    Committee: Yingbin Liang (Advisor); Jia Liu (Committee Member); Philip Schniter (Committee Member) Subjects: Computer Engineering; Computer Science; Electrical Engineering
  • 19. Aldyaflah, Izdehar BLOCKCHAIN-BASED SECURE SENSING DATA PROCESSING AND LOGGING

    Doctor of Engineering, Cleveland State University, 2024, Washkewicz College of Engineering

    This dissertation research investigated how to use blockchain technology to secure sensor data processing and logging. The research was done in three phases. First, to ensure the legitimacy of the sensors logging data onto the blockchain, a sensor identification and authentication mechanism is used where only sensing data from the defined sensors are accepted. Second, to minimize the throughput demand on large public blockchains such as Bitcoin and Ethereum and the financial cost of using blockchain services, only a small amount of raw sensing data is placed on the blockchain through an aggregation process, where a group of raw sensing data is converted to one condensed data item. A Merkle-tree-based mechanism is used to protect the security of the off-chain data (raw sensing data) with the condensed data placed on the blockchain. The system was tested with the IOTA Shimmer test network and the Ethereum test network. The second phase focuses on developing an Ethereum smart contract to manage access control for storing and retrieving condensed data on the blockchain. The smart contract introduces three levels of authorization (read, write, and admin) to regulate data access securely. Gas consumption optimization is achieved through a tag-based secure data-store mechanism embedded in the smart contract design. In the final phase, a deep learning model using Convolutional Neural Networks (CNN) is introduced to detect vulnerabilities in smart contracts. Four input techniques—Word2Vec, FastText, Bag of Words (BoW), and TF-IDF—are compared for their effectiveness in identifying six types of vulnerabilities. TF-IDF emerges as the most efficient input technique, consistently achieving high detection rates (90% to 100%) across all vulnerability types. In particular, TF-IDF excels in detecting the Reentrancy vulnerability, achieving performance metrics of 96% to 97%. Word2Vec and FastText performed comparably with slight changes, however BoW consistently dropped behind (open full item for complete abstract)

    Committee: Wenbing Zhao (Advisor); Timothy V Arndt (Committee Member); Hongkai Yu (Committee Member); Lili Dong (Committee Member); Sun S. Chung (Committee Member) Subjects: Computer Engineering; Computer Science
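
    The off-chain integrity mechanism described above hinges on committing a Merkle root of the raw sensing data to the chain; the following Python sketch shows the generic technique (root computation, proof generation, proof verification) under assumed SHA-256 hashing and data layout, not the dissertation's exact design or its smart-contract code.

        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def merkle_root(leaves):
            level = [h(leaf) for leaf in leaves]
            while len(level) > 1:
                if len(level) % 2:                      # duplicate last node on odd levels
                    level.append(level[-1])
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            return level[0]

        def merkle_proof(leaves, index):
            level = [h(leaf) for leaf in leaves]
            proof = []
            while len(level) > 1:
                if len(level) % 2:
                    level.append(level[-1])
                sibling = index ^ 1
                proof.append((level[sibling], sibling < index))
                level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
                index //= 2
            return proof

        def verify(leaf, proof, root):
            node = h(leaf)
            for sibling, sibling_is_left in proof:
                node = h(sibling + node) if sibling_is_left else h(node + sibling)
            return node == root

        readings = [f"sensor-42,reading-{i},{20.0 + i}".encode() for i in range(8)]
        root = merkle_root(readings)                    # only this digest goes on-chain
        proof = merkle_proof(readings, 5)
        print("proof verifies:", verify(readings[5], proof, root))
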
  • 20. Wang, Xufei Time Serials Data Processing with Neural Networks: From Classification to Decision Making

    Doctor of Philosophy, Case Western Reserve University, 2024, EECS - Computer Engineering

    This dissertation demonstrates the efficacy of neural networks in processing time series data, particularly through the lens of Human Activity Recognition (HAR) and Congestion Control (CC). The study is anchored in the detailed exploration of fundamental neural architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and advanced reinforcement learning techniques, which underpin the subsequent application-specific innovations. For data classification, the focus is on HAR, leveraging the ubiquitous presence of mobile devices equipped with accelerometers. This research introduces the Personalized Recurrent Neural Network (PerRNN), which uses spatiotemporal predictive learning to dynamically segment and interpret human activity from accelerometer data. Our PerRNN model, tested on the WISDM dataset, has been shown to significantly enhance activity recognition accuracy, achieving 96.44%, a marked improvement over existing methods. In the area of decision-making, we address the challenges of congestion control in dynamic and unpredictable mobile network environments. The Fair and Friendly Congestion Control (FFCC) algorithm, developed through Meta-Reinforcement Learning, optimizes network performance by prioritizing low latency and Quality of Experience (QoE). FFCC not only surpasses traditional Congestion Control Algorithms (CCAs) in critical performance metrics but also demonstrates adaptability to real-world network fluctuations, making it ideal for applications in complex and mobile settings. This dissertation underscores the transformative potential of neural networks in enhancing both classification accuracy and decision-making efficacy in time series data processing. By tailoring neural network architectures to specific challenges, this work not only advances theoretical knowledge but also catalyzes significant practical improvements in handling time-dependent data.

    Committee: Pan Li (Committee Chair); An Wang (Committee Member); Yu Yin (Committee Member); Daniel Saab (Committee Member) Subjects: Computer Engineering; Computer Science
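
    The PerRNN model itself is not reproduced here; as a hedged, generic illustration of RNN-based activity classification over windows of triaxial accelerometer samples, the following PyTorch sketch trains a small GRU classifier on synthetic data. Window length, hidden size, class count, and the training loop are all assumptions.

        import torch
        import torch.nn as nn

        class ActivityRNN(nn.Module):
            def __init__(self, n_features=3, hidden=64, n_classes=6):
                super().__init__()
                self.rnn = nn.GRU(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):                  # x: (batch, time, 3 accelerometer axes)
                _, h = self.rnn(x)                 # final hidden state: (1, batch, hidden)
                return self.head(h.squeeze(0))     # class logits

        model = ActivityRNN()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        x = torch.randn(32, 128, 3)                # synthetic 128-sample windows
        y = torch.randint(0, 6, (32,))             # synthetic activity labels
        for _ in range(5):                         # tiny demonstration loop
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        print("final batch loss:", float(loss))
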