Search Results

(Total results 271)
  • 1. Khan, Mahfizur Rahman. Distributed UAV-Based Wireless Communications Using Multi-Agent Deep Reinforcement Learning

    Master of Science, Miami University, 2024, Electrical and Computer Engineering

    In this thesis, a thorough investigation into optimizing user connectivity in ad hoc communication networks through robust policy creation and intelligent UAV placement in stochastic environments is presented. To handle the dynamic, decentralized character of ad hoc networks, we identify optimal UAV positions by applying a multi-agent deep Q-learning technique (a minimal sketch of this kind of update follows this entry). To train policies that adapt to stochastic environments, a simple novel algorithm was devised, with an emphasis on the usefulness of these policies under different scenarios. Through an empirical investigation, the study offers insight into the generalizability and adaptability of learned behaviors by examining how well policies trained on one distribution of settings perform when applied to different, unseen distributions. We also explore the resilience of UAV networks against jamming attacks and propose a method for unaffected UAVs to self-adjust their placements, ensuring optimal user coverage even in adversarial situations. By demonstrating the potential of machine learning techniques to maximize network performance and enhance user connectivity in the face of environmental uncertainty and security risks, these contributions collectively advance the field of UAV-assisted communication.

    Committee: Dr. Bryan Van Scoy (Advisor); Dr. Mark Scott (Committee Member); Dr. Veena Chidurala (Committee Member) Subjects: Computer Engineering; Electrical Engineering
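
    As a minimal illustrative sketch of the per-agent update behind multi-agent deep Q-learning (not the thesis's actual architecture or reward, which are not given here), the following Python/PyTorch snippet assumes a hypothetical setup: one independent Q-network per UAV, a small observation vector, five movement actions, and reward equal to the number of users covered.

        import torch
        import torch.nn as nn

        N_AGENTS, STATE_DIM, N_ACTIONS, GAMMA = 3, 8, 5, 0.95

        # One independent Q-network per UAV; independent Q-learning is a common
        # multi-agent baseline, though the thesis's exact scheme may differ.
        qnets = [nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                               nn.Linear(64, N_ACTIONS)) for _ in range(N_AGENTS)]
        optims = [torch.optim.Adam(q.parameters(), lr=1e-3) for q in qnets]

        def td_update(i, s, a, r, s_next, done):
            """One temporal-difference update for agent i on transition (s, a, r, s')."""
            q = qnets[i](s)[a]
            with torch.no_grad():
                target = r + (0.0 if done else GAMMA * qnets[i](s_next).max().item())
            loss = (q - target) ** 2
            optims[i].zero_grad(); loss.backward(); optims[i].step()
            return loss.item()

        # Example transition for agent 0 with placeholder data: action 2
        # (say, "move north") covered 4 users this step.
        td_update(0, torch.randn(STATE_DIM), a=2, r=4.0,
                  s_next=torch.randn(STATE_DIM), done=False)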
  • 2. Fettes, Quintin. Optimizing Power Consumption, Resource Utilization, and Performance for Manycore Architectures using Reinforcement Learning

    Doctor of Philosophy (PhD), Ohio University, 2022, Electrical Engineering & Computer Science (Engineering and Technology)

    As process technology and transistor size continue to shrink into the sub-nanometer regime, the number of cores that can be integrated on-chip continues to increase exponentially. Due to the power wall, which has stalled increases in processor clock frequency indefinitely, computer architects have realized that improving performance with billions of transistors is possible only with manycores operating at lower clock frequencies. Realizing the full performance and efficiency benefits offered by manycore architectures often requires solving a host of relatively complex resource management issues, which are only present when computation is parallelized. However, the complexity of these resource management tasks increases with the parallelism of the architectures themselves, and traditional, manually engineered algorithms leave an ever-increasing amount of performance and efficiency on the table. With the goal of exploiting maximum parallelism in manycore architectures to improve execution time, reduce energy consumption, and efficiently utilize hardware resources, this dissertation applies reinforcement learning to three critical resource allocation problems within the computing stack: network-on-chip-level dynamic voltage and frequency scaling, thread migration to reduce on-chip data movement, and CPU core allocation to microservices. These data-driven solutions will scale out more easily because they automatically learn resource management policies that would be too time-consuming or impractical for human engineers to design. At the network-on-chip level, I propose training a low-overhead, offline reinforcement learning algorithm to change the frequency of links and routers such that data is sent along the network-on-chip using as little energy as possible without negatively affecting throughput (a tabular sketch of this idea follows this entry). At the chip level, I propose using low-overhead reinforcement learning to learn a thread migration policy. This (open full item for complete abstract)

    Committee: Razvan Bunescu (Advisor); Avinash Karanth (Advisor); Martin Mohlenkamp (Committee Member); Jundong Liu (Committee Member); Wei Lin (Committee Member); David Chelberg (Committee Member) Subjects: Artificial Intelligence; Computer Engineering; Computer Science
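
    As a toy illustration of reinforcement learning for network-on-chip DVFS (a hypothetical abstraction, not the dissertation's algorithm): state = a discretized link-buffer-utilization bucket, action = one of three voltage/frequency levels, and the reward trades energy against lost throughput.

        import numpy as np

        N_STATES, N_ACTIONS = 10, 3            # utilization buckets x V/F levels
        ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
        Q = np.zeros((N_STATES, N_ACTIONS))
        rng = np.random.default_rng(0)

        def choose_level(state):
            """Epsilon-greedy choice of a voltage/frequency level for one link."""
            if rng.random() < EPS:
                return int(rng.integers(N_ACTIONS))
            return int(Q[state].argmax())

        def update(state, level, energy, throughput_loss, next_state):
            """Tabular Q-update; the reward shape is illustrative only."""
            reward = -(energy + 5.0 * throughput_loss)
            td_target = reward + GAMMA * Q[next_state].max()
            Q[state, level] += ALPHA * (td_target - Q[state, level])

    A learned table (or, at scale, a small function approximator) can then be deployed with negligible per-decision overhead, which is the point of the low-overhead, offline formulation described above.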
  • 3. Synakowski, Stuart. Novel Instances and Applications of Shared Knowledge in Computer Vision and Machine Learning Systems

    Doctor of Philosophy, The Ohio State University, 2021, Electrical and Computer Engineering

    The fields of computer vision and machine learning have made enormous strides in developing models which solve tasks only humans have been capable of solving. However, the models constructed to solve these tasks came at an enormous price in terms of computational resources and data collection. Motivated by the unsustainability of continually developing models from scratch to tackle every additional task humans can solve, researchers are interested in efficiently constructing new models for developing solutions to new tasks. The sub-fields of machine learning devoted to this line of research go by many names, including multi-task learning, transfer learning, and few-shot learning. All of these frameworks share the assumption that knowledge should be shared across models to solve a set of tasks. We define knowledge as the set of conditions used to construct a model that solves a given task. By shared knowledge, we are referring to conditions that are consistently used to construct a set of models which solve a set of tasks. In this work, we address two sets of tasks posed in the fields of computer vision and machine learning. While solving each of these sets of tasks, we show how each of our methods exhibits a novel implementation of shared knowledge, with many implications for future work in developing systems that further emulate the abilities of human beings. The first set of tasks falls within the sub-field of action analysis, specifically the recognition of intent. Instead of a data-driven approach, we construct a hand-crafted model to infer between intentional/non-intentional movement using common knowledge concepts known by humans. These knowledge concepts are ultimately used to construct an unsupervised method to infer between intentional and non-intentional movement across levels of abstraction. By levels of abstraction we mean that the model needed to solve the most abstract instances of intent recognition is useful in developing models whi (open full item for complete abstract)

    Committee: Aleix Martinez (Advisor); Abhishek Gupta (Committee Member); Yingbin Liang (Committee Member) Subjects: Artificial Intelligence; Computer Engineering; Computer Science
  • 4. Han, Kun. Supervised Speech Separation And Processing

    Doctor of Philosophy, The Ohio State University, 2014, Computer Science and Engineering

    In real-world environments, speech often occurs simultaneously with acoustic interference, such as background noise or reverberation. The interference usually has adverse effects on speech perception and degrades the performance of many speech applications, including automatic speech recognition and speaker identification. Monaural speech separation and processing aim to separate or analyze speech from interference based on only one recording. Although significant progress has been made on this problem, it remains a widely recognized challenge. Unlike traditional signal processing, this dissertation addresses speech separation and processing using machine learning techniques. We first propose a classification approach to estimate the ideal binary mask (IBM), which is considered a main goal of sound separation in computational auditory scene analysis (CASA). We employ support vector machines (SVMs) to classify time-frequency (T-F) units as either target-dominant or interference-dominant (a toy construction of the IBM follows this entry). A rethresholding method is incorporated to improve classification results and maximize the hit minus false alarm rate. Systematic evaluations show that the proposed approach produces accurate estimated IBMs. In a supervised learning framework, generalization to conditions different from those in training is very important. We then present methods that require only a small training corpus and can generalize to unseen conditions. The system utilizes SVMs to learn classification cues and then employs a rethresholding technique to estimate the IBM. A distribution fitting method is introduced to generalize to unseen signal-to-noise ratio conditions, and voice activity detection-based adaptation is used to generalize to unseen noise conditions. In addition, we propose a novel metric learning method to learn invariant speech features in the kernel space. The learned features encode speech-related information and can generalize to unseen noise (open full item for complete abstract)

    Committee: DeLiang Wang (Advisor); Eric Fosler-Lussier (Committee Member); Mikhail Belkin (Committee Member) Subjects: Computer Science
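
    For concreteness, here is a toy construction of the IBM and a unit-level SVM classifier in Python/scikit-learn. It assumes access to premixed target and noise magnitude spectrograms (available only at training time) and uses a single placeholder feature per T-F unit; the dissertation's CASA features are richer.

        import numpy as np
        from sklearn.svm import SVC

        def ideal_binary_mask(target_mag, noise_mag, lc_db=0.0):
            """IBM = 1 where the local SNR exceeds the local criterion (dB)."""
            snr_db = 10 * np.log10(target_mag**2 / (noise_mag**2 + 1e-12) + 1e-12)
            return (snr_db > lc_db).astype(int)

        rng = np.random.default_rng(0)
        target = rng.random((64, 100)) + 0.1   # 64 bands x 100 frames of |STFT|
        noise = rng.random((64, 100)) + 0.1
        ibm = ideal_binary_mask(target, noise)

        # Label each unit target- vs interference-dominant from a toy feature
        # (log mixture energy); a rethresholding step on the SVM outputs would
        # then be tuned to maximize the hit minus false alarm rate.
        mixture = target + noise
        X = np.log(mixture.reshape(-1, 1) ** 2)
        y = ibm.reshape(-1)
        est_ibm = SVC(kernel="rbf").fit(X, y).predict(X).reshape(ibm.shape)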
  • 5. Javidi, Hamed. DEEP NEURAL NETWORKS FOR COMPLEX DISEASE PREDICTION USING ELECTRONIC HEALTH RECORDS AND GENOMIC DATA

    Doctor of Philosophy in Engineering, Cleveland State University, 2024, Washkewicz College of Engineering

    Leveraging electronic health record data requires sophisticated methods that can optimally process this information to improve clinical decision-making. Artificial Intelligence (AI) promises to process healthcare data faster, at lower cost, and more accurately than conventional processes. Deep learning applied to longitudinal electronic health records (EHR) holds promise for disease prediction, but a systematic comparison of methods has yet to be reported. Despite the promise of this technology, challenges remain in current approaches to disease prediction, and there is an unmet need for a gold-standard disease prediction framework for EHR data that can be reliably applied across many diseases. This research proposes a generalized deep learning approach that is amenable to predicting a vast number of diseases by integrating multiple streams of longitudinal clinical data and genomic features to maximize predictive power over a broad spectrum of diseases. I provide empirical validation of the proposed solution using data from multiple datasets, including comprehensive simulated datasets and a real-world EHR dataset. The ultimate goal of this research is to develop a generalized deep learning approach that is amenable to predicting a vast number of diseases using longitudinal clinical data from the EHR.

    Committee: Daniel Rotroff (Advisor); Donald Allensworth-Davies (Committee Member); Hongkai Yu (Committee Member); Sathish Kumar (Committee Member) Subjects: Computer Science
  • 6. Gulhane, Radha. Accelerated and Memory-Efficient Distributed Deep Learning: Leveraging Quantization, Parallelism Techniques, and Mix-Match Runtime Communication

    Master of Science, The Ohio State University, 2024, Computer Science and Engineering

    In recent years, there has been significant research and development in Deep Learning (DL) due to its efficiency and extensive applicability across diverse domains, including Computer Vision and Large Language Models. However, the architecture of large DL models, containing dense layers, makes them compute- and memory-intensive. Distributed Deep Learning (Distributed DL) has been adopted successfully to accelerate and enable training and inference for large-scale DL models, drawing on various parallel approaches, inference and training techniques, and communication optimization strategies to enhance performance. In this thesis, we focus on accelerated and memory-efficient techniques to optimize distributed training and inference, broadly categorized into three approaches: (1) inference for scaled images using quantization, achieving a speedup of 6.5x with integer-only quantization and 1.58x with half-precision, with less than 1% accuracy degradation (a half-precision sketch follows this entry); (2) MPI4DL, a distributed deep learning parallelism framework encompassing various parallelism techniques, with integral components such as Spatial Parallelism, Bidirectional Parallelism, and Hybrid Parallelism; and (3) communication optimization leveraging MCR-DL, a distributed module for DL frameworks with support for mixed-backend communication, dynamic selection of the optimal backend, and communication optimization enhancements such as compression and tensor fusion.

    Committee: Prof. Dhabaleswar K. Panda (Advisor); Dr. Aamir Shafi (Committee Member); Prof. Hari Subramoni (Committee Member) Subjects: Computer Science
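
    Of the three approaches, the half-precision inference mode is the easiest to sketch. A minimal Python/PyTorch example, assuming a toy MLP rather than the thesis's models (the reported 1.58x speedup comes from GPU tensor cores, not from this CPU illustration):

        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                              nn.Linear(256, 10)).eval()
        x = torch.randn(8, 512)

        with torch.no_grad():
            out_fp32 = model(x)
            out_fp16 = model.half()(x.half())   # cast weights and inputs to float16

        # The numerical gap is tiny, consistent with the <1% accuracy
        # degradation reported above.
        print((out_fp32 - out_fp16.float()).abs().max())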
  • 7. Chen, Yijiang. NOVEL COMPUTATIONAL PATHOLOGY-BASED PROGNOSTIC BIOMARKERS FOR GLOMERULAR DISEASES THROUGH DEEP PHENOTYPING

    Doctor of Philosophy, Case Western Reserve University, 2024, Biomedical Engineering

    In the pursuit of enhancing prognostic methodologies for glomerular diseases, this dissertation introduces a comprehensive approach termed "deep phenotyping," which synergistically integrates artificial intelligence (AI) capabilities with engineered techniques to revolutionize outcome prediction. The primary objective is to harness the potential of AI to perform rigorous quality control on data, identify distinct tissue compartments, and subsequently extract features for downstream analyses that identify image biomarkers strongly associated with disease outcomes. The research commences by establishing a comprehensive machine learning prognosis pipeline rooted in deep phenotyping principles. Tailored specifically to glomerular diseases and utilizing digital kidney pathology images, this pipeline addresses the need for robust quality control in deep learning processes. It encompasses a multifaceted exploration: the impact of image compression artifacts on deep learning performance, the revelation of batch effects caused by digital pathology-related artifacts in multi-site repositories, and the introduction of a novel computer-aided quantitative quality control pipeline. Following this foundation, the dissertation advances to the creation and evaluation of intricate deep learning segmentation models. These models are developed to accommodate a range of kidney histologic primitives and stains commonly used in kidney pathology. The refined segmentation toolkit leads to the extraction of features, with a particular focus on peritubular capillaries (PTCs) due to their implications in kidney disease outcomes. This hierarchical feature extraction spans from fundamental attributes to complex aspects of PTCs. The subsequent sections investigate the independent prognostic value of cortical PTCs and their interplay with interstitial fibrosis and tubular atrophy (IFTA). Machine learning prognostic models are built upon this PTC (open full item for complete abstract)

    Committee: Anant Madabhushi (Advisor); Laura Barisoni (Committee Member); David Wilson (Committee Chair); Satish Viswanath (Committee Member); John Sedor (Committee Member); Andrew Rollins (Committee Member) Subjects: Bioinformatics; Biomedical Engineering; Biomedical Research; Computer Science; Medical Imaging
  • 8. Rahman, M M Shaifur. Empirical Analysis of Learnable Image Resizer for Large-Scale Medical Classification and Segmentation

    Master of Science in Computer Engineering, University of Dayton, 2023, Electrical and Computer Engineering

    Deep Convolutional Neural Networks (DCNNs) demonstrate state-of-the-art performance in computer vision and medical imaging tasks. However, handling large-scale images remains challenging and usually relies on resizing and patching methods to embed images in a lower-dimensional space. Recently, the Learnable Resizer (LR) has been proposed to analyze large-scale images for computer vision tasks. This study proposes two DCNN models for classification and segmentation, constructed by combining an LR with successful classification and segmentation architectures (a sketch of the resizer front-end follows this entry). The performance of the proposed models is evaluated on Diabetic Retinopathy (DR) analysis and skin cancer segmentation tasks, where they outperform existing methods. For classification, the proposed architectures achieve a 5.34% improvement in accuracy compared to ResNet50; for segmentation, they gain around 0.62% in accuracy over the base model and 0.28% in Intersection-over-Union (IoU) over state-of-the-art performance. The proposed model with the resizer network enhances the capability of the existing R2U-Net for medical image segmentation tasks. Moreover, the proposed methods offer a significant advantage in learning from few samples. The experimental results reveal that the proposed models outperform the current approaches.

    Committee: Tarek M Taha (Committee Chair); Eric Balster (Committee Member); Chris Yakopcic (Committee Member) Subjects: Artificial Intelligence; Biomedical Research; Computer Engineering; Computer Science; Engineering; Medical Imaging
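
    A learnable resizer front-end can be sketched as a plain bilinear resize plus a learned residual correction, which is one common design; the layer sizes here are illustrative and not taken from the thesis.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class LearnableResizer(nn.Module):
            """Downsamples a large image to a fixed size with a learned residual."""
            def __init__(self, out_size=(224, 224), ch=16):
                super().__init__()
                self.out_size = out_size
                self.conv_in = nn.Conv2d(3, ch, 3, padding=1)
                self.conv_out = nn.Conv2d(ch, 3, 3, padding=1)

            def forward(self, x):
                base = F.interpolate(x, self.out_size, mode="bilinear",
                                     align_corners=False)
                feat = F.interpolate(torch.relu(self.conv_in(x)), self.out_size,
                                     mode="bilinear", align_corners=False)
                return base + self.conv_out(feat)  # learned residual over plain resize

        resizer = LearnableResizer()
        large = torch.randn(1, 3, 1024, 1024)      # large-scale input image
        small = resizer(large)                     # (1, 3, 224, 224), ready for a
                                                   # classifier or R2U-Net-style segmenter

    Because the resizer is trained jointly with the downstream network, the resize operation adapts to the task instead of discarding information uniformly, which is the intuition behind the gains reported above.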
  • 9. Taco Lopez, John. A novel technique for multivariate time series classification using deep forest algorithm

    MS, University of Cincinnati, 2023, Engineering and Applied Science: Mechanical Engineering

    Data-driven health assessment approaches for anomaly detection and fault diagnosis rely on domain knowledge, algorithm performance, and computational power. Different techniques have been proposed to relax these requirements given the variety of problems in industry. Deep learning, given its adaptability for identifying patterns from data in many applications, is one of the most feasible approaches and exhibits great prediction capability in a variety of fields. However, deep learning algorithms require long training times, complex hyperparameter optimization, and high computational power. Classical machine learning techniques, meanwhile, show high performance but depend on domain knowledge and feature engineering; domain knowledge is not always available in industry, and it can bias the algorithm due to human preferences, experiences, and so on. Thus, in this thesis, given the challenges of current state-of-the-art algorithms, an alternative method is proposed for multivariate time series classification. Using the deep forest algorithm, which performs layer-by-layer learning inspired by deep neural networks (sketched after this entry) and uses only raw data, the proposed methodology addresses the stated challenges of deep learning and machine learning. Three main steps are required: data preparation, deep forest modeling, and prediction. The technique works with sequential raw data in the time, frequency, and time & frequency domains without any feature engineering. Two case studies are presented to validate the proposed method: fault diagnosis of rock drills and anomaly detection on traumatic brain injury data. The proposed method is compared with deep learning algorithms in terms of accuracy, training time, hyperparameter sensitivity, and robustness. Deep learning is used as the benchmark in this thesis since it works with raw data as input, like the proposed method does. The results s (open full item for complete abstract)

    Committee: Jay Lee Ph.D. (Committee Chair); Jing Shi Ph.D. (Committee Member); Thomas Richard Huston Ph.D. (Committee Member) Subjects: Mechanical Engineering
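
    The layer-by-layer idea behind deep forest can be sketched in a few lines of Python/scikit-learn: each level's out-of-fold class probabilities are appended to the raw features for the next level. This is a two-level toy, not the full cascade with multi-grained scanning, and the data is a synthetic stand-in for flattened multivariate time-series windows.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.standard_normal((300, 20))           # flattened sensor windows
        y = (X[:, :5].sum(axis=1) > 0).astype(int)   # toy labels

        def cascade_level(X_in, y):
            """One deep-forest level: forests emit out-of-fold class probabilities."""
            forests = [RandomForestClassifier(100, random_state=0),
                       ExtraTreesClassifier(100, random_state=0)]
            probas = [cross_val_predict(f, X_in, y, cv=3, method="predict_proba")
                      for f in forests]
            return np.hstack([X_in] + probas)        # augmented features for next level

        X1 = cascade_level(X, y)
        X2 = cascade_level(X1, y)
        final = RandomForestClassifier(100, random_state=0).fit(X2, y)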
  • 10. Agarwal, Tushar. Improving Deep Representations by Incorporating Domain Knowledge and Modularization for Synthetic Aperture Radar and Physiological Data

    Doctor of Philosophy, The Ohio State University, 2022, Electrical and Computer Engineering

    Machine Learning (ML) using Artificial Neural Networks (ANNs), referred to as Deep Learning (DL), is a very popular and powerful method of statistical inference. A primary advantage of deep learning has been the automatic learning of informative features (which encode the data; referred to as deep representations henceforth) by gradient-descent optimization of an objective function. While DL is applicable to problem domains where hand-crafted features are not readily available, its performance depends critically on other factors like dataset size and model architecture. Despite recent advances in the field, the question of how to modify the DL framework to incorporate domain knowledge or to disentangle factors of variation warrants more research. Until recently, most popular works in the DL literature have primarily employed the inductive bias of architectures (e.g., translational invariance in convolutional neural nets) and relied on the availability of large labeled datasets for improved representation learning. Unfortunately, curating such large datasets is costly and not practical for many application areas. In this dissertation, we study methods to improve learned representations by incorporating domain knowledge into the learning process and by disentangling factors of variation. First, we present a sparse-modeling-based data augmentation method for tomographic images and use it to incorporate domain knowledge of Synthetic Aperture Radar (SAR) target phenomenology into deep representations. We validate the improvements in learned representations by using them for a benchmark classification problem of Automatic Target Recognition (ATR), where we establish a new state-of-the-art on subsampled datasets. Second, we propose a DL-based hierarchical modeling strategy for a physiological signal generation process, which in turn can be used for data augmentation. Based on the physiology of cardiovascular system function, we propose a (open full item for complete abstract)

    Committee: Emre Ertin (Advisor); Lee Potter (Committee Member); Kiryung Lee (Committee Member) Subjects: Artificial Intelligence; Biomedical Engineering; Computer Engineering; Computer Science; Electrical Engineering; Health Care; Remote Sensing; Scientific Imaging; Statistics
  • 11. Adams, William. Analysis of Robustness in Lane Detection using Machine Learning Models

    Master of Science (MS), Ohio University, 2015, Electrical Engineering (Engineering and Technology)

    An appropriate approach to incorporating robustness into lane detection algorithms benefits autonomous vehicle applications and other problems relying on fusion methods. While rigorous empirical methods have traditionally been developed to mitigate lane detection error, an evidence-based, model-driven approach yields robust results using multispectral video as input to various machine learning models. Branching beyond the few network structures considered for image understanding applications, deep networks with unique optimization functions are demonstrably more robust while making fewer assumptions. This work adopts a simple framework for data collection: retrieving image patches for comparison via regression through a learning model. Along a horizontal scanline, the most probable sample is selected to retrain the network. Models include simple regressors, various autoencoders, and a few specialized deep networks. Samples are compared by robustness, and the results favor deep and highly specialized network structures.

    Committee: Mehmet Celenk (Advisor); Jeffrey Dill (Committee Member); Maarten Uijt de Haag (Committee Member); Rida Benhaddou (Committee Member) Subjects: Artificial Intelligence; Automotive Engineering; Computer Science; Engineering
  • 12. Essig, David. Comparison of a Transformer-Based Single-Image Super-Resolution Model for the CONNECT Compression Framework

    Master of Science in Computer Engineering, University of Dayton, 2024, Electrical and Computer Engineering

    Single-image super-resolution (SISR) is the task of increasing an image's resolution using one lower-resolution image. It has been used in many areas to capture finer details in medical imagery, images with distant objects, and compressed images. Compressing images saves computational resources and bandwidth, and Deep Learning (DL) techniques for image compression and SISR have become abundant as they have yielded promising results, such as the Convolutional Neural Network for Enhanced Compression Techniques (CONNECT) compression framework [1][2], and the SwinIR [3], multi-scale attention network [4], and Real-ESRGAN [5] super-resolution models. In this thesis, these super-resolution models are analyzed and compared using previous work and direct testing on the Set14 dataset, and one is selected for the backend of CONNECT as an alternative compression framework that could yield higher compression ratios while maintaining or improving reconstructed image quality. This thesis thus attempts to improve the existing CONNECT compression framework by analyzing and selecting a DL-based super-resolution model to reconstruct the compressed images after they have been fed through CONNECT. The resulting compression methods are then compared using widely used image quality metrics and the compression ratio metric (a sketch of this evaluation follows this entry).

    Committee: Bradley Ratliff (Committee Chair); Barath Narayanan (Committee Member); Russell Hardie (Committee Member) Subjects: Computer Engineering; Computer Science
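
    As a sketch of the evaluation loop implied above, the following Python computes PSNR (one widely used image quality metric) and a compression ratio; the arrays are toy stand-ins for Set14 images reconstructed by CONNECT plus a super-resolution backend.

        import numpy as np

        def psnr(ref, test, peak=255.0):
            """Peak signal-to-noise ratio in dB between two images in [0, 255]."""
            mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
            return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

        def compression_ratio(original_bytes, compressed_bytes):
            return original_bytes / compressed_bytes

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (256, 256, 3))          # "ground truth" image
        recon = np.clip(ref + rng.normal(0, 3, ref.shape), 0, 255)
        print(f"PSNR: {psnr(ref, recon):.2f} dB, "
              f"ratio: {compression_ratio(196608, 16384):.1f}x")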
  • 13. Mohammed, Sarfaraz Ahmed. Learning Effective Features and Inferences from Healthcare Data

    PhD, University of Cincinnati, 2024, Engineering and Applied Science: Computer Science and Engineering

    The field of medicine has witnessed remarkable progress in recent years, largely owing to technological advancements in machine learning and deep learning frameworks. The healthcare industry has been a significant contributor to this massive influx of data, generating approximately 30% of the world's data volume. Data mining has been a crucial tool for discovering hidden patterns in data and extracting valuable insights; effective feature learning, in turn, plays an important role in the performance of machine learning models, driving higher predictive accuracy and learning efficiency. This research aims to understand and explore feature selection techniques for both clinical data and medical image analysis, and to attain comprehensive insights into image understanding (IU) by focusing on segmentation methods for both object recognition and scene classification. The first part of this research studies two feature selection approaches, namely principal component analysis (PCA) and particle swarm optimization (PSO), on clinical data from the Wisconsin Diagnostic Breast Cancer (WDBC) dataset, extracting the top features and evaluating predictive performance across five of the most widely used supervised classification algorithms (a PCA-based sketch follows this entry). The study yields significant insights into the effectiveness and efficiency of various classification algorithms for predicting breast cancer type, and it emphasizes the critical role of feature selection in enhancing classification accuracy; in the context of PCA, it is imperative to understand how features may positively or negatively influence the principal components. The second part of this research delves into IU, as it plays a pivotal role in various computer vision tasks, such as extraction of essential features from images, object detection, and segmentation. At a higher level of granularity, both semantic and instance segmentation are (open full item for complete abstract)

    Committee: Anca Ralescu Ph.D. (Committee Chair); Wen-Ben Jone Ph.D. (Committee Member); Chong Yu Ph.D. (Committee Member); Dan Ralescu Ph.D. (Committee Member); Boyang Wang Ph.D. (Committee Member) Subjects: Computer Science
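
    The PCA branch of the first study can be sketched directly in Python/scikit-learn, since sklearn's load_breast_cancer is the WDBC dataset; the component count and the single classifier are illustrative choices (the PSO branch and the five-classifier comparison are omitted).

        from sklearn.datasets import load_breast_cancer
        from sklearn.decomposition import PCA
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = load_breast_cancer(return_X_y=True)     # WDBC: 569 cases, 30 features
        pipe = make_pipeline(StandardScaler(),         # PCA is scale-sensitive
                             PCA(n_components=10),     # keep top 10 components
                             RandomForestClassifier(random_state=0))
        print(cross_val_score(pipe, X, y, cv=5).mean())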
  • 14. Bensaid, Mohsine. Transformative Teaching: A Self-Study of 3S Understanding from Theory to Practice

    PHD, Kent State University, 2024, College of Education, Health and Human Services / School of Teaching, Learning and Curriculum Studies

    The purpose of this self-study was to examine my enactment of 3S Understanding, a holistic, democratic and inquiry-based curriculum theory, in a university-based, English-to-speakers-of-other-languages (ESOL) writing course grounded in Subject Learning, Self Learning, and Social Learning. Through disciplined reflective inquiry and collaboration with critical friends, this study set out to unpack the complexities of course planning and teaching. Data collection and analysis involved a structured, five-stage approach to identify themes, compare relationships, and interpret findings within the 3S framework. The Subject Learning findings stressed identifying the “wiggle room” for teaching artistry to address students' learning challenges and advocated for a multimodal approach to accommodate diverse learning needs. These findings also highlighted the significance of reflective inquiry in improving pedagogical practices. The findings on Self Learning emphasized fostering self-awareness and autonomy among students through reflective practices and empowering activities. They also highlighted the value of incorporating personal stories into teaching to strengthen teacher-student connections. The Social Learning findings foregrounded the importance of a collaborative, authentic, and critical thinking-focused educational environment to deepen learning and prepare students for societal participation. This study emphasizes a holistic ESOL pedagogical shift, urging teachers to integrate comprehensive, reflective, and collaborative approaches. It recommends inquiry-based, reflective practices for teacher educators, and supportive, diverse teaching environments by administrators. Learners are encouraged towards active, self-reflective engagement, connecting learning to real-world relevance. Such an approach aims to enhance language proficiency and democratic participation, fostering a deeper understanding and engagement in ESOL education across various educational roles.

    Committee: William Bintz (Committee Chair); Lori Wilfong (Committee Member); Alicia Crowe (Committee Member) Subjects: Adult Education; Composition; Curricula; Curriculum Development; Education; Educational Theory; Language; Multicultural Education
  • 15. Adnan, Mian. Refined Neural Network for Time Series Predictions

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2024, Statistics

    Deep learning with neural networks has been penetrating almost every corner of data analysis. With advances in computing power and speed, adding more layers to a neural network has become common practice for improving prediction accuracy. However, over-depleting the information in the training dataset may carry data noise into the learning process and result in over-fitting errors. Neural networks have been used to predict future time series data, and several authors have claimed that neural networks (recurrent neural networks) can predict time series data, although time series models have also long been used for this purpose. This dissertation is thus motivated to investigate the prediction performance of neural networks versus the conventional inference methods of time series analysis. After introducing basic concepts and theoretical background on neural networks and time series prediction in Chapter 1, Chapter 2 analyzes the fundamental structure of time series, along with estimation, hypothesis testing, and prediction methods. Chapter 3 discusses details of the computing algorithms and procedures in neural networks, with theoretical adjustments for time series prediction. In conjunction with the terminologies and methodologies of the previous chapters, Chapter 4 directly compares the prediction results of neural networks and conventional time series models under the squared error function (a toy version of this comparison follows this entry). In terms of methodology assessment, the evaluation criterion plays a critical role. It has been experimentally observed that predictions from time series models are better than those from neural network models, both computationally and theoretically. The conditions for the better performance of time series models over neural network models have been discovered. Theorems have also been pro (open full item for complete abstract)

    Committee: John Chen Ph.D. (Committee Chair); Hanfeng Chen Ph.D. (Committee Member); Umar Islambekov Ph.D. (Committee Member); Brigid Burke Ph.D. (Other) Subjects: Applied Mathematics; Artificial Intelligence; Behavioral Sciences; Computer Science; Education Finance; Finance; Information Systems; Operations Research; Statistics
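
    A toy version of the Chapter 4 comparison under the squared error criterion, on synthetic data where the true process is AR(1), so the time series model should win, consistent with the dissertation's finding (the models and data here are illustrative, not the dissertation's):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        y = np.zeros(500)
        for t in range(1, 500):                # AR(1): y_t = 0.8*y_{t-1} + noise
            y[t] = 0.8 * y[t - 1] + rng.normal()

        train, test = y[:400], y[400:]
        X_tr, y_tr = train[:-1].reshape(-1, 1), train[1:]
        X_te, y_te = test[:-1].reshape(-1, 1), test[1:]

        # Least-squares AR(1) coefficient vs. a small neural network.
        phi = (X_tr[:, 0] @ y_tr) / (X_tr[:, 0] @ X_tr[:, 0])
        ar_mse = np.mean((phi * X_te[:, 0] - y_te) ** 2)
        nn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X_tr, y_tr)
        nn_mse = np.mean((nn.predict(X_te) - y_te) ** 2)
        print(f"AR(1) MSE: {ar_mse:.3f}  vs  MLP MSE: {nn_mse:.3f}")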
  • 16. Bhatta, Niraj Prasad. ML-Assisted Side Channel Security Approaches for Hardware Trojan Detection and PUF Modeling Attacks

    Master of Science in Computer Engineering (MSCE), Wright State University, 2024, Computer Engineering

    Hardware components are becoming increasingly prone to threats as technology advances. Malicious modifications to such components, known as hardware Trojans, are on the rise. Traditional approaches rely on functional assessments and are not sufficient to detect such malicious actions. Machine learning (ML)-assisted techniques play a vital role in Trojan detection and its improvement. Our novel approach using various ML models improves hardware Trojan identification through power-signal side-channel analysis (a toy version of this classification step follows this entry), bringing a paradigm shift in Trojan detection for integrated circuits (ICs). Our further analysis of hardware authentication extends to PUFs (Physical Unclonable Functions); arbiter PUFs were chosen for this purpose. These are also vulnerable to ML attacks: advanced ML-assisted techniques predict the responses and perform attacks that compromise the integrity of PUFs. Our study helps improve ML-assisted hardware authentication against such ML attacks, and it also addresses the defense side by adding noise and applying the same ML-assisted attack model. Detection of Trojans in hardware components is achieved by implementing machine learning techniques; several ML models were evaluated, among which the Random Forest classifier (RFC) and a deep neural network show the highest accuracy. This analysis plays a vital role in the security of hardware components and sets a benchmark for the overall security aspects of hardware. The feature extraction process plays a major role in improving the accuracy and reliability of hardware Trojan classification. Overall, this study brings significant improvement to the field of hardware security. Our study shows that the RFC performs best in hardware classification, with an average precision of 98.33% across all chips, and deep learning techniques give 93.16% prec (open full item for complete abstract)

    Committee: Fathi Amsaad Ph.D. (Advisor); Kenneth Hopkinson Ph.D. (Committee Member); Wen Zhang Ph.D. (Committee Member) Subjects: Computer Engineering; Computer Science; Engineering; Information Technology; Technical Communication; Technology
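
    A toy version of the side-channel classification step in Python/scikit-learn: a Random Forest separates Trojan-free from Trojan-infected chips using power traces. The traces and the injected anomaly are synthetic stand-ins for the thesis's measurements, so the accuracy printed here says nothing about the reported 98.33%.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        clean = rng.normal(1.0, 0.05, (200, 256))    # 200 traces, 256 samples each
        trojan = clean + rng.normal(0.02, 0.01, clean.shape)  # extra switching power
        X = np.vstack([clean, trojan])
        y = np.array([0] * 200 + [1] * 200)          # 0 = clean, 1 = Trojan

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0, stratify=y)
        rfc = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print(f"held-out accuracy: {rfc.score(X_te, y_te):.3f}")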
  • 17. Siddiqui, Nimra. Dr. Lego: AI-Driven Assessment Instrument for Analyzing Block-Based Codes

    Master of Computing and Information Systems, Youngstown State University, 2024, Department of Computer Science and Information Systems

    The field of coding education is rapidly evolving, with emerging technologies playing a pivotal role in transforming traditional learning methodologies. This thesis introduces Dr. Lego, an innovative framework designed to revolutionize the assessment and understanding of block-based coding through the integration of sophisticated deep learning models. Dr. Lego combines cutting-edge technologies such as MobileNetV3 (Howard, 2019) for visual recognition, and BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019) for natural language processing, to offer a comprehensive approach to evaluating coding proficiency. The research methodology involves the meticulous curation of a diverse dataset comprising projects from the LEGO SPIKE app (LEGO Education, 2022), ensuring that the models are subjected to a broad range of coding scenarios. Leveraging the dynamic educational environment provided by the LEGO SPIKE app, Dr. Lego empowers users to design and implement various coding projects, fostering hands-on learning experiences. This thesis delves into methodologies aimed at enhancing coding education by exploring model integration, data generation, and fine-tuning of pre-trained models. Dr. Lego not only evaluates coding proficiency but also provides cohesive and insightful feedback, enhancing the learning experience for users. The adaptability of the framework highlights its potential to shape the future of coding education, paving the way for a new era of interactive and engaging learning experiences.

    Committee: Abdu Arslanyilmaz PhD (Advisor); Feng Yu PhD (Committee Member); Carrie Jackson EdD, BCBA (Committee Member) Subjects: Computer Science; Engineering; Information Systems; Robotics; Teaching
  • 18. Monabbati, Shayan. AI-DRIVEN PIPELINES FOR IMPROVING CLINICAL UTILITY ACROSS CYTOPATHOLOGY & HISTOPATHOLOGY

    Doctor of Philosophy, Case Western Reserve University, 2024, EECS - System and Control Engineering

    This dissertation investigates the application of digital pathology to developing diagnostic and prognostic tools for two diseases: biliary tract adenocarcinoma and papillary thyroid carcinoma (PTC). We explore the realms of cytopathology, which studies exclusively the morphology of epithelial cells, and histopathology, which considers the entire tissue region. Bile duct brush specimens are difficult to interpret, as they often present inflammatory and reactive backgrounds due to the local effects of stricture, atypical reactive changes, or previously installed stents, and often have low to intermediate cellularity. As a result, diagnosis of biliary adenocarcinomas is challenging and often suffers from large interobserver variability and low sensitivity. In this dissertation, we first used computational image analysis to evaluate the role of nuclear morphological and texture features of epithelial cell clusters in predicting the presence of biliary tract adenocarcinoma on digitized brush cytology specimens. With a machine learning approach, we improved the sensitivity of diagnosis from 46% to 68% when atypical cases were included and treated as nonmalignant false negatives; the specificity of our model was 100% within the atypical category. PTC is the most prevalent form of thyroid cancer, with the classical form and the follicular variant representing the majority of cases. Despite generally favorable prognoses, approximately 10% of patients experience recurrence after surgery and radioactive iodine therapy. Attempts to stratify risk of recurrence have relied on gene expression-based prognostic and predictive signatures focused on mutations of well-known driver genes, while hallmarks of tumor morphology have been ignored. In this dissertation, we introduce a new computational pathology approach to develop prognostic gene signatures for thyroid cancer that is informed by quantitative features of tumor and immune cell morphology. We show that integrating gene express (open full item for complete abstract)

    Committee: Kenneth Loparo (Committee Chair); Anant Madabhushi (Advisor); Satish Viswanath (Committee Member); Sylvia Asa (Committee Member); Aparna Harbhajanka (Committee Member) Subjects: Artificial Intelligence; Biomedical Engineering; Biomedical Research; Biostatistics; Computer Engineering; Medical Imaging; Oncology; Systems Design
  • 19. Lenau, Ashley. Physics-Informed Deep Learning Networks for Increased Accuracy and Reliability of Material Simulations

    Doctor of Philosophy, The Ohio State University, 2024, Materials Science and Engineering

    Simulations of materials are a cost-efficient way to study materials that aid in experimental planning and material design. For example, stress and plasticity analysis is readily performed by numerically-based simulations, like finite element or spectral based methods, and is typically faster than performing the experiment itself. However, slow computation times of more complex simulations limit their use in the design space. Deep learning (DL) networks have been shown to be orders of magnitude faster than numerically-based simulations but lack numerical accuracy by comparison. Furthermore, large datasets are required to train a DL network, and collecting a sufficient amount is a difficult task in materials science. Incorporating the physical laws of the material system within the DL model has been shown to create a more physically accurate network but can be difficult to implement. In this thesis, DL networks are physically informed through the data, network architecture, or loss function to create a model that accurately reflects the underlying physics of the material system (a toy equilibrium-penalty loss follows this entry). First, a network is proposed to study the feasibility of 3D grain reconstruction from mid-field high energy diffraction reflections. Each reflection corresponds to its own subnetwork, tailoring the weights to a specific reflection. In a different network, a U-Net is used to simulate the micromechanical evolution of a 3D polycrystal at small strain increments and predict the full-field orientation and elastic strain. The network is physically informed about the Von Mises stress relationship from the predicted elastic strain tensors. The training requirements of networks having physics-informed characteristics are studied in more depth using stress field prediction as a case study. A Pix2Pix model is used to translate a two-phase composite having high elastic contrast to the corresponding stress fields. Several different physics-based regularization methods are implemented to enforce stress equilibrium in t (open full item for complete abstract)

    Committee: Stephen Niezgoda (Advisor); Dennis Dimiduk (Committee Member); Reeju Pokharel (Committee Member); Aeriel Leonard (Committee Member); Michael Groeber (Committee Member) Subjects: Materials Science
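
    One way to physically inform a loss function, as this dissertation does for stress fields, is to penalize violation of stress equilibrium, div(sigma) = 0, alongside the data term. A toy PyTorch sketch on a 2D grid (the network, weighting, and discretization are placeholders, not the dissertation's):

        import torch

        def equilibrium_residual(sigma):
            """sigma: (B, 3, H, W) with channels (sxx, syy, sxy); unit grid spacing."""
            sxx, syy, sxy = sigma[:, 0], sigma[:, 1], sigma[:, 2]
            dx = lambda f: f[:, :, 1:] - f[:, :, :-1]     # d/dx along width
            dy = lambda f: f[:, 1:, :] - f[:, :-1, :]     # d/dy along height
            rx = dx(sxx)[:, :-1, :] + dy(sxy)[:, :, :-1]  # dsxx/dx + dsxy/dy
            ry = dx(sxy)[:, :-1, :] + dy(syy)[:, :, :-1]  # dsxy/dx + dsyy/dy
            return (rx**2 + ry**2).mean()

        def physics_informed_loss(pred, target, lam=0.1):
            data = torch.nn.functional.mse_loss(pred, target)
            return data + lam * equilibrium_residual(pred)

        pred = torch.randn(2, 3, 32, 32, requires_grad=True)  # e.g. Pix2Pix output
        target = torch.randn(2, 3, 32, 32)                    # solver ground truth
        physics_informed_loss(pred, target).backward()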
  • 20. Santhis, Ishaan. Detecting Deepfakes: Fine-tuning VGG16 for Improved Accuracy

    MS, University of Cincinnati, 2024, Engineering and Applied Science: Computer Science

    The continuous threat of deepfakes, cleverly crafted deceptions masquerading as reality, necessitates cutting-edge detection methods. While there are many methods available, this project dives into the realm of fine-tuning the VGG16 convolutional neural network (CNN) and synergistically integrating Natural Language Processing (NLP) to unveil deepfake images effectively. Using the Keras API and machine learning principles, we empower the model to discern authentic images from their manipulated counterparts, drawing inspiration from real-world cases like the notorious Jennifer Aniston deepfake scam. First, we establish a robust foundation for feature extraction by pre-training the VGG16 architecture on vast image datasets. Subsequently, we meticulously curate a comprehensive deepfake image dataset encompassing diverse manipulation techniques and real-world scenarios. This tailor-made dataset fuels the fine-tuning of specific VGG16 layers (sketched after this entry), carefully crafting a model with exceptional generalizability. The project rigorously evaluates the fine-tuned VGG16 model's performance on unseen deepfakes through meticulous metrics, including accuracy and loss during detection. We delve into a comprehensive comparison, carefully analyzing these results against the baseline performance of a model I created from scratch, the untrained VGG16, and the VGG16 after I applied transfer learning. This project aspires to make a significant contribution to the ongoing battle against deepfakes by showcasing the remarkable potential of fine-tuning VGG16 to achieve superior detection accuracy. By incorporating real-world examples and harnessing the synergistic power of CNNs, we strive to develop a robust and adaptable solution capable of combating the ever-evolving landscape of deepfakes. Ultimately, this endeavor aims to safeguard online safety and trust, mitigating the detrimental effects of deepfakes on (open full item for complete abstract)

    Committee: Yizong Cheng Ph.D. (Committee Chair); William Hawkins Ph.D. (Committee Member); Jun Bai Ph.D. (Committee Member) Subjects: Computer Science
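
    A sketch of the fine-tuning recipe described above, in Keras (the abstract names Keras and VGG16; the head, the choice of unfrozen block, and the hyperparameters here are illustrative assumptions):

        import tensorflow as tf
        from tensorflow.keras.applications import VGG16

        base = VGG16(weights="imagenet", include_top=False,
                     input_shape=(224, 224, 3))
        base.trainable = True
        for layer in base.layers:
            # Fine-tune only the last convolutional block; freeze the rest.
            layer.trainable = layer.name.startswith("block5")

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(1, activation="sigmoid"),  # real (0) vs fake (1)
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # low LR when fine-tuning
                      loss="binary_crossentropy", metrics=["accuracy"])
        # model.fit(train_ds, validation_data=val_ds, epochs=10)  # curated deepfake data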