Search Results

(Total results 27)

Search Report

  • 1. de Moura Souza, Diego Optimization and Control of Vapor Compression Systems through Data-Enabled Modeling

    Master of Science, The Ohio State University, 2024, Mechanical Engineering

    Cooling indoor spaces is energy intensive but essential to ensure occupant comfort, regardless of environmental conditions. Therefore, increasing the energy efficiency of cooling systems can yield significant energy savings. This thesis presents data-enabled hybrid modeling approaches to optimize cooling systems in both commercial buildings and light-duty vehicles, aiming to enhance energy efficiency through both static and dynamic optimization strategies. First, a static optimization strategy is developed for the operation of individual chillers in a central chiller plant, with the goal of reducing power demand while meeting the cooling load. This is achieved by developing a hybrid model that combines energy-based and data-driven methods to describe the energy demand of the plant under varying cooling loads and environmental conditions. The model is calibrated and validated using operational data from The Ohio State University. The validated model is then integrated into a particle swarm optimization algorithm to determine the optimal load distribution for each chiller under different weather and operational conditions. Simulation results for a year of operation in Central Ohio show that the optimized strategy achieves, on average, a 4% reduction in daily peak power consumption during four mild weather months, with reductions reaching up to 12% in certain instances. Second, a dynamic optimization strategy is presented to improve the energy efficiency of a light-duty vehicle air conditioning system. By employing data-driven Koopman operator theory to characterize the non-linear dynamics of the system, a linear Model Predictive Control problem is formulated within the Koopman subspace. The computational efficiency of this quadratic programming problem is demonstrated by average computation times ranging from 2 to 50 milliseconds, depending on the lengths of the control and prediction horizons. 
When tested across four different driving routes, th (open full item for complete abstract)

    Committee: Marcello Canova (Committee Member); Stephanie Stockar (Advisor) Subjects: Automotive Engineering; Mechanical Engineering
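The chiller load-dispatch step in this abstract (a validated plant model driven by particle swarm optimization) can be sketched in miniature. The quadratic per-chiller power curves, chiller count, and PSO coefficients below are illustrative assumptions, not the thesis's calibrated hybrid model:

```python
import random

# Hypothetical quadratic power curves for three chillers: power (kW) as a
# function of assigned cooling load. Coefficients are illustrative only.
CHILLERS = [(0.02, 0.5, 40.0), (0.015, 0.7, 55.0), (0.025, 0.4, 30.0)]

def plant_power(loads):
    """Total plant power (kW) for a given load split across the chillers."""
    return sum(a * x**2 + b * x + c for (a, b, c), x in zip(CHILLERS, loads))

def pso_load_split(total_load, n_particles=30, iters=200, seed=0):
    """Minimize plant power subject to the loads summing to the cooling demand.

    Particles carry two free loads; the third chiller takes the remainder,
    so the equality constraint holds by construction (infeasible splits with
    a negative remainder score as infinity).
    """
    rng = random.Random(seed)

    def clamp(x):  # keep each free load within [0, total_load]
        return max(0.0, min(total_load, x))

    def decode(p):
        x0, x1 = clamp(p[0]), clamp(p[1])
        x2 = total_load - x0 - x1
        return None if x2 < 0 else (x0, x1, x2)

    def score(p):
        d = decode(p)
        return plant_power(d) if d else float("inf")

    swarm = [[rng.uniform(0, total_load / 2) for _ in range(2)]
             for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=score)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive + social terms (standard PSO update)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - p[d])
                             + 1.5 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if score(p) < score(pbest[i]):
                pbest[i] = p[:]
            if score(p) < score(gbest):
                gbest = p[:]
    best = decode(gbest)
    return best, plant_power(best)
```

With these toy curves, the optimized split draws less power than naively loading each chiller equally, which is the effect the thesis reports at plant scale.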
  • 2. David, Deepak Antony Enhancing Spatiotemporal PDE-based Epidemic Model Analysis using Advanced Computational Techniques

    MS, University of Cincinnati, 2024, Engineering and Applied Science: Mechanical Engineering

    The COVID-19 pandemic highlighted the need for improved and precise prediction of the spatiotemporal trends of epidemic transmission. An optimized epidemic model is crucial for effectively forecasting the flow of infection. Optimized model parameters provide valuable insights into the dynamics of infection transmission, and this degree of tuning helps health officials and policymakers make data-driven decisions regarding disease control strategies, allocation of resources, and healthcare planning. This highlights the need for reliable optimization strategies for epidemic models. Similarly, the basic and effective reproduction numbers (R0, Re) are quantitative metrics widely used for estimating the rate at which infection propagates. The limitations of existing techniques for estimating R0 and Re point to the need for novel approaches that accurately estimate them from the available data. The initial part of this study presents the development of a custom genetic algorithm (GA) capable of efficiently searching for the parameters of an epidemic model in any specified geographical region and time period. Following this, a novel computational framework for predicting the reproduction numbers from true infection data is presented. The framework is derived from a reaction-diffusion-based PDE epidemic model and involves fundamental mathematical derivations for obtaining their values. The PDE model is optimized using the proposed GA, and the model output using the optimized parameters is found to correspond with the ground-truth COVID-19 data of Hamilton County, Ohio. Subsequently, the established framework for calculating the reproduction numbers was applied to the optimized model, and its predictions are found to correlate with the true incidence data. 
In addition, these predictions are compared with a commonly used retrospective method (Wallinga-Teunis) and are found to be in harmony thereby est (open full item for complete abstract)

    Committee: Manish Kumar Ph.D. (Committee Chair); Subramanian Ramakrishnan Ph.D. (Committee Member); Shelley Ehrlich M.D. (Committee Member); Derek Wolf Ph.D. (Committee Member) Subjects: Mechanical Engineering
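The GA-based parameter search in this abstract can be illustrated with a dependency-free sketch. A discrete-time, nonspatial SIR model stands in for the thesis's reaction-diffusion PDE (which is not reproduced here), and the GA settings (population, elitism fraction, blend crossover, mutation rate) are illustrative assumptions:

```python
import random

def sir_run(beta, gamma, s0=0.99, i0=0.01, days=60):
    """Discrete-time SIR stand-in for the reaction-diffusion PDE model."""
    s, i, series = s0, i0, []
    for _ in range(days):
        new_inf = beta * s * i   # new infections this step
        rec = gamma * i          # recoveries this step
        s, i = s - new_inf, i + new_inf - rec
        series.append(i)
    return series

def fitness(params, target):
    """Negative sum of squared errors against the observed infection curve."""
    sim = sir_run(*params)
    return -sum((a - b) ** 2 for a, b in zip(sim, target))

def ga_fit(target, pop_size=40, gens=60, seed=1):
    """Elitist GA: keep the top quarter, refill with blend crossover
    plus occasional Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.05, 0.8), rng.uniform(0.01, 0.5))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        elite = pop[: pop_size // 4]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()                              # blend crossover
            child = [w * a[k] + (1 - w) * b[k] for k in range(2)]
            if rng.random() < 0.3:                        # Gaussian mutation
                k = rng.randrange(2)
                child[k] = max(1e-3, child[k] + rng.gauss(0, 0.05))
            children.append(tuple(child))
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, target))
```

Fitting against a synthetic curve generated with known (beta, gamma) shows the search recovering parameters close to the truth, which is the calibration-against-incidence-data step the thesis performs on real Hamilton County data.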
  • 3. Shih, Hanniel Anomaly Detection in Irregular Time Series using Long Short-Term Memory with Attention

    MS, University of Cincinnati, 2023, Engineering and Applied Science: Computer Science

    Anomaly Detection in Irregular Time Series is an under-explored topic, especially in the healthcare domain. An example of this is weight entry errors. Erroneous weight records pose significant challenges to healthcare data quality, impacting clinical decision-making and patient safety. Existing studies primarily utilize rule-based methods, achieving an Area Under the Receiver Operating Characteristic Curve (AUROC) ranging from 0.546 to 0.620. This thesis introduces a two-module method, employing bi-directional Long Short-Term Memory (bi-LSTM) with Attention Mechanism, for the prospective detection of anomalous weight entries in electronic health records. The proposed method consists of a predictor and a classifier module, both leveraging bi-LSTM and Attention Mechanism. The predictor module learns the normal pattern of weight changes, and the classifier module identifies anomalous weight entries. The performance of both modules was evaluated, exhibiting a clear superiority over other methods in distinguishing normal from anomalous data points. Notably, the proposed approach achieved an AUROC of 0.986 and a precision of 9.28%, significantly outperforming other methods when calibrated for a similar sensitivity. This study contributes to the field of entry error detection in healthcare data, offering a promising solution for real-time anomaly detection in electronic health records.

    Committee: Raj Bhatnagar Ph.D. (Committee Chair); Danny T. Y. Wu PhD (Committee Member); Vikram Ravindra Ph.D. (Committee Member) Subjects: Computer Science
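The predictor/classifier split in this abstract can be sketched without deep-learning dependencies. An exponentially weighted moving average stands in for the bi-LSTM-with-attention predictor, and a relative-deviation threshold stands in for the learned classifier; both are illustrative assumptions, not the thesis's models:

```python
def predict_weight(history):
    """Predictor stand-in: exponentially weighted moving average of the
    patient's prior weight entries (the thesis uses a bi-LSTM with
    attention here)."""
    pred, alpha = history[0], 0.5
    for w in history[1:]:
        pred = alpha * w + (1 - alpha) * pred
    return pred

def flag_anomalies(weights, rel_tol=0.25):
    """Classifier stand-in: flag entries deviating more than rel_tol
    (relative) from the predicted weight."""
    flags = [False]  # the first entry has no history to judge against
    for t in range(1, len(weights)):
        pred = predict_weight(weights[:t])
        flags.append(abs(weights[t] - pred) / pred > rel_tol)
    return flags
```

On a toy series with one implausible jump, the deviation-from-prediction rule isolates the erroneous entry, mirroring the prospective detection setting of the thesis.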
  • 4. Upadhyaya, Barsha Anomaly Detection in Distribution Power Grids Using Recurrent Neural Networks: A Digital Twin Simulation Approach

    Master of Science, University of Toledo, 2023, Engineering (Computer Science)

    Anomaly detection in power grids has become a significant challenge in recent years. The heterogeneous nature and integration of different smart grid appliances make it difficult to detect system faults or energy theft, leading to substantial cumulative losses over long periods. However, implementing an anomaly detection mechanism at every node in the grid network can be costly. Therefore, it is crucial to optimize the anomaly detection technique to detect not only local anomalies but also those further down the network, ensuring minimal resource usage and maximum reliability. In this study, we investigate anomaly detection capabilities at various levels in the distribution power system. We utilize Recurrent Neural Networks (RNNs) for anomaly detection and evaluate their performance against other machine learning techniques. The power grid analyzed in this research is a Digital Twin, a digital replica of a real-world power grid modeled using GridLAB-D and HELICS to ensure accurate simulation behavior with real consumption data and voltage properties. This paper presents two aspects of the study: building the Digital Twin and conducting anomaly detection in the Digital Twin simulation at various grid levels.

    Committee: Ahmad Y Javaid (Committee Chair); Weiqing Sun (Committee Co-Chair); Raghav Khanna (Committee Member) Subjects: Computer Engineering; Electrical Engineering
  • 5. Scharf da Costa, Lucas The Effect of Applying Artificial Intelligence to Improve the Effectiveness of the Inventory Management in a Specialty Pharmacy

    PhD, University of Cincinnati, 2023, Pharmacy: Pharmaceutical Sciences

    Introduction: Demand forecasting is a challenge that requires relevant data and advanced statistical procedures to address new growth and other opportunities. From the pharmacy's perspective, if the inventory is high, patients will always have their medication in stock, but this increases the inventory holding and storage costs of medications as well as the chance of a medication reaching its expiration date. Even though low inventory may generate savings on inventory holding and storage costs, it may also increase the chances of stock-outs, when a medication is not available to patients. Thus, optimizing the demand forecasting system would financially benefit any pharmacy. The present study applied analytical methods to the demand forecasting of the top-ten most-prescribed medications in a specialty pharmacy and assessed the impact of weather conditions on the demand for four migraine medications. Methods: This research was a collaboration with the James L. Winkle College of Pharmacy, the University of Cincinnati Medical Center, LLC (UC Health Specialty Pharmacy), and the Advanced Research Computing Center. The data consisted of 26 months of pre-recorded real-world dispensing data of the top-ten most-prescribed medications: Aimovig, Ajovy, Biktarvy, Cellcept, Emgality, Enbrel, Epidiolex, Nurtec ODT, Prograf, and Temodar. The data of the most-prescribed medication in the pharmacy was preprocessed and deployed into AWS Amazon Forecasting and Microsoft Azure Machine Learning Studio. The effectiveness of the demand forecasting in the pharmacy was determined by either good or high accuracy metrics. After preprocessing the data, the variables considered for the forecasting models were monthly demand and the date of each medication purchase. The forecasting methods used were ARIMA (Autoregressive Integrated Moving Average), VARMA (Vector Autoregressive Moving Average), and LSTM (Long Short-Term Memory). 
Results: The best-performing models were ARIM (open full item for complete abstract)

    Committee: Alex Lin Ph.D. (Committee Chair); Andrew Eisenhart Ph.D. (Committee Member); Bingfang Yan D.V.M. Ph.D. (Committee Member); Xiaodong Jia Ph.D. (Committee Member); Jianfei (Jeff) Guo Ph.D. (Committee Member) Subjects: Pharmaceuticals
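The forecasting-plus-accuracy-metric workflow in this abstract can be sketched with a minimal autoregressive model. An AR(1) fit by least squares stands in for the ARIMA/VARMA/LSTM models (the real dispensing data and cloud pipelines are not reproducible here), and MAPE illustrates the kind of accuracy metric used to judge effectiveness:

```python
def ar1_forecast(series, horizon=3):
    """Fit a mean-reverting AR(1) by least squares and forecast ahead.
    A minimal stand-in for the ARIMA models in the study."""
    mean = sum(series) / len(series)
    x = [v - mean for v in series]                 # demeaned series
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x[:-1])
    phi = num / den if den else 0.0                # AR(1) coefficient
    preds, last = [], x[-1]
    for _ in range(horizon):
        last = phi * last
        preds.append(last + mean)
    return preds

def mape(actual, forecast):
    """Mean absolute percentage error between held-out actuals and forecasts."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)
```

In practice one would fit on the first ~80% of months and score `mape` on the held-out remainder, the same holdout logic that underlies the study's model comparison.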
  • 6. Ahmed, Jishan Cost-Aware Machine Learning and Deep Learning for Extremely Imbalanced Data

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2023, Data Science

    Many real-world datasets, such as those used for failure and anomaly detection, are severely imbalanced, with a relatively small number of failed instances compared to the number of normal instances. This imbalance often results in bias towards the majority class during learning, making mitigation a serious challenge. To address these issues, this dissertation leverages the Backblaze HDD data and makes several contributions to hard drive failure prediction. It begins with an evaluation of the current state-of-the-art techniques and the identification of existing shortcomings. Multiple facets of machine learning (ML) and deep learning (DL) approaches to address these challenges are explored. The synthetic minority over-sampling technique (SMOTE) is investigated by evaluating its performance with different distance metrics and nearest neighbor search algorithms, and a novel approach that integrates SMOTE with Gaussian mixture models (GMM), called GMM SMOTE, is proposed to address various issues. Subsequently, a comprehensive analysis of different cost-aware ML techniques applied to disk failure prediction is provided, emphasizing the challenges in current implementations. The research also expands to explore a variety of cost-aware DL models, from 1D convolutional neural networks (CNN) and long short-term memory (LSTM) models to a hybrid model combining 1D CNN and bidirectional LSTM (BLSTM) approaches to utilize the sequential nature of hard drive sensor data. A modified focal loss function is introduced to address the class imbalance issue prevalent in the hard drive dataset. The performance of DL models is compared to traditional ML algorithms, such as random forest (RF) and logistic regression (LR), demonstrating superior results and suggesting the potential effectiveness of the proposed focal loss function. 
In addition to these efforts, this dissertation aims to provide a comprehensive understanding of hard drive longevity and the critical factors contrib (open full item for complete abstract)

    Committee: Robert C. Green II Ph.D. (Committee Chair); Liuling Liu Ph.D. (Other); Umar D Islambekov Ph.D. (Committee Member); Junfeng Shang Ph.D. (Committee Member) Subjects: Computer Science; Statistics
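The core SMOTE step this abstract builds on is easy to sketch: synthesize minority-class samples by interpolating between a minority point and one of its k nearest minority neighbors. This shows only the standard Euclidean interpolation; the dissertation's alternative distance metrics and GMM SMOTE variant are not reproduced here:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Bare-bones SMOTE: for each synthetic sample, pick a random minority
    point, find its k nearest minority neighbors (Euclidean), and
    interpolate a random fraction of the way toward one of them."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbors = sorted((p for p in minority if p is not base),
                           key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic
```

Because each synthetic point is a convex combination of two minority points, all new samples stay inside the convex hull of the minority class, which is both SMOTE's strength and the limitation that motivates distribution-aware variants like GMM SMOTE.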
  • 7. Khuntia, Satvik Energy Prediction in Heavy Duty Long Haul Trucks

    Master of Science, The Ohio State University, 2022, Mechanical Engineering

    Truck drivers idle their trucks for their comfort in the cab. They might need air conditioning to maintain a comfortable temperature and use the onboard appliances like TV, radio, etc. while they rest during their long journeys. On average, idling requires 0.8 gallons of diesel per hour for an engine and up to 0.5 gallons per hour for a diesel APU. For a journey greater than 500 miles, a driver rests for 10 hours for every 11 hours of driving, and drivers tend to leave the truck idling throughout the 10 hours. With today's cost of diesel in the US, for one 10-hour period, the average cost incurred by the owner on idling alone is $32. About a million truck drivers idle their trucks overnight for more than 300 days a year. SuperTruck II is a 48V mild hybrid class 8 truck with all auxiliary loads powered purely by the battery pack. This offers an opportunity to reduce the idling from the whole 10 hours to whatever is necessary to charge the battery enough to power the auxiliaries. To quantify this “necessary idling” during the hoteling period, we need to predict the future power load requirement. The total power estimation is divided into two portions: (1) cabin hotel loads except HVAC and (2) the HVAC load. Physics-based grey-box models are developed for components in the vapor compression cycle and cabin using system dynamics, which are used to estimate the HVAC power consumption. A special kind of Recurrent Neural Network (RNN) called Long Short-Term Memory (LSTM) is used to predict the cabin hotel loads via user activity tracking. Synthetic load profiles are generated to overcome the lack of data on user activity inside the cabin for training the LSTM algorithm, using rules and observations derived from the existing hotel-period load profile from a survey conducted for the SuperTruck project and a literature survey on driver sleeping behavior. 
Dynamic Time Warping along with pointwise Euclidean distance is us (open full item for complete abstract)

    Committee: Qadeer Ahmed Dr (Advisor); Marcello Canova Dr (Committee Member); Athar Hanif Dr (Other) Subjects: Artificial Intelligence; Automotive Engineering; Engineering; Mechanical Engineering; Statistics; Sustainability; Systems Design; Transportation
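The Dynamic Time Warping comparison mentioned at the end of this abstract is a standard dynamic program and can be sketched directly. This is the textbook algorithm with an absolute-difference point cost; how the thesis applies it to load profiles is only partially visible in the truncated abstract:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping with a pointwise
    absolute-difference cost, suitable for comparing two load profiles
    that may be locally stretched or compressed in time."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Unlike pointwise Euclidean distance, DTW scores a time-stretched copy of a profile as identical to the original, which is why the two measures are used together when validating synthetic load profiles against recorded ones.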
  • 8. Kekuda, Akshay Long Document Understanding using Hierarchical Self Attention Networks

    Master of Science, The Ohio State University, 2022, Computer Science and Engineering

    Natural Language Processing techniques are being widely used in industry these days to solve a variety of business problems. In this work, we experiment with the application of NLP techniques for the use case of understanding the call interactions between customers and customer service representatives and extracting interesting insights from these conversations. We focus our methodologies on understanding call transcripts of these interactions, which fall under the category of long document understanding. Existing works in text encoding typically address short-form text encoding. Deep learning models like the vanilla Transformer, BERT, and DistilBERT have achieved state-of-the-art performance on a variety of tasks involving short-form text but perform poorly on long documents. To address this issue, modifications to the Transformer model have been released in the form of Longformer and BigBird. However, all these models require heavy computational resources, which are often unavailable in small-scale companies that run on budget constraints. To address these concerns, we survey a variety of explainable and lightweight text encoders that can be trained easily in a resource-constrained environment. We also propose Hierarchical Self Attention based models that outperform DistilBERT, Doc2Vec, and single-layer self-attention networks for downstream tasks like text classification. The proposed architecture has been put into production at the local industry organization that sponsored the research (SafeAuto Inc.) and helps the company monitor the performance of its customer service representatives.

    Committee: Eric Fosler-Lussier (Committee Chair); Rajiv Ramnath (Advisor) Subjects: Artificial Intelligence; Computer Science
  • 9. Jacome, Olivia Forecasting Human Response in The loop with Eco-Driving Advanced Driver Assistance Systems (ADAS): A Modeling and Experimental Study

    Master of Science, The Ohio State University, 2022, Mechanical Engineering

    In recent years, vehicle electrification has risen due to the increasingly stringent policies put in place to reduce greenhouse gas emissions in the transportation industry. At the same time, research and development efforts in Connected and Autonomous Vehicles (CAVs) have grown substantially due to the advancement of new technologies that have encouraged the deployment of semi-autonomous vehicles. Vehicles with partial or conditional automation require a collaboration between the vehicle control system and the human driver for safe execution of maneuvers. As a result, humans play a critical role in the development and deployment of Advanced Driver Assistance Systems (ADAS), warranting the need to understand the human-machine interaction issues related to these systems and to analyze their effects on vehicle performance and energy consumption. This work investigates the effects of the interactions between a human driver and a vehicle equipped with ADAS, focusing on the case of a human in the loop with a vehicle speed advisory system. To this end, a simulation study is conducted to evaluate the importance of modeling the driver behavior when optimizing the vehicle velocity for Eco-Driving. An optimization study is conducted via dynamic programming, incorporating driver behavior and its response to a velocity advisory. Next, an investigation is conducted to evaluate the accuracy of different mathematical models predicting driver behavior in the context of ADAS. To this end, an experimental study was conducted on a driving simulator where human drivers were compared with respect to their ability to follow a velocity advisor. Data collected from the driving simulator were used to calibrate a deterministic and a stochastic driver model, and to compare their ability to replicate realistic velocity profiles and driver error.

    Committee: Marcello Canova (Advisor); Giorgio Rizzoni (Committee Member); Stephanie Stockar (Committee Member) Subjects: Mechanical Engineering
  • 10. Tanveer, Hafsa Prediction of Covid'19 Cases using LSTM

    MS, University of Cincinnati, 2021, Engineering and Applied Science: Computer Science

    The COVID-19 pandemic is the most recent and significant problem that the whole world is facing. The spread of COVID-19 has many long- and short-lasting effects and demands strict special plans and policies. Therefore, to help the government and health sectors prepare for emergency situations, understanding the spread and forecasting future cases can play an essential role. The response of the global community to the epidemic, including the deployment of nurses, doctors, epidemiologists, beds, supplies, and security, can be shaped by our understanding of the pattern of this virus. In this research, deep learning based on different variants of the LSTM model has been used to predict the cases for Hamilton County, Ohio. Almost eight months of data were considered for this work. The findings indicate that a single-layer LSTM with one unit, one hundred epochs, and twenty time steps outperformed the other four models based on error parameters and performance metrics.

    Committee: Yizong Cheng Ph.D. (Committee Member); Anca Ralescu Ph.D. (Committee Member); Ali Minai Ph.D. (Committee Member) Subjects: Computer Science
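The twenty-time-step configuration this abstract reports implies a standard sliding-window transformation of the case-count series into supervised training pairs, which can be sketched directly (the LSTM itself and the county data are not reproduced here):

```python
def make_windows(series, n_steps=20):
    """Turn a univariate case-count series into (window, next_value) pairs,
    the supervised form an LSTM trains on; n_steps=20 mirrors the
    twenty-time-step configuration reported in the thesis."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i : i + n_steps])  # 20 consecutive daily counts
        y.append(series[i + n_steps])      # the following day's count
    return X, y
```

Each pair feeds the model 20 consecutive daily counts and asks it to predict day 21; sliding the window by one day over roughly eight months of data yields the training set.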
  • 11. Thomas, Brennan LSTM Neural Networks for Detection and Assessment of Back Pain Risk in Manual Lifting

    MS, University of Cincinnati, 2021, Engineering and Applied Science: Computer Science

    Repetitive occupational lifting of objects has been shown to create an increased risk for incidence of back pain. Ergonomic workstations that promote proper lifting technique can reduce risk, but it is difficult to assess such workstations without constant risk monitoring. Inertial measurement units (IMU) such as accelerometers and gyroscopes have been used with success in human activity recognition (HAR) systems to determine when specified actions occur, but largely only for general activities for which it is easy to collect a significant amount of data. There has been considerably less work towards assessment of specialized tasks, such as lifting action. This work presents a dual system utilizing machine learning for detection of lifting action and assessment of the risks involved for that action. The proposed system achieves good performance in both the detection and assessment tasks using raw time-series IMU data with minimal preprocessing. Application of data augmentation provides additional increases in performance for the assessment task, and saliency mapping determines optimal sensor configurations for system modifications. The presented system can be used to monitor the risks involved with lifting action required in a workplace, guiding efforts to mitigate long-term risk.

    Committee: Rashmi Jha Ph.D. (Committee Chair); Ming-Lun Lu (Committee Member); Fred Annexstein Ph.D. (Committee Member) Subjects: Computer Science
  • 12. Adewopo, Victor Exploring Open Source Intelligence for cyber threat Prediction

    MS, University of Cincinnati, 2021, Education, Criminal Justice, and Human Services: Information Technology

    Cyberspace is one of the most complex systems ever built by humans; cybertechnology resources are used ubiquitously by many but sparsely understood by the majority of users. In the past, cyberattacks were usually orchestrated in a random pattern of attack to lure unsuspecting targets. More evidence has demonstrated that cyberattack knowledge is shared among individuals using social media and hacker forums in the virtual ecosystem. Previous research focused on using machine learning algorithms (SVM) to identify threats [1]. Rodriguez et al. utilized sentiments and data mining techniques in classifying threats [2]. This research developed a novel framework for identifying threats and predicting vulnerability exposure. The methodology combined information extracted from the deep web and surface web containing technical indicators of threats. This thesis showcased that potential cyberthreats can be predicted from open-source data using a deep learning algorithm (LSTM). The developed model utilized open-source intelligence to identify existing threats in an input search and determine their severity level by crawling the National Vulnerability Database (NVD) and Common Vulnerabilities and Exposures (CVE) database for a list of published threats related to the search term, with an accuracy of 91%, precision of 90%, and recall of 91% on test data.

    Committee: Bilal Gonen Ph.D. (Committee Chair); Nelly Elsayed Ph.D. (Committee Member); M. Murat Ozer Ph.D. (Committee Member) Subjects: Information Technology
  • 13. Azumah, Sylvia Deep Learning -Based Anomaly Detection System for Guarding Internet of Things Devices

    MS, University of Cincinnati, 2021, Education, Criminal Justice, and Human Services: Information Technology

    The ever-expanding scope of the third industrial revolution spawned a dynamic digital age of computers and the world wide web (internet). The Internet of Things (IoT) is one of the latest technologies that will forever change how humans interact with information systems. These technologies involve embedding sensors and software applications into physical objects, which allows them to transmit and share data with other devices over the Internet, ranging from simple smart home appliance management to self-driving cars. The universal applications of IoT technology cannot be overemphasized. It is currently being utilized in remote healthcare and the telecommunications industry, with a projected large-scale deployment in critical infrastructure such as power grids and water purification. As with everything else related to information systems, this presents an increased amount of vulnerabilities and security issues that could have dire consequences if left unattended. Research shows that 70% of current IoT devices are moderately easy to compromise or hack [37]. Therefore, an efficient mechanism is needed to safeguard these devices as they are connected to the internet. This thesis introduces a novel deep learning-based anomaly detection model to predict cyberattacks on IoT devices and to identify new outliers as they occur over time. Long Short-Term Memory (LSTM) is an efficient deep learning architecture that addresses spatial and temporal information, and could therefore perform effectively in an anomaly detection model for IoT security. The model recorded a high detection accuracy of 98%, precision of 85%, recall of 84%, and an F1-score of 83% on the IoT network intrusion detection dataset. The performance of the LSTM-based model developed in this study was analyzed and compared to state-of-the-art deep learning-based anomaly detection for IoT devices.

    Committee: Nelly Elsayed Ph.D. (Committee Chair); M. Murat Ozer Ph.D. (Committee Member); Hazem Said Ph.D. (Committee Member) Subjects: Information Technology
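The four figures this abstract reports (accuracy, precision, recall, F1) all derive from the same confusion-matrix counts, which is worth making explicit since they can diverge sharply on imbalanced intrusion data. A minimal computation, independent of any particular model:

```python
def detection_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts —
    the four figures reported for the IoT anomaly-detection model."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

With mostly-benign traffic, accuracy is dominated by true negatives while precision and recall track the rare attack class, which is why the abstract's 98% accuracy sits well above its 85%/84% precision/recall.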
  • 14. Alsulami, Khalil Application-Based Network Traffic Generator for Networking AI Model Development

    Master of Science in Computer Engineering, University of Dayton, 2021, Electrical and Computer Engineering

    The growing demands for communication and complex network infrastructure rely on overcoming network measurement and management challenges. Lately, artificial intelligence (AI) algorithms have been considered to improve network systems, e.g., AI-based network traffic classification, traffic prediction, intrusion detection systems, etc. The development of most networking AI models requires abundant traffic data samples for proper measurement or management; however, such databases are rarely available publicly. To counter this issue, we develop a real-time network traffic generator for use by networking AI models. This network traffic generator has a data enabler that reads data from real applications and builds a packet payload database and a traffic pattern database. The packet payload database holds the data packets of a real application, from which the network traffic generator locates payloads in capture (PCAP) files. The traffic pattern database contains the traffic patterns of a real application, relying on the timestamp in each packet and the number of packets in the traffic sample. The network traffic generator has a built-in network simulator that mimics real application network traffic flows using these databases, and the simulator provides a configurable network environment as an interface. To assess our work, we tested the network traffic generator on two networking AI models based on simulated traffic: an AI classification model and an AI traffic prediction model. The simulation performance and evaluation results showed improvement in networking AI models using the proposed network traffic generator, reducing time consumption and data-efficiency challenges.

    Committee: Feng Ye (Committee Chair); Tarek Taha (Committee Member); John Loomis (Committee Member) Subjects: Artificial Intelligence; Communication; Computer Engineering; Computer Science; Educational Software; Educational Technology; Electrical Engineering; Information Science; Information Systems; Information Technology; Systems Science; Technical Communication; Technology
  • 15. Carman, Benjamin Translating LaTeX to Coq: A Recurrent Neural Network Approach to Formalizing Natural Language Proofs

    Bachelor of Science (BS), Ohio University, 2021, Computer Science

    There is a strong desire to be able to more easily formalize existing mathematical statements and develop machine-checked proofs to verify their validity. Doing this by hand can be a painstaking process with a steep learning curve. In this paper, we propose a model that automatically parses natural language proofs written in LaTeX into the language of the interactive theorem prover Coq, using a recurrent neural network. We aim to show that such a model can work well within a very limited domain of proofs about even and odd expressions and can generalize at test time. We demonstrate the model's ability to generalize well given small variations in natural language, and even demonstrate early promising results for generalization to expressions of intermediate lengths unseen at training time.

    Committee: David Juedes (Advisor) Subjects: Computer Science
  • 16. Shojaee, Ali Bacteria Growth Modeling using Long-Short-Term-Memory Networks

    MS, University of Cincinnati, 2021, Engineering and Applied Science: Computer Science

    Modeling of bacteria growth under different environmental conditions provides a useful tool to predict food and consumer goods safety. This study introduces a flexible, unique, and data-driven model to predict bacteria growth under different pH conditions, using a one-to-many Long-Short-Term Memory (LSTM) model. When compared with a benchmark model, the proposed model showed good predictive power for different bacteria behaviors. In addition to its predictive ability, the model architecture is flexible and can be adapted for different bacteria behavior patterns without additional prior assumptions.

    Committee: Anca Ralescu Ph.D. (Committee Chair); Kenneth Berman Ph.D. (Committee Member); Mark Maupin Ph.D. (Committee Member); Dan Ralescu Ph.D. (Committee Member) Subjects: Computer Science
  • 17. McWhorter, Tanner Cognitive Electronic Warfare System

    Master of Science, Miami University, 2020, Computational Science and Engineering

    The decision processes made while engaging a hostile radar system are pivotal to a mission's success, as well as to the pilot's survivability. These decisions can be difficult to make given the large amount of ambiguity involved in electronic warfare (EW) processing. Due to this variability, a cognitive system consisting of various machine learning tools was developed in this project for EW decision making. The developed system analyzes a hostile emitter's characteristics to determine the optimal course of action. This system differs from previously developed EW systems because it consists of integrated machine learning models, whereas previously developed systems consisted only of cognitive agents. Various machine learning tools and cognitive reasoners were tested, and an optimal cognitive system architecture was developed. This thesis delivers an integrated system that can provide an optimal course of action for a pilot to take while intercepting an enemy radar signature. The system can be used to test engagement protocols for different scenarios and decide optimal courses of action.

    Committee: Chi-Hao Cheng Dr. (Advisor); Dmitriy Garmatyuk Dr. (Committee Member); Mark Scott Dr. (Committee Member) Subjects: Computer Science; Electrical Engineering
  • 18. Korte, Christopher A Preliminary Investigation into using Artificial Neural Networks to Generate Surgical Trajectories to Enable Semi-Autonomous Surgery in Space

    PhD, University of Cincinnati, 2020, Engineering and Applied Science: Aerospace Engineering

    This thesis is a preliminary investigation into using artificial neural networks to generate surgical trajectories for space-based surgery in non-near-Earth environments. The first study focused on the communication delay associated with tele-operation and was performed to find the threshold beyond which tele-surgery would not be feasible. This matters because, if humans are going to travel to Mars, semi-autonomous surgical methods must be developed as an alternative means of providing surgical intervention. We found that a delay of about 1.5 seconds was the threshold at which tele-operation becomes excessively difficult. An IRB-approved study was then conducted in which surgeons and surgical residents performed procedures in a virtual dissection simulator to acquire training data for LSTM-RNNs. The data from that study were reduced because the simulator's sampling rate was too high and the datasets contained too many points to train the LSTM-RNNs effectively. The procedure was segmented into three subtasks. Several LSTM-RNNs were trained using a custom cost function and evaluated using custom metrics, and this method was compared with another trajectory-generation algorithm. We tested the LSTM-RNNs with several shifted sets of fiducial markers to assess their robustness, and found they were robust enough to handle slight changes in anatomy.
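    The data-reduction and segmentation steps described in the abstract can be sketched as simple preprocessing: downsample an over-sampled trajectory, then split it into subtask segments. The synthetic trajectory, downsampling factor, and subtask count below are hypothetical stand-ins for the simulator data:

    ```python
    import numpy as np

    # Hypothetical high-rate tool trajectory: 10,000 samples of (x, y, z).
    t = np.linspace(0.0, 10.0, 10_000)
    trajectory = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)

    def downsample(traj, factor):
        """Keep every `factor`-th sample to cut sequence length for LSTM training."""
        return traj[::factor]

    def segment(traj, n_subtasks):
        """Split a trajectory into (near-)equal-length subtask segments."""
        return np.array_split(traj, n_subtasks)

    reduced = downsample(trajectory, factor=50)    # 10,000 -> 200 samples
    subtasks = segment(reduced, n_subtasks=3)      # three training subtasks
    print(reduced.shape, [s.shape[0] for s in subtasks])
    ```

    Shorter, segmented sequences make recurrent training tractable, which mirrors the reduction rationale given in the abstract.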

    Committee: Catharine McGhan Ph.D. (Committee Chair); Kelly Cohen Ph.D. (Committee Member); Ou Ma Ph.D. (Committee Member); Grant Schaffner Ph.D. (Committee Member) Subjects: Aerospace Materials
  • 19. Shu, Xingliang Electrocardiograph Signal Classification By Using Neural Network

    MS, University of Cincinnati, 2020, Engineering and Applied Science: Electrical Engineering

    The electrocardiogram reflects the electrical activity of the heart and is an important tool for diagnosing patients' cardiac conditions. The goal of our research is to develop a neural network system that can classify electrocardiograms into nine categories. The dataset is from the 2018 China Physiological Signal Challenge [1]. In Chapter 1, we introduce electrocardiogram classification and why we use a neural network approach for this work. In Chapter 2, we describe our dataset and the evaluation metric. In Chapter 3, our system design is explained in detail. In Chapter 4, we discuss the results and analyze the misclassified data. Many systems have been developed using this dataset and were evaluated and ranked during the competition; therefore, in Chapter 5, we compare the performance of our system with the models from the competition in a fair way. Finally, we conclude our work.
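    A minimal stand-in for such a classifier is sketched below: random 1-D convolutional filters, ReLU, global average pooling, and a softmax over nine classes. The filters are untrained and every dimension is an assumption; this shows only the shape of a signal-to-class pipeline, not the system developed in the thesis:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N_CLASSES = 9                             # nine ECG rhythm categories
    KERNELS = rng.normal(0, 0.1, (8, 16))     # 8 random 1-D conv filters, width 16
    W_FC = rng.normal(0, 0.1, (N_CLASSES, 8)) # pooled features -> class logits

    def classify(ecg):
        """Toy pipeline: 1-D convolution -> ReLU -> global average pool -> softmax."""
        feats = []
        for k in KERNELS:
            conv = np.convolve(ecg, k, mode="valid")
            feats.append(np.maximum(conv, 0.0).mean())   # ReLU + average pool
        logits = W_FC @ np.array(feats)
        p = np.exp(logits - logits.max())                # numerically stable softmax
        return p / p.sum()                               # probabilities over 9 classes

    ecg = np.sin(np.linspace(0, 20 * np.pi, 3000))       # synthetic stand-in signal
    probs = classify(ecg)
    print(probs.shape)
    ```

    A trained network would learn the filters and weights from labelled recordings; the output contract, a probability per rhythm category, is the same.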

    Committee: William Wee Ph.D. (Committee Chair); Chia Han Ph.D. (Committee Member); Xuefu Zhou Ph.D. (Committee Member) Subjects: Electrical Engineering
  • 20. Liu, Yiran Consistent and Accurate Face Tracking and Recognition in Videos

    Master of Science (MS), Ohio University, 2020, Computer Science (Engineering and Technology)

    Automatically tracking and recognizing human faces in videos and live streams is often a crucial component of high-level applications such as security, visual surveillance, and human-computer interaction. Deep learning has recently revolutionized artificial intelligence, including face recognition and detection. Most existing video analysis solutions, however, rely on a 2D convolutional neural network (CNN) to process video clips on a frame-by-frame basis. The temporal contextual information between consecutive frames is often overlooked, resulting in inconsistent tracking outcomes that in turn hurt the accuracy of human identification. As a remedy, we propose a novel network framework that allows history information to be carried along video frames. More specifically, we take the single shot scale-invariant face detector (S3FD) as the baseline face detection network and combine it with long short-term memory (LSTM) components to integrate temporal context. Taking the images and detection results of previous frames as additional inputs, our S3FD+LSTM framework is well posed to produce more consistent and smoother face detections over time, which in turn leads to more robust and accurate face recognition in videos and live streams. We evaluated our face tracking and recognition model on both a public dataset (YouTube Faces) and self-made datasets. Experimental results demonstrate that our S3FD+LSTM approach consistently produces smoother and more stable bounding boxes than S3FD alone. Recognition accuracy is also improved over the baseline model, and our model significantly outperforms state-of-the-art face tracking solutions in the public domain.
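    Carrying state between frames to stabilise boxes can be sketched, in a far simpler form than the S3FD+LSTM framework, as exponential smoothing of per-frame detections: each frame's box is blended with the state carried over from previous frames. The box coordinates and smoothing factor below are hypothetical:

    ```python
    import numpy as np

    def smooth_boxes(detections, alpha=0.6):
        """Temporally smooth per-frame boxes (x1, y1, x2, y2).

        A crude stand-in for the recurrent state a detector+LSTM pipeline
        carries between frames: alpha weights the current detection against
        the accumulated history.
        """
        smoothed, state = [], None
        for box in detections:
            box = np.asarray(box, dtype=float)
            state = box if state is None else alpha * box + (1 - alpha) * state
            smoothed.append(state)
        return np.array(smoothed)

    # Jittery detections of the same face across four consecutive frames.
    raw = [[100, 100, 180, 180], [104, 98, 186, 178],
           [97, 103, 179, 183], [102, 99, 183, 181]]
    out = smooth_boxes(raw)
    print(out[-1])  # last box, pulled toward the running average
    ```

    A learned recurrent component can do much more than this fixed blend (e.g. condition on appearance features), but the benefit is the same in kind: frame-to-frame jitter is damped by history.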

    Committee: Jundong Liu (Advisor) Subjects: Computer Science