
Search Results

(Total results 35)


Search Report

  • 1. Rickman, William Surrogate Markov Models for Validation and Comparative Analysis of Proper Orthogonal Decomposition and Dynamic Mode Decomposition Reduced Order Models

    Master of Science, Miami University, 2025, Mechanical and Manufacturing Engineering

    Reduced order modeling (ROM) methods, such as those based upon Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD), offer data-based turbulence modeling with potential applications for flow control. While these models are often cheaper than numerical approaches, their results require validation against source data. Within the literature, the metrics and standards used to validate these models are often inconsistent. Chabot (2014) produced a data-driven framework for validating these ROMs that used surrogate Markov models (SMMs) to compare how the system dynamics evolved rather than how any single metric evolved. These SMMs were constructed by clustering the flow data into different states of suitably similar flow fields; the Markov model then mapped how likely each state was to transition into another. While this method was successful, some uncertainty remained in how the outlier states within this clustering scheme were determined. Additionally, the study examined the application of this procedure only to POD-Galerkin ROMs. This study aims to tie the outlier state determination directly to the models' parent data. The study will also apply this procedure to ROMs generated from DMD to investigate how the framework's effectiveness carries over to different classes of ROMs.

    Committee: Edgar Caraballo (Advisor); Andrew Sommers (Committee Member); Mehdi Zanjani (Committee Member) Subjects: Aerospace Engineering; Fluid Dynamics; Mathematics; Mechanical Engineering; Statistics
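
    The clustering-plus-transition-matrix construction described in this entry can be illustrated with a short sketch. This is not the thesis code: the snapshot data, the choice of KMeans, and the number of states are placeholders for illustration only.

```python
# Minimal sketch (not the thesis code): build a surrogate Markov model by
# clustering flow snapshots into states and estimating transition probabilities.
import numpy as np
from sklearn.cluster import KMeans

def surrogate_markov(snapshots, n_states=8, random_state=0):
    """snapshots: (n_time, n_features) array of flow-field measurements."""
    labels = KMeans(n_clusters=n_states, random_state=random_state).fit_predict(snapshots)
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):   # consecutive snapshots define transitions
        counts[a, b] += 1
    # Row-normalize to get maximum-likelihood transition probabilities.
    trans = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    return labels, trans

rng = np.random.default_rng(0)
labels, P = surrogate_markov(rng.normal(size=(500, 20)))   # synthetic stand-in data
print(P.shape)  # (8, 8)
```
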
  • 2. Su, Weizhe Bayesian Hidden Markov Model in Multiple Testing on Dependent Count Data

    PhD, University of Cincinnati, 2020, Arts and Sciences: Mathematical Sciences

    Multiple testing on large-scale dependent count data faces challenges in two basic modeling elements, that is, modeling of the dependence structure across observations and the distribution specification on the null and non-null states. We propose three Poisson hidden Markov models (PHMM) under the Bayesian hierarchical model framework to handle these challenges. The dependence structure across hypotheses is modeled through the hidden Markov process. To address the challenge of the distribution specification under the non-null state, several model selection methods are employed and compared to determine the number of mixture components in the non-null distribution. Furthermore, we examine two different ways to include covariate effects, PHMM with homogeneous covariate effects (PHMM-HO) and PHMM with heterogeneous covariate effects (PHMM-HE). Modeling covariate effects helps take into consideration multiple factors that are directly or indirectly related to the hypotheses under investigation. We carry out extensive simulation studies to demonstrate the performance of the proposed hidden Markov models. The stable and robust results show the significant advantages of our proposed models in handling complex data structures in dependent counts. Multiple hypothesis testing with PHMM is valid and optimal compared with a group of commonly used testing procedures. Both PHMM-HO and PHMM-HE improve the multiple testing performance and are able to detect the dynamic data pattern along with the covariate effects.

    Committee: Xia Wang Ph.D. (Committee Chair); Hang Joon Kim Ph.D. (Committee Member); Siva Sivaganesan Ph.D. (Committee Member); Seongho Song Ph.D. (Committee Member); Bin Zhang Ph.D. (Committee Member) Subjects: Statistics
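
    As a rough illustration of the Poisson hidden Markov machinery behind this entry, the sketch below runs the forward-backward algorithm for a two-state (null vs. non-null) Poisson HMM with fixed, made-up parameters and returns the posterior non-null probability for each count. The Bayesian hierarchy, covariate effects, and mixture non-null distribution of the actual models are omitted.

```python
# Illustrative two-state Poisson HMM (null vs. non-null) with fixed parameters;
# the thesis fits these within a Bayesian hierarchy, which is omitted here.
import numpy as np
from scipy.stats import poisson

def posterior_nonnull(counts, lam=(2.0, 10.0),
                      trans=((0.95, 0.05), (0.10, 0.90)), init=(0.9, 0.1)):
    counts = np.asarray(counts)
    A, pi = np.asarray(trans), np.asarray(init)
    emis = np.stack([poisson.pmf(counts, l) for l in lam], axis=1)  # (T, 2)
    T = len(counts)
    alpha = np.zeros((T, 2)); beta = np.ones((T, 2))
    alpha[0] = pi * emis[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward pass (scaled)
        alpha[t] = emis[t] * (alpha[t - 1] @ A)
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):             # backward pass (scaled)
        beta[t] = A @ (emis[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma[:, 1]                          # P(non-null | data) per hypothesis

print(posterior_nonnull([1, 2, 0, 12, 15, 3]).round(2))
```
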
  • 3. Jing, Junbo Vehicle Fuel Consumption Optimization using Model Predictive Control based on V2V communication

    Master of Science, The Ohio State University, 2014, Electrical and Computer Engineering

    While much effort goes into improving vehicles' fuel economy, a large portion of the fuel consumed in everyday driving is wasted by drivers' inexperienced operation and inefficient judgment. This thesis proposes a system that optimizes a vehicle's fuel consumption in automated car-following scenarios. The system is designed to work in the initial stage of implementing Vehicle-to-Vehicle (V2V) communications and is developed based on Model Predictive Control (MPC). Given a prediction of the preceding vehicle's speed, the system controls the vehicle's throttle and brake to follow the preceding vehicle with an optimal velocity profile. The control problem is formulated as a quadratic programming optimization problem using real vehicle parameters. An active-set algorithm is adopted for the optimization, and the computation speed satisfies real-time requirements. The control results show a significant fuel-saving benefit of up to 15%, while ensuring car-following safety and preserving ride comfort. To provide the prediction horizon for the MPC-based system, a preceding vehicle speed prediction algorithm and a leading vehicle speed prediction algorithm are developed in this thesis. The preceding vehicle's speed is predicted by analyzing the transmission of speed disturbances along the convoy using the Intelligent Driver Model (IDM). The information needed is obtained through V2V communication, and the algorithm does not require a high V2V penetration rate. The estimated car-following behavioral parameters are clustered online for improved prediction accuracy. The algorithm can provide a prediction horizon of seconds, depending on the convoy length. The leading vehicle speed prediction algorithm is developed to extend the prediction horizon. It predicts the leading vehicle's free-road driving and approaching speed when a rather large gap to the next vehicle appears. The leading vehicle's historical speed profile is decoded into a driver opera (open full item for complete abstract)

    Committee: Umit Ozguner (Advisor); Giorgio Rizzoni (Committee Member) Subjects: Automotive Engineering; Electrical Engineering; Energy; Engineering
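
    A toy version of the kind of MPC problem sketched in this entry is given below using cvxpy: a quadratic cost over a finite horizon, linear longitudinal kinematics, a headway constraint, and a given preceding-vehicle speed prediction. All numbers (weights, headway policy, limits) are invented for illustration and are not the thesis formulation.

```python
# Toy car-following MPC sketch (cvxpy), not the thesis formulation: track a
# predicted preceding-vehicle trajectory while penalizing acceleration (a fuel proxy).
import numpy as np
import cvxpy as cp

dt, N = 0.5, 20                                     # step [s], horizon length
v_prec = 15 + 2 * np.sin(0.3 * np.arange(N + 1))    # predicted preceding speed [m/s]
s_prec = np.cumsum(np.insert(v_prec[:-1] * dt, 0, 50.0))  # preceding position, 50 m ahead

s = cp.Variable(N + 1)                              # ego position
v = cp.Variable(N + 1)                              # ego speed
a = cp.Variable(N)                                  # ego acceleration (control)

cons = [s[0] == 0, v[0] == 14]
for k in range(N):
    cons += [s[k + 1] == s[k] + v[k] * dt,          # simple longitudinal kinematics
             v[k + 1] == v[k] + a[k] * dt,
             s_prec[k + 1] - s[k + 1] >= 5 + 1.0 * v[k + 1],  # safe headway
             cp.abs(a[k]) <= 3]
gap_err = (s_prec - s) - (5 + 1.0 * v)              # deviation from desired headway
cost = cp.sum_squares(a) + 0.1 * cp.sum_squares(gap_err)
cp.Problem(cp.Minimize(cost), cons).solve()
print(np.round(a.value, 2))
```
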
  • 4. Wang, Guojun Some Bayesian Methods in the Estimation of Parameters in the Measurement Error Models and Crossover Trial

    PhD, University of Cincinnati, 2004, Arts and Sciences : Mathematics

    In this dissertation, we use Bayesian methods to estimate parameters in measurement error models and in the two-period crossover trial. The reference prior approach is used to estimate parameters in the measurement error models, including simple normal structural models, Berkson models, structural models with replicates, and the hybrid models. Reference priors are derived. Jeffreys prior is obtained as a special case of reference priors. The posterior properties are studied. Simulation-based comparisons are made between the reference prior approach and the maximum likelihood method. A fractional Bayes factor (FBF) approach is used to estimate the treatment effect in the two-period crossover trial. The reference priors and the FBF are derived. The FBF is used to combine the carryover-effect model and the no-carryover-effect model. Markov chain Monte Carlo simulation is used to implement the Bayesian analysis.

    Committee: Dr. Siva Sivaganesan (Advisor) Subjects: Mathematics; Statistics
  • 5. Samara, Marko Limit Theorems for the Rotational Isomeric State Model

    Doctor of Philosophy, The Ohio State University, 2011, Mathematics

    In the late 1950s, M. V. Volkenstein and other chemists suggested a discretized model of polymers called the Rotational Isomeric State approximation model (RIS model), in which the torsional angles at each step of the polymer configuration take values from a fixed finite set of angles. This model was further studied by P. Flory and others. One natural question is what happens to the polymer as the number of its bonds tends to infinity. Investigation of this question was not completed at the time, and some of the results found then were not rigorously proved, remaining justified by intuitive or heuristic arguments. The reason is that some mathematical techniques and results were either not known at the time the RIS model was developed or had been discovered only shortly before. The work presented in this thesis continues the study of the RIS model begun by Volkenstein, Flory, and others. We consider what happens to the RIS polymer as the number of its bonds tends to infinity and show that, under suitable scaling, it converges to the Kratky-Porod model. We rigorously prove the (already known) convergence of the sequence of torsional angles of the polymer, which forms an inhomogeneous Markov chain, to a homogeneous Markov chain. We also show that the rate of this convergence is geometric. To prove that the RIS model converges to the Kratky-Porod model, we use a sequence of stochastic rotations whose limit satisfies a linear Stratonovich stochastic equation. The driving process of this equation is an antisymmetric Gaussian stochastic matrix, which arises from the sequence of torsional angles.

    Committee: Peter March PhD (Advisor); Saleh Tanveer PhD (Committee Member); Yuan Lou PhD (Committee Member); Dorinda Gallant PhD (Committee Member) Subjects: Mathematics; Molecular Chemistry
  • 6. Zhuang, Lili Bayesian Dynamical Modeling of Count Data

    Doctor of Philosophy, The Ohio State University, 2011, Statistics

    Problems involving discrete data, especially count data, are increasingly common in many important fields, such as cancer mapping and influenza epidemiology. Compared to the large body of highly developed, attractive models for spatio-temporal continuous data (e.g., Cressie and Wikle, 2011), modeling of the underlying dynamical process for count data is less well advanced. Thus, the primary goal that ties together the two main chapters of this dissertation is to develop dynamical approaches for better capturing the true process that underlies count data. Typically, the statistical dependence in the underlying process can be defined through a mathematical graph consisting of nodes (vertices) and edges. Nodes represent individuals or objects, while edges represent the (dependence) relationships between them. Mathematical graphs can be further divided into different classes based on the properties of their edges and the paths formed by edges (e.g., Lauritzen, 1996). In this dissertation, we use graphs that define spatio-temporal dependence (Chapter 2) and temporal dependence (Chapter 3). Specifically, we start with spatio-temporal count data in the field of non-contagious disease mapping, namely, yearly sudden infant death syndrome (SIDS) information, from 1979 to 1984, for the counties of North Carolina. These data have been analyzed before as temporally aggregated spatial data (Cressie and Chan, 1989). We incorporate the new temporal aspect by presenting a spatio-temporal model from which optimal smoothing of SIDS rates can be derived. Specifically, we use a Bayesian hierarchical statistical model (BHM) with a hidden dynamical Markov random field and extra-Poisson variability. The graph arises by evolving the Markov random field via an autoregressive matrix. Potential confounding of sources of variability is avoided by calibrating the extra-Poisson variability with the microscale variation in an approximate Gaussian model. We also consider temporal (but non-spatial) (open full item for complete abstract)

    Committee: Noel Cressie (Advisor); Desheng Liu (Committee Member); Laura Kubatko (Committee Member) Subjects: Statistics
  • 7. EDWARDS, KARLA Site-Specific Point Positioning and GPS Code Multipath Parameterization and Prediction

    Doctor of Philosophy, The Ohio State University, 2011, Geodetic Science and Surveying

    Although many multipath mitigation techniques are currently implemented in GPS hardware and firmware, there remains a post-receiver-processed multipath error that plagues GPS observations stored in RINEX files. At static Continuously Operating Reference Station (CORS) sites, this multipath error exhibits sidereal periodicity by virtue of the repeated configuration of the GPS satellite relative to the ground-borne GPS antenna and its environment. In this study, this repeatability was capitalized upon to investigate the multipath signal (including its variability) generated at a highly multipath-prone SuomiNet CORS site (named SG03). The multipath was parameterized using a Fourier analysis and a wavelet analysis technique. The latter was found to capture in excess of 90% of the identified signal, outperforming the Fourier parameterization. This investigation of an optimal multipath modeling technique (in this case using a third-order, one-dimensional Daubechies wavelet) served as a precursor to the main focus of this study, namely, constructing an Integrated Point Position and Multipath Prediction (IPPMP) algorithm at a static CORS site. To this end, five different IPPMP algorithms were developed in MATLAB based on three types of prior-information models: the Extended Gauss-Markov Model, the Mixed Linear Model, and the Random Effects Model. While other wavelet-based mitigation schemes (such as the trademarked Wavesmooth technique) exist, the implementation in this study emphasized both the mitigation of multipath and the inclusion of multipath parameters in the IPPMP schemes developed. The results of this study suggest that use of an elevation-dependent variance model for the high-frequency wavelet parameters (called wavelet details) is insignificant to the position determination. It was confirmed that the use of the Random Effects Model is useful where the prior information is actually bias-prone, this (open full item for complete abstract)

    Committee: Dorota Grejner-Brzezinska PhD (Advisor); Alper Yilmaz PhD (Committee Member); Andria Bilich PhD (Committee Member) Subjects: Applied Mathematics; Civil Engineering; Engineering; Mathematics; Meteorology; Remote Sensing; Statistics
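
    For readers unfamiliar with the wavelet step in this entry, the sketch below shows the general idea with PyWavelets: decompose a synthetic multipath-like residual with a third-order Daubechies wavelet (db3 here) and check how much of its variance the retained coefficients capture. The signal, decomposition level, and retention rule are illustrative, not those of the dissertation.

```python
# Minimal wavelet-parameterization sketch using PyWavelets (synthetic residual,
# illustrative settings): decompose with db3 and keep only the approximation.
import numpy as np
import pywt

t = np.arange(0, 86164, 30.0)                       # one sidereal day at 30 s sampling
residual = (0.3 * np.sin(2 * np.pi * t / 6000)
            + 0.05 * np.random.default_rng(1).normal(size=t.size))

coeffs = pywt.wavedec(residual, 'db3', level=5)     # approximation + detail coefficients
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
model = pywt.waverec(approx_only, 'db3')[: residual.size]

captured = 1 - np.var(residual - model) / np.var(residual)
print(f"fraction of variance captured: {captured:.2f}")
```
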
  • 8. Wang, Chao Exploiting non-redundant local patterns and probabilistic models for analyzing structured and semi-structured data

    Doctor of Philosophy, The Ohio State University, 2008, Computer and Information Science

    This work seeks to develop a probabilistic framework for modeling, querying and analyzing large-scale structured and semi-structured data. The framework has three components: (1) Mining non-redundant local patterns from data; (2) Gluing these local patterns together by employing probabilistic models (e.g., Markov random field (MRF), Bayesian network); and (3) Reasoning over the data for solving various data analysis tasks. Our contributions are as follows: (a) We present an approach of employing probabilistic models to identify non-redundant itemset patterns from a large collection of frequent itemsets on transactional data. Our approach can effectively eliminate redundancies from a large collection of itemset patterns. (b) We propose a technique of employing local probabilistic models to glue non-redundant itemset patterns together in tackling the link prediction task in co-authorship network analysis. Our technique effectively combines topology analysis on network structure data and frequency analysis on network event log data. The main idea is to consider the co-occurrence probability of two end nodes associated with a candidate link. We propose a method of building MRFs over local data regions to compute this co-occurrence probability. Experimental results demonstrate that the co-occurrence probability inferred from the local probabilistic models is very useful for link prediction. (c) We explore employing global models, models over large data regions, to glue non-redundant itemset patterns together. We investigate learning approximate global MRFs on large transactional data and propose a divide-and-conquer style modeling approach. Empirical study shows that the models are effective in modeling the data and approximately answering queries on the data. (d) We propose a technique of identifying non-redundant tree patterns from a large collection of structural tree patterns on semi-structured XML data. Our approach can effectively eliminate redundancies from a larg (open full item for complete abstract)

    Committee: Srinivasan Parthasarathy (Advisor) Subjects: Computer Science
  • 9. Cothren, Jackson Reliability in constrained Gauss-Markov models: an analytical and differential approach with applications in photogrammetry

    Doctor of Philosophy, The Ohio State University, 2004, Geodetic Science and Surveying

    Reliability analysis explains the contribution of each observation in an estimation model to the overall redundancy of the model, taking into account the geometry of the network as well as the precision of the observations themselves. It is principally used to design networks resistant to outliers in the observations by making the outliers more detectable using standard statistical tests. It has been studied extensively, and principally, in Gauss-Markov models. We show how the same analysis may be extended to various constrained Gauss-Markov models and present preliminary work for its use in unconstrained Gauss-Helmert models. In particular, we analyze the prominent reliability matrix of the constrained model to separate the contribution of the constraints to the redundancy of the observations from that of the observations themselves. In addition, we make extensive use of matrix differential calculus to find the Jacobian of the reliability matrix with respect to the parameters that define the network through both the original design and constraint matrices. The resulting Jacobian matrix reveals the sensitivity of the reliability matrix elements, highlighting weak areas in the network where changes in observations may result in unreliable observations. We apply the analytical framework to photogrammetric networks in which exterior orientation parameters are directly observed by GPS/INS systems. Tie-point observations provide some redundancy, and even a few collinear tie-point and tie-point distance constraints improve the reliability of these direct observations by as much as 33%. Using the same theory we compare networks in which tie-points are observed on multiple images (n-fold points) and tie-points are observed in photo pairs only (two-fold points). Apparently, the use of two-fold tie-points does not significantly degrade the reliability of the direct exterior orientation observations. Coplanarity constraints added to the common two-fold points do not add significantly to the (open full item for complete abstract)

    Committee: Burkhard Schaffrin (Advisor) Subjects: Engineering, Civil; Geodesy
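
    As background for the reliability matrix discussed in this entry, here is the standard (unconstrained) Gauss-Markov redundancy computation on a made-up four-observation, two-parameter network. The dissertation's contribution, extending this analysis to constrained models and differentiating the reliability matrix with respect to the network parameters, is not shown here.

```python
# Textbook unconstrained Gauss-Markov redundancy numbers (reference point only).
import numpy as np

A = np.array([[1.0, 0.0],        # design matrix: 4 observations, 2 parameters
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
P = np.diag([1.0, 1.0, 2.0, 0.5])                           # observation weights

R = np.eye(4) - A @ np.linalg.inv(A.T @ P @ A) @ A.T @ P    # reliability (redundancy) matrix
r = np.diag(R)                                              # redundancy numbers, sum = n - m
print(np.round(r, 3), round(r.sum(), 3))                    # sums to 2 here
```
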
  • 10. Li, Xibao Learning From the Implementation of Residential Optional Time of Use Pricing in the U.S. Electricity Industry

    Doctor of Philosophy, The Ohio State University, 2003, Public Policy and Management

    Residential time-of-use (TOU) rates have been in practice in the U.S. since the 1970s. However, for institutional, political, and regulatory reasons, only a very small proportion of residential customers are actually on these schedules. In this thesis, I explore why this is the case by empirically investigating two groups of questions. (1) On the "supply" side: Do utilities choose to offer TOU rates in residential sectors on their own initiative if state commissions do not order them to do so? Since utilities have other options, what is the relationship between the TOU rate and other alternatives? To answer these questions, I survey residential tariffs offered by more than 100 major investor-owned utilities, study the impact of various factors on utilities' rate-making behavior, and examine utilities' revealed preferences among four rate options: seasonal rates, inverted block rates, demand charges, and TOU rates. The estimation results suggest that the scale of residential sectors and the revenue contribution from residential sectors are the only two significant factors that influence utility decisions on offering TOU rates. Technical and economic considerations are not statistically significant. This implies that the limited acceptance of TOU rates is partly attributable to utilities' inadequate attention to TOU rate design. (2) On the "demand" side: For utilities offering TOU tariffs, why do only a very small proportion of residential customers choose these tariffs? What factors influence customer choices? Unlike previous studies that used individual-level experimental data, this research employs actual aggregated information from 29 utilities offering optional TOU rates. By incorporating neo-classical demand analysis into an aggregated random coefficient logit model, I investigate the impact of both price and non-price tariff characteristics and non-tariff factors on customer choice behavior. The analysis indicates that customer pure tariff preference (which captures the e (open full item for complete abstract)

    Committee: Charles Adams (Advisor) Subjects:
  • 11. Leinbach, Josiah A Hidden Markov Approach to Authorship Attribution of the Pastoral Epistles

    Master of Science (MS), Bowling Green State University, 2024, Applied Statistics (ASOR)

    The New Testament contains thirteen epistles written in the name of the Apostle Paul, and from the earliest records of church history, Christian theologians received all thirteen as authentically Pauline. Since the 19th century, however, many scholars have doubted Paul's authorship of some epistles based on, among other factors, their vocabulary and writing style, which differ from undisputed Pauline epistles. In particular, three epistles called the Pastoral Epistles (1 Timothy, 2 Timothy, and Titus) have been subject to the most doubt. This thesis will use a Hidden Markov Model that analyzes the transitions between different parts of speech in the whole Pauline corpus and classifies sentences as belonging to a “Pauline” or “non-Pauline” style. Then, informed by New Testament scholarship, we will interpret these results and judge the possibility of Pauline authorship for the Pastoral Epistles.

    Committee: Shuchismita Sarkar (Committee Chair); Riddhi Ghosh (Committee Member); Christopher Rump (Committee Member) Subjects: Statistics
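
    A deliberately simplified stand-in for the method described in this entry: estimate part-of-speech transition probabilities from two reference corpora and classify a sentence's tag sequence by log-likelihood. The tag set and the tiny training sequences are invented, and the thesis uses a full hidden Markov model over the Pauline corpus rather than two plain Markov chains.

```python
# Simplified authorship sketch: score a sentence's part-of-speech tag sequence
# under "Pauline" vs. "non-Pauline" first-order transition models.
import numpy as np

TAGS = ["NOUN", "VERB", "ADJ", "CONJ", "PREP"]
IDX = {t: i for i, t in enumerate(TAGS)}

def fit_transitions(sentences, alpha=1.0):
    counts = np.full((len(TAGS), len(TAGS)), alpha)   # additive smoothing
    for sent in sentences:
        for a, b in zip(sent[:-1], sent[1:]):
            counts[IDX[a], IDX[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(sent, trans):
    return sum(np.log(trans[IDX[a], IDX[b]]) for a, b in zip(sent[:-1], sent[1:]))

pauline = fit_transitions([["NOUN", "VERB", "PREP", "NOUN"], ["CONJ", "VERB", "NOUN"]])
other = fit_transitions([["ADJ", "NOUN", "NOUN", "VERB"], ["NOUN", "ADJ", "CONJ", "ADJ"]])

test = ["NOUN", "VERB", "PREP", "NOUN"]
label = "Pauline" if log_likelihood(test, pauline) > log_likelihood(test, other) else "non-Pauline"
print(label)
```
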
  • 12. Fagbamigbe, Kehinde Examining Gender Equality in the United States Undergraduate Enrollment Using Hidden Markov Model

    Master of Science (MS), Bowling Green State University, 2023, Applied Statistics (ASOR)

    This thesis delves into the intricate analysis of multidimensional enrollment records spanning two and a half decades, covering various regions within U.S. academic institutions. Leveraging Hidden Markov Models (HMMs) with hypothesis testing, we seek to unearth underlying patterns and trends in higher education enrollment. The study commences with data preparation, involving the extraction and aggregation of data across institutions, majors, and genders over two and a half decades, resulting in a rich, 4-dimensional dataset. To streamline the analysis, the dataset is divided into nine geographic divisions. The results offer a granular view of enrollment trends, enabling institutions and policymakers to make informed decisions regarding diversity and education equality. This research bridges the gap in modelling matrix-variate data within the HMM framework, addressing the unique challenges posed by multidimensional datasets. By adopting this innovative approach, we contribute valuable insights to the field of higher education enrollment analysis, facilitating more inclusive and data-driven educational policies. This thesis comprises four chapters. The first chapter discusses the motivation and objectives of the thesis, and the second chapter introduces the HMM tailored for matrix-variate time series data. The third chapter demonstrates the application of this model with hypothesis testing using undergraduate enrollment data from US institutions, and the fourth chapter concludes with the key findings and the significance of this method.

    Committee: Shuchismita Sarkar Ph.D. (Committee Chair); Umar Islambekov Ph.D. (Committee Member); Yuhang Xu Ph.D. (Committee Member) Subjects: Education; Education Policy; Statistics
  • 13. Mehrani, Ramin Study of Protein-protein Interactions using Molecular Dynamics Simulation

    Doctor of Philosophy (PhD), Ohio University, 2022, Mechanical and Systems Engineering (Engineering and Technology)

    IRF3 dimerization is an important step during the innate immune signal transduction in the human body. However, abnormal dimerization of IRF3 has been linked to a number of diseases, including autoimmune diseases [46], diabetes [35], and cancer [73]. In this dissertation, I focus on understanding the molecular-level processes involved in the dimerization of two interferon regulatory factor 3 (IRF3) proteins. This investigation begins with a fundamental study in which I explore the role of water-mediated interactions in the association of the hydrophobic domains of the protein. I examine the role of hydrophobicity, flexibility, and density of the flexible side chains in systematic simulations using a highly simplified model. Next, I study molecular processes involved in IRF3 protein dimerization. For this study, I first calculate free energy along pathways associated with the association of two IRF3 chains and then determine the most stable dimeric configuration. I extend this investigation to determine how the stable dimeric configuration of IRF3 changes with its phosphorylation state by studying phosphorylation at six different serine residues. This work shows that phosphorylation of different residues of IRF3 protein can significantly change the stable dimeric configuration. Finally, I estimate the protein-ligand binding affinity via Markov state analysis applied to fully atomistic simulations. I apply this technique to study the binding affinities of six different ligands for the ATP binding site of Glycogen synthase kinase-3β (GSK-3β). The binding affinity prediction from Markov state analysis is in strong agreement with the experimental results. My MD simulations show that the ligands that have a higher propensity of attaining a compact conformation due to the stacking of their aromatic groups also show stronger inhibition of GSK-3β activity.

    Committee: Sumit Sharma (Advisor) Subjects: Bioinformatics; Biomedical Engineering; Biomedical Research; Biophysics; Biostatistics; Chemical Engineering; Chemistry; Pharmaceuticals; Physical Chemistry
  • 14. Rook, Jayson Detecting Anomalous Behavior in Radar Data

    Master of Science, Miami University, 2021, Computational Science and Engineering

    This project investigates and applies anomaly detection algorithms for a wideband receiver that flag anomalous radar behaviors sent by a transmitter external to the receiver's system. Flagging these anomalies indicates to the receiver system that the radar's behavior has been reprogrammed, knowledge that is important for determining optimal countermeasures. The programs developed investigate several approaches to accomplish this. First, clustering methods like DBSCAN can group the observed pulses into distinct classes, reducing the problem to finding disruptions in patterns of numerical labels. Semi-supervised techniques like Hidden Markov Models and Long Short-Term Memory networks can be applied to learn these patterns for normal behavior and flag anomalies where the patterns are broken. Lastly, an unsupervised technique based on cross-correlations takes the alternative approach of flagging all the different behaviors in a sequence, without any initial training. Simulation results on test data demonstrate the functionality of these techniques, which are offered as potential suggestions for implementation in a real system.

    Committee: Chi-Hao Cheng Ph.D. (Advisor); Dmitriy Garmatyuk Ph.D. (Committee Member); Mark Scott Ph.D. (Committee Member) Subjects: Computer Engineering; Computer Science; Electrical Engineering; Remote Sensing
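
    The clustering stage described in this entry can be sketched briefly: cluster pulse descriptors with DBSCAN and flag positions where the resulting label sequence departs from the locally dominant label. The pulse features, noise levels, and window size below are invented for illustration and are not taken from the thesis.

```python
# Toy sketch: DBSCAN over pulse descriptors, then flag label-sequence disruptions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
normal = rng.normal([1000.0, 1.0], [5.0, 0.02], size=(200, 2))   # (PRI [us], width [us])
switched = rng.normal([1500.0, 2.0], [5.0, 0.02], size=(20, 2))  # "reprogrammed" behavior
pulses = np.vstack([normal[:150], switched, normal[150:]])

labels = DBSCAN(eps=20.0, min_samples=5).fit_predict(pulses)

window = 25
flags = []
for i in range(len(labels)):
    lo = max(0, i - window)
    dominant = np.bincount(labels[lo:i + 1] + 1).argmax() - 1    # shift so noise (-1) counts
    flags.append(labels[i] != dominant)
print(np.where(flags)[0][:10])   # indices flagged as anomalous label changes
```
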
  • 15. Do, Hoan Parameter Recovery for the Four-Parameter Unidimensional Binary IRT Model: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Approaches

    Doctor of Philosophy (PhD), Ohio University, 2021, Educational Research and Evaluation (Education)

    This study assesses the parameter recovery accuracy of MML and two MCMC methods, Gibbs and HMC, under the four-parameter unidimensional binary item response function. Data were simulated under a fully crossed design with three sample size levels (1,000, 2,500 and 5,000 respondents) and two types of latent trait distribution (normal and negatively skewed). Results indicated that, in general, MML was affected more substantially by latent trait skewness but also benefited more strongly from increases in sample size than MCMC. The two MCMC methods remained advantageous, with lower RMSE of item parameter recovery across all conditions under investigation, but increasing the sample size correspondingly narrowed the gap between MML and MCMC regardless of the latent trait distribution. Gibbs and HMC provided nearly identical outcomes across all conditions, and no considerable difference between the two MCMC methods was detected. Specifically, when θs were generated from a normal distribution, MML and MCMC estimated the b, c and d parameters with little mean bias, even at N = 1,000. Estimates of the a parameter were positively biased for MML and negatively biased for MCMC, and mean bias by all methods was considerably large in absolute value (> 0.10) even at N = 5,000. MML item parameter recovery became less biased than Gibbs and HMC at N = 5,000. Under normal θ, all methods consistently improved RMSE of item parameter recovery as the sample size increased, except for MCMC estimation of the c parameter, which did not exhibit a clear trend. When latent trait scores were skewed to the left, there was a concomitant deterioration in the quality of item parameter recovery by both MML and MCMC generally. Under skewed θ, MML's total errors of item parameter recovery diminished as more examinees took a test, yet increasing the sample size did not appear to reduce mean bias. Indeed, MML became increasingly negatively biased in estimation of the d param (open full item for complete abstract)

    Committee: Gordon Brooks PhD (Committee Chair); Bruce Carlson PhD (Committee Member); Adah Ward Randolph PhD (Committee Member); Diaz Sebastian PhD (Committee Member) Subjects: Educational Psychology; Educational Tests and Measurements; Quantitative Psychology; Statistics
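
    For reference, this is the four-parameter logistic item response function at the center of this study, with a minimal response simulation of the sort used in such recovery designs; the item parameter values below are arbitrary.

```python
# 4PL item response function and a tiny simulated response vector (illustrative values).
import numpy as np

def p_correct(theta, a, b, c, d):
    """4PL: discrimination a, difficulty b, lower asymptote c, upper asymptote d."""
    return c + (d - c) / (1 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(0)
theta = rng.normal(size=2500)                     # latent traits, "normal" condition
a, b, c, d = 1.2, 0.0, 0.15, 0.95                 # one item's parameters (made up)
responses = rng.binomial(1, p_correct(theta, a, b, c, d))
print(responses.mean().round(3))
```
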
  • 16. Han, Teawon An Online Evolving Method and Framework for Optimal Decision-Making in Reinforcement Learning-based Automated Vehicle Control Systems

    Doctor of Philosophy, The Ohio State University, 2020, Electrical and Computer Engineering

    Automated Vehicles (AVs) must choose optimal driving actions under various traffic situations. However, designing their decision-making systems is challenging because it is impossible to anticipate all possible traffic situations. Previous studies proposed handcrafting decision-making rules for expected situations or training an approximate model using a limited amount of data to enable AVs to make proper decisions. However, these approaches could not recognize unexpected situations, so their decision-making was incomplete. Although the model-free Reinforcement-Learning (RL) approach is gaining attention as an alternative method to address these limitations, the RL controller's performance is highly dependent on the selection of the reward function and the simulation data. Given the challenge of hand-crafting a perfect set of decision-making rules or reward functions, a novel online evolving method, named the evolving Finite State Machine (e-FSM), is proposed to develop an optimal Markov driving model from scratch. Although the Markov driving model has the same properties as the standard Markov model, its states and transition dynamics evolve by determining new states and identifying transition dynamics in real time, without human supervision. To implement the Markov model evolved by the e-FSM, two approaches have been studied for directly and indirectly supporting the RL controller's decision-making. First, an Online Evolving Framework (OEF), which consists of an RL controller, the e-FSM, and an action-reviser, is proposed. In the framework, the RL controller learns a Q-function (and control policy) given a reward function and returns the optimal actions in various situations while the e-FSM develops a Markov driving model. The action-reviser checks the validity of the RL controller's actions using the recently evolved Markov driving model. If the chosen action is invalid, the action-reviser explores and applies an alternative action instead of the RL controller's cho (open full item for complete abstract)

    Committee: Umit Ozguner (Advisor); Keith Redmill (Committee Member); Yingbin Liang (Committee Member); Dimitar Filev (Committee Member) Subjects: Artificial Intelligence; Computer Engineering; Electrical Engineering; Robotics
  • 17. Sysoeva, Viktoriia Hidden Markov Model-Supported Machine Learning for Condition Monitoring of DC-Link Capacitors

    Master of Science, Miami University, 2020, Computational Science and Engineering

    Power electronics are critical components in society's modern infrastructure. In electrified vehicles and aircraft, losing power jeopardizes personal safety and incurs financial penalties. Because of these concerns, many researchers explore condition monitoring (CM) methods that provide real-time information about a system's health. This thesis develops a CM method that determines the health of a DC-link capacitor in a three-phase inverter. The approach uses measurements from a current transducer in two Machine Learning (ML) algorithms, a Support Vector Machine (SVM) and an Artificial Neural Network (ANN), that classify the data into groups corresponding to the capacitor's health. This research evaluates six sets of data: time-domain, frequency-domain, and frequency-domain data subjected to four smoothing filters: the moving average with a rectangular window (MARF) and with a Hanning window, locally weighted linear regression, and the Savitzky-Golay filter. The results show that both ML algorithms estimate the DC-link capacitor's health, with the highest accuracies being 91.8% for the SVM and 90.7% for the ANN. The MARF-smoothed data is an optimal input data type for the ML classifiers due to its low computational cost and high accuracy. Additionally, a Hidden Markov Model increases the classification accuracy to as much as 98% when utilized with the ANN.

    Committee: Mark Scott Dr. (Advisor); Chi-Hao Cheng Dr. (Committee Member); Peter Jamieson Dr. (Committee Member) Subjects: Computer Engineering; Electrical Engineering
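
    A compact illustration of the pipeline described in this entry, with synthetic spectra standing in for the inverter current measurements: smooth each spectrum with a rectangular-window moving average (the MARF step) and classify capacitor health with an SVM. Class definitions, feature construction, and hyperparameters are placeholders, and the HMM post-processing stage is omitted.

```python
# Illustrative MARF-smoothing + SVM classification on synthetic spectra.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def moving_average(x, w=9):
    return np.convolve(x, np.ones(w) / w, mode="same")

rng = np.random.default_rng(0)
freqs = np.linspace(0, 1, 256)

def spectrum(health):            # pretend capacitance loss shifts/raises a resonance peak
    peak = np.exp(-((freqs - (0.3 + 0.05 * health)) ** 2) / 0.002)
    return (1 + 0.5 * health) * peak + 0.2 * rng.normal(size=freqs.size)

X = np.array([moving_average(spectrum(h)) for h in range(3) for _ in range(100)])
y = np.repeat([0, 1, 2], 100)                     # 0 = healthy ... 2 = degraded

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=10.0).fit(Xtr, ytr)
print(f"accuracy: {clf.score(Xte, yte):.2f}")
```
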
  • 18. Weng, Huibin A Social Interaction Model with Endogenous Network Formation

    PhD, University of Cincinnati, 2020, Business: Business Administration

    Individuals' behaviors, attitudes, or efforts are often influenced by peers through social interactions. Spatial econometrics is one of the most popular statistical tools used to evaluate the effect of social networks on socioeconomic outcomes. However, real-world networks often violate the exogeneity assumption of the adjacency matrix representing network relationships. My dissertation relaxes this assumption and focuses on the mechanism behind social network formation. Whereas the first two essays rely on undirected networks, the third essay develops new network statistics for directed networks. The first essay extends the Double Metropolis-Hastings algorithm to include latent unobserved nodal heterogeneity in order to estimate the impact of network formation on individual outcomes. The second essay applies this method to the Add Health dataset to estimate the impact of peer effects on students' academic performance. The third essay tests for network misspecification by introducing new higher-order configurations that combine individual attributes with purely structural network effects. Essay 1: We extend the dyadic-dependent Exponential Random Graph Model (ERGM) to include observed agent characteristics and unobserved agent-level heterogeneity in order to properly model the mechanism behind link formation. Monte Carlo simulations are designed to compare our proposed Double Metropolis-Hastings estimation procedure with traditional logistic regression. Results show that omitting higher-order dependence statistics or unobserved individual heterogeneity might lead to biased estimates. Essay 2: An empirical illustration using friendship networks across 12 US high schools is discussed in the second essay. A higher-order spatial Durbin model (SDM) is implemented in order to capture within- and across-grade-level peer effects on students' academic performance in four subjects. The estimation results reveal that peer effects are strongly significant across (open full item for complete abstract)

    Committee: Olivier Parent Ph.D. (Committee Chair); Bledar Konomi Ph.D. (Committee Member); James LeSage Ph.D. (Committee Member); Jeffrey Mills Ph.D. (Committee Member) Subjects: Economics
  • 19. Jin, Chao A Sequential Process Monitoring Approach using Hidden Markov Model for Unobservable Process Drift

    MS, University of Cincinnati, 2015, Engineering and Applied Science: Mechanical Engineering

    In the field of prognostics and health management, process monitoring is an essential technique for equipping a system with the intelligence to be “aware” of any faults. Owing to tool fatigue, upstream material variation, and electronic component drift, machine characteristics will often shift from their initial states. As a result, sensor signals collected from the same equipment will possess varying correlation structures and offsets in their distributions, even if the health condition does not change. To build an effective data-driven process monitoring model, the constructed model has to be able to robustly differentiate the drifting healthy states from faulty conditions. In this thesis, a sequential process monitoring approach using a hidden Markov model is proposed to overcome the influence of such drifts. During the training stage, a discrete hidden Markov model is constructed using only healthy-condition data. A health threshold is determined based on the deviation of the normal-condition health index, defined as the normalized slope of the negative log-likelihood. During the monitoring stage, the health index of a new process from the same machine is calculated; faults are detected when the metric exceeds the threshold. The developed approach has been validated using a case study of a semiconductor etching process, and its results are benchmarked against both global and regime-specific local models based on principal component analysis and self-organizing maps.

    Committee: Jay Lee Ph.D. (Committee Chair); J. Kim Ph.D. (Committee Member); David Thompson Ph.D. (Committee Member) Subjects: Mechanics
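
    A rough sketch of the monitoring scheme in this entry, using hmmlearn's GaussianHMM as a stand-in for the discrete HMM in the thesis and a per-sample negative log-likelihood (rather than its normalized slope) as the health index; the data and threshold rule are illustrative only.

```python
# Train an HMM on healthy data only, then threshold a negative-log-likelihood
# health index on new windows (GaussianHMM used here as a stand-in).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
healthy = rng.normal(0, 1, size=(600, 3))               # healthy sensor windows (synthetic)
model = GaussianHMM(n_components=3, covariance_type="diag",
                    n_iter=50, random_state=0).fit(healthy)

def health_index(window):
    return -model.score(window) / len(window)            # negative log-likelihood per sample

baseline = [health_index(healthy[i:i + 50]) for i in range(0, 550, 50)]
threshold = np.mean(baseline) + 3 * np.std(baseline)

faulty = rng.normal(1.5, 2.0, size=(50, 3))              # drifted/faulty window (synthetic)
print(health_index(healthy[:50]) > threshold,
      health_index(faulty) > threshold)                  # the faulty window should be flagged
```
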
  • 20. Chabot, John VALIDATING STEADY TURBULENT FLOW SIMULATIONS USING STOCHASTIC MODELS

    Master of Science, Miami University, 2015, Computational Science and Engineering

    Proper Orthogonal Decomposition was heralded as an objective means of extracting coherent structures from turbulent flows. Prior to its introduction, coherent structures were subjectively defined and discussed among experts. Since its introduction, many reduced order models have been developed with inconsistent and often flow-dependent validation procedures. This work sets up a framework for a data-driven approach to the validation of reduced order models derived from steady turbulent flows. Here it is proposed that the 'goodness' of a model can be scored by how similarly experimental and simulated data move through the model space. This is achieved by generating a Markov model for both data sets, using clustering techniques and maximum likelihood estimates. Results show that increasing scores correlate with improved turbulent kinetic energy and modal amplitude for 3 data sets and 14 models. Additionally, the generation of a surrogate Markov model can be used to identify missing dynamics in a simulation.

    Committee: Edgar Caraballo Dr. (Advisor); Mehdi Safari Dr. (Committee Member); Andrew Sommers Dr. (Committee Member) Subjects: Fluid Dynamics; Mathematics; Statistics
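
    Complementary to the sketch under item 1: once both data sets are assigned to the same cluster states, one can compare how they move through those states. The similarity score below (mean log-probability of the simulated transition sequence under the experimental Markov model) is an illustrative choice, not the thesis metric, and the label sequences are random stand-ins.

```python
# Score how similarly two labeled sequences move through a shared Markov state space.
import numpy as np

def transition_matrix(labels, n_states):
    counts = np.ones((n_states, n_states))          # additive smoothing
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sequence_score(labels, trans):
    """Mean log-probability of observed transitions under a Markov model."""
    return np.mean([np.log(trans[a, b]) for a, b in zip(labels[:-1], labels[1:])])

rng = np.random.default_rng(0)
exp_labels = rng.integers(0, 4, size=400)           # stand-in for clustered experimental data
sim_labels = rng.integers(0, 4, size=400)           # stand-in for clustered simulation output

P_exp = transition_matrix(exp_labels, 4)
print(round(sequence_score(sim_labels, P_exp), 3))  # higher (less negative) = more similar dynamics
```
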