Search Results

(Total results 8)

  • 1. Do, Hoan Parameter Recovery for the Four-Parameter Unidimensional Binary IRT Model: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Approaches

    Doctor of Philosophy (PhD), Ohio University, 2021, Educational Research and Evaluation (Education)

    This study assesses the parameter recovery accuracy of MML and two MCMC methods, Gibbs and HMC, under the four-parameter unidimensional binary item response function. Data were simulated under a fully crossed design with three sample size levels (1,000, 2,500, and 5,000 respondents) and two latent trait distributions (normal and negatively skewed). Results indicated that, in general, MML was more strongly affected by latent trait skewness but also benefited more from sample size increases than MCMC. The two MCMC methods remained advantageous, with lower RMSE of item parameter recovery across all conditions under investigation, but increasing the sample size narrowed the gap between MML and MCMC regardless of the latent trait distribution. Gibbs and HMC provided nearly identical outcomes across all conditions, and no considerable difference between the two MCMC methods was detected. Specifically, when θ values were generated from a normal distribution, MML and MCMC estimated the b, c, and d parameters with little mean bias, even at N = 1,000. Estimates of the a parameter were positively biased for MML and negatively biased for MCMC, and the mean bias of all methods was considerably large in absolute value (> 0.10) even at N = 5,000. MML item parameter recovery became less biased than Gibbs and HMC at N = 5,000. Under normal θ, all methods consistently improved the RMSE of item parameter recovery as sample size increased, except for MCMC estimation of the c parameter, which did not exhibit a clear trend. When latent trait scores were skewed to the left, item parameter recovery generally deteriorated for both MML and MCMC. Under skewed θ, the total error of MML item parameter recovery diminished as more examinees took a test, yet the increased sample size did not appear to reduce mean bias. Indeed, MML became increasingly negatively biased in estimation of the d parameter (open full item for complete abstract)

    Committee: Gordon Brooks PhD (Committee Chair); Bruce Carlson PhD (Committee Member); Adah Ward Randolph PhD (Committee Member); Diaz Sebastian PhD (Committee Member) Subjects: Educational Psychology; Educational Tests and Measurements; Quantitative Psychology; Statistics
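
    For context, the four-parameter model studied here extends the 3PL by adding an upper asymptote d (slipping) to the lower asymptote c (guessing). A minimal sketch of simulating binary responses under this item response function; the parameter values are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b, c, d):
    """Four-parameter logistic (4PL) item response function:
    c is the lower asymptote (guessing), d the upper asymptote (slipping)."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

# Illustrative item parameters and a normal latent trait (N = 1,000 respondents).
n_items, n_persons = 20, 1000
a = rng.lognormal(0.0, 0.3, n_items)        # discrimination
b = rng.normal(0.0, 1.0, n_items)           # difficulty
c = rng.uniform(0.05, 0.25, n_items)        # guessing
d = rng.uniform(0.85, 0.98, n_items)        # slipping (upper asymptote)
theta = rng.normal(0.0, 1.0, n_persons)     # normal latent trait condition

probs = p_correct(theta[:, None], a, b, c, d)   # shape (persons, items)
responses = rng.binomial(1, probs)              # simulated binary response data
```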
  • 2. Thapa, Badal Reliability Analysis of Linear Dynamic Systems by Importance Sampling-Separable Monte Carlo Technique

    Master of Science, University of Toledo, 2020, Mechanical Engineering

    For many problems, especially nonlinear systems, reliability assessment must be done in the time domain. Monte Carlo simulation (MCS) can accurately assess the reliability of a system, but its computational cost is prohibitively high for complex dynamic systems. The Importance Sampling (IS) method is more efficient than standard MCS for reliability assessment, and has been applied to dynamic systems when the excitation is defined by a Power Spectral Density (PSD) function. The central idea of IS is to generate sample time histories using a sampling PSD and to attach a likelihood ratio to each replication, yielding an unbiased estimator of the probability of failure. Another method more efficient than MCS for the reliability assessment of dynamic systems is the Separable Monte Carlo (SMC) method, which has so far been applied to linear dynamic systems as follows: frequencies are drawn from the PSD of the excitation, the system response to each frequency is calculated, and the responses are stored in a database. The stored frequencies and their respective responses are then chosen randomly, with replacement, for each replication to find the system response to the linear combination of the respective sinusoidal functions. SMC can therefore assess the reliability of the system given a proper database, whose size depends on the shape of the PSD function and the complexity of the system. This research proposes a new method that combines IS with SMC to assess the reliability of linear dynamic systems: a database formed using a sampling PSD is used to estimate the reliability of the system for the true spectrum. The proposed method is more efficient than either IS or SMC individually in terms of both computational time and accuracy, and is demonstrated using a 10-bar truss.

    Committee: Mohammad Elahinia (Committee Chair); Mahdi Norouzi (Committee Co-Chair); Shawn P. Capser (Committee Member) Subjects: Mechanical Engineering
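
    The likelihood-ratio idea at the heart of IS can be shown on a toy static example (the thesis's PSD-based time-history implementation is more involved); the densities and threshold below are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy illustration of the IS likelihood ratio: estimate the rare-event
# probability P(X > 4) for X ~ N(0, 1).
threshold = 4.0
n = 10_000

f = stats.norm(0.0, 1.0)          # true density
g = stats.norm(threshold, 1.0)    # sampling density shifted toward failure

x = g.rvs(size=n, random_state=rng)
weights = f.pdf(x) / g.pdf(x)                 # likelihood ratio per replication
pf_is = np.mean((x > threshold) * weights)    # unbiased IS estimator

print(f"IS estimate: {pf_is:.2e}  exact: {f.sf(threshold):.2e}")
```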
  • 3. Gao, Yong A Degradation-based Burn-in Optimization for Light Display Devices with Two-phase Degradation Patterns considering Warranty Durations and Measurement Errors

    Master of Science (MS), Ohio University, 2017, Industrial and Systems Engineering (Engineering and Technology)

    This thesis plans a degradation-based burn-in test for light display devices with two-phase degradation patterns using a Bayesian approach. The main goal of the burn-in test in this study is to eliminate the initial rapid degradation phase, and a hierarchical Bayesian bi-exponential model is proposed to describe the two-phase degradation patterns of the burn-in population. Measurement uncertainty is an important factor when observing the degradation of the burn-in population, since the observed degradation path does not coincide with the expected one. Warranty duration can also affect the optimal burn-in decisions. Mission reliability and expected cost criteria are considered, using the available pre-burn-in data of a plasma display panel (PDP) example. To make the optimal burn-in decision, a cost optimization (minimization) model is developed.

    Committee: Tao Yuan (Advisor); Gary Weckman (Committee Member); Diana Schwerha (Committee Member); Andy Snow (Committee Member) Subjects: Industrial Engineering
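
    A sketch of what a two-phase degradation path might look like under an assumed bi-exponential form, with a fast-decaying initial term and a slow-decaying stable term plus measurement error; the thesis's exact hierarchical parameterization and values may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

def biexp_path(t, A, alpha, B, beta):
    """Assumed bi-exponential mean degradation path: a fast-decaying term
    (initial rapid phase) plus a slow-decaying term (stable phase)."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

t = np.linspace(0, 500, 51)                 # observation times (illustrative)
true_path = biexp_path(t, A=20.0, alpha=0.05, B=80.0, beta=0.0005)

# Observed path = expected path + measurement error, reflecting the
# abstract's point that the observed and expected paths differ.
sigma_meas = 1.5
observed = true_path + rng.normal(0.0, sigma_meas, t.shape)
```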
  • 4. Gassama, Edrissa PIEZOELECTRIC INVERSE PROBLEMS WITH RESONANCE DATA: A SEQUENTIAL MONTE CARLO ANALYSIS

    Doctor of Philosophy, Case Western Reserve University, 2014, Applied Mathematics

    Piezoelectricity is a property of certain materials that allows the conversion of mechanical deformation into electric voltage potential, and vice versa. The wide use of piezoelectric materials, e.g., in transducer technology and energy harvesting, makes the design problem of optimizing the material parameters and geometry an important target in scientific computing. In energy harvesting in particular, the design of devices with impedance resonances in a predetermined range is of special interest: matching the resonances with the ambient vibration frequencies may potentially lead to higher efficiency of the device. Material scientists can rely on numerical simulations in the design and production of piezoelectric devices. Numerical simulations employ techniques such as finite element methods to generate information about the design starting from an input of material parameters; in the context of this thesis, these material parameters include the elastic, electromagnetic, and piezoelectric constants. Because the quantitative values of the material parameters are often determined from simplified experiments based on some assumptions, the reliability of the simulation results depends on the validity of these assumptions. Optimization-based approaches to the numerical acquisition of the material parameters normally give a single set of values which, in turn, identifies one specific material as the approximation of the target. From a practical point of view this may be too restrictive, because it leaves little flexibility when trying to develop materials with a certain desired response. In this thesis we approach the inverse problem of material characterization for piezoelectric materials from a Bayesian perspective. The main question addressed in this thesis is how to choose the elastic, electromagnetic, and piezoelectric material parameters so that a target resonance frequency is achieved, and the band-pass impedance response outside the resonance (open full item for complete abstract)

    Committee: Erkki Somersalo Dr (Advisor); Daniela Calvetti Dr (Advisor) Subjects: Applied Mathematics; Materials Science; Mathematics
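
    A compact sketch of sequential Monte Carlo for a static inverse problem, using likelihood tempering from prior to posterior; the stand-in forward model below replaces the thesis's finite element solve, and the move step is a crude jitter rather than a full MCMC kernel:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward(m):
    """Stand-in forward model (the real one would be a FEM solve mapping
    material parameters to an impedance resonance)."""
    return m ** 2

data, sigma = 4.0, 0.5          # observed resonance-like quantity, noise std

def loglike(m):
    return -0.5 * ((forward(m) - data) / sigma) ** 2

# SMC with likelihood tempering: move particles from prior to posterior.
n = 5000
particles = rng.normal(0.0, 3.0, n)        # samples from the prior
betas = np.linspace(0.0, 1.0, 11)          # tempering schedule

for b0, b1 in zip(betas[:-1], betas[1:]):
    logw = (b1 - b0) * loglike(particles)  # incremental importance weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles = particles[rng.choice(n, size=n, p=w)]   # resample
    particles += rng.normal(0.0, 0.1, n)   # crude jitter as the move step

print("posterior mean of |m|:", np.mean(np.abs(particles)))  # target near 2
```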
  • 5. Green, Robert Novel Computational Methods for the Reliability Evaluation of Composite Power Systems using Computational Intelligence and High Performance Computing Techniques

    Doctor of Philosophy in Engineering, University of Toledo, 2012, College of Engineering

    The probabilistic reliability evaluation of power systems is a complex and highly dimensional problem that often requires a large amount of computational resources, particularly processing power and time. The complexity of this problem is only increasing with the advent of the smart grid and its accompanying technologies, such as plug-in hybrid electric vehicles (PHEVs). Such technologies, while adding convenience and intelligence and reducing environmental impacts, also add dynamic and stochastic loads that challenge the current reliability and security of the power grid. One method often used to evaluate the reliability of power systems is Monte Carlo simulation (MCS). As the complexity and dimensionality of a power system grow, MCS requires more and more resources, leading to longer computational times. Multiple methods have previously been developed to reduce the computational resources necessary for MCS, achieving more efficient and timely convergence while continuing to accurately assess the reliability of a given system. Examples include analytical state space decomposition, population based metaheuristic algorithms (PBMs), and the use of high performance computing (HPC). In order to address these issues, this dissertation focuses on improving the performance of algorithms used to examine the level of reliability in composite power systems through the use of computational intelligence (CI) and HPC, while also investigating the impact of PHEVs on the power grid at the composite and distribution levels. Contributions include the development and exploration of three variations of a new hybrid algorithm called intelligent state space pruning (ISSP), which combines PBMs with non-sequential MCS to intelligently decompose, or prune, a given state space and improve computational efficiency; an evaluation of the use of Latin hypercube sampling and low discrepancy sequences in place of MCS; and the use of serial and parallel support vector (open full item for complete abstract)

    Committee: Lingfeng Wang Ph.D. (Committee Chair); Mansoor Alam Ph.D. (Committee Co-Chair); Jackson Carvalho Ph.D. (Committee Member); Vijay Devabhaktuni Ph.D. (Committee Member); Mohsin Jamali Ph.D. (Committee Member); Weiqing Sun Ph.D. (Committee Member) Subjects: Artificial Intelligence; Computer Science; Electrical Engineering
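
    A minimal sketch of the non-sequential MCS baseline that methods like ISSP aim to accelerate: sample independent component states from forced outage rates and estimate the loss-of-load probability. The generating units and load below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented generating units: capacity (MW) and forced outage rate (FOR).
capacity = np.array([200, 200, 150, 150, 100, 100, 50, 50])
outage_rate = np.array([0.05, 0.05, 0.08, 0.08, 0.10, 0.10, 0.12, 0.12])
load = 700.0                      # constant system load (MW)

n = 200_000
# Non-sequential MCS: each replication samples an independent system state.
up = rng.random((n, len(capacity))) > outage_rate   # True = unit available
available = (up * capacity).sum(axis=1)             # available capacity per state
lolp = np.mean(available < load)                    # loss-of-load probability

print(f"LOLP ≈ {lolp:.4f}")
```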
  • 6. Grabaskas, David Efficient Approaches to the Treatment of Uncertainty in Satisfying Regulatory Limits

    Doctor of Philosophy, The Ohio State University, 2012, Nuclear Engineering

    Utilities operating nuclear power plants in the United States are required to demonstrate that their plants comply with the safety requirements set by the U.S. Nuclear Regulatory Commission (NRC). How to show adherence to these limits through the use of computer code surrogates is not always straightforward, and different techniques have been proposed and approved by the regulator. The issue of compliance with regulatory limits is examined by rephrasing the problem in terms of hypothesis testing. Using this more rigorous framework, guidance is proposed for choosing techniques that increase the probability of arriving at the correct conclusion of the analysis. The findings of this study show that the most straightforward way to achieve this goal is to reduce the variance of the output of the computer code experiments. By analyzing different variance reduction techniques, and different methods of satisfying the NRC's requirements, recommendations can be made about best practices that would result in a more accurate and precise result. This study began with an investigation into the point estimate of the 0.95-quantile using traditional sampling methods and new orthogonal designs. From there, new work on how to establish confidence intervals for the outputs of experiments designed using variance reduction techniques was compared to current, regulator-approved methods. Lastly, a more direct interpretation of the regulator's probability requirement was used, and confidence intervals were established for the probability of exceeding a safety limit. From there, efforts were made to combine methods in order to take advantage of the positive aspects of different techniques. The results of this analysis show that these variance reduction techniques can provide a more accurate and precise result compared to current methods. This means an increased probability of arriving at the correct conclusion, and a more accurate characterization of the risk associated with even (open full item for complete abstract)

    Committee: Tunc Aldemir PhD (Advisor); Richard Denning PhD (Committee Member); Marvin Nakayama PhD (Committee Member); Alper Yilmaz PhD (Committee Member) Subjects: Nuclear Engineering; Statistics
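
    The traditional regulator-approved approach to a 0.95-quantile requirement is commonly Wilks' nonparametric order-statistic method (the abstract does not name it, so this is an assumption); a sketch of the sample-size calculation and the resulting one-sided tolerance bound, with stand-in code outputs:

```python
import numpy as np

rng = np.random.default_rng(5)

def wilks_n(beta=0.95, gamma=0.95):
    """Smallest n such that the sample maximum is a one-sided upper
    tolerance bound covering the beta-quantile with confidence gamma,
    i.e. 1 - beta**n >= gamma."""
    n = 1
    while 1.0 - beta ** n < gamma:
        n += 1
    return n

n = wilks_n()                              # classic result: n = 59 for 95/95
# Stand-in for n runs of a safety-analysis computer code.
outputs = rng.normal(1200.0, 40.0, n)      # e.g. a peak temperature figure (K)
bound_9595 = outputs.max()                 # 95/95 upper tolerance bound

print(f"n = {n}, 95/95 bound = {bound_9595:.1f} K")
```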
  • 7. Chen, Wen-shiang Bayesian estimation by sequential Monte Carlo sampling for nonlinear dynamic systems

    Doctor of Philosophy, The Ohio State University, 2004, Chemical Engineering

    Precise estimation of state variables and model parameters is essential for efficient process operation, including model predictive control, abnormal situation management, and decision making under uncertainty. The Bayesian formulation of the estimation problem suggests a general solution for all types of systems. Even though the theory of Bayesian estimation of nonlinear dynamic systems has been available for decades, practical implementation has not been feasible due to computational and methodological challenges. Consequently, most existing methods rely on simplifying assumptions to obtain a tractable but approximate solution. For example, extended Kalman filtering (EKF) linearizes the process model and assumes Gaussian prior and noise. Moving horizon based least-squares estimation (MHE) also assumes Gaussian or other fixed-shape prior and noise to obtain a least-squares optimization problem. MHE can impose constraints, but is non-recursive and requires computationally expensive nonlinear or quadratic programming. This dissertation introduces sequential Monte Carlo sampling (SMC) for Bayesian estimation of chemical process systems. This recent approach approximates computationally expensive integration by recursive Monte Carlo sampling, and can obtain accurate estimates of posterior distributions efficiently with minimal assumptions. This approach has not been systematically compared with estimation methods popular for chemical processes, including EKF and MHE. In addition to comparing various estimation methods, this dissertation develops a practical framework of SMC for handling process constraints based on an acceptance/rejection algorithm. Furthermore, a novel empirical Bayes approach is developed to deal with practical challenges due to degeneracy and a poor initial guess. The ability of the proposed approach to be more computationally efficient than, and at least as accurate as, MHE is demonstrated via several case studies. A high-dimensional polymerization process (open full item for complete abstract)

    Committee: Bhavik Bakshi (Advisor) Subjects: Engineering, Chemical
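
    A minimal bootstrap particle filter, one standard form of SMC, on a simplified scalar nonlinear benchmark; this is a generic sketch, not the dissertation's constrained or empirical Bayes variants:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simplified scalar nonlinear benchmark:
#   x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1 + x_{t-1}**2) + v,   y_t = x_t**2/20 + w
T, n = 50, 2000
sig_v, sig_w = 1.0, 1.0

x_true, y = 0.0, []
for _ in range(T):
    x_true = 0.5 * x_true + 25 * x_true / (1 + x_true**2) + rng.normal(0, sig_v)
    y.append(x_true**2 / 20 + rng.normal(0, sig_w))

particles = rng.normal(0.0, 2.0, n)        # samples from the initial prior
estimates = []
for t in range(T):
    # Propagate through the state equation (the "bootstrap" proposal).
    particles = (0.5 * particles + 25 * particles / (1 + particles**2)
                 + rng.normal(0, sig_v, n))
    # Weight by the likelihood of the new observation.
    logw = -0.5 * ((y[t] - particles**2 / 20) / sig_w) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    estimates.append(np.sum(w * particles))       # posterior mean estimate
    particles = particles[rng.choice(n, n, p=w)]  # resample to fight degeneracy
```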
  • 8. Skrivanek, Zachary Sequential Imputation and Linkage Analysis

    Doctor of Philosophy, The Ohio State University, 2002, Statistics

    Multilocus calculations using all available information on all pedigree members are important for linkage analysis. Exact calculation methods in linkage analysis are limited in either the number of loci or the number of pedigree members they can handle. In this thesis, we propose a Monte Carlo method for linkage analysis based on sequential imputation. Unlike exact methods, sequential imputation can handle both a moderate number of loci and a large number of pedigree members. Sequential imputation also avoids the slow mixing that Markov chain Monte Carlo methods encounter on pedigree data because of high correlation between samples. This Monte Carlo method is an application of importance sampling in which we sequentially impute ordered genotypes locus by locus and then impute inheritance vectors conditioned on these genotypes. The resulting inheritance vectors, together with the importance sampling weights, are used to derive a consistent estimator of any linkage statistic of interest. The linkage statistic can be parametric or nonparametric; we focus on nonparametric linkage statistics. We show that sequential imputation can produce accurate estimates within reasonable computing time, and we performed a simulation study to illustrate the potential gain in power of using our method for multilocus linkage analysis with large pedigrees. We also show how sequential imputation can be used in haplotype reconstruction, an important step in genetic mapping. In all applications of sequential imputation we can incorporate interference, which is often ignored in linkage analysis due to computational problems; we demonstrate the effect of interference on haplotyping and linkage analysis. We have implemented sequential imputation for multilocus linkage analysis in a user-friendly software package called SIMPLE (Sequential Imputation for Multi-Point Linkage Estimation). SIMPLE currently can estimate LOD scores, IBD sharing statistics and haplotype configurations (open full item for complete abstract)

    Committee: Shili Lin (Advisor) Subjects: Statistics
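
    The sequential importance sampling scheme behind this method can be sketched generically: build a configuration one position at a time from a proposal, accumulate the weight target/proposal, and form a self-normalized (consistent) estimate. The binary Markov chain below is a stand-in for the genetics-specific locus-by-locus genotype imputation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Target: a binary Markov chain over L positions (a stand-in for ordered
# genotypes at L loci). Proposal: i.i.d. uniform bits.
L, n = 10, 50_000
P = np.array([[0.9, 0.1],      # target transition probabilities
              [0.3, 0.7]])
p0 = np.array([0.5, 0.5])      # target initial distribution

z = np.empty((n, L), dtype=int)
logw = np.zeros(n)

# Impute each position sequentially, accumulating log(target/proposal).
z[:, 0] = rng.integers(0, 2, n)
logw += np.log(p0[z[:, 0]]) - np.log(0.5)
for t in range(1, L):
    z[:, t] = rng.integers(0, 2, n)
    logw += np.log(P[z[:, t - 1], z[:, t]]) - np.log(0.5)

w = np.exp(logw - logw.max()); w /= w.sum()
# Self-normalized (consistent) estimate of E[f(z)]; here f = total count.
estimate = np.sum(w * z.sum(axis=1))
print(f"estimated E[sum(z)] ≈ {estimate:.3f}")
```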