
Search Results


(Total results 19)

Search Report

  • 1. Asare, Felix. Data Analytics and Design of Experiment for Bivariate Degradation Phenomena

    Doctor of Philosophy (PhD), Ohio University, 2024, Mechanical and Systems Engineering (Engineering and Technology)

    In this research, we develop an innovative approach to assessing the reliability of complex engineering systems, which are typically characterized by multiple interdependent performance characteristics (PCs). Recognizing that the degradation of these PCs often follows a positive, increasing trend, we employ the gamma process as the foundational model for degradation due to its properties of independent and non-negative increments. A critical aspect of our model is the incorporation of random-effect bivariate gamma process degradation models, which utilize a variety of copula functions. These functions are instrumental in accurately modeling the dependency structure between the PCs, a factor that significantly influences the overall system reliability. In conventional degradation modeling, fixed and predetermined failure thresholds are commonly used to determine system failure. However, this method can be inadequate, as different systems may fail at varying times due to uncontrollable factors. Our model addresses this limitation by considering random failure thresholds, which enhances the accuracy of predicting when a system might fail. We implement a hierarchical Bayesian framework for the degradation modeling, data analysis, and reliability prediction processes. This approach is validated through the analysis of a practical dataset, demonstrating the model's applicability in real-world scenarios. Furthermore, our study responds to the increasing market demand for manufacturers to provide reliable information about the longevity of their products. Manufacturers are particularly interested in the 100p-th percentile of a product's lifetime distribution. Degradation tests are vital for this, as they offer insights into the product's lifespan under various conditions over time. Utilizing our proposed model, we develop a method for designing degradation tests. This method optimizes the number of systems to be tested, the (open full item for complete abstract)

    Committee: Tao Yuan (Advisor); Felipe Aros-Vera (Committee Member); Bhaven Naik (Committee Member); William Young (Committee Member); Ashley Metcalf (Committee Member) Subjects: Industrial Engineering
  • 2. Chen, Yiyang. Hierarchical Bayesian approaches to the exploration of mechanisms underlying group and individual differences

    Doctor of Philosophy, The Ohio State University, 2021, Psychology

    Populations and individuals diverge from each other in their cognitive abilities, and researchers have a great interest in characterizing and explaining these group and individual differences. Among research tools, behavioral tasks are widely adopted to assess cognitive abilities due to their simplicity and applicability. In behavioral tasks, descriptive statistics are commonly used as measurement indices for the cognitive abilities of interest. However, because these statistics have a limited ability to characterize the mechanisms underlying each task based on cognitive theories, they cannot fully explain the reasons that may cause group and individual differences. In this dissertation, I adopt hierarchical Bayesian approaches to model several behavioral tasks for cognition, with the aim to explore the mechanisms underlying the group and individual differences in populations tested by these tasks. I incorporate existing cognitive theories into the hierarchical Bayesian models, and use estimated parameters to characterize the cognitive abilities of interest. At the group difference level, I show that the hierarchical Bayesian models can be used to identify the potential deficits in populations that have poorer task performance. At the individual level, I show that these models can reveal the behavioral patterns of each individual, and identify potential causes of individual differences. I build theory-based hierarchical Bayesian models for three behavioral tasks: the progressive ratio task that measures motivation; the continuous performance task that measures sustained attention; and the memory updating task that measures working memory abilities. I show that these models have reasonable parameter recovery abilities and good fits to data. I apply these models to several empirical data sets.
The progressive ratio task model is applied to a data set measuring motivation of people with and without schizophrenia (Wolf et al., 2014) and first-degree (open full item for complete abstract)

    Committee: Trisha Van Zandt (Advisor); Jolynn Pek (Committee Member); Paul De Boeck (Committee Member); Mario Peruggia (Committee Member) Subjects: Psychology; Quantitative Psychology
  • 3. Zhang, Jieyan. Bayesian Hierarchical Modeling for Dependent Data with Applications in Disease Mapping and Functional Data Analysis

    PhD, University of Cincinnati, 2022, Arts and Sciences: Mathematical Sciences

    Bayesian hierarchical modeling has a long history but did not receive wide attention until the past few decades. Its advantages include a flexible structure and the capability of incorporating uncertainty in the inference. This dissertation develops two Bayesian hierarchical models for the following two scenarios: first, spatial data on time to disease outbreak and disease duration; second, large or high-dimensional functional data that may cause computational burden and require rank reduction. In the first case, we use cucurbit downy mildew data, economically important plant disease data recorded in sentinel plot systems from 23 states in the eastern United States in 2009. The joint model is established on the dependency of the spatially correlated random effects, or frailty terms. We apply a parametric Weibull distribution to the censored time-to-disease-outbreak data, and a zero-truncated Poisson distribution to the disease duration data. We consider several competing process models for the frailty terms in the simulation study. Given that the generalized multivariate conditionally autoregressive (GMCAR) model, which contains correlation and spatial structure, provides preferred DIC and LOOIC results, we choose the GMCAR model for the real data. The proposed joint Bayesian hierarchical model indicates that states in the mid-Atlantic region tend to have a high risk of disease outbreak, and in the infected cases, they tend to have a long duration of cucurbit downy mildew. The second Bayesian hierarchical model smooths functional curves simultaneously and nonparametrically with improved computational efficiency. Similar to its frequentist counterpart, principal analysis by conditional expectation, the second model reduces rank through the multi-resolution spline basis functions in the process model. The proposed method outperforms the commonly used B-spline basis functions by providing a slightly better estimation within a much shorter computing time. 
The performanc (open full item for complete abstract)

    Committee: Emily Kang Ph.D. (Committee Member); Seongho Song Ph.D. (Committee Member); Bledar Konomi Ph.D. (Committee Member); Won Chang Ph.D. (Committee Member) Subjects: Statistics
  • 4. Herath, Gonagala Mudiyanselage Nilupika. Some Aspects of Bayesian Multiple Testing

    PhD, University of Cincinnati, 2021, Arts and Sciences: Mathematical Sciences

    Multiple testing in statistics refers to carrying out several statistical tests simultaneously. As the number of tests increases, the probability of incorrectly rejecting the null hypothesis also increases (multiplicity problem). Therefore, some multiplicity adjustment should always be considered to control the error rate. Making decisions without multiplicity adjustment can lead to error rates that are higher than the nominal rate. While several approaches to multiplicity adjustment are available, the Bayesian method is the only approach that inherently adjusts for multiplicity. This thesis considers the Bayesian approach to the multiple testing problem for different types of data: continuous and discrete data.

    Committee: Siva Sivaganesan Ph.D. (Committee Chair); Xia Wang Ph.D. (Committee Member); Hang Joon Kim Ph.D. (Committee Member); Seongho Song Ph.D. (Committee Member) Subjects: Statistics
  • 5. Race, Jonathan. Semi-parametric Survival Analysis via Dirichlet Process Mixtures of the First Hitting Time Model

    Doctor of Philosophy, The Ohio State University, 2019, Biostatistics

    Time-to-event data often violate the proportional hazards assumption inherent in the popular Cox regression model. Such violations are especially common in biological and medical data, where latent heterogeneity due to unmeasured covariates or time-varying effects is common. A variety of parametric survival models have been proposed which make more appropriate assumptions on the hazard function, at least for certain applications. One such model is derived from the First Hitting Time (FHT) paradigm, which assumes that a subject's event time is determined by a latent stochastic process reaching a threshold value. Several random-effects specifications of the FHT model have also been proposed which allow for better modeling of data with unmeasured covariates. While often appropriate, these methods can display limited flexibility due to their inability to model a wide range of heterogeneities. To address this issue, we propose two Bayesian models which loosen assumptions on the mixing distribution inherent in the random-effects FHT models currently in use. The first proposed model is ideally suited for standard regression analyses. The second model is designed for use in clinical trials where survival is the outcome of interest. We demonstrate via simulation study that the proposed models greatly improve both survival and parameter estimation in the presence of latent heterogeneity. We also apply the proposed methodologies to data from a toxicology/carcinogenicity study which exhibits nonproportional hazards and contrast the results with competing methods.

    Committee: Michael Pennell (Advisor) Subjects: Biostatistics; Statistics
  • 6. Wu, Xinying. Reliability Assessment of a Continuous-state Fuel Cell Stack System with Multiple Degrading Components

    Doctor of Philosophy (PhD), Ohio University, 2019, Industrial and Systems Engineering (Engineering and Technology)

    This study is motivated by a degradation experiment conducted by an industrial collaborator on the long-term durability of a polymer electrolyte membrane fuel cell (PEMFC) stack. A PEMFC stack can be considered a multi-component system composed of continuously degrading fuel cell components. The voltage degradation of the fuel cells causes the degradation of the stack system. The system degradation is assessed by using two system-level degradation measures: the overall stack output voltage and the minimum voltage of individual cells. This dissertation proposes a hierarchical Bayesian approach to predict the failure-time distribution of a stack which is randomly selected from its population and the remaining useful life (RUL) of a monitored PEMFC stack system by using the voltage degradation data collected from its fuel cell components. A two-term exponential function is adopted to describe the mean voltage degradation paths of the fuel cell components, and a hierarchical Bayesian degradation model is established to predict the stack system reliability. A Gibbs sampling algorithm is developed for the inference of the parameters in the hierarchical degradation model. Three alternative modeling approaches are explored and compared. This research will contribute to the degradation modeling and data analysis methods for continuous-state systems composed of continuous-state components.

    Committee: Tao Yuan (Advisor) Subjects: Industrial Engineering
  • 7. Gao, Yong. A Degradation-based Burn-in Optimization for Light Display Devices with Two-phase Degradation Patterns considering Warranty Durations and Measurement Errors

    Master of Science (MS), Ohio University, 2017, Industrial and Systems Engineering (Engineering and Technology)

    This thesis proposes to plan a degradation-based burn-in test for light display devices with two-phase degradation patterns by using a Bayesian approach. The main focus of the burn-in test in this study is to eliminate the initial rapid degradation phase, and a hierarchical Bayesian bi-exponential model is proposed and applied to describe the two-phase degradation patterns of the burn-in population. Measurement uncertainty is another important factor during degradation observation of the burn-in population: the observed degradation path cannot be assumed to coincide with the expected degradation path. Warranty duration can also affect the optimal burn-in decisions. Mission reliability and expected cost criteria are considered with the available pre-burn-in data of a plasma display panel (PDP) example. To support the optimal burn-in decision, a cost optimization (minimization) model is developed in this thesis.

    Committee: Tao Yuan (Advisor); Gary Weckman (Committee Member); Diana Schwerha (Committee Member); Andy Snow (Committee Member) Subjects: Industrial Engineering
  • 8. Hanandeh, Ahmad. Nonstationary Nearest Neighbors Gaussian Process Models

    PhD, University of Cincinnati, 2017, Arts and Sciences: Mathematical Sciences

    Modeling is an essential part of research and development in almost every sphere of modern life. Computer models are frequently used to explore physical systems, but can be computationally expensive to evaluate (a single simulation at one input value may take days, weeks, or possibly months to run). In such settings, an emulator is used as a surrogate. The Gaussian process (GP) is a common and very useful way to develop emulators that describe the output of computer experiments and computationally expensive simulations in uncertainty quantification. Recently, much attention has been paid to dealing with the large datasets that arise in various fields of the natural and social sciences and from modern instruments. This has resulted in an increasing need for methods to analyze large datasets. However, the GP is nonparametric, meaning that the complexity of the model grows as more data points are received; as a result, it faces several computational challenges for modeling large datasets, because of the need to calculate the inverse and determinant of a large, dense, unstructured matrix. We therefore need alternative methods to analyze such large datasets. Various methods have been developed to deal with this problem, including reduced-rank approaches and sparse matrix approximations. However, most of them rely on unrealistic assumptions about the underlying process, such as stationarity. We develop a new approximation

    Committee: Bledar Konomi Ph.D. (Committee Chair); Emily Kang Ph.D. (Committee Member); Hang Joon Kim Ph.D. (Committee Member); Siva Sivaganesan Ph.D. (Committee Member) Subjects: Statistics
  • 9. Sui, Zhenhuan. Hierarchical Text Topic Modeling with Applications in Social Media-Enabled Cyber Maintenance Decision Analysis and Quality Hypothesis Generation

    Doctor of Philosophy, The Ohio State University, 2017, Industrial and Systems Engineering

    Many decision problems are set in changing environments. For example, determining the optimal investment in cyber maintenance depends on whether there is evidence of an unusual vulnerability such as “Heartbleed” that is causing an especially high rate of incidents. This gives rise to the need for timely information to update decision models so that the optimal policies can be generated for each decision period. Social media provides a streaming source of relevant information, but that information needs to be efficiently transformed into numbers to enable the needed updates. This dissertation first explores the use of social media as an observation source for timely decision-making. To efficiently generate the observations for Bayesian updates, the dissertation proposes a novel computational method to fit an existing clustering model, called K-means Latent Dirichlet Allocation (KLDA). The method is illustrated using a cyber security problem related to changing maintenance policies during periods of elevated risk. The dissertation also studies four text corpora with 100 replications and shows that KLDA is associated with significantly reduced computational times and more consistent model accuracy compared with collapsed Gibbs sampling. Because social media is becoming more popular, researchers have begun applying text analytics models and tools to extract information from these social media platforms. Many of the text analytics models are based on Latent Dirichlet Allocation (LDA). But these models are often poor estimators of topic proportions for emerging topics. Therefore, the second part of the dissertation proposes a visual summarizing technique based on topic models, a point system, and Twitter feeds to support passive summarizing and sensemaking. The associated “importance score” point system is intended to mitigate the weakness of topic models. The proposed method is called the TWitter Importance Score Topic (TWIST) summarizing method. 
TWIST employs the topic propor (open full item for complete abstract)

    Committee: Theodore Allen (Advisor); Steven MacEachern (Committee Member); Cathy Xia (Committee Member); Nena Couch (Other) Subjects: Finance; Industrial Engineering; Operations Research; Statistics; Systems Science
  • 10. Thomas, Zachary. Bayesian Hierarchical Space-Time Clustering Methods

    Doctor of Philosophy, The Ohio State University, 2015, Statistics

    An important statistical challenge lies in the specification of flexible spatio-temporal models capable of capturing complex, nonstationary dependence structure in the underlying process of interest. Popular approaches based on stationary, often parametric, spatial autocorrelation functions or Markov random field constructions (e.g., the conditional autoregression (CAR) or simultaneous autoregression (SAR) models) often yield representations of spatial coherence that are overly simplistic, leading to procedures with limited capacity to reproduce structure suggested by the data. While there is much interest in developing nonstationary extensions of these common approaches, in this work, we propose alternative techniques with which to allow the borrowing of information (over space and time) in a data-driven manner. We develop a Bayesian hierarchical modeling approach in which dependence is modeled via random spatial or spatio-temporal partitioning obtained indirectly via the construction of local random networks in space and time. We demonstrate the approach within the problems of (1) mapping of disease incidence/mortality rates over space and time and (2) interpolation of processes over continuous domains. These problems are the focus of the first two chapters of this work; the third chapter turns to a separate problem. In evaluating and communicating the human role in Earth's changing climate, it is essential that causal assessments make use of both scientific understanding of the underlying physical mechanisms at work and the information contained in observations. Making use of the potential outcomes framework for causal inference in counterfactual problems, we propose a Bayesian hierarchical physical-statistical approach in which output from a simple climate model is combined with observational data, allowing fusion of the two sources of information in a statistically principled manner. 
We demonstrate how counterfactual inference regarding climatic events of inter (open full item for complete abstract)

    Committee: Mark Berliner (Advisor); Steven MacEachern (Committee Member); Kate Calder (Committee Member); Peter Craigmile (Committee Member) Subjects: Statistics
  • 11. Osth, Adam. Sources of interference in item and associative recognition memory: Insights from a hierarchical Bayesian analysis of a global matching model

    Doctor of Philosophy, The Ohio State University, 2014, Psychology

    A powerful theoretical framework for exploring recognition memory is the global matching framework, in which a cue's memory strength reflects the similarity of the retrieval cues being matched against all of the contents of memory simultaneously. Contributions at retrieval can be categorized as matches and mismatches to the item and context cues, including the self match (match on item and context), item noise (match on context, mismatch on item), context noise (match on item, mismatch on context), and background noise (mismatch on item and context). I present a model that directly parameterizes the matches and mismatches to the item and context cues, which enables estimation of the magnitude of each interference contribution (item noise, context noise, and background noise). The model was fit within a hierarchical Bayesian framework to ten recognition memory datasets that employ manipulations of strength, list length, list strength, word frequency, study-test delay, and stimulus class in item and associative recognition. Estimates of the model parameters revealed at most a small contribution of item noise that varies by stimulus class, with virtually no item noise for single words and scenes. Despite the unpopularity of background noise in recognition memory models, background noise estimates dominated at retrieval across nearly all stimulus classes with the exception of high frequency words, which exhibited equivalent levels of context noise and background noise. These parameter estimates suggest that the majority of interference in recognition memory stems from experiences acquired prior to the learning episode.

    Committee: Per Sederberg PhD (Advisor); Roger Ratcliff PhD (Committee Member); Jay Myung PhD (Committee Member) Subjects: Psychology
  • 12. Poznyak, Dmytro. The American Attitude: Priming Issue Agendas and Longitudinal Dynamic of Political Trust

    PhD, University of Cincinnati, 2012, Arts and Sciences: Political Science

    For over fifty years the American National Election Studies (ANES) program has been measuring citizens' evaluations of the trustworthiness of the “government in Washington.” The longitudinal dynamic of the political trust attitude suggests that in the last fifty years, Americans have generally become less positive and more critical towards the national government. This dissertation empirically explores the causes and consequences of changes in the level and components of the political trust attitude over time. This research challenges the prevalent idea that public trust in government shifts in response to changes in government performance. Building on the scholarship of cognitively oriented public opinion scholars, I instead advocate the view that people judge the trustworthiness of the “government in Washington” based on the problems they consider important at any given point in time—a process defined as cognitive priming. The change in political trust is modeled using the ANES cross-sectional time-series (1964-2000) dataset augmented by context-level data replicating the state of the national and media agendas at the time of survey response. These contextual data include the macro-level measures of unemployment, inflation, consumer confidence, and the measures of media attention to the national economy and defense. Results from the multilevel structural equation models (SEM) with a Bayesian MCMC estimation method suggest that issue priming plays a key role in the longitudinal dynamic of trust. First, I establish that priming occurs through the change in respondents' national importance judgments and economic evaluations. These mediators significantly carry the priming effect of mass media and real-world cues on political trust. 
Second, I demonstrate that change in the volume of media attention to economic and international affairs and national defense issue domains increases the weight people place on these issues when making judgments about the trustw (open full item for complete abstract)

    Committee: Stephen Mockabee PhD (Committee Chair); Marc Hetherington PhD (Committee Member); Patrick Miller PhD (Committee Member); Barbara Bardes PhD (Committee Member) Subjects: Political Science
  • 13. Li, Xia. A Bayesian Hierarchical Model for Studying Inter-Occasion and Inter-Subject Variability in Pharmacokinetics

    PhD, University of Cincinnati, 2011, Arts and Sciences: Mathematical Sciences

    This dissertation includes two parts: developing a new model for individual pharmacokinetics (PK) and applying a Bayesian three-stage hierarchical model to population PK. As to individual PK, the standard methodology is compartment modeling characterized by physiological mechanisms. Parameters in individual PK are estimated based on data from a single individual. In the individual PK part, the relationship between drug concentration and time for an individual was modeled, and the kinetic parameters for an individual were characterized and quantified. Specifically, a piecewise absorption model without physiological compartment mechanisms was developed and applied to Mycophenolic acid (MPA) data that does not obey a one-compartment first-order absorption pattern. In the second part of this dissertation, a Bayesian three-stage hierarchical model was applied to population PK using simulated multi-occasion PK data with both inter-individual variability (IIV) and inter-occasion variability (IOV). This Bayesian approach was applied to three PK models. First, a PK model with independent IOV was studied, and different variances at different occasions were estimated. Second, a PK model with multivariate covariates and correlated and constrained IOV was studied, and unequal constraints in the variance matrix were modeled. Third, a PK model with arbitrary IOV was studied, and four inverse Wishart priors for IOV with different scale matrices were investigated. Based on the results and analysis, a recommendation for choosing the prior distribution was made according to whether or not a reliable source of the covariance matrix exists. For all population PK models, Gibbs sampling and Metropolis-Hastings algorithms were implemented using SAS IML to generate samples from posterior distributions.

    Committee: Siva Sivaganesan PhD (Committee Chair); James Deddens PhD (Committee Member); Seongho Song PhD (Committee Member); Paul Horn PhD (Committee Member) Subjects: Statistics
  • 14. Agarwal, Kuldeep. Physics Based Hierarchical Decomposition of Processes for Design of Complex Engineered Systems

    Doctor of Philosophy, The Ohio State University, 2011, Industrial and Systems Engineering

    Manufacturing processes like aeroengine manufacturing, sheet manufacturing, or rod manufacturing are examples of complex engineered systems because they have many interconnected components and there is little knowledge about the interactions between these different components. These processes take a raw material in the form of a powder or a preform (pre-manufactured shape), which is an input from another manufacturing process, and through a series of steps convert it into a product. These manufacturing processes are designed and operated to achieve a specific goal: the final quality of the product being output from that process. The current design cycle for these different manufacturing processes treats the different system components (sub-processes) in isolation. Each engineer looks at the problem from his or her own perspective and optimizes the solution pertaining to the requirements or specifications of his or her particular department or sub-process only. This approach leads to a non-optimal design and a lot of variation in the quality attributes. To overcome these limitations, a new approach and methodology for the design of these systems is presented in this dissertation. This methodology decomposes each quality attribute into the various factors which affect it and determine its value. These factors are further decomposed into the physical phenomena which cause them to affect the quality attribute. The physical phenomena are finally decomposed into the manufacturing processes and material uncertainties which cause them to influence the quality attributes. Interdependencies are determined between the different sub-processes, and attention is concentrated only on those sub-processes which affect the quality attributes. A novel approach of combining process models with data obtained from testing sensors is developed through the use of Bayesian hierarchical modeling. Case studies involving sheet, rod, and aeroengine manufacturing are demonstrated. 
The developed Bayes (open full item for complete abstract)

    Committee: Rajiv Shivpuri PhD (Advisor); Jerald Brevick PhD (Committee Member); Prem Goel PhD (Committee Member); Srinivasan Parthasarathy PhD (Committee Member) Subjects: Industrial Engineering
  • 15. Brynjarsdóttir, Jenný. Dimension Reduced Modeling of Spatio-Temporal Processes with Applications to Statistical Downscaling

    Doctor of Philosophy, The Ohio State University, 2011, Statistics

    The field of spatial and spatio-temporal statistics is increasingly faced with the challenge of very large datasets. Examples include data obtained from remote-sensing satellites, global weather stations, outputs from climate models, and medical imagery. The classical approach to spatial and spatio-temporal modeling is extremely computationally expensive when the datasets are large. A dimension-reduced modeling approach has proved to be effective in such situations. In this thesis I focus on the problem of modeling two spatio-temporal processes where the primary goal is to predict one process from the other and where the datasets for both processes are large. I outline a general dimension-reduced Bayesian hierarchical approach to the modeling of two spatio-temporal processes. The spatial structures of both processes are modeled in terms of a low number of basis vectors, hence reducing the spatial dimension of the problem. The temporal evolution of the spatio-temporal processes is then modeled through the coefficients (i.e., amplitudes) of the basis functions. I demonstrate that known multivariate statistical methods, Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA), can be used to obtain basis vectors for dimension-reduced modeling of two spatio-temporal processes. Furthermore, I present a new method of obtaining data-dependent basis vectors that is geared to the goal of predicting one process from the other. The new basis vectors are called Maximum Covariance Patterns (MCPs), and an orthogonal version is called Orthogonal Maximum Covariance Patterns (OMCPs). I apply these methods to a statistical downscaling example, where surface temperatures on a coarse grid over the Antarctic are downscaled onto a finer grid.

    Committee: L. Mark Berliner PhD (Advisor); Peter F. Craigmile PhD (Committee Member); Steven N. MacEachern PhD (Committee Member) Subjects: Statistics
  • 16. Katzfuss, Matthias. Hierarchical Spatial and Spatio-Temporal Modeling of Massive Datasets, with Application to Global Mapping of CO2

    Doctor of Philosophy, The Ohio State University, 2011, Statistics

    This dissertation is composed of an introductory chapter and three stand-alone chapters, tied together by a unifying theme: the statistical analysis of very large spatial and spatio-temporal datasets. These datasets now arise in many fields, but our focus here is on environmental remote-sensing data. Due to the sparseness of daily datasets, there is a need to fill spatial gaps and to borrow strength from adjacent days. Nonetheless, many satellite instruments are capable of conducting on the order of 100,000 retrievals per day, which makes it computationally challenging to apply traditional spatial and spatio-temporal statistical methods, even in supercomputing environments. In addition, the datasets are often observed on the entire globe. For such large domains, spatial stationarity assumptions are typically unrealistic. We address these challenges using dimension-reduction techniques based on a flexible spatial random effects (SRE) model, where dimension reduction is achieved by projecting the process onto a basis-function space of low dimension. The spatio-temporal random effects (STRE) model extends the SRE model to the spatio-temporal case by modeling the temporal evolution, on the reduced space, using a dynamical autoregressive model in time. Another focus of this work is the modeling of fine-scale variation. Such variability is typically not part of the reduced space spanned by the basis functions, and one needs to account for a component of variability at a fine scale. We address this issue throughout the dissertation with increasingly complex and realistic models for a component of fine-scale variation. After a general introductory chapter, the subsequent two chapters focus on estimation of the reduced-dimensional parameters in the STRE model from an empirical-Bayes and a fully Bayesian perspective, respectively. 
In Chapter 2, we develop maximum likelihood estimation via an expectation-maximization (EM) algorithm, which offers stable computation (open full item for complete abstract)
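    The fixed-rank idea behind the SRE model can be sketched numerically: project the field onto a small set of basis functions so that prediction requires inverting only an r × r matrix rather than an n × n data covariance. The 1-D locations, Gaussian-bell basis, and parameter values below are illustrative assumptions, not the dissertation's actual specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 2000, 25          # many observations, few basis functions

    # Gaussian-bell basis functions at r fixed centers (illustrative choice)
    s = rng.uniform(0, 1, n)                   # 1-D locations for simplicity
    centers = np.linspace(0, 1, r)
    S = np.exp(-0.5 * ((s[:, None] - centers) / 0.05) ** 2)   # n x r

    K = np.eye(r)            # covariance of the r random effects (assumed known)
    sigma2 = 0.5             # fine-scale + measurement-error variance

    eta = rng.multivariate_normal(np.zeros(r), K)
    y = S @ eta + np.sqrt(sigma2) * rng.standard_normal(n)

    # Posterior mean E[eta | y] via the Woodbury identity: only an r x r
    # matrix is inverted, so the cost scales with r, not n.
    A = np.linalg.inv(np.linalg.inv(K) + S.T @ S / sigma2)    # r x r posterior cov
    eta_hat = A @ S.T @ y / sigma2
    y_smooth = S @ eta_hat   # smoothed field at the observation locations
    ```

    The same algebra underlies fixed-rank kriging: swapping the dense covariance inverse for a low-rank one is what makes 100,000-retrieval days computationally tractable.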

    Committee: Noel Cressie PhD (Advisor); Peter Craigmile PhD (Committee Member); Tao Shi PhD (Committee Member) Subjects: Statistics
  • 17. Ning, Shuluo Bayesian Degradation Analysis Considering Competing Risks and Residual-Life Prediction for Two-Phase Degradation

    Master of Science (MS), Ohio University, 2012, Industrial and Systems Engineering (Engineering and Technology)

    This thesis develops different Bayesian hierarchical models to study two degradation phenomena. One phenomenon is the accelerated degradation path associated with catastrophic failures, known as the competing risks problem. The other is the two-phase degradation pattern caused by residual defects or contaminants introduced during the manufacturing process. By incorporating prior information accumulated from field testing into the proposed models, the Bayesian approach plays a significant role in improving the modeling accuracy for the observed degradation data. Gibbs sampling algorithms were developed for the inference of the parameters in the proposed models as well as for the prediction of failure-time distributions and the residual-life distribution. The proposed models and algorithms are validated by experimental data of light-emitting diodes (LEDs) and plasma display panels (PDPs), respectively.
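    As a toy illustration of the Gibbs-sampling machinery such models rely on, the sketch below alternates conjugate full-conditional draws for the mean and variance of normally distributed degradation increments. The data, priors, and one-phase model are simplified stand-ins, not the thesis's actual competing-risks or two-phase formulations.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical degradation increments (stand-in for LED lumen-loss data)
    y = rng.normal(2.0, 0.5, size=50)
    n = len(y)

    # Weakly informative priors (assumed): mu ~ N(0, 100), sigma2 ~ Inv-Gamma(2, 1)
    mu0, tau2 = 0.0, 100.0
    a0, b0 = 2.0, 1.0

    mu, sig2 = 0.0, 1.0
    draws = []
    for it in range(2000):
        # mu | sigma2, y : conjugate normal update
        prec = 1 / tau2 + n / sig2
        mean = (mu0 / tau2 + y.sum() / sig2) / prec
        mu = rng.normal(mean, np.sqrt(1 / prec))
        # sigma2 | mu, y : conjugate inverse-gamma update
        sig2 = 1 / rng.gamma(a0 + n / 2, 1 / (b0 + 0.5 * ((y - mu) ** 2).sum()))
        if it >= 500:                      # discard burn-in
            draws.append(mu)

    mu_post = np.mean(draws)               # posterior mean of the degradation rate
    ```

    The thesis's samplers follow the same alternating-conditional pattern, with additional layers for the hierarchical structure and for predicting failure-time and residual-life distributions.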

    Committee: Tao Yuan PhD (Committee Chair); Diana Schwercha PhD (Committee Member); David Koonce PhD (Committee Member); Andrew Snow PhD (Committee Member) Subjects: Industrial Engineering
  • 18. Flask, Thomas An Application of Multi-Level Bayesian Negative Binomial Models with Mixed Effects on Motorcycle Crashes in Ohio

    Master of Science, University of Akron, 2012, Civil Engineering

    Motorcycle crashes account for an increasing proportion of United States motor vehicle crashes and fatalities. Using Full Bayesian negative binomial models with various types of random effects, this study develops two distinct types of motorcycle crash models following literature reviews in the fields of motorcycle safety and statistical research in transportation safety. Using Ohio motorcycle crash data, motorcycle crashes are analyzed at the regional and roadway segment levels. At the regional level, single-vehicle motorcycle crash models at the Ohio township level are found to be improved in goodness-of-fit by both county-level and township-level spatial random effects with different levels of neighborship. First-order township neighbors were found to create the most improvement in the Deviance Information Criterion (DIC). At the segment level, multi-vehicle motorcycle crash models were found to be improved in goodness-of-fit if neighborship was defined at a sufficiently large radius to include prior knowledge of the surrounding regions. In both types of models, including regional information at the county and township levels helped avoid the pitfalls associated with motorcycle data, which are often unavailable, such as the lack of motorcycle-specific Annual Average Daily Traffic (AADT) and Vehicle Miles Traveled (VMT). Each model produced descriptive parameter results that show whether a predictor has a positive or negative influence on the frequency of motorcycle crashes, and the magnitude of that influence. Conclusions based on the DIC and parameter results are made in terms of motorcycle riding and safety, along with recommendations for applying the models developed in these studies to other data sets. Finally, the models are compared and contrasted to highlight the advantages and disadvantages of each approach.
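    The overdispersion that motivates the negative binomial choice can be illustrated by simulating segment-level crash counts as a gamma mixture of Poissons with a township-level random effect. The layout, covariate, and parameter values below are entirely hypothetical, not taken from the Ohio data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical layout: crash counts on segments nested within townships
    n_town, seg_per = 20, 30
    town = np.repeat(np.arange(n_town), seg_per)

    u = rng.normal(0.0, 0.4, n_town)             # township-level random effects
    x = rng.normal(0.0, 1.0, n_town * seg_per)   # one segment covariate (e.g. log traffic)
    beta0, beta1, alpha = -1.0, 0.6, 1.5         # alpha = NB dispersion parameter

    mu = np.exp(beta0 + beta1 * x + u[town])     # segment-level expected count
    # NB as a gamma mixture of Poissons: lambda ~ Gamma(alpha, scale=mu/alpha)
    lam = rng.gamma(alpha, mu / alpha)
    y = rng.poisson(lam)
    ```

    The simulated counts exhibit variance well above the mean, the overdispersion (relative to a plain Poisson model) that the negative binomial with random effects is built to absorb.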

    Committee: William Schneider IV Dr. (Advisor); Ping Yi Dr. (Committee Member); Anil Patnaik Dr. (Committee Member) Subjects: Civil Engineering; Engineering; Geographic Information Science; Geography; Statistics; Transportation; Transportation Planning
  • 19. Tang, Yun Hierarchical Generalization Models for Cognitive Decision-making Processes

    Doctor of Philosophy, The Ohio State University, 2013, Psychology

    An important but challenging aspect of cognitive modeling is the generalization of a model: whether a cognitive model can account for data observed under alternative experimental settings, whether with new participants, under new experimental designs, or in different experimental tasks tapping the same cognitive processes. Most extant generalizability measures for model evaluation focus on the generalization of a model to a new participant sample, but within a given experimental setting. The present dissertation project develops a model evaluation method, dubbed hierarchical generalization models, to expand the current generalizability measures and assess between-experiment generalization. This method utilizes the hierarchical Bayesian modeling approach to account for multilevel data combined from different experimental settings, to separate various sources of variability, and to identify generalizable model assumptions that are related to underlying cognitive processes. This dissertation examines the applicability and plausibility of the hierarchical generalization modeling framework in the context of studies of behavioral decision-making. Two major experimental paradigms, the decision-from-description and decision-from-experience experiments, are extensively discussed with regard to the modeling of experimental data and the theoretical implications for the generalization of decision-making processes. The hierarchical generalization modeling framework demonstrates its suitability for these decision-making paradigms through simulation studies and secondary data analyses. Simulation studies in Study 1 and Study 2 demonstrate how to develop hierarchical generalization models to simultaneously model the experimental data from multiple decision-making paradigms. 
They also show that hierarchical generalization models can appropriately recover the model structures of Cumulative Prospect Theory-based models, which have been one of the mainstream the (open full item for complete abstract)
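    For readers unfamiliar with the model family mentioned above, a minimal sketch of a Cumulative Prospect Theory valuation (gains-only, using the Tversky–Kahneman 1992 median parameter estimates) might look like the following. This is a generic textbook form, not the dissertation's implementation.

    ```python
    import numpy as np

    def cpt_value(outcomes, probs, alpha=0.88, lam=2.25, gamma=0.61):
        """Cumulative Prospect Theory valuation of a simple gamble.

        alpha: value-function curvature; lam: loss aversion;
        gamma: probability-weighting curvature (TK 1992 median estimates).
        Handles gains-only gambles for brevity; full CPT ranks losses
        separately from the worst outcome up.
        """
        # rank outcomes from best to worst and weight cumulative probabilities
        order = np.argsort(outcomes)[::-1]
        x = np.asarray(outcomes, float)[order]
        p = np.asarray(probs, float)[order]

        def w(q):  # inverse-S probability-weighting function
            return q ** gamma / (q ** gamma + (1 - q) ** gamma) ** (1 / gamma)

        cum = np.cumsum(p)
        weights = w(cum) - w(np.concatenate(([0.0], cum[:-1])))
        value = np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)
        return float(np.sum(weights * value))
    ```

    For example, `cpt_value([100, 0], [0.5, 0.5])` falls below the value of a sure 50, reproducing the risk aversion for gains that these parameters encode; a hierarchical generalization model would place priors over alpha, lam, and gamma and let them vary across experiments.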

    Committee: Jay I. Myung (Advisor); Paulus De Boeck (Committee Member); Mark A. Pitt (Committee Member) Subjects: Psychology