Search Results

(Total results 34)

  • 1. Liu, Ge Statistical Inference for Multivariate Stochastic Differential Equations

    Doctor of Philosophy, The Ohio State University, 2019, Statistics

    Multivariate stochastic differential equations (MVSDEs) are commonly used in many applications in fields such as biology, economics, mathematical finance, oceanography and many other scientific areas. Statistical inference based on discretely observed data requires estimating the transition density, which is unknown for most models. Typically, one would estimate the transition density and use the approximation for statistical inference. However, many estimation methods will fail when the observations are too sparse or when the SDE models have a hierarchical structure. Making statistical inference for such models is also computationally demanding. We aim to implement an approximation method to make accurate and reliable statistical inference while taking the computational complexity into consideration. In this dissertation, we compare several approximation methods to estimate the transition density of MVSDEs and propose to use the data imputation method. We perform a thorough analysis of the data imputation strategy, in terms of where to impute the data and the amount of data imputation needed. We design data imputation strategies for a univariate SDE model and an MVSDE model. The strategy is generalized to be applicable to general MVSDE models that do not have explicitly known solutions. To demonstrate the data imputation approximation method, we study simulated data from the multivariate Ornstein-Uhlenbeck (MVOU) model and a latent hierarchical model. We explore the posterior distribution of the MVSDE model parameters in a Bayesian approach. In the Bayesian Markov Chain Monte Carlo algorithm, we use data augmentation to understand how the approximation of the transition density affects the inference procedure. We give practical guidelines on balancing the computational demands with the need to provide reliable and accurate posterior inference. Simulations are used to evaluate these guidelines with two MVSDE models, one fully observed and one partially observed. We del (open full item for complete abstract)

    Committee: Peter Craigmile (Advisor); Radu Herbei (Advisor) Subjects: Statistics
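
    The data-imputation idea described in this abstract can be illustrated with a generic, minimal sketch (not the dissertation's algorithm): a Pedersen-style simulated likelihood that approximates the transition density of a univariate Ornstein-Uhlenbeck SDE by imputing intermediate Euler-Maruyama steps between two sparse observations. All function names and parameter values below are invented for illustration; increasing n_sub refines the imputation grid, which is the accuracy-versus-computation trade-off the dissertation's guidelines address.

      import numpy as np

      def ou_drift(x, theta, mu):
          # dX_t = theta * (mu - X_t) dt + sigma dW_t  (Ornstein-Uhlenbeck drift)
          return theta * (mu - x)

      def simulated_transition_density(x0, x1, dt, theta, mu, sigma,
                                       n_sub=10, n_paths=2000, rng=None):
          # Impute n_sub - 1 latent Euler-Maruyama steps between the two sparse
          # observations, then average the Gaussian density of the final step.
          rng = np.random.default_rng() if rng is None else rng
          h = dt / n_sub
          x = np.full(n_paths, x0, dtype=float)
          for _ in range(n_sub - 1):
              x += ou_drift(x, theta, mu) * h + sigma * np.sqrt(h) * rng.standard_normal(n_paths)
          mean_last = x + ou_drift(x, theta, mu) * h
          var_last = sigma ** 2 * h
          dens = np.exp(-(x1 - mean_last) ** 2 / (2 * var_last)) / np.sqrt(2 * np.pi * var_last)
          return dens.mean()

      # approximate transition density across a sparse observation gap of length 2.0
      print(simulated_transition_density(x0=1.0, x1=0.4, dt=2.0, theta=0.8, mu=0.0, sigma=0.5))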
  • 2. Kaleeswaran Mani, Shankar Short Term Influenza Forecasting in the Hospital Environment Using a Bayesian Kalman Filter

    Master of Science, The Ohio State University, 2024, Biostatistics

    Accurate forecasting of the weekly number of influenza (flu) lab tests and positive cases is vital for hospitals to provide adequate patient care at the right time. It also helps prevent shortages or overages of staff and supplies. In this paper, we present a practical implementation of a Bayesian Kalman filter to forecast weekly flu tests and positive cases in a hospital environment. By integrating real-time hospital data, this framework offers a robust methodology for predicting flu volume one to four weeks out with reasonable accuracy.

    Committee: Grzegorz Rempala (Advisor); Eben Kenah (Committee Member); Fernanda Schumacher (Committee Member) Subjects: Biostatistics; Health Care Management; Mathematics; Medicine; Statistics
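
    As a rough illustration of the forecasting setup described above (not the thesis's actual model), the sketch below runs a one-dimensional local-level Kalman filter over hypothetical weekly flu test counts; the observation and state variances, the data values, and the function name are assumptions for illustration, and the thesis's Bayesian treatment of these quantities is not reproduced here.

      import numpy as np

      def kalman_local_level(y, sigma_obs2, sigma_state2, m0=0.0, c0=1e6):
          # One-dimensional local-level Kalman filter: returns filtered means and
          # one-week-ahead forecast means for a sequence of weekly counts.
          m, c = m0, c0
          filtered, forecasts = [], []
          for yt in y:
              m_pred, c_pred = m, c + sigma_state2          # predict next week's level
              forecasts.append(m_pred)
              k = c_pred / (c_pred + sigma_obs2)            # Kalman gain
              m = m_pred + k * (yt - m_pred)                # update with observed count
              c = (1.0 - k) * c_pred
              filtered.append(m)
          return np.array(filtered), np.array(forecasts)

      weekly_flu_tests = np.array([120, 135, 160, 210, 260, 240, 190], dtype=float)
      filt, fcast = kalman_local_level(weekly_flu_tests, sigma_obs2=400.0, sigma_state2=100.0)
      print(fcast.round(1))   # one-week-ahead forecasts; repeating the predict step extends the horizon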
  • 3. Rana, S. M. Masud Considerations in Parameter Estimation, and Optimal Operations in Urban Water Infrastructure

    PhD, University of Cincinnati, 2023, Engineering and Applied Science: Environmental Engineering

    Parameter estimation problems are ubiquitous in the field of environmental engineering. For example, in natural systems, an accurate assessment of the nutrient processing capacity of mountain streams is important for the estimation of the nutrient load delivered by these streams to downstream water bodies. Similarly, in urban water systems, the ability to optimize pump operations in drinking water networks (DWN) to reduce energy costs is critically dependent on the ability to predict consumer demands that are often estimated from indirect measurements. Parameter estimation problems are challenging, as they are often ill-posed, and without proper consideration given to parameter uncertainty and observability, incomplete or sometimes incorrect conclusions might be drawn. The objectives of this research focus on considerations in parameter estimation, in the field of hydrology and hydraulics, in the first two studies, while the remaining two studies focus on developing real-time optimal operation frameworks for urban water systems. In the first study, uncertainty in the parameters of the transient storage model (TSM) was estimated using the Markov chain Monte Carlo method, revealing the presence of large uncertainty in the TSM parameters. The TSM is a popular model used by researchers to characterize the nutrient (e.g., nitrogen and phosphorus) processing capacity of small streams. The presence of broad uncertainty in the TSM parameters can be of significant interest to regulatory bodies, such as the Chesapeake Bay Program, which uses these parameter values to develop guidelines for different stakeholders. The second study introduces a consumer node clustering method using self-organizing maps (SOM) in DWNs to improve the observability of the estimated demands of the clusters. High-frequency (e.g., hourly) consumer demands in DWNs are key parameters that drive system hydraulics and are rarely measured directly, and hence are estimated from indirect measurements. Consum (open full item for complete abstract)

    Committee: Patrick Ray Ph.D. (Committee Chair); Drew McAvoy Ph.D. (Committee Member); Xi Chen Ph.D. (Committee Member); Dominic Boccelli Ph.D. (Committee Member) Subjects: Environmental Engineering
  • 4. Liu, Xin Bayesian Data Augmentation for Recurrent Events under Intermittent Assessments in Overlapping Intervals

    Doctor of Philosophy, The Ohio State University, 2023, Biostatistics

    Electronic medical records (EMR) data contain rich information that can facilitate health-related studies but are collected primarily for purposes other than research. For recurrent events, EMR data often do not record event times or counts but only contain intermittently assessed and censored observations (i.e. upper and/or lower bounds for counts in a time interval) at uncontrolled times. This can result in non-contiguous or overlapping assessment intervals with censored event counts. Existing methods analyzing intermittently assessed interval-censored recurrent events assume disjoint assessment intervals (interval count data) due to a focus on prospective studies with controlled assessment times. We propose two Bayesian data augmentation methods to analyze the complicated assessments in EMR data for recurrent events. In a Gibbs sampler, the first method imputes exact event times by rejecting simulations of non-homogeneous Poisson process times that are incompatible with the assessments. Based on the independent increments property of Poisson processes, we implement a series of efficiency improvement techniques to speed up the rejection sampling. The second method applies a random walk reversible jump MCMC algorithm (RJMCMC) where the new event history is proposed by perturbing a previously accepted event history with birth, death, and jitter moves.

    Committee: Patrick Schnell (Advisor); Guy Brock (Committee Member); Matthew Pratola (Committee Member); Michael Pennell (Committee Member); Ajit Chaudhari (Committee Member) Subjects: Biostatistics
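
    A minimal sketch of the first method's core idea, under simplifying assumptions: event times are drawn from a homogeneous Poisson process (the dissertation uses non-homogeneous processes and a series of efficiency improvements) and a proposed history is rejected unless its counts satisfy every interval-censored, possibly overlapping assessment. The names, rate, and assessment bounds below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_poisson_times(rate, t_end):
          # event times of a homogeneous Poisson process on [0, t_end]
          n = rng.poisson(rate * t_end)
          return np.sort(rng.uniform(0.0, t_end, size=n))

      def compatible(times, assessments):
          # assessments: (start, end, lower, upper) count bounds; intervals may overlap
          return all(lo <= np.sum((times >= s) & (times <= e)) <= hi
                     for s, e, lo, hi in assessments)

      def impute_event_history(rate, t_end, assessments, max_tries=100_000):
          # rejection sampler: keep proposing event histories until one satisfies
          # every interval-censored assessment
          for _ in range(max_tries):
              times = simulate_poisson_times(rate, t_end)
              if compatible(times, assessments):
                  return times
          raise RuntimeError("no compatible event history found")

      # two overlapping assessment intervals with censored (bounded) event counts
      assessments = [(0.0, 6.0, 1, 3), (4.0, 10.0, 2, np.inf)]
      print(impute_event_history(rate=0.4, t_end=10.0, assessments=assessments))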
  • 5. Herath, Gonagala Mudiyanselage Nilupika Some Aspects of Bayesian Multiple Testing

    PhD, University of Cincinnati, 2021, Arts and Sciences: Mathematical Sciences

    Multiple testing in statistics refers to carrying out several statistical tests simultaneously. As the number of tests increases, the probability of incorrectly rejecting the null hypothesis also increases (multiplicity problem). Therefore, some multiplicity adjustment should always be considered to control the error rate. Making decisions without multiplicity adjustment can lead to error rates that are higher than the nominal rate. While several approaches to multiplicity adjustment are available, the Bayesian method is the only approach that inherently adjusts for multiplicity. This thesis considers the Bayesian approach to the multiple testing problem for different types of data: continuous and discrete data.

    Committee: Siva Sivaganesan Ph.D (Committee Chair); Xia Wang Ph.D (Committee Member); Hang Joon Kim Ph.D (Committee Member); Seongho Song Ph.D (Committee Member) Subjects: Statistics
  • 6. Civek, Burak Stochastic Signal Processing Techniques for Reconstruction of Multilayered Tissue Profiles Using UWB Radar

    Doctor of Philosophy, The Ohio State University, 2021, Electrical and Computer Engineering

    Sensors that can reliably assess physiology in the clinic and home environment are poised to revolutionize research and practice in the management of chronic diseases such as heart failure or pulmonary edema. Ultrawideband (UWB) radar sensors provide a viable and unobtrusive alternative to traditional sensor modalities for physiological sensing. In principle, a UWB radar system transmits a short-duration pulse and records the backscattered signal composed of reflections from the target object. In the human body, each tissue exhibits distinct dielectric properties, i.e., permittivity and conductivity, causing impedance mismatches at the interfaces and creating multiple reflection points for the impinging transmitted pulse. Therefore, a rich backscattered signal, which is strongly affected by the dielectric properties, is observed and can be processed to make inferences about the tissue composition underneath the skin. To this end, in this thesis, we investigate the problem of monitoring the tissue composition in the thoracic cavity using UWB radar sensors and present stochastic signal processing techniques to recover characteristic properties of the target tissues. We model the target tissue profile as a multilayered structure composed of planar homogeneous layers and work on a one-dimensional forward model simulating the electromagnetic (EM) wave propagation in layered media. We first tackle the problem from an indirect approach and aim to estimate the reflectivity profile, which is a function of characteristic properties of the target tissues, from the measured radar signal. We model the reflectivity profile as a sparse sequence in time-domain and pose the problem as a sparse blind deconvolution (BD) problem, where we simultaneously estimate the transmitted radar waveform as well to allow self-calibration. We study the problem under a Bayesian setting and present novel Markov Chain Monte Carlo (MCMC) methods, which incorporate the Normal-Inverse-Gamma prior to mode (open full item for complete abstract)

    Committee: Emre Ertin (Advisor); Kiryung Lee (Committee Member); Joel Johnson (Committee Member) Subjects: Computer Engineering; Electrical Engineering
  • 7. Celli, Dino Stochastic Energy-Based Fatigue Life Prediction Framework Utilizing Bayesian Statistical Inference

    Doctor of Philosophy, The Ohio State University, 2021, Mechanical Engineering

    The fatigue life prediction framework developed and described in the following chapters can concurrently approximate both typical stress versus cycle (SN) behavior as well as the inherent variability of fatigue using a limited amount of experimental data. The purpose of such a tool is for the rapid verification and quality assessment of cyclically loaded components with a limited knowledge-base or available fatigue data in the literature. This is motivated by the novelty of additive manufacturing (AM) processes and the necessity of part-specific structural assessment. Interest in AM technology is continually growing in many industries such as aerospace, automotive, and bio-medical, but components often exhibit highly variable fatigue performance. The determination of optimal process parameters for the build process can be an extensive and costly endeavor due to either a limited knowledge-base or proprietary restrictions. Quantifying the significant variability of fatigue performance in AM components is a challenging task as there are many underlying causes including machine-to-machine differences, powder recycling, and process parameter selection. Therefore, a life prediction method which can rapidly determine the fatigue performance of a material with little or no prior information of the material and a limited number of experimental tests is developed as an aid in AM process parameter optimization and fatigue performance qualification. Predicting fatigue life requires the use of a previously developed and simplistic energy-based method, or Two-Point method, to generate a collection of life predictions. Then the collected life predictions are used to approximate key statistical descriptions of SN fatigue behavior. The approximated fatigue life distributions are validated against an experimentally found population of SN data at 10^4 and 10^6 cycles to failure, describing low-cycle and high-cycle fatigue. A Monte Carlo method is employed to model fatigue life by fi (open full item for complete abstract)

    Committee: Mo-How Herman Shen Ph.D (Advisor); Jeremy Seidt Ph.D (Committee Member); Kiran D'Souza Ph.D (Committee Member); Onome Scott-Emuakpor Ph.D (Committee Member); Tommy George Ph.D (Committee Member) Subjects: Aerospace Engineering; Aerospace Materials; Mechanical Engineering
  • 8. Vossler, Harley Applying Dynamic Survival Analysis to the 2018-2020 Ebola Epidemic in the Democratic Republic of Congo

    Master of Science, The Ohio State University, 2021, Public Health

    The second-largest Ebola Virus Disease outbreak in history was declared on August 1, 2018, by the Ministry of Health of The Democratic Republic of Congo. This epidemic affected the easternmost part of the DRC, spanning the provinces of North Kivu, South Kivu, and Ituri. Lasting over 15 months, the outbreak resulted in 3470 cases (probable and confirmed) and 2287 deaths (CDC 2019). In collaboration with the University of Kinshasa, we obtained individual-level data spanning almost the entirety of the epidemic, presenting us with the unique opportunity of analyzing long-term Ebola epidemic dynamics and the effect of public health intervention. Exploratory analysis uncovered that this epidemic comprised many smaller, more isolated outbreaks, with pronounced spatial-temporal patterns. To reflect this, the data were split into three temporal segments. A statistical model for the data analysis was based on the new methodology known as Dynamic Survival Analysis (DSA), derived from the general stochastic model of spread of infection across a network of interconnected individuals (nodes). A key feature of the statistical model is that, unlike general stochastic network models, it does not require knowledge of the susceptible population size, the disease prevalence in the community, or the epidemic curve shape. The DSA-based model was applied to all three segments of the full epidemic, attempting to combine information across the three distinct waves of infections. The fitting of the model was based on individual data of infection and recovery times in each wave, and the estimated parameter values suggested that the epidemic was brought to an end because of increased effort in Ebola case identification and prompt isolation. According to our findings, the time from infection onset to hospitalization was significantly decreased over the three waves, helping to contain the spread of disease.

    Committee: Grzegorz Rempala (Advisor); Eben Kenah (Committee Member) Subjects: Biostatistics; Epidemiology
  • 9. Lim, Woobeen Bayesian Semiparametric Joint Modeling of Longitudinal Predictors and Discrete Outcomes

    Doctor of Philosophy, The Ohio State University, 2021, Biostatistics

    Many prospective biomedical studies collect data on longitudinal variables that are predictive of a discrete outcome and oftentimes, primary interest lies in the association between the outcome and the values of the longitudinal measurements at a specific time point. A common problem in these longitudinal studies is inconsistency in timing of measurements and missing follow-ups since few subjects have values close to the time of interest. Another difficulty arises from the fact that numerous studies collect longitudinal measurements with different scales, as there is no known multivariate distribution that is capable of accommodating variables of mixed scale simultaneously. These challenges are well demonstrated in our motivating data example, the Life and Longevity After Cancer (LILAC), a cohort study of cancer survivors who participated in the Women's Health Initiative (WHI). One research area of interest in these studies is to determine the relationship between lifestyle or health measures recorded in the WHI with treatment-related outcomes measured in LILAC. For instance, a researcher may want to examine if sleep-related factors measured prior to initial cancer treatment, such as insomnia rating scale (a continuous variable), sleep duration (ordinal) and depression (binary) imputed at the time of cancer diagnosis can predict the incidence of adverse effects of cancer treatment. Despite the multitude of such applications in biostatistical areas, no previous methods exist that are able to tackle these challenges. In this work, we propose a new class of Bayesian joint models for a discrete outcome and longitudinal predictors of mixed scale. Our model consists of two submodels: 1) a longitudinal submodel which uses a latent normal random variable construction with regression splines to model time-dependent trends with a Dirichlet Process prior assigned to random effects to relax distribution assumptions and 2) an outcome submodel which standardizes timing of the pre (open full item for complete abstract)

    Committee: Michael Pennell Ph.D. (Advisor); Erinn Hade Ph.D. (Committee Member); Eloise Kaizar Ph.D. (Committee Member); Patrick Schnell Ph.D. (Committee Member) Subjects: Biostatistics
  • 10. Ganguly, Shreyan Modeling Nonstationarity Using Locally Stationary Basis Processes

    Doctor of Philosophy, The Ohio State University, 2019, Statistics

    Methods of estimation and forecasting for stationary models are well known and straightforward in classical time series analysis. For this reason, time series analysis involves using mathematical transformations to render the stochastic process approximately stationary and to conduct inference on the transformed series. However, assuming stationarity, even for the transformed series, is at best an idealization. In practice, this assumption may be unrealistic, especially for processes with time-varying statistical properties. We define a class of locally stationary processes called locally stationary basis (LSB) processes, which can lead to more accurate uncertainty quantification than making an invalid assumption of stationarity. LSB processes assume the model parameters to be time-varying and parameterize them in terms of a transformation of basis functions. The transformation is required as it ensures that the processes are locally stationary, and as required, causal, invertible or identifiable, something that is generally ignored while defining locally stationary models. We develop methods and theory for parameter estimation in this class of models, and propose a test that allows us to examine certain departures from stationarity. We assess our methods using simulation studies and apply these techniques to the analysis of an electroencephalogram time series. We also extend our theory for LSB processes to the spatio-temporal case, in particular, LSB spatio-temporal processes. While there are several models in the literature which explore the non-stationarities in the spatial domain, few have been developed for non-stationarities in time, while still maintaining space-time interactions. We give an overview of the theory for LSB processes and provide a Bayesian framework for carrying out inference with such models along with spatial predictions. A suitable algorithm is proposed for efficient posterior simulation for this class of models. We use this model to for (open full item for complete abstract)

    Committee: Peter Craigmile (Advisor); Christopher Hans (Committee Member); Lo-Bin Chang (Committee Member) Subjects: Statistics
  • 11. Qin, Tian Estimation of Water Demands Using an MCMC Algorithm with Clustering Methods

    PhD, University of Cincinnati, 2018, Engineering and Applied Science: Environmental Engineering

    Water demand estimation is important for representing the underlying hydraulics in the water distribution system that drives water quality dynamics. With respect to demand estimation, clustering water-use nodes within network models reduces the number of unknowns, improves the efficiency of the algorithm, and is needed to produce a feasible estimation problem. The objectives of this research are to propose a clustering algorithm to reduce parameterization for demand estimation, develop a Markov chain Monte Carlo demand estimation algorithm incorporating spatial correlation in demands and generating uncertainty estimates, and implement a real-world large-scale water distribution system case study to investigate the complexity of demand estimation problems. The identification of monitoring or sensor locations within water distribution systems can be challenging given the size of realistic networks. Approaches such as skeletonization or aggregation can effectively reduce the size of network models but are generally more appropriate for satisfying hydraulic objectives. The proposed approach uses an input-output relationship to assess the hydraulic path between any two nodes, which serves as a surrogate for water quality dynamics. For two different case studies, as the number of clusters increased, the nodes within each cluster became more similar. The resulting clusters provide opportunities, for example, to reduce the problem size for monitoring or sensor selection. The use of water distribution system models has been around for decades and requires good demand estimates to ensure adequate hydraulic and water quality representation. Traditional optimization approaches are often used to estimate demands, generally for highly skeletonized systems, with approximations to represent the uncertainties in demand estimates and hydraulic states. The proposed Markov chain Monte Carlo (MCMC) algorithm is capable of estimating both the expected values and uncertainties of demands esti (open full item for complete abstract)

    Committee: Dominic Boccelli Ph.D. (Committee Chair); Sivaraman Balachandran Ph.D. (Committee Member); Michael Eugene Tryby Ph.D. (Committee Member); James Uber Ph.D. (Committee Member) Subjects: Environmental Engineering
  • 12. Smith, Corey Exact Markov Chain Monte Carlo with Likelihood Approximations for Functional Linear Models

    Doctor of Philosophy, The Ohio State University, 2018, Statistics

    Functional data analysis is a branch of statistics that deals with the theory and analysis of data which may be comprised of functions in addition to scalar values. Here we consider the linear model that relates functional covariates to scalar responses. We introduce an exact MCMC algorithm which does not rely on likelihood evaluations to estimate the parameter function. The proposed method uses Barker's algorithm (as opposed to Metropolis-Hastings). Though Barker's has been shown to be asymptotically less efficient than Metropolis-Hastings, the form of its acceptance probability allows us to make the accept/reject decision efficiently without needing to evaluate the likelihood function. We utilize unbiased estimates of the log-likelihood function along with two nested Bernoulli factories to accomplish this. In addition, exact MCMC methods for logistic and Poisson regression settings with functional predictors are provided. These latter two models again feature Bernoulli factories and Barker's algorithm while also making use of debiasing techniques to aid in log-likelihood estimation.

    Committee: Radu Herbei PhD (Advisor); Laura Kubatko PhD (Committee Member); Matt Pratola PhD (Committee Member); Lisa Jordan PhD (Committee Member) Subjects: Statistics
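
    For reference, the sketch below shows the standard Barker accept/reject rule on a toy one-dimensional target with a symmetric random-walk proposal, where the acceptance probability is pi(y) / (pi(x) + pi(y)). It evaluates the log-target directly; the dissertation's contribution is to make this decision without likelihood evaluations by combining unbiased log-likelihood estimates with nested Bernoulli factories, which is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)

      def log_target(x):
          # toy target density: standard normal (stands in for the exact posterior)
          return -0.5 * x ** 2

      def barker_step(x, step=1.0):
          # Barker's rule with a symmetric random-walk proposal:
          # accept y with probability pi(y) / (pi(x) + pi(y))
          y = x + step * rng.standard_normal()
          p_accept = 1.0 / (1.0 + np.exp(log_target(x) - log_target(y)))
          return y if rng.uniform() < p_accept else x

      x, chain = 0.0, []
      for _ in range(5000):
          x = barker_step(x)
          chain.append(x)
      print(round(np.mean(chain), 2), round(np.std(chain), 2))   # should be near 0 and 1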
  • 13. Shi, Hongxiang Hierarchical Statistical Models for Large Spatial Data in Uncertainty Quantification and Data Fusion

    PhD, University of Cincinnati, 2017, Arts and Sciences: Mathematical Sciences

    Modeling of spatial data often encounters a computational bottleneck for large datasets and a change-of-support effect for data at different resolutions. There is a rich literature on how to tackle these two problems, but few give a comprehensive solution for solving them together. This dissertation aims to develop hierarchical models that can alleviate those two problems together in uncertainty quantification and data fusion. For uncertainty quantification, a fully Bayesian hierarchical model combined with the nearest neighbor Gaussian process is proposed to produce consistent parameter inferences at different resolutions for a large spatial surface. Simulation studies demonstrate the ability of the proposed model to provide consistent parameter inferences at different resolutions with only a fraction of the computing time of the traditional method. This method is then applied to real surface data. For data fusion, we propose a hierarchical model that can fuse two or more large spatial datasets with the exponential family of distributions. The "change-of-support" problem is handled along with the computational bottleneck by using a spatial random effect model for the underlying process. Through simulated and real data illustrations, the proposed data fusion method is demonstrated to possess a predictive advantage over the univariate-process modeling approach by borrowing strength across processes.

    Committee: Emily Kang Ph.D. (Committee Chair); Hang Joon Kim Ph.D. (Committee Member); Bledar Konomi Ph.D. (Committee Member); Siva Sivaganesan Ph.D. (Committee Member); Xia Wang Ph.D. (Committee Member) Subjects: Statistics
  • 14. Zhang, Han Detecting Rare Haplotype-Environmental Interaction and Nonlinear Effects of Rare Haplotypes using Bayesian LASSO on Quantitative Traits

    Doctor of Philosophy, The Ohio State University, 2017, Statistics

    Rare variants and gene-environment interaction (GXE) are two important contributors to the etiology of many complex diseases. Since many diseases (e.g. dichotomous traits) are discretizations of some underlying quantitative measurements, it is important to study such quantitative traits directly as they may contain a greater amount of information. Examples include obesity (based on body mass index measurements) and hypertension (based on blood pressure measurements). In recent years, several methods have been proposed for detecting associations of rare haplotype variants, environmental factors and their interacting effects on complex diseases. However, the focus of most existing methods has been on binary traits and case-control population data. In this dissertation, I will present a Quantitative Bayesian LASSO (QBL) method for detecting rare and common haplotype effects and GXE on quantitative traits for cohort data. By appropriately setting the priors for the effect size parameters, I can increase statistical power for detecting main, and interacting, effects involving rare haplotype variants. I will present simulation results with both continuous and discrete environmental factors and a range of disease models and distributions. I will also demonstrate the utility of QBL in a real data application. In QBL, the key assumption is the linear interaction effect between haplotypes and a continuous environmental covariate, which may hamper the discovery of novel variants on the trait and the true causal mechanism. Although assuming linearity as a working model may work fine in some situations, it would obviously be important to have statistical methods that correctly account for non-linearity if that is indeed the case. An example of non-linear GXE is hypertension based on blood pressure measurements for making a diagnosis: a genetic variant may influence blood pressure differently depending on age, but the interaction is clearly non-linear (Wang et al., 2014). In rece (open full item for complete abstract)

    Committee: Shili Lin (Advisor); Asuman Seda Turkmen (Committee Member); Eloise Kaizar (Committee Member) Subjects: Statistics
  • 15. Yajima, Ayako Assessment of Soil Corrosion in Underground Pipelines via Statistical Inference

    Doctor of Philosophy, University of Akron, 2015, Civil Engineering

    In the oil industry, underground pipelines are the most preferred means of transporting a large amount of liquid product. However, a considerable number of unforeseen incidents due to corrosion failure are reported each year. Since corrosion in underground pipelines is caused by physicochemical interactions between the material (steel pipeline) and the environment (soil), the assessment of soil as a corrosive environment is indispensable. Because of the complex characteristics of soil as a corrosion precursor influencing the dissolution process, soil cannot be explained fully by conventional semi-empirical methodologies defined in controlled settings. The uncertainties inherited from the dynamic and heterogeneous underground environment should be considered. Therefore, this work presents the unification of direct assessment of soil and in-line inspection (ILI) with a probabilistic model to categorize soil corrosion. To pursue this task, we employed a model-based clustering analysis via Gaussian mixture models. The analysis was performed on data collected from southeastern Mexico. The clustering approach helps to prioritize areas to be inspected in terms of underground conditions and can improve repair decision making beyond what is offered by current assessment methodologies. This study also addresses two important issues related to in-situ data: missing data and truncated data. The typical approaches for treating missing data utilized in civil engineering are ad hoc methods. However, these conventional approaches may cause several critical problems such as biased estimates, artificially reduced variance, and loss of statistical power. Therefore, this study presents a variant of EM algorithms called Informative EM (IEM) to perform clustering analysis without filling in missing values prior to the analysis. This model-based method introduces additional cluster-specific Bernoulli parameters to exploit the nonuniformity of the frequency of missing values across cl (open full item for complete abstract)

    Committee: Robert Liang Dr. (Advisor); Chien-Chun Chan Dr. (Committee Member); Junliang Tao Dr. (Committee Member); Guo-Xiang Wang Dr. (Committee Member); Lan Zhang Dr. (Committee Member) Subjects: Civil Engineering
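
    A minimal sketch of model-based clustering with a Gaussian mixture fit by EM, assuming scikit-learn and complete data (the dissertation's IEM variant handles missing values directly); the soil/inspection feature names and values below are hypothetical.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(42)

      # hypothetical features per inspected location: soil resistivity, pH, ILI defect depth
      features = np.vstack([
          rng.normal([30.0, 5.5, 2.0], [5.0, 0.3, 0.5], size=(60, 3)),   # more corrosive sites
          rng.normal([80.0, 7.0, 0.5], [10.0, 0.4, 0.2], size=(60, 3)),  # less corrosive sites
      ])

      # model-based clustering: fit a two-component Gaussian mixture by EM
      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(features)
      labels = gmm.predict(features)          # soil-corrosivity cluster per location
      print(np.bincount(labels))
      print(gmm.means_.round(2))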
  • 16. Cheng, Yougan Computational Models of Brain Energy Metabolism at Different Scales

    Doctor of Philosophy, Case Western Reserve University, 2014, Applied Mathematics

    The mathematical modeling of brain energy metabolism in the literature has been approached in a spatially lumped framework, where the region of interest is represented in terms of well-mixed compartments representing different cell types, extracellular space and capillary blood. These models shed some light on brain metabolism, but they cannot account for some potentially important factors including, e.g., the locus of the synaptic activity in reference to capillaries, the effect of diffusion, pre- and postsynaptic neurons, and possible variations in mitochondrial density within the cells. In this thesis, we propose a novel multi-domain formalism to assemble a three-dimensional distributed model of brain cellular metabolism, which can take into account some of the aforementioned factors. The model is governed by coupled reaction-diffusion equations in different cells and in the extracellular space, and it allows the inclusion of additional details, for example separate mitochondria for each cell type. This formalism allows one to track the changes in metabolites and intermediates in mutually interacting domains without the need for detailed geometric modeling of the microscopic tissue structure. Acknowledging the complexity of the multidimensional model and the difficulty of finding suitable values for the many parameters that specify it, we propose a way to reduce the complex model to a lower-dimensional one, whose parameter values can be compared with literature values. More specifically, we derive a computational model for a brain sample of the size of a Krogh cylinder, with spatial distribution in tissue along the radial component. For this model, the different availability of oxygen and glucose away from the blood vessel could affect the cells' aerobic or anaerobic metabolism and trigger the uptake of lactate, highlighting the important role of diffusion. This spatially distributed model indicates that drawing conclusions about a complex spatially distributed sy (open full item for complete abstract)

    Committee: Daniela Calvetti (Advisor); Erkki Somersalo (Advisor); David Gurarie (Committee Member); Joseph LaManna (Committee Member) Subjects: Applied Mathematics; Biology; Mathematics
  • 17. Fan, Haijian Performance Based Design of Deep Foundations in Spatially Varying Soils

    Doctor of Philosophy, University of Akron, 2013, Civil Engineering

    With the implementation of load and resistance factor design (LRFD) by the U.S. Federal Highway Administration, the design of deep foundations is migrating from Level I (e.g., allowable stress design) codes to Level II codes (e.g., LRFD). Nevertheless, there are still unsolved issues regarding the implementation of load and resistance factor design. For example, there is no generally accepted guidance on the statistical characterization of soil properties. Moreover, the serviceability limit check in LRFD is still deterministic. No uncertainties arising in soil properties, loads and design criteria are taken into account in the implementation of LRFD. In current practice, the load factors and resistances are taken as unity, and deterministic models are applied to evaluate the displacements of geotechnical structures. In order to address the aforementioned issues of LRFD, there is a need for a computational method for conducting reliability analysis and computational tools for statistically characterizing the variability of soil properties. The objectives of this research are: 1) to develop a mathematically sound computational tool for conducting reliability analysis for deep foundations; and 2) to develop the associated computational method that can be used to determine the variability model of a soil property. To achieve consistency between the strength limit check and the serviceability limit check of the LRFD framework, performance-based design methodology is developed for deep foundation design. In the proposed methodology, the design criteria are defined in terms of the displacements of the structure that are induced by external loads. If the displacements are within the specified design criteria, the design is considered satisfactory. Otherwise, failure is said to occur. In order to calculate the probability of failure, Monte Carlo simulation is employed. In Monte Carlo simulation, the variability of the random variables that are involved in the reliability a (open full item for complete abstract)

    Committee: Robert Liang Dr. (Advisor); Lan Zhang Dr. (Committee Member); Qindan Huang Dr. (Committee Member); Xiaosheng Gao Dr. (Committee Member); Chien-Chung Chan Dr. (Committee Member) Subjects: Civil Engineering; Statistics
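
    A minimal sketch of the Monte Carlo reliability calculation for a serviceability (displacement) limit, assuming an invented load-displacement stand-in function and invented variability models for the load and soil modulus; it only illustrates how sampling random inputs yields an estimated probability of exceeding a displacement criterion.

      import numpy as np

      rng = np.random.default_rng(7)

      def pile_head_displacement(load_kN, soil_modulus_MPa):
          # hypothetical stand-in for the deterministic load-displacement analysis (mm)
          return load_kN / soil_modulus_MPa

      n_sim = 100_000
      load = rng.normal(400.0, 60.0, n_sim)                # lateral load, kN
      modulus = rng.lognormal(np.log(40.0), 0.3, n_sim)    # soil modulus, MPa
      limit_mm = 15.0                                      # serviceability (displacement) criterion

      disp = pile_head_displacement(load, modulus)
      p_failure = np.mean(disp > limit_mm)                 # probability the criterion is exceeded
      print(f"estimated probability of exceeding {limit_mm} mm: {p_failure:.4f}")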
  • 18. Ren, Yan A Non-parametric Bayesian Method for Hierarchical Clustering of Longitudinal Data

    PhD, University of Cincinnati, 2012, Arts and Sciences: Mathematical Sciences

    In longitudinal studies, we are often interested in simultaneously clustering observations at both the subject and time levels. Current clustering approaches assume exchangeability among clustering units and are not applicable to our clustering goal. Through the use of a specific base measure, we propose a more suitable method that improves upon the multivariate DP mixture model. A well-known MCMC algorithm, the Gibbs sampler, is implemented for the Bayesian posterior distributions and estimates. We compare two kinds of specific base measures, ranging from simple to complex. The models are evaluated through simulation studies of multivariate data with different covariance specifications. Performance is assessed by stationarity, the autocorrelation functions of the Markov chain, the correct classification rates, the 95% credible intervals for parameter estimates, and the CPU time. We illustrate the method with data from a prospective longitudinal study on sleep apnea, tracking the diastolic blood pressure and severity of sleep apnea of 97 children over 24 hours.

    Committee: Siva Sivaganesan PhD (Committee Chair); Mekibib Altaye PhD (Committee Member); James Deddens PhD (Committee Member); Paul Horn PhD (Committee Member); Seongho Song PhD (Committee Member); Rhonda VanDyke PhD (Committee Member) Subjects: Statistics
  • 19. HUNTER, TINA Gibbs Sampling and Expectation Maximization Methods for Estimation of Censored Values from Correlated Multivariate Distributions

    PhD, University of Cincinnati, 2008, Arts and Sciences : Mathematical Sciences

    Statisticians are often called upon to analyze censored data. Environmental and toxicological data are often left-censored due to reporting practices for measurements that are below a statistically defined detection limit. Although there is an abundance of literature on univariate methods for analyzing this type of data, a great need still exists for multivariate methods that take into account possible correlation amongst variables. Two methods are developed here for that purpose. One is a Markov Chain Monte Carlo method that uses a Gibbs sampler to estimate censored data values as well as distributional and regression parameters. The second is an expectation maximization (EM) algorithm that solves for the distributional parameters that maximize the complete likelihood function in the presence of censored data. Both methods are applied to bivariate normal data and compared to each other and to two commonly used simple substitution methods with respect to bias and mean squared error of the resulting parameter estimates. The EM method is the most consistent for estimating all distributional and regression parameters across all levels of correlation and proportions of censoring. Both methods provide substantially better estimates of the correlation coefficient than the univariate methods.

    Committee: Dr. Siva Sivaganesan PhD (Committee Chair); Dr. James A. Deddens PhD (Committee Member); Dr. Paul S. Horn PhD (Committee Member); Dr. Seongho Song PhD (Committee Member); Dr. Xiaodong Lin PhD (Committee Member); Dr. Marc A. Mills PhD (Committee Member) Subjects: Biostatistics; Environmental Science; Mathematics; Statistics; Toxicology
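
    A minimal sketch of the Gibbs-sampling idea for left-censored bivariate normal data, assuming NumPy/SciPy, a known covariance matrix, and a flat prior on the mean for brevity: censored values are imputed from the conditional normal truncated above at the detection limit, then the mean is updated from the completed data. The variable names, detection limit, and true parameters are invented for illustration.

      import numpy as np
      from scipy.stats import truncnorm

      rng = np.random.default_rng(3)

      # simulate correlated bivariate normal data, then left-censor variable 2 at its detection limit
      n, true_mean, cov = 200, np.array([1.0, 2.0]), np.array([[1.0, 0.6], [0.6, 1.0]])
      data = rng.multivariate_normal(true_mean, cov, size=n)
      dl = 1.5
      censored = data[:, 1] < dl

      mu = np.zeros(2)                       # covariance treated as known, flat prior on the mean
      for _ in range(2000):                  # Gibbs sweeps
          # 1) impute censored values from the conditional normal truncated above at the detection limit
          cond_mean = mu[1] + cov[0, 1] / cov[0, 0] * (data[censored, 0] - mu[0])
          cond_sd = np.sqrt(cov[1, 1] - cov[0, 1] ** 2 / cov[0, 0])
          b = (dl - cond_mean) / cond_sd
          data[censored, 1] = truncnorm.rvs(-np.inf, b, loc=cond_mean, scale=cond_sd,
                                            size=int(censored.sum()), random_state=rng)
          # 2) draw the mean given the completed data
          mu = rng.multivariate_normal(data.mean(axis=0), cov / n)

      print(mu.round(3))                     # final draw; the variable-2 mean is recovered despite censoring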
  • 20. HUANG, BIN STATISTICAL ASSESSMENT OF THE CONTRIBUTION OF A MEDIATOR TO AN EXPOSURE OUTCOME PROCESS

    PhD, University of Cincinnati, 2001, Medicine : Environmental Health Sciences

    To achieve a detailed understanding of an exposure-outcome association in public health studies, an investigator often needs to account for mediator(s). A mediator is a variable that occurs in a causal pathway from an independent to a dependent variable. The mediational model describes the associations among the exposure(s), mediator(s), and outcome(s). Statistics are needed to determine how much of the exposure-outcome association is due to a mediator. Although mediational models are widely applied in public health, sociological and psychological research, the statistical methods to define and test mediation effects are underdeveloped. The currently available methods, path analysis and multi-step regression analyses, have some major limitations including: 1) lack of clear and meaningful definitions of mediation effects; 2) lack of significance testing procedures for the mediation effects; and 3) these methods have not been extended into a generalized form. The present study defined mediation effects in a way that allows for a substantive accounting of the exposure-outcome process that is consistent across a class of generalized mediational models. The newly defined mediation effects have important epidemiological interpretations that are closely related to the concept of attributable risk (AR). Both linear and non-linear models were studied. Much attention has been given to the logistic mediational model due to its important role in epidemiological studies. Asymptotic variance estimates for the mediation effects were derived using the multivariate delta method. Through Bayesian modeling and Monte Carlo techniques, in particular, Markov chain Monte Carlo (MCMC), the posterior distributions for the mediation effects were estimated. Simulation studies, as well as case studies using a nationally representative database, compared the behavior of the asymptotic estimates and the non-informative Bayesian posterior estimates for the mediation effects of the linear and logistic medi (open full item for complete abstract)

    Committee: Dr. Paul Succop (Advisor) Subjects: Health Sciences, General
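
    For context only, the sketch below computes the classical product-of-coefficients (indirect) effect with a multivariate delta-method (Sobel) standard error on simulated data, assuming statsmodels; the dissertation's own mediation-effect definitions, which are tied to attributable risk, are not reproduced here.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)

      # simulated exposure -> mediator -> outcome data (purely illustrative)
      n = 500
      exposure = rng.normal(size=n)
      mediator = 0.5 * exposure + rng.normal(size=n)
      outcome = 0.3 * mediator + 0.2 * exposure + rng.normal(size=n)

      # path a: exposure -> mediator;  path b: mediator -> outcome, adjusting for exposure
      fit_a = sm.OLS(mediator, sm.add_constant(exposure)).fit()
      fit_b = sm.OLS(outcome, sm.add_constant(np.column_stack([mediator, exposure]))).fit()
      a, se_a = fit_a.params[1], fit_a.bse[1]
      b, se_b = fit_b.params[1], fit_b.bse[1]

      indirect = a * b                                          # mediated (indirect) effect
      se_indirect = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)    # delta-method (Sobel) standard error
      print(f"indirect effect = {indirect:.3f} +/- {1.96 * se_indirect:.3f}")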