Search Report

Search Results (Total results 22)

  • 1. Noble, Gregory Application of Modern Principles to Demand Forecasting for Electronics, Domestic Appliances and Accessories

    Master of Science in Engineering (MSEgr), Wright State University, 2009, Industrial and Human Factors Engineering

    Royal Philips is a large-scale producer of consumer electronics, personal appliances, lighting, and healthcare appliances. Demand data from 12 Business Units (BU) of Royal Philips was examined in the study: four business units from each of three divisions, DAP, PA, and CE. From the data supplied, different forecasting techniques were evaluated to determine which procedure produces the highest-quality forecasts. Three forecasting techniques were evaluated: the exponential smoothing method, the exponential smoothing with a linear trend method, and the Winters method. Visual Basic for Applications (VBA) was used to implement the exponential smoothing, exponential smoothing with linear trend, and Winters forecasting models in Microsoft Excel for this study. Additionally, VBA was used to compute the Mean Absolute Error, which was used to compare the models. Overall, the exponential smoothing with a linear trend method is the best forecasting model for the examined business units. The exponential smoothing with a linear trend model should be used in most cases where the coefficient of variation of the demand data is small, and the exponential smoothing model in most cases where the coefficient of variation of the demand data is large. The Winters method produced much higher variability in the resulting forecasts for the examined business units. This higher variability may have been due to the complexity of estimating the model parameters. Thus, the Winters method, while sound in theory, is not necessarily the best choice for forecasting in practice with the examined business units and similar products.

    Committee: Frank Ciarallo PhD (Advisor); Xinhui Zhang PhD (Committee Member); Vikram Sethi PhD (Committee Member); Pratik Parikh PhD (Committee Member) Subjects: Engineering; Industrial Engineering; Operations Research
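
The forecasting comparison described in entry 1 can be sketched in a few lines. Below is a minimal Python sketch of simple exponential smoothing, exponential smoothing with a linear trend (Holt's form), and the Mean Absolute Error comparison; the original study implemented these in VBA within Excel, the demand series and smoothing constants here are illustrative placeholders, and the Winters (seasonal) variant is omitted:

```python
import numpy as np

def exp_smoothing(y, alpha=0.3):
    """One-step-ahead forecasts from simple exponential smoothing."""
    f = np.empty(len(y))
    f[0] = y[0]                      # initialize with the first observation
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def holt_linear(y, alpha=0.3, beta=0.1):
    """One-step-ahead forecasts from exponential smoothing with a linear trend."""
    level, trend = y[0], y[1] - y[0]
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = level + trend                         # forecast made before seeing y[t]
        new_level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return f

def mae(y, f):
    return np.mean(np.abs(y - f))

# Illustrative monthly demand series (not the Philips data)
demand = np.array([120, 135, 128, 150, 162, 158, 171, 180, 176, 190, 204, 199], float)
for name, fc in [("exp. smoothing", exp_smoothing(demand)),
                 ("exp. smoothing + trend", holt_linear(demand))]:
    print(f"{name:25s} MAE = {mae(demand, fc):.2f}")
```
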
  • 2. Kang, Younghun Development of computational techniques for coupled 1D/2D hydrodynamic models

    Doctor of Philosophy, The Ohio State University, 2023, Civil Engineering

    Multidimensional (coupled one-dimensional (1D) and two-dimensional (2D)) hydrodynamic models are developed to achieve computational efficiency for study areas with small-scale channel networks. Fine-scale computational domains are required to adequately resolve the geometry of such study areas with typical 2D hydrodynamic models, which results in high computational cost. Coupled 1D/2D hydrodynamic models, which use 1D models for small-scale areas (typically small-scale channels), preserve the geometric features of the study area at moderate computational cost and have been applied in various numerical studies. In this dissertation, we present computational techniques that further enhance coupled 1D/2D hydrodynamic models. The first is an automatic mesh generation tool for coupled 1D/2D hydrodynamic models. Meshes are a required input for hydrodynamic models, and automatic mesh generation tools for 2D hydrodynamic models are well developed. However, development of such tools becomes challenging when they are designed for coupled 1D/2D hydrodynamic models. The difficulty of mesh generation in this case comes from the fact that the resolutions of the 1D and 2D domains are closely intertwined, yet the desired mesh resolutions for each domain may be quite different. The proposed mesh generator provides features to automatically identify 1D domains from given input data and to generate collocated meshes with efficient sizing along 1D domains. The developed techniques are demonstrated on three test cases, including two inland watersheds and a coastal basin. Second, a new smoothing method for digital elevation models (DEMs) is developed to enhance the application of an existing coupled 1D/2D kinematic wave model based on discontinuous Galerkin (DG) methods. The model has shown great success in rainfall-runoff simulations; however, it is highly sensitive to the topography represented by the mesh. The proposed method is compared to straightforwar (open full item for complete abstract)

    Committee: Ethan J. Kubatko Dr. (Advisor); James H. Stagge Dr. (Committee Member); Yulong Xing Dr. (Committee Member); Ryan Winston Dr. (Committee Member) Subjects: Civil Engineering; Environmental Engineering; Fluid Dynamics
  • 3. Bhowmick, Sauradeep Advanced Smoothed Finite Element Modeling for Fracture Mechanics Analyses

    PhD, University of Cincinnati, 2021, Engineering and Applied Science: Engineering Mechanics

    Computational fracture mechanics has been an active area of research in the engineering community for decades. The classical objective of fracture mechanics is to determine the damage caused by defects originating from regions of stress intensification. Their response under different loading conditions in different media is of great importance in structural analysis. In the traditional framework of widely used numerical techniques such as the finite element method (FEM), meshfree methods, and the boundary element method (BEM), several tools have been proposed to solve such problems. However, due to inherent issues associated with these numerical techniques and the computational complexities relating to the study of crack propagation, we are yet to arrive at a standard. Addressing that, this dissertation proposes a robust, efficient numerical technique to solve fracture mechanics problems in the framework of the smoothed finite element method (S-FEM). The main contributions are: 1. The singular edge-based smoothed finite element method (sES-FEM), using a special layer of five-node singular elements (sT5) connected to the singular point, is proposed to model stress singularity in solids. The aim is to develop an analytical means of integration to obtain the smoothed strains. The sT5 element has an additional node at each of the two edges connected to the crack tip, and the displacements are enriched with the necessary terms to simulate the singularity. Our analytical integration techniques reduce the dependency on the order of numerical integration during the computation of the smoothed strain matrix. 2. A novel smoothed finite-element and phase-field method (S-FEM+PFM) for simulating fracture paths in brittle materials is proposed. Our S-FEM+PFM is formulated and implemented in the commercial software ABAQUS, using user-defined subroutines. The formulation, within the CS-FEM framework, is further extended to formulate a smoothed phase-field model (S-PFM). Because gradient smoothi (open full item for complete abstract)

    Committee: Gui-Rong Liu Ph.D. (Committee Chair); Yao Fu (Committee Member); Woo Kyun Kim Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 4. Mao, Zirui A Novel Lagrangian Gradient Smoothing Method for Fluids and Flowing Solids

    PhD, University of Cincinnati, 2019, Engineering and Applied Science: Aerospace Engineering

    The Smoothed Particle Hydrodynamics (SPH) method is a Lagrangian meshfree method that solves the Navier-Stokes governing differential equations. With the key features of 'Lagrangian' and 'meshfree', SPH has huge advantages in tracking free interfaces and handling large deformation. However, SPH was born with a serious instability problem when particles are subjected to tension, known as the 'tensile instability'. Although the instability issue can be treated completely by adopting ad-hoc correction techniques, these additional techniques either affect the accuracy of the numerical solution or lead to a much more complicated implementation procedure. Considering that the tensile instability problem is highly related to the loose connecting mechanism of SPH particles, in this work an SPH-like Lagrangian meshfree method, named the Lagrangian Gradient Smoothing Method (L-GSM), is proposed by replacing the SPH gradient technique with a more rigid GSM gradient operator in order to avoid the tensile instability problem. The replacement of the gradient approximation technique requires a series of special treatments different from the existing SPH method in the aspects of the particle-searching algorithm, the supporting-domain construction algorithm, the treatment of free surfaces with particle deficiency, boundary treatments, and guaranteeing strict conservation of flow in the simulation. For the searching of neighboring particles, a global searching algorithm based on Delaunay triangulation and a novel local neighbor-searching (LNS) algorithm are proposed in this work for the L-GSM framework. In particular, the newly proposed LNS algorithm can greatly enhance the computational efficiency of L-GSM and give LL-GSM (local L-GSM) a huge advantage over SPH in computational efficiency. For the construction of the supporting domain, a 3D localized domain-constructing algorithm is developed effectively for the 3D L-GSM. To mimic the free surface effect accurately, I derived three ty (open full item for complete abstract)

    Committee: Gui-Rong Liu Ph.D. (Committee Chair); Shaaban Abdallah Ph.D. (Committee Member); Mark Turner Sc.D. (Committee Member) Subjects: Geotechnology
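
For context on the gradient approximation that L-GSM replaces, here is a minimal 1D illustration of the standard textbook SPH gradient estimate with a cubic-spline kernel (in its symmetrized difference form). This is the conventional SPH operator, not the GSM operator proposed in the dissertation; the particle spacing, smoothing length, and test function are illustrative:

```python
import numpy as np

def cubic_spline_dw(r, h):
    """Derivative dW/dr of the 1D cubic-spline kernel with smoothing length h."""
    sigma = 2.0 / (3.0 * h)                      # 1D normalization constant
    q = np.abs(r) / h
    dw = np.zeros_like(q)
    m1 = q < 1.0
    m2 = (q >= 1.0) & (q < 2.0)
    dw[m1] = sigma * (-3.0 * q[m1] + 2.25 * q[m1] ** 2) / h
    dw[m2] = sigma * (-0.75 * (2.0 - q[m2]) ** 2) / h
    return dw * np.sign(r)                       # odd function of the signed separation

# Uniform particles carrying f(x) = x**2; the exact derivative is 2x
dx = 0.05
x = np.arange(-1.0, 1.0 + dx, dx)
f = x ** 2
h = 1.3 * dx

grad = np.zeros_like(x)
for i in range(len(x)):
    r = x[i] - x                                 # pairwise separations
    # symmetrized SPH gradient: sum_j V_j * (f_j - f_i) * dW(x_i - x_j)/dx
    grad[i] = np.sum(dx * (f - f[i]) * cubic_spline_dw(r, h))

mid = len(x) // 2 + 4
print(f"SPH gradient at x={x[mid]:.2f}: {grad[mid]:.3f} (exact {2 * x[mid]:.3f})")
```
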
  • 5. Nguyen, Hanh Lasso for Autoregressive and Moving Average Coefficients via Residuals of Unobservable Time Series

    Doctor of Philosophy, University of Toledo, 2018, Mathematics

    This dissertation contains four topics in time series data analysis. First, we propose oracle model selection for autoregressive time series when the observations are contaminated with trend. An adaptive least absolute shrinkage and selection operator (LASSO) type model selection method is used after the trend is estimated by a non-parametric B-spline method. The first step is to estimate the trend by the B-spline method and then calculate the detrended residuals. The second step is to use the residuals as if they were observations to optimize an adaptive LASSO type objective function. The oracle properties of such an adaptive LASSO model selection procedure are established; that is, the proposed method can identify the true model with probability approaching one as the sample size increases, and the asymptotic properties of the estimators are not affected by the replacement of observations with detrended residuals. Extensive simulation studies of several constrained and unconstrained autoregressive models also confirm the theoretical results. The method is illustrated by two time series data sets, the annual U.S. tobacco production and annual tree ring width measurements. Second, we generalize our first topic to a more general class of time series using the autoregressive and moving-average (ARMA) model. The ARMA model class is the building block for stationary time series analysis. We adopt the same two-step method: non-parametric trend estimation with B-splines, followed by model selection and model estimation with the adaptive LASSO. We prove that such a model selection and model estimation procedure possesses the oracle properties. Another important objective of this topic is forecasting time series with trend. We approach the forecasting problem by two methods: the empirical method using the one-step-ahead prediction in time series, and the bagging method. Our simulation studies show that both methods are efficient with decreased mean square error when the sample (open full item for complete abstract)

    Committee: Qin Shao (Committee Chair); Don White (Committee Member); Rong Liu (Committee Member); Tian Chen (Committee Member) Subjects: Statistics
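
A minimal sketch of the two-step idea in entry 5: estimate the trend nonparametrically, take the detrended residuals, and apply an adaptive LASSO to their autoregressive representation. The dissertation uses B-splines and an adaptive LASSO objective; the sketch below substitutes a smoothing spline for the trend, implements the adaptive weights by rescaling columns before an ordinary LASSO fit, and uses a simulated series, lag order, and penalty level that are purely illustrative:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)

# Simulated series: smooth trend + AR(2) noise (a stand-in for the tobacco/tree-ring data)
n = 300
t = np.linspace(0, 1, n)
trend = 5 * np.sin(2 * np.pi * t) + 10 * t
e = np.zeros(n)
for i in range(2, n):
    e[i] = 0.6 * e[i - 1] - 0.3 * e[i - 2] + rng.normal(scale=0.5)
y = trend + e

# Step 1: estimate the trend with a smoothing spline and detrend
trend_hat = UnivariateSpline(t, y, s=0.5 * n)(t)
resid = y - trend_hat

# Step 2: adaptive LASSO on the AR design built from the detrended residuals
p = 6                                              # candidate AR order
X = np.column_stack([resid[p - k - 1:n - k - 1] for k in range(p)])
z = resid[p:]
beta_init = LinearRegression(fit_intercept=False).fit(X, z).coef_
w = 1.0 / (np.abs(beta_init) + 1e-6)               # adaptive weights
Xw = X / w                                         # column-rescaling trick
fit = Lasso(alpha=0.02, fit_intercept=False).fit(Xw, z)
beta = fit.coef_ / w                               # map back to the original scale
print("selected AR coefficients:", np.round(beta, 3))
```
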
  • 6. Ertezaei, Bahareh Real-Time Water Depth Logger Data as Input to PCSWMM to Estimate Tree Filter Performance

    Master of Science, University of Toledo, 2017, Civil Engineering

    There are many applications for real-time water level sensor data in environmental studies. In our study, water level sensors were deployed in a storage basin adjacent to a tree filter installed to treat parking lot runoff. Water level measurements were collected using pressure-based level sensors and a weather station installed at the University of Toledo Main Campus near the tree filter. Smoothed data from the water level sensors were then chosen as inputs for observed data in a comprehensive storm water runoff model for calibration. The calibrated model was compared to the uncalibrated model to demonstrate the effectiveness of the calibration. The relative error described by methods such as the Nash–Sutcliffe Efficiency (NSE) indicated acceptable levels of calibrated model performance. Sensitivity analysis on the subcatchment characteristics, including slope, imperviousness, and roughness, showed that the imperviousness of subcatchments is most influential. Model validation was completed to verify that the parameter changes made through calibration had the desired effects on the tree filter model and that the calibrated model was applicable to other precipitation events. The calibrated model was then used to estimate annual tree filter performance, including the volume of stormwater that was either treated or diverted by the tree filter. The real-time water level sensor data were also used to measure tree filter performance, specifically the infiltration rate (in/hr) during precipitation events, to facilitate tree filter maintenance planning. The overarching goal of this study was to evaluate the potential for using real-time water level sensors to calibrate complex hydrologic models. Water level loggers are substantially cheaper and more robust than traditional alternatives, including permanently installed flow meters. This project shows great potential for transferability to other sites and other hydrologic models.

    Committee: Cyndee Gruden (Committee Chair); Ashok Kumar (Committee Member); Defne Apul (Committee Member) Subjects: Environmental Engineering
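
The calibration metric named in entry 6, the Nash–Sutcliffe Efficiency, is a one-line computation. A minimal sketch follows; the depth series are placeholders, not the Toledo sensor data:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Placeholder water-depth series (inches) for one observed event and one model run
obs = [0.0, 0.4, 1.1, 1.9, 1.6, 1.0, 0.5, 0.2]
sim = [0.0, 0.3, 1.0, 2.1, 1.7, 0.9, 0.4, 0.1]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")   # values well above 0.5 are often deemed acceptable
```
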
  • 7. Reddinger, Kaitlin Numerical Stability & Numerical Smoothness of Ordinary Differential Equations

    Master of Arts (MA), Bowling Green State University, 2015, Mathematics

    Although numerically stable algorithms can be traced back to the Babylonian period, it is believed that the study of numerical methods for ordinary differential equations was not rigorously developed until the 1700s. Since then the field has expanded - first with Leonhard Euler's method and then with the works of Augustin Cauchy, Carl Runge and Germund Dahlquist. Now applications involving numerical methods can be found in a myriad of subjects. With several centuries' worth of diligent study devoted to the crafting of well-conditioned problems, it is surprising that one issue in particular - numerical stability - continues to cloud the analysis and implementation of numerical approximation. According to professor Paul Glendinning from the University of Cambridge, “The stability of solutions of differential equations can be a very difficult property to pin down. Rigorous mathematical definitions are often too prescriptive and it is not always clear which properties of solutions or equations are most important in the context of any particular problem. In practice, different definitions are used (or defined) according to the problem being considered. The effect of this confusion is that there are more than 57 varieties of stability to choose from” [10]. Although this quote is primarily in reference to nonlinear problems, it can most certainly be applied to the topic of numerical stability in general. We will investigate three of the main issues surrounding numerical stability in the error analysis and show that numerical smoothness should have been the right concept for delivering better error estimations. Therefore, the materials on numerical stability in textbooks and classrooms should be replaced by numerical smoothness.

    Committee: Tong Sun Dr. (Advisor); So-Hsiang Chou Dr. (Committee Member); Kimberly Rogers Dr. (Committee Member) Subjects: Applied Mathematics; Mathematics Education
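
To make the stability issue discussed in entry 7 concrete, here is a minimal illustration (not taken from the thesis) of how the forward Euler method applied to y' = -λy is stable only when the step size satisfies h < 2/λ:

```python
import numpy as np

def forward_euler(lam, h, steps, y0=1.0):
    """Integrate y' = -lam * y with forward Euler; returns the trajectory."""
    y = np.empty(steps + 1)
    y[0] = y0
    for n in range(steps):
        y[n + 1] = y[n] + h * (-lam * y[n])   # amplification factor is (1 - lam * h)
    return y

lam = 50.0                                     # stability requires h < 2 / lam = 0.04
for h in (0.03, 0.05):
    traj = forward_euler(lam, h, steps=200)
    print(f"h = {h:.2f}: |y| after 200 steps = {abs(traj[-1]):.2e}")
```
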
  • 8. Wu, Chaojiang Essays on High-dimensional Nonparametric Smoothing and Its Applications to Asset Pricing

    PhD, University of Cincinnati, 2013, Business: Business Administration

    Nonparametric smoothing, a method of estimating smooth functions, has gained increasing popularity in the statistics and applications literature during the last few decades. This dissertation focuses primarily on nonparametric estimation in quantile regression (Chapter 1) and an application of nonparametric estimation to financial asset pricing (Chapter 2). In the first essay (Chapter 1), we consider the estimation problem of conditional quantiles when multi-dimensional covariates are involved. To overcome the "curse of dimensionality" yet retain model flexibility, we propose two partially linear models for conditional quantiles: partially linear single-index models (QPLSIM) and partially linear additive models (QPLAM). The unknown univariate functions are estimated by penalized splines. An approximate iteratively reweighted penalized least squares algorithm is developed. To facilitate model comparisons, we develop effective model degrees of freedom for penalized spline conditional quantiles. Two smoothing parameter selection criteria, Generalized Approximate Cross-validation (GACV) and Schwartz-type Information Criterion (SIC), are studied. Some asymptotic properties are established. Finite sample properties are investigated through simulation studies. Application to the Boston Housing data demonstrates the success of the proposed approach. Both simulations and real applications show encouraging results for the proposed estimators. In the second essay (Chapter 2), we investigate whether the conditional CAPM helps explain the value premium using the single-index varying-coefficient model. Our empirical specification has two novel advantages relative to those commonly used in previous studies. First, it not only allows for a flexible dependence of conditional beta on state variables but also models heteroskedasticity. Second, from a large set of candidate state variables, we identify the most influential ones through an exhaustive variable selection method. (open full item for complete abstract)

    Committee: Yan Yu Ph.D. (Committee Chair); Hui Guo Ph.D. (Committee Member); Martin Levy Ph.D. (Committee Member) Subjects: Statistics
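
Entry 8 estimates conditional quantiles with penalized splines via an approximate iteratively reweighted penalized least squares scheme. A minimal univariate sketch of that idea follows, assuming a truncated-power spline basis, a ridge-type penalty on the knot coefficients, and a simple reweighting of the check (pinball) loss; the full QPLSIM/QPLAM machinery and the GACV/SIC tuning criteria are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3 * (1 + x), size=n)   # heteroskedastic noise

# Truncated-power spline basis of degree 1 with interior knots
knots = np.linspace(0.05, 0.95, 15)
def basis(x):
    return np.column_stack([np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots])

B = basis(x)
tau, lam, eps = 0.9, 1.0, 1e-4                       # target quantile, penalty, IRLS guard
D = np.diag([0.0, 0.0] + [1.0] * len(knots))         # penalize only the knot coefficients

# Approximate iteratively reweighted penalized least squares for the check loss
theta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)  # start from a penalized LS fit
for _ in range(50):
    r = y - B @ theta
    w = np.where(r >= 0, tau, 1 - tau) / np.maximum(np.abs(r), eps)
    theta = np.linalg.solve(B.T @ (w[:, None] * B) + lam * D, B.T @ (w * y))

print("fitted 0.9-quantile at x = 0.25:",
      round(float(basis(np.array([0.25])) @ theta), 3))
```
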
  • 9. Al-Shaikh, Enas Longitudinal Regression Analysis Using Varying Coefficient Mixed Effect Model

    PhD, University of Cincinnati, 2012, Medicine: Biostatistics (Environmental Health)

    Linear and nonlinear mixed models are very powerful techniques for modeling the relationship between a response variable and covariates and for handling the within-subject correlations in longitudinal data. For many applications in real life, however, it is difficult to find the proper parametric model to fit the data. Therefore, the adequacy of the model assumptions and the potential consequences of model misspecification on the analysis under the classical linear model framework are questionable. Thus, it is important to increase the flexibility of linear regression models and to relax the conditions imposed on traditional parametric models to explore the hidden structure. The varying coefficient model (VCM), which was proposed by Hastie and Tibshirani (1993), provides a versatile and flexible analysis tool for relating longitudinal responses to longitudinal predictors. Specifically, this approach provides a novel representation of varying coefficient functions through suitable covariances of the underlying stochastic processes, which is particularly advantageous for sparse and irregular designs, as often encountered in longitudinal studies. In this dissertation, we hypothesized that a varying coefficient mixed effect model (VCMEM) accurately predicts, explores and addresses the relationship between four different covariates and the antigen level of MsgC using a penalized spline smoothing technique. The longitudinal data were obtained from the Multicenter AIDS Cohort Study (MACS). We have two specific aims to test this hypothesis. The first aim is to fit a VCMEM to the MACS data, where the variable antigen level of MsgC is continuous. The second aim is to perform a goodness-of-fit test to investigate the significance of the model covariates in the VCMEM from the first aim using bootstrap techniques. We focused on fitting the VCMEM for the MACS data, where both fixed and random effects were modeled non-parametrically with P-spline smoothing. This allows us to explore how the effects of (open full item for complete abstract)

    Committee: Linda Levin PhD (Committee Chair); Charles Ralph Buncher ScD (Committee Member); Paul Succop PhD (Committee Member); Peter Walzer MD MSc (Committee Member) Subjects: Biostatistics
  • 10. Hu, Xin An Improved 2D Adaptive Smoothing Algorithm in Image Noise Removal and Feature Preservation

    MS, University of Cincinnati, 2009, Engineering : Electrical Engineering

    We introduce an improved 2D adaptive smoothing algorithm for noise removal and feature preservation. Like the original 2D adaptive smoothing algorithm, the new algorithm is based on the idea of utilizing contextual discontinuity and local discontinuity jointly to detect and distinguish edges and noise. The new algorithm improves the central concept, contextual discontinuity, by introducing a novel homogeneity-region definition with a corresponding method for contextual discontinuity measurement. Compared to the original algorithm and other smoothing algorithms, the improved algorithm can preserve edges more effectively while removing noise. The improved 2D algorithm has been implemented, and extensive experiments have been carried out to compare it against the original algorithm and other smoothing strategies and to quantitatively demonstrate the improvement in performance. Measurements are applied to evaluate the noise removal and edge preservation performance. Simulation results show that the improved algorithm has superior performance over both the original algorithm and other popular smoothing strategies in noise removal as well as feature preservation.

    Committee: William Wee (Committee Chair); Jing-huei Lee (Committee Member); Chia-Yung Han (Committee Member) Subjects:
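
A minimal sketch of the classical adaptive-smoothing iteration that work like entry 10 builds on: each pixel is replaced by a weighted average of its 3x3 neighborhood, with weights that decay with the local gradient magnitude so edges are preserved. This is the generic local-discontinuity scheme only, not the thesis's improved contextual-discontinuity measure; the test image and the scale parameter k are illustrative:

```python
import numpy as np

def adaptive_smooth(img, k=10.0, iterations=10):
    """Edge-preserving smoothing with neighbor weights w = exp(-|grad|^2 / (2 k^2))."""
    out = img.astype(float)
    for _ in range(iterations):
        p = np.pad(out, 1, mode="edge")
        gx = (p[1:-1, 2:] - p[1:-1, :-2]) / 2.0           # central differences
        gy = (p[2:, 1:-1] - p[:-2, 1:-1]) / 2.0
        w = np.exp(-(gx ** 2 + gy ** 2) / (2.0 * k ** 2))
        wp = np.pad(w, 1, mode="edge")
        num = np.zeros_like(out)
        den = np.zeros_like(out)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                s_img = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
                s_w = wp[1 + dy:wp.shape[0] - 1 + dy, 1 + dx:wp.shape[1] - 1 + dx]
                num += s_w * s_img
                den += s_w
        out = num / den
    return out

# Synthetic step edge corrupted by Gaussian noise
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 100.0
noisy = img + rng.normal(scale=15.0, size=img.shape)
smoothed = adaptive_smooth(noisy, k=10.0, iterations=10)
print("residual noise std away from the edge:", round(smoothed[:, :20].std(), 2))
print("edge contrast retained:", round(smoothed[:, 44:].mean() - smoothed[:, :20].mean(), 1))
```
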
  • 11. Wu, Zhou Two Essays on Single-index Models

    PhD, University of Cincinnati, 2008, Business Administration : Quantitative Analysis

    Single-index models, in the simplest form E(y|x) = g(x'b), generalize linear models by allowing flexible curvatures captured by the unknown function g(.) and, at the same time, retain the same easy interpretability as linear models, given the index parameter vector b that forms the linear index x'b. In addition, compared with fully nonparametric models, single-index models avoid the “curse of dimensionality”. This dissertation consists of two essays on single-index models. The first essay is concerned with estimation of single-index varying coefficient models. Varying coefficient models assume that the regression coefficients vary with some threshold variables. Previous research focused on the case of a single threshold variable. It is common for the coefficients to depend on multiple threshold variables, but the resulting model is difficult to estimate. Single-index coefficient models alleviate the difficulty by modeling each coefficient as a function of an index. Existing estimation approaches employ kernel smoothing or local linear approximation of the coefficient functions (Xia and Li, 1999; Cai, Fan and Yao, 2003), which entail a heavy computational burden. Also, implementation of different bandwidths for different coefficient functions to allow different smoothness is difficult for local approaches. We propose a penalized spline approach to estimating single-index coefficient models that not only allows different smoothness for different coefficient functions but is also computationally fast. Asymptotic theory is established under dependency. Numerical studies demonstrate the proposed approach. The second essay is on single-index quantile regression. Nonparametric quantile regression with multivariate covariates is often a difficult estimation problem due to the “curse of dimensionality”. Single-index quantile regression, where the conditional quantile is modeled by a nonparametric link function of a linear combination of covariates, can reduce the dimensionality (open full item for complete abstract)

    Committee: Yan Yu PhD (Committee Chair); Martin S. Levy PhD (Committee Member); Weihong Song PhD (Committee Member) Subjects: Statistics
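
A toy illustration of the single-index idea from entry 11: with E(y|x) = g(x'b) and b constrained to unit norm, one can profile out g and pick the index direction that minimizes the residual sum of squares. The sketch below uses a crude polynomial smoother and a grid search over a 2D direction rather than the penalized spline estimators developed in the essays; the data and smoother degree are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 2))
b_true = np.array([np.cos(0.9), np.sin(0.9)])            # unit-norm index vector
y = np.sin(X @ b_true) + rng.normal(scale=0.2, size=n)   # y = g(x'b) + noise, g = sin

def profile_rss(theta):
    """Fit a crude smoother g_hat for a candidate direction and return the RSS."""
    b = np.array([np.cos(theta), np.sin(theta)])
    u = X @ b
    coef = np.polyfit(u, y, deg=5)                        # stand-in for a penalized spline
    return np.sum((y - np.polyval(coef, u)) ** 2)

grid = np.linspace(0.0, np.pi, 361)                       # b and -b give the same model
theta_hat = grid[np.argmin([profile_rss(t) for t in grid])]
print("estimated direction:", np.round([np.cos(theta_hat), np.sin(theta_hat)], 3),
      " true:", np.round(b_true, 3))
```
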
  • 12. Zhang, Qi Semi-Parametric Test Based on Spline Smoothing for Genetic Association Studies Under Stratified Populations

    PhD, University of Cincinnati, 2007, Medicine : Biostatistics (Environmental Health)

    Several approaches are available to adjust for population stratification in genetic association studies. However, these methods either have low power and cannot control for population stratification when it becomes strong, or they require unrealistic assumptions. Here, I propose a semi-parametric test based on penalized spline smoothing to adjust for population stratification in genetic association studies. In my method, a genetic background variable is first derived for each individual by using principal component analysis on a set of independent markers; then the relationship between trait values, genotypic scores at candidate markers, and the genetic background variable is modeled through a semi-parametric model. Penalized spline smoothing can be expressed in a linear mixed model framework, which allows such models to be fitted using standard mixed model software and also allows the fitting of complex models. I conducted coalescence simulations as well as simulations using HapMap data to compare the power and type I error of this test with two existing methods, Genomic Control (GC) and Structured Association (SA). My method not only has a lower false positive rate but also has higher sensitivity to detect true associations in the presence of population stratification compared with the GC and SA methods. In most cases I considered, my method has higher power; and in the case of a very high level of genetic heterogeneity and greater differential prevalence, my method has substantially higher power than the other two methods.

    Committee: Dr. Ranjan Deka (Advisor) Subjects: Biology, Biostatistics
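
The first stage of the approach in entry 12, deriving a genetic-background variable by principal component analysis of marker genotypes and adjusting the trait-genotype association for it, can be sketched as follows. The thesis models the background effect with a penalized spline in a mixed-model framework; the sketch below substitutes a plain linear adjustment and fully simulated genotypes, so it only illustrates the PCA-adjustment step:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per, m = 250, 200                                 # individuals per subpopulation, markers

# Two subpopulations with different allele frequencies at each marker
freqs = np.vstack([rng.uniform(0.1, 0.9, m), rng.uniform(0.1, 0.9, m)])
pop = np.repeat([0, 1], n_per)
G = rng.binomial(2, freqs[pop])                     # genotype scores 0/1/2

# Trait depends on the subpopulation (stratification) but NOT on any marker
y = 1.0 * pop + rng.normal(size=2 * n_per)
cand_idx = int(np.argmax(np.abs(freqs[0] - freqs[1])))   # strongly differentiated marker
candidate = G[:, cand_idx].astype(float)

# Genetic background variable: first principal component of the centered genotype matrix
Gc = G - G.mean(axis=0)
pc1 = np.linalg.svd(Gc, full_matrices=False)[0][:, 0]

def assoc_pvalue(design, y):
    """OLS t-test p-value for the first (candidate-marker) coefficient."""
    beta = np.linalg.lstsq(design, y, rcond=None)[0]
    dof = len(y) - design.shape[1]
    sigma2 = np.sum((y - design @ beta) ** 2) / dof
    se = np.sqrt(sigma2 * np.linalg.inv(design.T @ design)[0, 0])
    return 2 * stats.t.sf(abs(beta[0] / se), dof)

ones = np.ones(2 * n_per)
print("unadjusted p-value:", round(assoc_pvalue(np.column_stack([candidate, ones]), y), 4))
print("PC-adjusted p-value:", round(assoc_pvalue(np.column_stack([candidate, ones, pc1]), y), 4))
```
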
  • 13. ZHAO, SHUHONG STATISTICAL INFERENCE ON BINOMIAL PROPORTIONS

    PhD, University of Cincinnati, 2005, Arts and Sciences : Mathematical Sciences

    Several statistical methods used to construct confidence intervals (CIs) for a single Binomial proportion are selected based on literature recommendations and are compared. A new criterion, the 'strict nesting condition', is defined for comparing different confidence interval methods. The focus is on Blaker's method, and its potential shortcomings in practice are discussed. A continuity correction and a smoothing technique are proposed to improve Blaker's method. The fundamental confidence interval, obtained from a fundamentally defined p-value, is also introduced and compared with Blaker's and Clopper-Pearson's (C-P's) methods. The continuity correction (CC) and smoothing technique can also be used to improve the fundamental method. The modified fundamental method and Blaker's method have great similarities in terms of coverage, length, confidence curve shape, etc. The power behaviors of the modified CC Blaker's and fundamental methods are examined and compared with C-P's. For two-sample proportion inference, the three existing exact confidence interval methods that are available in StatXact are discussed and compared for the difference of two Binomial proportions. A simple approach based on the one-sample exact methods is introduced to obtain the exact confidence interval of the difference when the data are balanced. Its performance is comparable to these existing methods but the computation is much simpler.

    Committee: Dr. James Deddens (Advisor) Subjects: Mathematics; Statistics
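
For reference alongside entry 13, the Clopper-Pearson exact interval against which Blaker's and the fundamental methods are compared can be computed directly from Beta quantiles. A minimal sketch (the modified Blaker's and fundamental intervals from the dissertation are not reproduced here):

```python
from scipy.stats import beta

def clopper_pearson(x, n, conf=0.95):
    """Exact (Clopper-Pearson) two-sided CI for a Binomial proportion from x successes in n trials."""
    a = 1.0 - conf
    lower = 0.0 if x == 0 else beta.ppf(a / 2, x, n - x + 1)
    upper = 1.0 if x == n else beta.ppf(1 - a / 2, x + 1, n - x)
    return lower, upper

lo, hi = clopper_pearson(x=7, n=20)
print(f"95% Clopper-Pearson CI for 7/20: ({lo:.3f}, {hi:.3f})")
```
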
  • 14. LI, MIN TWO ESSAYS IN BAYESIAN PENALIZED SPLINES

    PhD, University of Cincinnati, 2002, Business Administration : Quantitative Analysis

    Spline curve fitting has attracted a great deal of attention in recent years. It is a powerful method for fitting nonlinear models when the true mean functions are unknown and need to be estimated. This dissertation consists of two essays on Bayesian penalized spline estimation. In the first essay, we propose Bayesian Adaptive Penalized Splines (BAPS), combining features of both penalized splines and regression splines. We first outline a hierarchical Bayesian approach to penalized splines using Markov Chain Monte Carlo for fixed locations of spline knots. We then propose Bayesian Adaptive Penalized Splines, employing a reversible jump Markov Chain Monte Carlo algorithm to adaptively and simultaneously select the number of knots, the locations of knots, and the smoothing parameter. BAPS is applied to real examples and compares favorably with competing methods in simulation studies. The second essay provides a Bayesian approach to estimating Treasury and corporate term structures with a penalized spline model. First, we estimate the Treasury term structure with a Bayesian penalized spline model, considering both normal and double exponential disturbances. We then estimate the corporate term structure by adding a spread to the estimated Treasury term structure, incorporating the knowledge of positive credit spreads into our Bayesian model as informative priors. This is the first work using a Bayesian approach in the term structure literature, and several advantages of adopting such an approach are presented.

    Committee: Dr. Yan Yu (Advisor) Subjects: Statistics
  • 15. Svarch, Malena A Heterogeneous Household Model Of Consumption Smoothing With Imperfect Capital Markets And Income Risk-Sharing

    Doctor of Philosophy, The Ohio State University, 2011, Agricultural, Environmental and Developmental Economics

    This dissertation explores the choices between using capital markets and informal risk-sharing arrangements when poor households attempt to smooth their consumption in the presence of both systemic and idiosyncratic income shocks. Assuming that households have two types of instruments to smooth their consumption, (i) inter-temporal reallocations of consumption, via saving and borrowing, and (ii) inter-personal risk-sharing arrangements, the objectives of this research are (i) to determine conditions that may explain the extent to which a household will employ only one or both of the two instruments available to cope with risk; (ii) to understand how these two instruments differ in their ability to smooth consumption in response to idiosyncratic and systemic income shocks; (iii) to propose an econometric procedure to test if and how households smooth consumption by employing only one or both of the two instruments available to cope with risk; and (iv) to econometrically test the model using the Townsend Thai Data, to determine whether Thai households smooth consumption through inter-temporal or inter-personal mechanisms. The current literature does not consider (i) an empirical estimation procedure that identifies a household's ability to smooth consumption through capital markets and informal risk-sharing mechanisms, or (ii) the simultaneous estimation of the savings equation and the consumption equation when analyzing consumption smoothing. I develop a structural model of a dynamically optimizing household that faces systemic and idiosyncratic income risk and has access to borrowing, savings, informal risk-sharing, or some combination of these risk-coping mechanisms, to explore the net benefits of accessing capital markets and risk-sharing. There are two main implications of the model. First, the development of capital markets explains some reasons for the existence of risk sharing. Second, households wit (open full item for complete abstract)

    Committee: Mario Miranda Dr. (Advisor) Subjects: Economics
  • 16. Kidoido, Michael Three Essays on Agricultural Production and Household Income Risk Management in Uganda

    Doctor of Philosophy, The Ohio State University, 2011, Agricultural, Environmental and Developmental Economics

    Poor soil fertility and unreliable rainfall are associated with crop failures in Uganda. However, adoption of technologies in Uganda is among the lowest in Sub-Saharan Africa. The first essay examines the impact of production risk on farmers' simultaneous use of high yielding maize varieties (HYM) and fertilizers. The study is implemented in two steps: the first step uses the flexible moments-based approach to generate lagged output moments as proxy measures of production risk, and the second step incorporates the lagged output moments in a technology adoption model. Simultaneous technology adoption decisions are estimated using a multinomial probit estimator. Results show that the expected output, the variance, and the probability of crop failure (skewness) are important factors affecting the adoption of the technology package. Other important factors include scale of production, access to extension services, access to credit, household assets, and access to output markets. Rural incomes are highly vulnerable to income shocks. Previously, the effects of income shocks on household welfare in Africa have mostly been examined at the household level, yet impacts vary at the individual level. The second essay examines the impact of household income shocks on intrahousehold off-farm labor supply in rural Uganda. Bivariate Tobit estimators are adopted to correct for the interrelatedness of the couples' decisions to participate in the off-farm labor markets. Results show that women's intrahousehold bargaining power increases their participation in the off-farm non-agricultural labor sector. Results further show that women participate more in the off-farm sector when faced with some idiosyncratic income shocks, and participate less in the off-farm sector when faced with covariate income shocks. However, women's intrahousehold bargaining power has no impact on reducing the effects of covariate income shocks. Remoteness and the level of economic development of an area are also im (open full item for complete abstract)

    Committee: David Kraybill PhD (Advisor); Brent Sohngen PhD (Committee Member); Matthew Roberts PhD (Committee Member) Subjects: Agricultural Economics
  • 17. Katzfuss, Matthias Hierarchical Spatial and Spatio-Temporal Modeling of Massive Datasets, with Application to Global Mapping of CO2

    Doctor of Philosophy, The Ohio State University, 2011, Statistics

    This dissertation is composed of an introductory chapter and three stand-alone chapters, tied together by a unifying theme: the statistical analysis of very large spatial and spatio-temporal datasets. These datasets now arise in many fields, but our focus here is on environmental remote-sensing data. Due to the sparseness of daily datasets, there is a need to fill spatial gaps and to borrow strength from adjacent days. Nonetheless, many satellite instruments are capable of conducting on the order of 100,000 retrievals per day, which makes it computationally challenging to apply traditional spatial and spatio-temporal statistical methods, even in supercomputing environments. In addition, the datasets are often observed on the entire globe. For such large domains, spatial stationarity assumptions are typically unrealistic. We address these challenges using dimension-reduction techniques based on a flexible spatial random effects (SRE) model, where dimension reduction is achieved by projecting the process onto a basis-function space of low dimension. The spatio-temporal random effects (STRE) model extends the SRE model to the spatio-temporal case by modeling the temporal evolution, on the reduced space, using a dynamical autoregressive model in time. Another focus of this work is the modeling of fine-scale variation. Such variability is typically not part of the reduced space spanned by the basis functions, and one needs to account for a component of variability at a fine scale. We address this issue throughout the dissertation with increasingly complex and realistic models for a component of fine-scale variation. After a general introductory chapter, the subsequent two chapters focus on estimation of the reduced-dimensional parameters in the STRE model from an empirical-Bayes and a fully Bayesian perspective, respectively. In Chapter 2, we develop maximum likelihood estimation via an expectation-maximization (EM) algorithm, which offers stable computation (open full item for complete abstract)

    Committee: Noel Cressie PhD (Advisor); Peter Craigmile PhD (Committee Member); Tao Shi PhD (Committee Member) Subjects: Statistics
  • 18. Lang, Lixin Advancing Sequential Monte Carlo For Model Checking, Prior Smoothing And Applications In Engineering And Science

    Doctor of Philosophy, The Ohio State University, 2008, Statistics

    The research effort in this dissertation is targeted at investigating theoretical properties of some key statistics used in sequential Monte Carlo (SMC) sampling, and at extending SMC to model checking, prior smoothing, and constrained state estimation. A novel application of SMC estimation to population pharmacokinetic models is also introduced. Asymptotic properties of two key statistics in SMC sampling, the importance weights and the empirical effective sample size, are discussed in the dissertation. The sum-normalized nature of the importance weights makes it extremely difficult, if not impossible, to analytically investigate their properties. By using expectation-normalized importance weights, we are able to derive the theoretical estimate of the empirical effective sample size under various situations. In addition, the superiority of the optimal importance function over the prior importance function is verified based on the expectation-normalized weights. The usage of SMC is also demonstrated for checking incompatibility between the prior and the data, using the observation's predictive density value. When the prior is detected to be incompatible with the data, prior smoothing is proposed with a popular numerical method, Moving Horizon Estimation (MHE), to obtain a better estimate of the initial state value. Specifically, the incorporation of MHE smoothing into SMC estimation is among the first efforts to integrate these two powerful tools. Convergence of constrained SMC (Chen, 2004) is verified and its performance is further illustrated with a more complex model. SMC estimation is applied to a multi-dimensional population pharmacokinetic (PK) model. It is shown that SMC sampling is faster than Markov Chain Monte Carlo (MCMC) and does not suffer from the convergence concerns associated with MCMC.

    Committee: Prem Goel (Advisor); Bhavik Bakshi (Advisor); Mark Berliner (Committee Member); Mario Peruggia (Committee Member) Subjects: Statistics
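
The two statistics studied in entry 18, the sum-normalized importance weights and the empirical effective sample size, can be written down in a few lines. A minimal sketch with a Gaussian target and a broader Gaussian proposal, both chosen purely for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
N = 5000

# Importance sampling: target N(0, 1), proposal N(0, 2)
x = rng.normal(loc=0.0, scale=2.0, size=N)
log_w = norm.logpdf(x, 0.0, 1.0) - norm.logpdf(x, 0.0, 2.0)   # unnormalized log-weights
w = np.exp(log_w - log_w.max())
w /= w.sum()                                                  # sum-normalized weights

ess = 1.0 / np.sum(w ** 2)                                    # empirical effective sample size
print(f"ESS = {ess:.0f} out of {N} particles ({100 * ess / N:.1f}%)")
print("weighted estimate of E[x^2]:", round(float(np.sum(w * x ** 2)), 3))
```
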
  • 19. Gomez-Soto, Franz Deposit facilities and consumption smoothing: a dynamic stochastic model of precautionary wealth choices for a credit-constrained rural household

    Doctor of Philosophy, The Ohio State University, 2007, Agricultural, Environmental and Development Economics

    This dissertation examines the impacts of convenient access to safe deposit facilities on a poor rural household's precautionary wealth and consumption smoothing. Access is measured by transaction costs. Safety results from the probability of bankruptcy of the deposit-taker in case of a local systemic shock. Two exercises are developed. First, transaction costs are incorporated in Samuelson's (1969) two-period stochastic model of asset portfolio decisions. A critical value for transaction costs makes the household indifferent between investing only in a risky asset and diversifying at a transaction cost. The magnitude of this critical cost reveals a high willingness to pay for diversification. Second, a dynamic, stochastic, infinite horizon model of wealth choices for a credit-constrained household with a precautionary motive is developed. Labor income is stochastic, precautionary wealth can be diversified between two risky assets, and there is a covariance between labor income, the price of livestock, and the effect of a local systemic shock on the solvency of the deposit-taking institution. Numerical methods are used to approximate a solution of the Bellman equation. Simulations are performed for the transaction cost parameter, bankruptcy risk, environmental riskiness, risk aversion, and the discount rate. In the certainty-equivalent steady state, only livestock is held, but introducing risk generates level and composition effects on precautionary wealth. Lower transaction costs increase the level of wealth and the share of deposits, allowing consumption smoothing (reduced variance). Reductions that leave transaction costs high have insignificant effects, but beyond a threshold further reductions rapidly increase deposits. The risk of bankruptcy greatly discourages holdings of deposits and livestock. Risk aversion and impatience, associated with poverty, influence deposit holdings in opposite directions. The more risk averse and less impatient the household is, the more (open full item for complete abstract)

    Committee: Claudio Gonzalez-Vega (Advisor) Subjects: Economics, Finance
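
A stripped-down sketch of the kind of dynamic programming problem solved numerically in entry 19: a household with CRRA utility, stochastic labor income, and a single safe asset chooses consumption by value function iteration on the Bellman equation. The dissertation's model has two risky assets, transaction costs, and deposit-taker bankruptcy risk; none of that is reproduced here, and all parameter values are illustrative:

```python
import numpy as np

beta_disc, R, gamma = 0.92, 1.03, 2.0                 # discount factor, gross return, risk aversion
incomes, probs = np.array([0.6, 1.4]), np.array([0.5, 0.5])   # two-state income shock

w_grid = np.linspace(0.1, 8.0, 200)                   # cash-on-hand grid
V = np.zeros_like(w_grid)
u = lambda c: c ** (1 - gamma) / (1 - gamma)          # CRRA utility

for _ in range(600):                                  # value function iteration
    V_new = np.empty_like(V)
    for i, w in enumerate(w_grid):
        c = np.linspace(1e-3, w, 100)                 # feasible consumption (no borrowing)
        w_next = R * (w - c)[:, None] + incomes[None, :]
        EV = np.interp(w_next, w_grid, V) @ probs     # expected continuation value
        V_new[i] = np.max(u(c) + beta_disc * EV)
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

# Recover the consumption choice at a mid-grid wealth level
i = len(w_grid) // 2
c = np.linspace(1e-3, w_grid[i], 100)
EV = np.interp(R * (w_grid[i] - c)[:, None] + incomes[None, :], w_grid, V) @ probs
print(f"optimal consumption at wealth {w_grid[i]:.2f}: {c[np.argmax(u(c) + beta_disc * EV)]:.2f}")
```
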
  • 20. Li, Jian-Cheng Generation of simulated ultrasound images using a Gaussian smoothing function

    Master of Science (MS), Ohio University, 1995, Electrical Engineering & Computer Science (Engineering and Technology)

    Generation of simulated ultrasound images using a Gaussian smoothing function

    Committee: Jeffrey Giesey (Advisor) Subjects:
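
No abstract is available for entry 20, but the technique named in the title can be sketched generically: an ultrasound-like speckle image can be simulated by smoothing a random scatterer field with a Gaussian function standing in for the imaging system's point-spread function. This is a generic illustration, not the thesis's simulation pipeline; the scatterer model, the "lesion" geometry, and the kernel widths are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

# Random scatterer field with a circular "lesion" of lower scatterer amplitude
field = rng.normal(size=(256, 256))
yy, xx = np.mgrid[0:256, 0:256]
field[(yy - 128) ** 2 + (xx - 128) ** 2 < 40 ** 2] *= 0.3

# Gaussian smoothing plays the role of the system point-spread function
rf = gaussian_filter(field, sigma=(4.0, 1.5))                 # anisotropic blur (axial vs. lateral)
image = 20 * np.log10(np.abs(rf) / np.abs(rf).max() + 1e-6)   # log-compressed "envelope"

print("image shape:", image.shape, " dynamic range used:",
      round(float(image.max() - image.min()), 1), "dB")
```
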