Search Results


(Total results 29)

  • 1. Kim, Eunseop Extensions of Empirical Likelihood for Multiple Testing and Bayesian Inference

    Doctor of Philosophy, The Ohio State University, 2023, Statistics

    Statistical models defined through estimating equations or moment conditions have gained popularity over the past few decades due to their ability to enable inference on targeted quantities without the restrictive assumptions routinely made in parametric models. Within the context of estimating equations, empirical likelihood has become a popular nonparametric likelihood that enables a likelihood-driven style of inference, extending the use of likelihood from heavily structured parametric problems to those with minimal restrictions. Empirical likelihood exhibits many desirable properties of parametric likelihood, including Wilks' phenomenon and Bartlett correction. Additionally, confidence regions from empirical likelihood have data-driven shapes and orientations. This dissertation focuses on two extensions of empirical likelihood methods. The first extension involves expanding the scope of empirical likelihood to multiple hypothesis testing. Based on a computational strategy for hypothesis testing with empirical likelihood, we develop a framework for applying empirical likelihood to the analysis of designed experiments, addressing issues that arise from blocking and multiple comparisons. Technical results identify an appropriate limiting distribution for a set of comparisons of interest. We propose two single-step multiple testing procedures: asymptotic Monte Carlo and nonparametric bootstrap, both of which asymptotically control the generalized family-wise error rate and construct simultaneous confidence intervals for comparisons of interest without explicitly considering the underlying covariance structure. A simulation study and an application to experiments demonstrate that the performance of the procedures is robust to violations of standard assumptions for designed experiments. The second extension focuses on Bayesian inference with empirical likelihood. 
We propose a special type of empirical likelihood suitable for Bayesian inference called regularized (open full item for complete abstract)
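To make the empirical-likelihood machinery above concrete, here is a minimal numpy sketch (illustrative, not code from the dissertation) of the profile empirical log-likelihood ratio for a univariate mean, using the standard dual formulation; the function name and the bisection solver are assumptions of this sketch.

```python
import numpy as np

def el_logratio(x, mu):
    # Profile empirical log-likelihood ratio statistic for the mean mu:
    # maximize sum(log(n*w_i)) over w_i >= 0 with sum(w_i) = 1 and
    # sum(w_i*(x_i - mu)) = 0.  Duality gives w_i = 1/(n*(1 + lam*z_i)),
    # z_i = x_i - mu, where lam solves sum(z_i/(1 + lam*z_i)) = 0.
    z = x - mu
    if z.max() <= 0 or z.min() >= 0:
        return np.inf                    # mu outside the convex hull of the data
    def g(lam):
        return np.sum(z / (1.0 + lam * z))
    lo, hi = -1.0 / z.max(), -1.0 / z.min()   # keep all 1 + lam*z_i > 0
    pad = 1e-10 * (hi - lo)
    a, b = lo + pad, hi - pad
    for _ in range(200):                 # bisection: g is strictly decreasing
        mid = 0.5 * (a + b)
        if g(mid) > 0:
            a = mid
        else:
            b = mid
    lam = 0.5 * (a + b)
    # -2 * log EL-ratio; asymptotically chi-squared_1 under H0 (Wilks' phenomenon)
    return 2.0 * np.sum(np.log1p(lam * z))
```

The statistic is zero at the sample mean and grows as mu moves away, which is what makes chi-squared calibration of tests and confidence regions possible.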

    Committee: Steven MacEachern (Advisor); Mario Peruggia (Advisor); Christopher Hans (Committee Member); Yoonkyung Lee (Committee Member) Subjects: Statistics
  • 2. Alqahtani, Abdulaziz Numerical Methods for the Solution of Linear Ill-posed Problems

    PHD, Kent State University, 2022, College of Arts and Sciences / Department of Mathematical Sciences

    Linear ill-posed problems arise in various fields of science and engineering. Their solutions, if they exist, may not depend continuously on the observed data. To obtain stable approximate solutions, it is required to apply a regularization method. The main objective of this dissertation is to investigate regularization approaches and develop some numerical methods for solving problems of this kind. This work begins with an overview of linear ill-posed problems in continuous and discrete formulations. We review the most common regularization methods relying on some factorizations of the system matrix. Several iterative regularization strategies based on Krylov subspace methods are discussed, which are well-suited for solving large-scale problems. We then analyze the behavior of the symmetric block Lanczos method and the block Golub–Kahan bidiagonalization method when they are applied to the solution of linear discrete ill-posed problems. The analysis suggests that it generally is not necessary to compute the more expensive singular value decomposition when solving problems of this kind. The analysis of linear ill-posed problems often is carried out in function spaces using tools from functional analysis. The numerical solution of these problems typically is computed by first discretizing the problem and then applying tools from finite-dimensional linear algebra. We explore the feasibility of applying the Chebfun package to solve ill-posed problems with a regularize-first approach numerically. This allows a user to work with functions instead of vectors and with integral operators instead of matrices. The solution process is much closer to the analysis of ill-posed problems than standard linear algebra-based solution methods. The difficult process of explicitly choosing a suitable discretization is not required. The solution of linear ill-posed operator equations with the presence of errors in the operator and the data is discussed. 
An approximate solut (open full item for complete abstract)
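For small to medium problems of the kind discussed above, Tikhonov regularization has a closed form through the SVD. The following sketch (a generic textbook formulation, not code from the dissertation) shows the filter-factor view: small singular values, which amplify the data error, are damped rather than inverted directly.

```python
import numpy as np

def tikhonov_svd(A, b, mu):
    # Solve min ||A x - b||^2 + mu * ||x||^2 via the SVD of A.
    # Filter factors f_i = s_i^2 / (s_i^2 + mu) damp the contributions of
    # small singular values that would otherwise amplify noise in b.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + mu)
    return Vt.T @ (f * (U.T @ b) / s)
```

As mu -> 0 this reduces to the (unstable) pseudoinverse solution; larger mu trades fidelity for stability.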

    Committee: Lothar Reichel (Advisor); Jing Li (Committee Member); Barry Dunietz (Committee Member); Qiang Guan (Committee Member); Jun Li (Committee Member) Subjects: Applied Mathematics
  • 3. Alruwaili, Eman Iterative methods for the solution of the electrical impedance tomography inverse problem.

    PHD, Kent State University, 2023, College of Arts and Sciences / Department of Mathematical Sciences

    Ill-posed inverse problems arise in many fields of science and engineering, in both linear and nonlinear problems. Their solution, if it exists, is extremely sensitive to perturbations in the data (small perturbations in the data cause large oscillations in the obtained solution). The main challenge of working with these problems comes from the ill-conditioning, the nonlinearity, and the possibly large dimension of the problems. A well-known approach such as regularization aims to reduce the sensitivity of the problem by replacing the given problem with a nearby one, whose solution is less affected by perturbations in the available data. In this dissertation we mainly consider nonlinear discrete ill-posed problems that arise from the Electrical Impedance Tomography (EIT) problem. EIT is a cheap, non-invasive, radiation-free imaging technique which is used to recover the internal conductivity of a body using measurements from electrodes on its surface. The typical technique is to place electrodes on the body and measure the conductivity inside the object. A low-frequency current below a safety threshold is applied to the electrodes, making the technique harmless to the body. Mathematically, the reconstruction of the internal conductivity is a severely ill-posed inverse problem and yields a poor-quality solution. Moreover, the desired solution has sharp changes in the electrical properties that are typically challenging to reconstruct with traditional smoothing regularization methods. To remedy this difficulty, one solves a regularized problem that is better conditioned than the original problem by imposing constraints on the regularization term. In this work we propose a method to solve the general ℓp nonlinear EIT problem through an iteratively reweighted majorization-minimization strategy combined with the Gauss-Newton approach. Simulated numerical examples from a complete electrode model illustrate the effectiveness of our approach.
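To illustrate the ℓp idea, the sketch below runs an iteratively reweighted least-squares loop on the linear analogue min ||Ax - b||^2 + mu * ||Lx||_p^p. It is a simplified stand-in (function name, smoothing parameter eps, and iteration count are assumptions) for the majorization-minimization strategy that the thesis combines with Gauss-Newton linearizations of the nonlinear EIT forward map.

```python
import numpy as np

def irls_lp(A, b, L, mu, p=1.0, iters=30, eps=1e-6):
    # Iteratively reweighted least squares for min ||Ax-b||^2 + mu*||Lx||_p^p:
    # each sweep majorizes the l_p term by a weighted quadratic and solves
    # the resulting Tikhonov-type normal equations.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = (np.abs(L @ x)**2 + eps)**((p - 2) / 2)   # majorizer weights
        W = np.diag(w)
        x = np.linalg.solve(A.T @ A + mu * L.T @ W @ L, A.T @ b)
    return x
```

With p = 1 the weights grow where Lx is small, so the penalty approximates a sparsity-promoting term that preserves the sharp conductivity jumps smoothing penalties blur away.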

    Committee: Jing Li (Advisor); Lothar Reichel (Committee Member); Xiang Lian (Committee Member); Qiang Guan (Committee Member); Jun Li (Committee Member) Subjects: Applied Mathematics
  • 4. Onisk, Lucas Arnoldi-type Methods for the Solution of Linear Discrete Ill-posed Problems

    PHD, Kent State University, 2022, College of Arts and Sciences / Department of Mathematical Sciences

    This work is concerned with the development of accurate and efficient iterative methods for the solution of linear discrete ill-posed problems when the matrix involved is nonsymmetric. These problems often arise in science and engineering through the discretization of Fredholm integral equations of the first kind. The matrices that define these problems are typically severely ill-conditioned and may be rank-deficient. Because of this, the solution of linear discrete ill-posed problems may not exist or may be very sensitive to perturbations caused by errors in the available data. These difficulties can be reduced, for example, by applying iterative regularization techniques. Krylov subspace projection strategies have been used in tandem with iterative methods to form efficient and accurate solution methods. Specifically, the Arnoldi iteration is a well known iterative process that constructs an orthonormal basis of a Krylov subspace. The opening focus is on the development of a novel "approximate Tikhonov regularization" method based on constructing a low-rank approximation to the matrix in the linear discrete ill-posed problem by carrying out a few steps of the Arnoldi process. The subsequent chapter focuses on the description of three iterative methods that modify the generalized minimum residual (GMRES), block GMRES, and global GMRES methods for the solution of appropriate linear systems of equations. The primary contribution of this work to this field is the introduction of two block variants for when there are multiple right-hand sides in the linear system. In the final chapter the limitations of applying block GMRES methods to linear discrete ill-posed problems are discussed. While block algorithms can be executed efficiently on many computers, the work herein shows that available block algorithms may yield computed approximate solutions of unnecessarily poor quality. 
A so-called "local" block GMRES method that can overcome the problems associated with b (open full item for complete abstract)
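The Arnoldi process at the core of these GMRES-type methods can be sketched compactly. This is a generic textbook implementation (not the thesis code): k steps produce an orthonormal Krylov basis Q and an upper Hessenberg matrix H satisfying A Q_k = Q_{k+1} H.

```python
import numpy as np

def arnoldi(A, b, k):
    # k steps of the Arnoldi process on the Krylov subspace span{b, Ab, ...}:
    # returns Q (n x (k+1), orthonormal columns) and H ((k+1) x k, Hessenberg)
    # with A @ Q[:, :k] == Q @ H.
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)   # breakdown if this is zero
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H
```

GMRES-type regularization methods then solve a small least-squares problem with H in place of the large ill-conditioned A, with the number of steps k acting as a regularization parameter.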

    Committee: Alessandro Buccini (Advisor); Lothar Reichel (Advisor) Subjects: Applied Mathematics
  • 5. Zhou, Chenxi Density Estimation in Kernel Exponential Families: Methods and Their Sensitivities

    Doctor of Philosophy, The Ohio State University, 2022, Statistics

    This dissertation is concerned with the nonparametric density estimation problem in a kernel exponential family, which is an exponential family induced by a reproducing kernel Hilbert space (RKHS). The corresponding density estimation problem can be formulated as a convex minimization problem over a RKHS or a subset of it. The loss functionals we focus on are the negative log-likelihood (NLL) loss functional and the score matching (SM) loss functional. We propose a new density estimator called the early stopping SM density estimator, which is obtained by applying the gradient descent algorithm to minimizing the SM loss functional and terminating the algorithm early to regularize. We investigate various statistical properties of this density estimator. We also compare this early stopping SM density estimator with the penalized SM density estimator that has been studied in the literature and address their similarities and differences. In addition, we propose an algorithm to compute the penalized maximum likelihood (ML) density estimator that is obtained by minimizing the penalized NLL loss functional. We empirically compare the penalized and early stopping SM density estimators with the penalized ML density estimator and find that when there is a small amount of regularization (corresponding to small values of the penalty parameter or large values of the number of iterations), the regularized SM density estimates contain a bump or become a spike at the isolated observation, but the penalized ML density estimates do not. Moreover, if we remove the isolated observation, the resulting regularized SM density estimates do not contain a bump or a spike when the regularization is small. We attempt to explain why this happens. These observations motivate us to study the sensitivities of different density estimators to the presence of an additional observation. 
We extend the definition of the influence function by allowing its input to be function-valued statis (open full item for complete abstract)
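The role of early stopping as regularization, central to the early stopping SM estimator above, can be illustrated on the simplest possible case: gradient descent on a least-squares loss, where the iteration count plays the role of an (inverse) penalty parameter. This toy sketch is only an analogy and is unrelated to the dissertation's RKHS setting.

```python
import numpy as np

def gd_least_squares(A, b, steps, lr):
    # Gradient descent on 0.5 * ||A x - b||^2, started at zero.
    # Stopping after a few steps regularizes: components along large singular
    # values converge first, while noise-dominated small-singular-value
    # components (which would blow up the solution) converge last.
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * (A.T @ (A @ x - b))
    return x
```

On an ill-conditioned diagonal system, a few iterations recover the well-determined component while the ill-determined one stays near zero, exactly the filtering effect a penalty term would provide.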

    Committee: Vincent Vu (Advisor); Sebastian Kurtek (Committee Member); Yoonkyung Lee (Committee Member) Subjects: Statistics
  • 6. Peterson, Clayton Predicting Lung Function Decline and Pulmonary Exacerbation in Cystic Fibrosis Patients Using Bayesian Regularization and Geomarkers

    MS, University of Cincinnati, 2022, Medicine: Biostatistics (Environmental Health)

    BACKGROUND: Environmental exposures and community characteristics have been linked to rapid lung function decline and other adverse pulmonary outcomes in people with cystic fibrosis (CF). Geomarkers, the measurements of these exposures, have been linked to patient outcomes in other respiratory diseases, though broad-based geomarker studies are lacking and it is unknown which geomarkers will have the greatest predictive potential for rapid decline and pulmonary exacerbation (PEx) in CF. OBJECTIVE: A retrospective longitudinal cohort study was performed to determine whether and which geomarkers would be chosen via novel Bayesian joint covariate selection approaches and to compare the predictive performance of the resultant models for onset of PEx. METHODS: Non-stationary Gaussian linear mixed effects models were fitted to data from 151 cystic fibrosis patients aged 6 – 20 receiving care at the Cincinnati Children's Hospital Cystic Fibrosis Center (2007-2017). The outcome of interest was forced expiratory volume in 1 second of percent predicted (FEV1pp). Target functions were used to predict PEx onset according to an established definition based on drops in FEV1pp. Covariates included 11 clinical/demographic characteristics (age, sex, number of PEx-defined events within previous year, F508del mutation, pancreatic insufficiency, MEDICAID insurance use, BMI percentile, PA infection, MRSA infection, CF-related diabetes mellitus, and the number of hospital visits within the previous year), and 45 geomarkers comprising 8 categories (socioeconomic status, access to care, roadway proximity, crime, land cover, impervious descriptors, weather, and air pollution). Joint selection of covariates for predictive models was achieved using four Bayesian penalized regression models (elastic-net, adaptive lasso, ridge, and lasso). 
Unique covariate selections at both the 95% and 90% credible intervals (CIs) were fit to a linear mixed effects model with non-stationary stocha (open full item for complete abstract)
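For intuition about how penalized regression performs joint covariate selection, here is a plain (non-Bayesian) coordinate-descent lasso sketch; coefficients driven exactly to zero correspond to deselected covariates. The Bayesian versions in the thesis replace the fixed penalty with priors and select via credible intervals, so this code is only an illustrative analogue with assumed names.

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding operator: the proximal map of the l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=200):
    # Coordinate-descent lasso: min_b 0.5 * ||y - X b||^2 + lam * ||b||_1.
    # Each pass updates one coefficient against the partial residual;
    # weak predictors are thresholded exactly to zero (deselected).
    n, p = X.shape
    b = np.zeros(p)
    col2 = (X**2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]     # residual excluding feature j
            b[j] = soft(X[:, j] @ r, lam) / col2[j]
    return b
```

With an orthonormal design the solution is just a soft-thresholded projection, which makes the selection behavior easy to verify by hand.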

    Committee: Marepalli Rao Ph.D. (Committee Member); Rhonda Szczesniak Ph.D. (Committee Member) Subjects: Statistics
  • 7. Kedari, Sayali Ravindra Bayesian Learning in Computational Rheology: Applications to Soft Tissues and Polymers

    PhD, University of Cincinnati, 2022, Engineering and Applied Science: Mechanical Engineering

    The characterization of the viscoelastic response of polymers and soft tissue is significant in several areas like biomedical engineering and material processing. Computational modeling of and experimentation with such rheological materials are complex due to their time- and temperature-dependent nature. There is a critical need for reliable computational models that account for the inherent variability in experimental data. Rheological constitutive models characterize the viscoelastic response of materials using discrete or continuous relaxation spectra. The spectra are characterized by parameters (time constants, elastic and shear moduli) representing the relaxation process, and contain information about the molecular structure of polymers and soft materials. We can estimate the viscoelastic model parameters from stress relaxation experiments, but several challenges exist. For discrete spectra, one challenge is choosing the finite number of model parameters. In the case of continuous spectra, challenges include ill-posedness, the curse of dimensionality, parameter identifiability, as well as correlated and heteroscedastic data. For modeling temperature-dependent material behavior, the challenges include model complexity and limited availability of experimental data. To address these challenges, we aim to develop rigorous stochastic approaches. Bayesian methods offer a rigorous stochastic foundation and have gained significant interest due to the increasing availability of computational resources. The specific aims of this work are: Aim 1: Employ an information-theoretic approach, namely Fisher information, to develop criteria for evaluating experimental data to obtain an ideal range of parameters. Aim 2: Develop a hierarchical Bayesian (HB) approach to L2 regularization for inferring continuous spectra while considering heteroscedasticity. 
This approach applies to general linear inverse problems involving regularization and is not limited to (open full item for complete abstract)
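The ill-posedness of relaxation-spectrum inference can be seen in a tiny deterministic sketch: fitting a discrete spectrum G(t) = sum_k g_k exp(-t/tau_k) on a fixed tau grid with a plain L2 (ridge) penalty. This is only the non-Bayesian core of the idea; the hierarchical Bayesian treatment in the thesis infers the regularization strength and handles heteroscedasticity, and the grid and names here are assumptions.

```python
import numpy as np

def fit_discrete_spectrum(t, G, taus, mu):
    # L2-regularized least-squares fit of a discrete relaxation spectrum:
    # G(t) ~ sum_k g_k * exp(-t / tau_k) on a fixed grid of time constants.
    # The exponential design matrix is notoriously ill-conditioned, so the
    # ridge term mu * ||g||^2 stabilizes the normal equations.
    A = np.exp(-np.outer(t, 1.0 / taus))
    return np.linalg.solve(A.T @ A + mu * np.eye(len(taus)), A.T @ G)
```

With well-separated time constants and a small penalty, the moduli are recoverable; as the tau grid is refined, the problem degrades and mu must do real work.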

    Committee: Kumar Vemaganti Ph.D. (Committee Member); Gowtham Atluri Ph.D. (Committee Member); Sandeep Madireddy Ph.D. (Committee Member); Manish Kumar Ph.D. (Committee Member); Woo Kyun Kim Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 8. Lopez Gomez, Daniel High Dimensional Data Methods in Industrial Organization Type Discrete Choice Models

    Doctor of Philosophy, The Ohio State University, 2022, Economics

    This dissertation is composed of three main papers. Each of these papers studies a different classical discrete choice model setting within the realm of Industrial Organization (IO) that now has the added complexity of containing a high-dimensional component that renders ineffective the traditional methods used and thus requires alternative approaches. In the first paper, I study a static single equilibrium market entry game of homogenous firms that contains a high-dimensional set of exogenous market characteristics that could enter a firm's profit function. In this type of high-dimensional setting there is a high risk of overfitting, i.e., estimating model parameters that are tailored too closely to the available sample data and thus do not generalize well to new data. The focus of this paper is exploring the use of different regularization techniques with the purpose of reducing overfitting when predicting market entry for a previously unobserved market. The second paper extends the previous market entry framework by now examining a static multiple equilibria market entry game of heterogeneous firms. The high-dimensional component in this setting arises from the way in which such a model is partially identified, which is through a set of moment inequalities that have to be met for a particular set of values of the parameters of interest to be consistent with the data. The number of moment inequalities that characterize this type of model can very easily grow beyond traditional sample sizes, thus requiring special attention from the researcher when testing whether a vector of values for the parameters of interest is indeed accepted by the model. This paper studies different approaches of high-dimensional testing applied to this market entry model and evaluates their performance. 
Finally, in the third paper I consider a different but still extremely relevant model of Industrial Organization, the aggregate discrete choice model with random coefficients for dema (open full item for complete abstract)
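A minimal version of the first paper's setting is a regularized binary-choice fit: predicting entry (1/0) from market characteristics with an L2 penalty to curb overfitting. The sketch below is a generic illustration with assumed names, not the dissertation's specification (which compares several regularization techniques).

```python
import numpy as np

def logit_l2(X, y, lam, lr=0.1, steps=2000):
    # L2-regularized logistic regression by gradient descent:
    # minimizes mean log-loss + (lam/2) * ||w||^2.  The penalty shrinks
    # coefficients on weak market characteristics, reducing overfitting
    # when predicting entry in previously unobserved markets.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + lam * w
        w -= lr * grad
    return w
```

With separable data the unpenalized coefficients diverge; the penalty pins them at a finite value, which is one concrete face of the bias-variance trade-off the paper explores.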

    Committee: Jason Blevins (Advisor); Adam Dearing (Committee Member); Robert de Jong (Committee Member) Subjects: Economics
  • 9. UGWU, UGOCHUKWU Iterative tensor factorization based on Krylov subspace-type methods with applications to image processing

    PHD, Kent State University, 2021, College of Arts and Sciences / Department of Mathematical Sciences

    This work is concerned with structure preserving and other techniques for the solution of linear discrete ill-posed problems with transform-based tensor-tensor products, e.g., the t-product and the invertible linear transform product. Specifically, we focus on two categories of solution methods, those that involve flattening, i.e., reduce the tensor equation to an equivalent equation involving a matrix and a vector, and those that preserve the tensor structure by avoiding flattening. Various techniques based on Krylov subspace-type methods for solving third order tensor ill-posed problems are discussed. The data is a laterally oriented matrix or a general third order tensor. Regularization of tensor ill-posed problems by Tikhonov's approach and by truncated iterations is considered. Golub-Kahan bidiagonalization-type, Arnoldi-type, and Lanczos-type processes are applied to reduce large-scale Tikhonov minimization problems to small-sized problems. A few steps of the t-product bidiagonalization process can be employed to inexpensively compute approximations of the singular tubes of the largest Frobenius norm and the associated left and right singular matrices. A less prohibitive computation of approximations of eigentubes of the largest Frobenius norm and the corresponding eigenmatrix by a few steps of the t-product Lanczos process is considered. The interlacing of the Frobenius norm of the singular tubes is shown and applied. The discrepancy principle is used to determine the regularization parameter and the number of iterations by a chosen method. Several truncated iteration techniques, e.g., SVD-like, and those based on the above processes are considered. Solution methods for the weighted tensor Tikhonov minimization problem with weighted global and non-global bidiagonalization processes are discussed. The weights on the fidelity and regularization parts of this problem are suitably defined symmetric positive definite (SPD) tensors. 
The computation of the inverse of (open full item for complete abstract)
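The t-product underlying these methods admits a compact FFT-based implementation: the product of two third-order tensors reduces to independent matrix products of their frontal slices in the Fourier domain along the tube (third) dimension. This is a standard construction, sketched here in numpy (not the thesis code).

```python
import numpy as np

def t_product(A, B):
    # t-product of third-order tensors A (m x p x n3) and B (p x q x n3):
    # DFT along the tube dimension, slicewise matrix products, inverse DFT.
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # per-frequency matrix product
    return np.real(np.fft.ifft(Ch, axis=2))
```

The identity tensor for this product has the identity matrix as its first frontal slice and zeros elsewhere, which gives a quick correctness check.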

    Committee: Lothar Reichel (Advisor, Committee Chair); Xiaoyu Zheng (Committee Member); Barry Dunietz (Committee Member); Sergij Shyanovskii (Committee Member); Jing Li (Committee Member) Subjects: Applied Mathematics
  • 10. Zeng, Jianfeng Time Series Forecasting using Temporal Regularized Matrix Factorization and Its Application to Traffic Speed Datasets

    MS, University of Cincinnati, 2021, Engineering and Applied Science: Computer Science

    As technology has advanced, the time series data collected has grown larger and larger, and the problems of time series forecasting have thus become high dimensional. A recent study on time series forecasting attempted to tackle the problem using temporal regularized matrix factorization (TRMF) (Yu, Rao and Dhillon, 2016). In this research, a method is proposed to facilitate the application of time series forecasting using TRMF and a list of other algorithms on traffic speed data. The application is improved in several ways. First, a graph regularization is incorporated into TRMF to utilize spatial dependencies in the data; this can improve the forecasting accuracy when strong spatial dependencies are present. Second, in order to incorporate temporal dependencies, a lag set normally has to be defined in advance, which requires the user to have both domain knowledge and familiarity with the data; the proposed method can instead define a lag set algorithmically, even for data containing missing values. Third, the forecasting result is improved by using imputed data when the missing-value rate is high or the data are lacking or damaged. Finally, a multi-step forecasting strategy is used to improve the accuracy of results for long-term forecasting.
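The factorization-with-temporal-penalty idea can be sketched with alternating least squares: Y (series x time) is factored as F X, with ridge penalties and a first-difference smoothness penalty on the latent time series X. This is a deliberately simplified stand-in (names, penalty form, and the Sylvester-equation update are assumptions of this sketch) for TRMF's learned autoregressive regularizer.

```python
import numpy as np

def trmf_als(Y, rank, lam=0.1, lam_t=1.0, sweeps=30, seed=0):
    # ALS sketch of temporally regularized matrix factorization:
    # min ||Y - F X||^2 + lam*(||F||^2 + ||X||^2) + lam_t*||X D||^2,
    # where D is the first-difference operator along time.  The X update
    # solves the Sylvester equation (F'F) X + X B = F'Y via eigh.
    rng = np.random.default_rng(seed)
    n, T = Y.shape
    X = 0.1 * rng.standard_normal((rank, T))
    D = np.eye(T, T - 1) - np.eye(T, T - 1, k=-1)   # first differences
    B = lam * np.eye(T) + lam_t * (D @ D.T)
    bval, Ub = np.linalg.eigh(B)
    for _ in range(sweeps):
        F = np.linalg.solve(X @ X.T + lam * np.eye(rank), X @ Y.T).T
        aval, Ua = np.linalg.eigh(F.T @ F)
        C = Ua.T @ (F.T @ Y) @ Ub
        X = Ua @ (C / (aval[:, None] + bval[None, :])) @ Ub.T
    return F, X
```

Forecasting then proceeds by extrapolating the low-dimensional latent series X rather than every raw sensor series, which is what makes the approach scale to large traffic networks.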

    Committee: Anca Ralescu Ph.D. (Committee Chair); Kenneth Berman Ph.D. (Committee Member); Dan Ralescu Ph.D. (Committee Member) Subjects: Computer Science
  • 11. Pasha, Mirjeta Krylov subspace type methods for the computation of non-negative or sparse solutions of ill-posed problems

    PHD, Kent State University, 2020, College of Arts and Sciences / Department of Mathematical Sciences

    Ill-posed inverse problems arise in many fields of science and engineering. Their solution, if it exists, is very sensitive to perturbations in the data. In this thesis we consider linear discrete ill-posed problems. The challenge of working with these problems comes from the ill-conditioning and the possible large dimension of the problems. Regularization methods try to reduce the sensitivity by replacing the given problem with a nearby one, whose solution is less affected by perturbations. For small to medium size problems, we describe how the generalized singular value decomposition can be combined with iterated Tikhonov regularization and we illustrate that the method so obtained determines approximate solutions of higher quality than the more commonly used approach of pairing generalized singular value decomposition with (standard) Tikhonov regularization. The regularization parameter is determined with the aid of the discrepancy principle. In the remainder of the thesis we focus on large scale problems. They are solved by projecting them into a Krylov or generalized Krylov subspace of fairly small dimension. Bregman-type iterative methods have attracted considerable attention in recent years due to their ease of implementation and the high quality of the computed solutions they deliver. However, these iterative methods may require a large number of iterations and this reduces their attractiveness. We develop a computationally attractive linearized Bregman algorithm by projecting the problem to be solved into an appropriately chosen low-dimensional Krylov subspace. The projection reduces both the number of iterations and the computational effort required for each iteration. A variant of this solution method, in which nonnegativity of each computed iterate is imposed, also is described. Recently, the use of a $p$-norm to measure the fidelity term and a $q$-norm to meas (open full item for complete abstract)
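The basic linearized Bregman iteration referenced above is short enough to sketch in full (a generic form with assumed parameter names, without the Krylov-subspace projection that is the thesis's contribution): a dual variable accumulates residual gradients, and the iterate is its soft-thresholded image, which promotes sparsity.

```python
import numpy as np

def linearized_bregman(A, b, delta=1.0, mu=1.0, iters=500):
    # Linearized Bregman iteration for sparse solutions of A x = b:
    # v accumulates scaled residual gradients; x = mu * shrink(v, 1) is the
    # soft-thresholding (shrinkage) of v, driving small components to zero.
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v += delta * (A.T @ (b - A @ x))
        x = mu * np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0)
    return x
```

Each step costs one multiplication by A and one by its transpose; projecting into a low-dimensional Krylov subspace, as in the thesis, cuts both the per-iteration cost and the number of iterations.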

    Committee: Lothar Reichel (Advisor); Alessandro Buccini (Advisor); Jing Li (Committee Member); Sergij Shiyanovskii (Committee Member); Arvind Bansal (Committee Member); Xiaoyu Zheng (Committee Member) Subjects: Applied Mathematics
  • 12. Patro, Abinash Mesh Regularization Through Introduction of Mesh Size based Scaling Factor using LS Dyna Explicit Analysis

    MS, University of Cincinnati, 2019, Engineering and Applied Science: Mechanical Engineering

    Meshing is a crucial element of any FEA simulation. Accurate modeling and simulation require a close approximation of standard test conditions and a careful definition of the parameters influencing the phenomenon. In current practice, the mesh size needs to be small to get an accurate estimate of the experimental result; however, this leads to long simulation times that consume vital time and resources, while increasing the mesh size leads to inaccurate results. This study tackles the problem through mesh regularization, introducing a mesh-size-based scaling factor in the material damage model. Mesh regularization, or mesh size optimization, is studied in three examples: a ballistic test, a milling test, and an engine casing penetration test. The mesh-size-based scaling factor is found through tensile tests and through comparison of the numerical simulation results with the experimental tests; the scaling factor is then implemented in numerical simulations of a real structural experiment considered in this study. Two different materials, namely a titanium alloy (Ti-6Al-4V) and an aluminum alloy (Al7020-T651), are used. The material damage models used are Johnson-Cook and GISSMO (Generalized Incremental Stress State-dependent Model). The study also derives theoretical models to predict accurate scaling factors for mesh regularization and presents a summary of theoretical models published in the literature for the same purpose. The theoretical scaling factors are compared with the experimental scaling factors found through the numerical simulation of the experiments, and the results are presented.

    Committee: Ala Tabiei Ph.D. (Committee Chair); Michael Alexander-Ramos Ph.D. (Committee Member); Woo Kyun Kim Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 13. Park, Yonggi PARAMETER SELECTION RULES FOR ILL-POSED PROBLEMS

    PHD, Kent State University, 2019, College of Arts and Sciences / Department of Mathematical Sciences

    Tikhonov regularization is one of the available methods for computing an approximate solution of linear discrete ill-posed problems with error-contaminated data. A regularization parameter balances the influence of a fidelity term, which measures how well the data are approximated, and of a regularization term, which dampens the propagation of the data error into the computed approximate solution. The value of the regularization parameter is important for the quality of the computed solution. This thesis introduces several algorithms for choosing this parameter, both when the discrepancy principle can be applied and when it cannot. In addition, among the many methods for solving discrete ill-posed problems, the linearized Bregman algorithm is studied.
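When an estimate of the noise norm is available, the discrepancy principle chooses the Tikhonov parameter so the residual matches the noise level. The sketch below (a generic SVD-based implementation with assumed names, not the thesis's algorithms) exploits the fact that the residual norm is monotonically increasing in mu and bisects on log(mu).

```python
import numpy as np

def discrepancy_tikhonov(A, b, noise_norm, tau=1.01):
    # Discrepancy principle for Tikhonov regularization: pick mu with
    # ||A x_mu - b|| ~ tau * noise_norm.  Via the SVD the residual is
    # sum((mu/(s_i^2+mu) * beta_i)^2) plus the part of b outside range(A),
    # a monotonically increasing function of mu, so bisection applies.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    b_out2 = b @ b - beta @ beta          # ||(I - U U^T) b||^2
    def resid(mu):
        r = (mu / (s**2 + mu)) * beta
        return np.sqrt(r @ r + b_out2)
    lo, hi = 1e-16, 1e16
    for _ in range(200):                  # bisection on log(mu)
        mid = np.sqrt(lo * hi)
        if resid(mid) < tau * noise_norm:
            lo = mid
        else:
            hi = mid
    mu = np.sqrt(lo * hi)
    x = Vt.T @ (s / (s**2 + mu) * beta)
    return mu, x
```

The safety factor tau > 1 guards against underestimating the noise; the methods in the thesis address the harder case where no reliable noise estimate exists.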

    Committee: Lothar Reichel (Advisor); Alessandro Buccini (Committee Co-Chair); Jing Li (Committee Member); Jun Li (Committee Member); Kambiz Ghazinour (Committee Member); Austin Melton Jr (Committee Member); Andrew Tonge (Other); James Blank (Other) Subjects: Applied Mathematics
  • 14. Komandur, Deepak Load Identification using Matrix Inversion Method (MIM) for Transfer Path Analysis (TPA)

    MS, University of Cincinnati, 2019, Engineering and Applied Science: Mechanical Engineering

    The main objective of this work was to assess the practical application of using the matrix inversion method (MIM) for the transfer path analysis (TPA) process. An analysis of the basic transfer path analysis process using matrix inversion can help understand the accuracy of the matrix inversion process prior to studying the operational transfer path analysis process. The essence of the matrix inversion method is the idea that the operational forces in the system can be recalculated from a frequency response function (FRF) measurement when the operational forces cannot be measured during operation. An operational system, therefore, is broken down into two parts: an active subsection and a passive subsection. The response data is measured during operation and the FRFs are measured when the source is disconnected, so that the product of the FRF between the mounting location and the receiver location with the force input into the system gives a response equal to the operational response. However, as the FRF matrix has to be inverted in order to calculate the forces, the errors present in the measurement of the FRF are amplified during the inversion process, leading to inaccurate results. To obtain better results, methods such as the Moore-Penrose pseudoinverse and singular value decomposition (SVD) of the FRF matrix with various threshold criteria have been investigated. An attempt has been made to identify which of the methods provides an accurate representation of the forces and responses of the system. During the course of this thesis, it was discovered that the applicability of the above methods depends on the experimental setup in use and that one has to investigate a few methods simultaneously to understand the system better based on the results.
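The truncated-SVD pseudoinverse described above can be sketched in a few lines (a generic implementation; the function name and threshold criterion are illustrative, and the thesis compares several such criteria): singular values below a relative tolerance are rejected so that FRF measurement errors are not amplified through the inversion.

```python
import numpy as np

def forces_tsvd(H, x, rel_tol=1e-3):
    # Matrix inversion method with truncated SVD: estimate operational
    # forces F from responses x via F = H^+ x, rejecting singular values
    # below rel_tol * s_max to limit error amplification in the inversion.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s > rel_tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ x) / s[keep])
```

For a well-conditioned FRF matrix all singular values survive and the estimate reduces to the ordinary Moore-Penrose least-squares solution.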

    Committee: Randall Allemang Ph.D. (Committee Chair); Aimee Frame Ph.D. (Committee Member); Allyn Phillips Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 15. Gurav, Hardik Experimental Validation of the Global Transmissibility (Direct Method) Approach to Transfer Path Analysis

    MS, University of Cincinnati, 2019, Engineering and Applied Science: Mechanical Engineering

    The main objective of this thesis work was to experimentally assess the validity of the global transmissibility direct transmissibility approach to the transfer path analysis process of vibration transmission path identification. The use of transmissibility functions has received renewed attention, as it can bypass some of the rudimentary problems associated with the transfer path analysis process, such as the measurement of operational forces or the necessity to decouple the source/active parts of the system to measure the frequency response functions. The essence of the transmissibility approach relies on the idea of direct transmissibilities, which can be computed from the measurable global transmissibilities; this implies that the response at any degree of freedom to an external force on the system can be decomposed in terms of the remaining degree-of-freedom responses and the system's direct transmissibilities. The experimentation was conducted on a simple in-house fabricated mechanical system excited by an electromechanical shaker. Despite its advantages over the classical transfer path analysis approach, the global transmissibility direct transmissibility method has its own set of challenges, the most prominent one being ill-conditioning of the matrices undergoing matrix inversion operations, which can often result in unreliable outcomes. To mitigate this problem, various regularization techniques based on improved singular value rejection have been implemented. Among them, singular value rejection based on the FRF error threshold gives overall satisfactory results in terms of the operational response reconstruction, in both trend and magnitude. The probable reasons for deviations in the results, and why some regularization techniques work while others are ineffective, are investigated subsequently. 
Based on these a course of action is suggested in an effort to trim down these deviations and develop a reliable test (open full item for complete abstract)
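The ill-conditioned inversion described above is the classic setting for singular value rejection. The sketch below is illustrative only: it rejects singular values with a simple relative threshold rather than the FRF-error-threshold criterion studied in the thesis, and the matrix `T` is a hypothetical stand-in for a measured transmissibility matrix.

```python
import numpy as np

def truncated_pinv(T, rel_tol=1e-3):
    """Pseudoinverse of T with singular values below rel_tol * s_max rejected."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    keep = s >= rel_tol * s[0]          # reject small singular values
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ np.diag(s_inv) @ U.T

# Ill-conditioned example: two nearly dependent columns
T = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-9]])
x_true = np.array([2.0, 3.0])
b = T @ x_true
x_naive = np.linalg.solve(T, b)        # dominated by rounding error
x_reg = truncated_pinv(T) @ b          # stable minimum-norm estimate
```

The regularized solve gives up resolving the nearly dependent directions in exchange for a bounded, repeatable answer, which is the trade-off behind all of the rejection criteria compared in the thesis.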

    Committee: Randall Allemang Ph.D. (Committee Chair); Allyn Phillips Ph.D. (Committee Member); David Thompson Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 16. Ambrozic, Courtney Image Deblurring for Material Science Applications in Optical Microscopy

    Master of Science (M.S.), University of Dayton, 2018, Electrical Engineering

    The objective of this research is to develop an application-specific image deblurring algorithm for microscopic material images. In microscopy, there are two types of image blur---one due to the limitations of the microscope, and another due to defocus. Defocus blur is particularly problematic in the case of spatially-varying materials, where the texture of the material surface is not flat. Through various deconvolution techniques, the image can be deblurred and high-frequency components can be restored. Through our partnership with the Materials and Manufacturing Directorate at the Air Force Research Lab (AFRL), we have developed an optimal deblurring method specifically for material images. We tailor our deblurring method based on a priori knowledge of the characteristics of the material. The specificity of the material features allows us to impose stronger constraints on the defocus blur, which we leverage to handle spatially varying material surfaces, whose defocus blur is non-uniform across the image. The significance of this research is the development of a deblurring algorithm capable of handling a larger amount of blur and noise than state-of-the-art methods. Existing image deblurring algorithms are designed to handle diverse scene contents and blur kernels with large degrees of freedom; as a result, they can handle only small amounts of blur and noise. By targeting the types of images acquired by a specific microscope modality, we are able to recover finer details within the image while handling a larger degree of blur, because the solution space is significantly constrained.
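For a concrete picture of non-blind deconvolution, here is a generic regularized inverse filter (Wiener-style) in the Fourier domain. This is a textbook baseline, not the thesis's tailored algorithm; the box-blur kernel, image size, and regularization constant `k` are all illustrative.

```python
import numpy as np

def wiener_deblur(blurred, psf, k=1e-2):
    """Regularized inverse filter: a generic non-blind deconvolution step."""
    H = np.fft.fft2(psf, s=blurred.shape)        # transfer function of the blur
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)  # k damps noise amplification
    return np.real(np.fft.ifft2(F_hat))

# Toy example: blur a sharp test image with a small box PSF, then restore it.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0     # sharp square
psf = np.zeros((32, 32)); psf[:3, :3] = 1.0 / 9.0   # 3x3 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf, k=1e-4)
```

The constant `k` plays the same role as a prior: raising it suppresses noise amplification at frequencies where the blur transfer function is small, at the cost of fine detail, which is why stronger, material-specific priors permit handling larger blur.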

    Committee: Keigo Hirakawa (Advisor); Craig Przybyla (Committee Member); Jeff Simmons (Committee Member) Subjects: Computer Engineering; Computer Science; Electrical Engineering; Engineering
  • 17. Yu, Ying The visual perception of 3D shape from stereo: Metric structure or regularization constraints?

    Master of Arts, The Ohio State University, 2017, Psychology

    A substantial number of psychophysical studies have found that the visual perception of 3D shape from stereo is systematically distorted, such that the perceived shape is progressively compressed along the depth dimension as the viewing distance increases. This suggests that 3D shape perception depends on the perception of 3D metric structure, which requires the depth-magnitude information derived from stereo to reconstruct local structures such as angles and line segments. These studies, however, have been criticized by Pizlo (Pizlo, 2008; Pizlo, Li, Sawada, & Steinman, 2014) for using impoverished stimuli that are not sufficiently well-structured to allow the application of powerful regularization constraints such as symmetry, planarity, and compactness. His model, which takes these constraints as prior cues and recovers 3D shapes in a holistic way, can reliably reconstruct a shape from a single projection image. In Pizlo's model, the depth-order information derived from stereo is the only binocular cue needed to improve the reconstructed shape to a nearly veridical level. The two theoretical positions differ in many respects, and we want to know which one better accounts for the mechanism underlying people's perception of 3D shape from stereo. Two experiments were conducted to evaluate people's ability to disambiguate 3D shapes defined by binocular disparity within the ambiguity family formed by stretching a 3D shape in depth (i.e., the Z-scale family), using different stimuli and tasks under different viewing conditions. Although observers were able to make reliable judgments, most of them exhibited systematic failures of shape constancy over changes in viewing angle and viewing distance. These findings can be largely accounted for by the misperception of metric structure and cannot be fitted by Pizlo's model. 
A few observers in Experiment 2, however, produced a different judgment pattern that cannot be explained by the misperception of metric (open full item for complete abstract)
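The Z-scale ambiguity family has a one-line formalization: scale the depth coordinate of every point. The sketch below is an illustrative simplification using orthographic projection of a hypothetical cube, just to show why a single projection cannot pick out one member of the family and why depth-magnitude cues are needed.

```python
import numpy as np

def z_scale(points, s):
    """Stretch a 3D shape along the depth (Z) axis by factor s."""
    scaled = points.copy()
    scaled[:, 2] *= s
    return scaled

def orthographic_projection(points):
    """Drop depth: the image seen by a camera looking down the Z axis."""
    return points[:, :2]

# A cube's vertices, centered at the origin
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float)
stretched = z_scale(cube, 1.7)   # another member of the same Z-scale family

# Under orthographic projection the two shapes produce identical images,
# so disambiguating within the family requires depth-magnitude information.
```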

    Committee: Alexander Petrov (Advisor) Subjects: Psychology
  • 18. Hashemi, Seyyed Amirreza PARAMETER CHOICES FOR THE SPLIT BREGMAN METHOD APPLIED TO SIGNAL RESTORATION

    Master of Science, University of Akron, 2016, Applied Mathematics

    This thesis studies the parameters involved in the Split Bregman Method (SBM) when it is applied to signal restoration problems. SBM has been used increasingly on signal processing problems in recent years, in particular for compressed sensing, and is an iterative method for solving l1-regularized optimization problems. Tikhonov regularization is one of the most common methods for solving discrete ill-posed problems, and it appears as one of the steps inside SBM. Four standard parameter choice methods for Tikhonov regularization were examined in order to find the best values for the regularization and threshold parameters inside SBM: the L-curve, generalized cross validation (GCV), the discrepancy principle (DP), and unbiased predictive risk estimation (UPRE). Several numerical experiments were performed on signals with sharp edges to compare the different approaches.
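To see where a Tikhonov-type solve sits inside SBM, here is a minimal 1D sketch of split Bregman applied to edge-preserving denoising, min_u (mu/2)||u - f||^2 + ||Du||_1. This is the generic textbook form of the method, not the thesis's experimental setup; the parameters `mu` and `lam` and the test signal are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Shrinkage operator: the closed-form l1 proximal step inside SBM."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_denoise(f, mu=10.0, lam=1.0, iters=50):
    """Minimal 1D split Bregman for min_u mu/2 ||u - f||^2 + ||Du||_1."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)         # forward-difference operator
    A = mu * np.eye(n) + lam * D.T @ D     # Tikhonov-type system matrix
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(iters):
        # Quadratic (Tikhonov-like) subproblem: a regularized linear solve
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = soft_threshold(Du + b, 1.0 / lam)  # l1 subproblem, threshold 1/lam
        b = b + Du - d                         # Bregman variable update
    return u

# Noisy piecewise-constant signal with sharp edges
rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(30), 2.0 * np.ones(30), -1.0 * np.ones(30)])
noisy = truth + 0.1 * rng.standard_normal(truth.size)
denoised = split_bregman_denoise(noisy, mu=20.0, lam=2.0)
```

The linear solve in the loop is where a Tikhonov parameter choice rule applies, and `1/lam` is the threshold parameter; these are exactly the two quantities the thesis's four selection rules target.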

    Committee: Malena Espanol (Advisor); Wilber Patrick (Committee Member); Kreider Kevin (Committee Member) Subjects: Applied Mathematics; Mathematics
  • 19. Hsu, Ting-Hao A Geometric Singular Perturbation Theory Approach to Viscous Singular Shocks Profiles for Systems of Conservation Laws

    Doctor of Philosophy, The Ohio State University, 2015, Mathematics

    The concept of singular shocks was introduced in a series of papers in the 1980s, by Keyfitz and Kranzer, in order to solve Riemann problems for a class of equations which cannot be solved using classical solutions. Classical solutions for Riemann problems are measurable functions composed of regular shocks and rarefactions, and singular shocks are distributions involving delta measures that are weak limits of approximate viscous solutions. During the past decades, many abstract theories of singular shocks were developed, and many examples of this type of solution in problems modeling physical phenomena were discovered. We study singular shocks as self-similar zero-viscosity limits via the viscous regularization u_t + f(u)_x = εt u_xx for two systems of conservation laws. The first system models incompressible two-phase fluid flow in one space dimension, and the second one is the Keyfitz-Kranzer system. Singular shocks for both systems have been analyzed in the literature, and the results are enhanced in this dissertation. We improve and apply theorems from Geometric Singular Perturbation Theory, including Fenichel's Theorems, the Exchange Lemma, and the Corner Lemma, to prove existence and convergence of viscous profiles for singular shocks for those two examples. We also derive estimates for the growth rates of the unbounded viscous solutions. In particular, it is demonstrated that, although viscous solutions for these two systems both have shock layers of widths of order ε, they tend to infinity in quantitatively different manners. For the two-phase flow model, the maximum value of the solution is of order log(1/ε), while for the Keyfitz-Kranzer system, the maximum value is of order 1/ε^2.
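The factor t in the viscous term εt u_xx is what makes the regularization compatible with self-similar Riemann solutions. A standard one-line check, with ξ = x/t the usual self-similar variable:

```latex
Substituting the ansatz $u(x,t) = U(\xi)$, $\xi = x/t$, gives
\[
  u_t = -\tfrac{\xi}{t}\,U'(\xi), \qquad
  f(u)_x = \tfrac{1}{t}\,\bigl(f(U)\bigr)'(\xi), \qquad
  \varepsilon t\,u_{xx} = \tfrac{\varepsilon}{t}\,U''(\xi),
\]
so multiplying $u_t + f(u)_x = \varepsilon t\,u_{xx}$ by $t$ yields the
$t$-independent profile equation
\[
  -\xi\,U'(\xi) + \bigl(f(U(\xi))\bigr)' = \varepsilon\,U''(\xi).
\]
```

The viscous profiles studied in the dissertation are solutions of this ξ-equation, whose singular limits as ε → 0 produce the delta-measure behavior described above.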

    Committee: Barbara Keyfitz (Advisor); Ovidiu Costin (Committee Member); Saleh Tanveer (Committee Member) Subjects: Mathematics
  • 20. Yu, Xuebo Generalized Krylov subspace methods with applications

    PHD, Kent State University, 2014, College of Arts and Sciences / Department of Mathematical Sciences

    The Tikhonov regularization method is a popular method for solving linear discrete ill-posed problems. The regularized problems can be solved with the aid of the generalized singular value decomposition (GSVD) when the problem is of small to medium size. This decomposition is not practical to use when the problem is large, since the computation of the GSVD then is too expensive. The idea is to construct a solution subspace of small size with the aid of a generalized Krylov subspace method and find a solution in this subspace as an approximation to the solution in the full space. We refer to this as a reduction method. Several reduction methods for solving large Tikhonov regularization problems have been developed and are discussed in the literature. In this work we add three novel reduction methods to this family. Our methods can give approximate solutions of higher accuracy than the GSVD and are therefore attractive alternatives to the GSVD also when the matrices are small enough for the latter to be computed. In the context of ε-pseudospectrum computations, we propose a new rational Arnoldi method that is well suited for the situation when the rational functions involved have few distinct poles that are applied in a cyclic fashion.
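The reduction idea can be sketched generically: a few steps of Golub-Kahan bidiagonalization build a small solution subspace, and the Tikhonov problem is solved there. This is a standard reduction, not any of the three novel methods of the dissertation; the smoothing-kernel test problem and all parameters are illustrative.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization (A @ V = U @ B), reorthogonalized."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j]
        v -= V[:, :j] @ (V[:, :j].T @ v)          # keep V orthonormal
        B[j, j] = np.linalg.norm(v)               # alpha_j
        V[:, j] = v / B[j, j]
        u = A @ V[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # keep U orthonormal
        B[j + 1, j] = np.linalg.norm(u)           # beta_{j+1}
        U[:, j + 1] = u / B[j + 1, j]
    return U, B, V

def projected_tikhonov(A, b, mu, k):
    """Solve min ||Ax - b||^2 + mu^2 ||x||^2 in a k-dimensional Krylov subspace."""
    U, B, V = golub_kahan(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)                    # U^T b = ||b|| e_1
    # The small (k+1) x k Tikhonov problem replaces the large one
    y = np.linalg.solve(B.T @ B + mu**2 * np.eye(k), B.T @ rhs)
    return V @ y

# Ill-posed toy problem: a discretized Gaussian smoothing kernel
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-50 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-6 * np.random.default_rng(2).standard_normal(n)
x_k = projected_tikhonov(A, b, mu=1e-4, k=15)
```

Only matrix-vector products with A and A.T are needed, and the regularized solve involves a (k+1) x k matrix instead of an n x n one, which is what makes reduction methods attractive for large problems.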

    Committee: Lothar Reichel Dr. (Advisor); Xiaoyu Zheng Dr. (Committee Member); Jing Li Dr. (Committee Member); Arden Ruttan Dr. (Committee Member); Paul A. Farrell Dr. (Committee Member) Subjects: Mathematics