Search Results

(Total results 24)


  • 1. Ellis, Nicholas Identifying Quasi-Identical RF Power Amplifiers through EVM and NMSE Metrics using Digital Post Distortion Algorithms

    Master of Science, The Ohio State University, 2025, Electrical and Computer Engineering

    In this thesis, two papers that I have published on power amplifier identification and power amplifier linearization are presented. The first paper shows that quasi-identical power amplifiers can be differentiated through the unique non-linearities inherent to each individual power amplifier (PA). It is experimentally verified that first linearizing the PA outputs with the same digital post distortion (DPoD) technique, tuned to correct the non-linearities of the reference PA, magnifies the differences observed in the EVM and NMSE metrics. As the signal-to-noise ratio (SNR) of the distorted output signal increases, the differences between the EVM/NMSE measurements of the two PAs also increase. At a high enough SNR, two quasi-identical PAs can be reliably differentiated using the EVM/NMSE metrics once DPoD has been applied. The second paper, a continuation of the first, uses a generalized cubic spline basis (GCSB) with selective deep memory to perform an enhanced DPoD. It is experimentally verified that deep memory in the GCSB model not only increases the performance of DPoD but also greatly magnifies the differences observed in the normalized mean squared error (NMSE) of the linearized PA output signal relative to the reference input signal. This technique can thus be used to reliably differentiate between two quasi-identical power amplifiers from the same manufacturer. Lastly, a signal visualization tool developed to aid signal analysis and the understanding of power amplifier linearization is presented.
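
    A minimal numpy sketch of the two metrics as they are commonly defined (my formulation, not necessarily the thesis's exact normalization; ref is the complex-baseband reference waveform and meas the DPoD-linearized PA output):

      import numpy as np

      def nmse_db(ref, meas):
          # NMSE in dB: total error energy normalized by reference energy.
          err = meas - ref
          return 10 * np.log10(np.sum(np.abs(err)**2) / np.sum(np.abs(ref)**2))

      def evm_percent(ref, meas):
          # RMS EVM as a percentage of the reference RMS level.
          err = meas - ref
          return 100 * np.sqrt(np.mean(np.abs(err)**2) / np.mean(np.abs(ref)**2))

    Applying the same reference-PA DPoD model to both amplifiers and comparing these two numbers is the identification mechanism the first paper studies; the claim is that the gap between the amplifiers' scores widens as the measurement SNR increases.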

    Committee: Patrick Roblin (Advisor); Joel Johnson (Committee Member) Subjects: Electrical Engineering
  • 2. Vangani, Harsh Modeling the geoid and the deflection of the vertical using tensor-product B-splines on the sphere

    Master of Science, The Ohio State University, 2005, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 3. Li, Youjun Semiparametric and Nonparametric Model Approaches to Causal Mediation Analysis for Longitudinal Data

    Doctor of Philosophy, Case Western Reserve University, 2024, Epidemiology and Biostatistics

    There has been a lack of causal mediation analysis methods developed for complex longitudinal data. Most existing work focuses on extensions of parametric models that are well developed for causal mediation analysis of cross-sectional data. To better handle complex, including irregular, longitudinal data, our approach takes advantage of the flexibility of penalized splines and performs causal mediation analysis under the structural equation model framework. The incorporation of penalized splines allows us to deal with repeated measures of the mediator and the outcome that are not all recorded at the same time points. The penalization avoids otherwise difficult choices in selecting knots and prevents the splines from overfitting, so that predictions at future time points are more reasonable. We also provide the formula for identifying the natural direct and indirect effects based on our semiparametric models, with inference carried out by the delta method and Monte Carlo approximation. This frequentist approach is straightforward and efficient when implemented under the linear mixed model (LMM) framework, but it sometimes faces convergence problems, as the random-effects components complicate the optimization algorithms commonly used in statistical software. Although Bayesian modeling under the LMM is less likely to face convergence problems with the help of Markov chain Monte Carlo (MCMC) sampling, it can be computationally expensive compared to the frequentist approach due to the nature of the MCMC algorithm. As an alternative Bayesian approach, Gaussian process regression (GPR) also has the flexibility to fit various data patterns and is more efficient than Bayesian modeling using MCMC, as the posterior distribution in GPR has a known form from which posterior samples can be directly drawn. We thus attempt to extend the standard GPR approach to allow multiple covariates of both continuous and categorical (open full item for complete abstract)
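
    For orientation, the natural direct and indirect effects the abstract refers to are usually defined in counterfactual notation as follows (a generic statement of the estimands, not the thesis's identification formula):

      \mathrm{NDE} = E\{Y(a, M(a^*))\} - E\{Y(a^*, M(a^*))\}, \qquad
      \mathrm{NIE} = E\{Y(a, M(a))\} - E\{Y(a, M(a^*))\}

    so that the total effect decomposes as TE = NDE + NIE, with M(a) denoting the mediator value under treatment level a.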

    Committee: Pingfu Fu (Committee Chair); David Aron (Committee Member); Mark Schluchter (Committee Member); Jeffrey Albert (Advisor) Subjects: Biostatistics; Statistics
  • 4. Clark, Troy The Trefoil: An Analysis in Curve Minimization and Spline Theory

    Doctor of Philosophy, Case Western Reserve University, 2020, Mathematics

    We consider a variational problem arising out of the localized induction equation. We are motivated by the idea of finding “fair” splines, by considering an energy functional involving the derivative of the curvature. Among the solutions to the Euler-Lagrange equations are two elastic curves and the Kiepert Trefoil. We introduce features and properties of the trefoil; one of these is that it is an algebraic curve with a simple parametrization. In addition, we show that the trefoil is a model for a two-parameter spline and provide examples of how pieces of the trefoil can be cut, transformed and fitted so that the resulting curve is aesthetically “fair”.
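
    A standard example of an energy functional involving the derivative of the curvature is

      E[\gamma] = \int_{\gamma} \left( \frac{d\kappa}{ds} \right)^{2} \, ds

    where kappa is the curvature and s is arclength. This is my illustrative choice; the functional actually derived from the localized induction equation in the dissertation may carry additional lower-order or constraint terms.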

    Committee: David Singer Ph.D. (Advisor); Joel Langer Ph.D. (Committee Member); Elisabeth Werner Ph.D. (Committee Member); Colin McLarty Ph.D. (Committee Member) Subjects: Mathematics
  • 5. Su, Weiji Flexible Joint Hierarchical Gaussian Process Model for Longitudinal and Recurrent Event Data

    PhD, University of Cincinnati, 2020, Arts and Sciences: Mathematical Sciences

    Jointly modeling two types of longitudinal markers makes optimal use of the available information and serves to investigate the joint evolution of the two processes, to examine the underlying association, and to evaluate surrogate markers simultaneously. In this dissertation, we develop a series of joint models for longitudinal repeated measurements, including continuous, repeated binary, and recurrent time-to-event data. Our goal is to extend the joint model with more flexibility via parametric, semi-parametric and nonparametric methods to capture various features in the data. A hierarchical Gaussian process is incorporated into the proposed joint model framework to capture both population-level and subject-specific variation and to provide dynamic predictions. In analyzing the longitudinal continuous and repeated binary data, we incorporate a family of parametric link functions into the proposed joint model to obtain flexibility in handling skewness in the probability response curves. In jointly modeling the longitudinal and recurrent time-to-event data, we utilize both semi-parametric and non-parametric methods to monitor the non-linearity in population evolution and heterogeneity. Furthermore, we demonstrate the application of the proposed joint model in examining the impact of various risk factors. We employ Bayesian approaches in model construction and estimation. The proposed models are compared with existing joint modeling approaches. In particular, we incorporate the idea of likelihood decomposition and develop a model comparison criterion that facilitates performance assessment of each submodel separately. We carry out extensive simulation studies for each of the proposed models, to show their properties, implementation, and performance, as well as potential problems, compared with existing methods. Joint modeling is of particular importance in clinical studies. Our real data (open full item for complete abstract)
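
    A toy sketch of the hierarchical Gaussian process idea (my construction with hypothetical hyperparameters, not the dissertation's model): each subject's trajectory is a shared population-level GP plus an independent subject-level GP.

      import numpy as np

      def rbf(t, var, ls):
          # Squared-exponential covariance matrix on time grid t.
          d = t[:, None] - t[None, :]
          return var * np.exp(-0.5 * (d / ls)**2)

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 50)
      jitter = 1e-8 * np.eye(t.size)   # numerical stabilizer for sampling

      # Population-level process shared by all subjects.
      f_pop = rng.multivariate_normal(np.zeros(t.size), rbf(t, 1.0, 3.0) + jitter)

      for subject in range(3):
          # Subject-level deviation: smaller variance, shorter length-scale.
          f_i = rng.multivariate_normal(np.zeros(t.size), rbf(t, 0.3, 1.0) + jitter)
          y_i = f_pop + f_i + rng.normal(0, 0.1, t.size)  # observed marker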

    Committee: Xia Wang Ph.D. (Committee Chair); Xuan Cao Ph.D. (Committee Member); Won Chang Ph.D. (Committee Member); Siva Sivaganesan Ph.D. (Committee Member); Rhonda Szczesniak Ph.D. (Committee Member) Subjects: Statistics
  • 6. Green, Brittany Ultra-high Dimensional Semiparametric Longitudinal Data Analysis

    PhD, University of Cincinnati, 2020, Business: Business Administration

    As ultra-high dimensional longitudinal data become increasingly common in fields such as public health, information systems, and bioinformatics, developing flexible methods that select a sparse set of important variables is of high interest. In this setting, the dimension of the covariates can grow exponentially with the number of clusters. This dissertation considers a flexible semiparametric approach, namely partially linear single-index models, for ultra-high dimensional longitudinal data. Most importantly, we allow not only the partially linear covariates but also the single-index covariates within the unknown flexible function, estimated nonparametrically, to be ultra-high dimensional. Using penalized generalized estimating equations, this approach can capture correlation within subjects, perform simultaneous variable selection and estimation with a smoothly clipped absolute deviation penalty, and capture nonlinearity and potentially some interactions among predictors. We establish asymptotic theory for the estimators, including the oracle property in ultra-high dimension, for both the partially linear and nonparametric components. An efficient algorithm is presented to handle the computational challenges, and we show the effectiveness of our method and algorithm via a simulation study and yeast cell cycle gene expression data. In addition, we develop an alternative solution methodology via the penalized quadratic inference function with partially linear single-index models for ultra-high dimensional longitudinal data. This methodology can improve estimation efficiency when the working correlation structure is misspecified. Performance is demonstrated via a simulation study and analysis of a genomic dataset.
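
    The partially linear single-index model at the center of this work has the generic form

      E(y_{ij} \mid x_{ij}, z_{ij}) = x_{ij}^{\top}\beta + g\left( z_{ij}^{\top}\alpha \right)

    with g unknown and estimated nonparametrically; in the penalized GEE approach, a SCAD penalty on the components of beta and alpha performs the variable selection (stated generically here; the dissertation's notation may differ).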

    Committee: Peng Wang Ph.D. (Committee Chair); Yan Yu Ph.D. (Committee Chair); Lenisa Chang Ph.D. (Committee Member) Subjects: Business Administration
  • 7. Goel, Archak Design of Functionally Graded BCC Type Lattice Structures Using B-spline Surfaces for Additive Manufacturing

    MS, University of Cincinnati, 2019, Engineering and Applied Science: Mechanical Engineering

    Additive manufacturing methods have recently been used to make lightweight parts using lattice structures for various applications. Functionally graded lattice structures (FGLs) are designed with a varying distribution of porosity, obtained by varying the volume fraction of each unit cell in the 3D design domain. This graded design strategy helps achieve advanced properties related to a structure's mechanical performance and functionalities such as bone ingrowth during tissue implant fixation and optimized heat transfer in lightweight structures. Compliance minimization is one classic problem where topology optimization techniques are used to determine the optimum distribution of material in the design domain while obtaining the desired reduction in weight. This material distribution is typically populated with lattices of variable volume fraction unit cells to generate FGLs. Strut-type unit cells such as BCC are commonly used to develop FGLs. To develop such structures with strut-type unit cells, a methodology is needed that can maintain smooth connectivity among unit cells of varying densities. This thesis discusses a new method to achieve smoothly connected FGLs, based on a BCC unit cell geometry, using a B-spline surface-based unit cell design methodology. The author's previous work in [1] on generating bifurcating geometries using B-spline surfaces is extended to lattices as a case of multi-furcation geometries. First, a control polyhedron net is developed on the basis of the desired unit cell geometry, which is then processed to construct a watertight boundary representation of the unit cell using a 3rd order B-spline surface. This design methodology is used in conjunction with an algorithm to populate the density distribution from SIMP-based topology optimization using unit cells with different volume fractions. The resulting lattice structure is compared with a uniform (open full item for complete abstract)
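
    A compact sketch of the basic operation behind such a construction, evaluating a tensor-product B-spline surface from a control net (hypothetical 6x6 control net and clamped cubic knots; the thesis builds its net from the BCC control polyhedron):

      import numpy as np
      from scipy.interpolate import BSpline

      k = 3                                    # cubic basis
      P = np.random.rand(6, 6, 3)              # stand-in control net
      n = P.shape[0]
      t = np.concatenate(([0]*k, np.linspace(0, 1, n - k + 1), [1]*k))

      def surf(u, v):
          # S(u,v) = sum_ij N_i(u) N_j(v) P_ij over the basis functions N.
          Nu = np.array([BSpline.basis_element(t[i:i+k+2], extrapolate=False)(u)
                         for i in range(n)])
          Nv = np.array([BSpline.basis_element(t[j:j+k+2], extrapolate=False)(v)
                         for j in range(n)])
          Nu, Nv = np.nan_to_num(Nu), np.nan_to_num(Nv)  # zero outside support
          return np.einsum('i,j,ijk->k', Nu, Nv, P)

      print(surf(0.4, 0.7))                    # one point on the patch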

    Committee: Sam Anand Ph.D. (Committee Chair); Michael Alexander-Ramos Ph.D. (Committee Member); David Thompson Ph.D. (Committee Member) Subjects: Engineering
  • 8. Margevicius, Seunghee Modeling of High-Dimensional Clinical Longitudinal Oxygenation Data from Retinopathy of Prematurity

    Doctor of Philosophy, Case Western Reserve University, 2018, Epidemiology and Biostatistics

    Many remarkable advances have been made in non-parametric and semiparametric methods for high-dimensional longitudinal data. However, these methods lack a way to address missing data. Motivated by an oxygenation study of retinopathy of prematurity (ROP), we developed a penalized spline mixed effects model for a high-dimensional nonlinear longitudinal continuous response variable using the Bayesian approach. The ROP study is complicated by the fact that there are non-ignorable missing response values. To address the non-ignorable missing data in the Bayesian penalized spline model, we applied a selection model. Properties of the estimators are studied using Markov chain Monte Carlo (MCMC) simulation. In the simulation study, data were generated with three different percentages of non-ignorable missing values and three different sample sizes, and parameters were estimated under various scenarios. The proposed approach performed better than the semiparametric mixed effects model fitted under a missing at random (MAR) assumption, in terms of bias and percent bias, in all scenarios of non-ignorable missing longitudinal data. We performed a sensitivity analysis of the hyper-prior distribution choices for the variance parameters of the spline coefficients in the proposed joint model. The results indicated that a half-t hyper-prior with three different degrees of freedom did not influence the posterior distribution, whereas an inverse-gamma hyper-prior did influence the posterior distribution. We applied our novel method to the sample entropy data in the ROP study, handling nonlinearity and the non-ignorable missing response variable. We also analyzed the sample entropy data under missing at random.
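
    A selection model in this setting factors the joint distribution of the response and its missingness indicator, for example (a generic form, not the dissertation's exact parameterization):

      f(y_{ij}, r_{ij}) = f(y_{ij}) \, P(r_{ij} \mid y_{ij}), \qquad
      \operatorname{logit} P(r_{ij} = 1 \mid y_{ij}) = \psi_0 + \psi_1 y_{ij}

    so the probability that a response is missing may depend on its possibly unobserved value; the missingness is non-ignorable when psi_1 is nonzero.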

    Committee: Abdus Sattar (Advisor); Mark Schluchter (Committee Chair); Albert Jeffrey (Committee Member); Abdus Sattar (Committee Member); Sana Loue (Committee Member) Subjects: Biostatistics
  • 9. Zhang, Han Detecting Rare Haplotype-Environmental Interaction and Nonlinear Effects of Rare Haplotypes using Bayesian LASSO on Quantitative Traits

    Doctor of Philosophy, The Ohio State University, 2017, Statistics

    Rare variants and gene-environment interaction (GXE) are two important contributors to the etiology of many complex diseases. Since many diseases (e.g. dichotomous traits) are discretizations of underlying quantitative measurements, it is important to study such quantitative traits directly, as they may contain a greater amount of information. Examples include obesity (based on body mass index measurements) and hypertension (based on blood pressure measurements). In recent years, several methods have been proposed for detecting associations of rare haplotype variants, environmental factors, and their interacting effects on complex diseases. However, the focus of most existing methods has been on binary traits and case-control population data. In this dissertation, I present a Quantitative Bayesian LASSO (QBL) method for detecting rare and common haplotype effects and GXE on quantitative traits for cohort data. By appropriately setting the priors for the effect size parameters, I can increase statistical power for detecting main, and interacting, effects involving rare haplotype variants. I present simulation results with both continuous and discrete environmental factors and a range of disease models and distributions, and I demonstrate the utility of QBL in a real data application. In QBL, the key assumption is a linear interaction effect between haplotypes and a continuous environmental covariate, which may hamper the discovery of novel variants and of the true causal mechanism. Although assuming linearity as a working model may work fine in some situations, it is clearly important to have statistical methods that correctly account for non-linearity when that is indeed the case. An example of non-linear GXE is hypertension based on blood pressure measurements for making a diagnosis: a genetic variant may influence blood pressure differently depending on age, but the interaction is clearly non-linear (Wang et al., 2014). In rece (open full item for complete abstract)
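
    The LASSO-type shrinkage comes from double-exponential (Laplace) priors on the effect-size parameters, in the standard Bayesian-LASSO form

      \pi(\beta_j \mid \lambda) = \frac{\lambda}{2} \, e^{-\lambda |\beta_j|}

    which concentrates unassociated haplotype effects near zero while leaving room for large rare-variant effects; this is the generic prior, and QBL's exact prior choices may differ.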

    Committee: Shili Lin (Advisor); Asuman Seda Turkmen (Committee Member); Eloise Kaizar (Committee Member) Subjects: Statistics
  • 10. Chen, Eva Estimation of the term structure of interest rates via cubic exponential spline functions

    Doctor of Philosophy, The Ohio State University, 1987, Graduate School

    Committee: Not Provided (Other) Subjects: Economics
  • 11. Xia, Shuang Detecting Rare Haplotype-Environment Interaction and Dynamic Effects of Rare Haplotypes using Logistic Bayesian LASSO

    Doctor of Philosophy, The Ohio State University, 2014, Statistics

    Rare variants and gene-environment interaction (GXE) are believed to be two important contributors to “missing heritability”. Thus, detecting GXE where G is a rare haplotype variant (rHTV) is a pressing problem. Haplotype analysis is usually the natural second step to follow up on a genomic region that is implicated to be associated through common single nucleotide variant (SNV) analysis. Further, rHTV can tag associated rare SNV (rSNV) and may provide greater power to detect them than popular collapsing methods. In this dissertation, we build on the idea of Logistic Bayesian LASSO (LBL) and incorporate environmental factors (E) and their interactions with haplotypes (GXE) in LBL using case-control data, leading to LBL-GXE. By adopting Markov chain Monte Carlo (MCMC) methods and choosing appropriate priors, LBL shrinks the effects of unassociated (especially common) haplotypes towards zero so that an associated rHTV can be identified with greater power. We investigate the performance of LBL-GXE under extensive simulation models involving different combinations of rare and common associated haplotypes. The results show that LBL-GXE has reasonably good power for detecting interactions with rHTV while keeping type I error rates well controlled. We also apply LBL-GXE to the Michigan, Mayo, AREDS, Pennsylvania (MMAP) Cohort Study on Age-related Macular Degeneration (AMD). LBL-GXE detects the interaction of a specific rHTV in the Complement Factor H (CFH) gene with smoking. To the best of our knowledge, this is the first time in the AMD literature that an interaction of smoking with a specific (rather than pooled) rHTV has been implicated. The behavior of a gene can be dynamic; thus it is important to study the dynamic effects of genes over time, the second topic of this dissertation. We model the effect of both rare and common haplotype pairs by time-varying coefficients (tvc) using B-spline with (open full item for complete abstract)
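
    The time-varying coefficient construction mentioned at the end expands each haplotype effect in a B-spline basis, generically

      \beta(t) = \sum_{k=1}^{K} \gamma_k B_k(t)

    so that estimating the basis coefficients gamma_k recovers how the effect changes over time (a generic form; the dissertation's basis dimension and penalties are not shown here).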

    Committee: Shili Lin (Advisor); Christopher Bartlett (Committee Member); Asuman Turkmen (Committee Member) Subjects: Statistics
  • 12. Al-Shaikh, Enas Longitudinal Regression Analysis Using Varying Coefficient Mixed Effect Model

    PhD, University of Cincinnati, 2012, Medicine: Biostatistics (Environmental Health)

    Linear and nonlinear mixed models are very powerful techniques for modeling the relationship between a response variable and covariates and for handling the within-subject correlations in longitudinal data. For many applications in real life, however, it is difficult to find a proper parametric model to fit the data. Therefore, the adequacy of the model assumptions, and the potential consequences of model misspecification under the classical linear model framework, are questionable. Thus, it is important to increase the flexibility of linear regression models and to relax the conditions imposed on traditional parametric models to explore hidden structure. The varying coefficient model (VCM), proposed by Hastie and Tibshirani (1993), provides a versatile and flexible analysis tool for relating longitudinal responses to longitudinal predictors. Specifically, this approach provides a novel representation of varying coefficient functions through suitable covariance of the underlying stochastic processes, which is particularly advantageous for sparse and irregular designs, as often encountered in longitudinal studies. In this dissertation, we hypothesized that a varying coefficient mixed effect model (VCMEM) can accurately characterize the relationship between four different covariates and the antigen level of MsgC using penalized spline smoothing. The longitudinal data were obtained from the Multicenter AIDS Cohort Study (MACS). We have two specific aims to test this hypothesis. The first aim is to fit a VCMEM to the MACS data, where the antigen level of MsgC is a continuous variable. The second aim is to perform a goodness-of-fit test to investigate the significance of the model covariates in the VCMEM from the first aim using bootstrap techniques. We focused on fitting the VCMEM to the MACS data, where both fixed and random effects were modeled non-parametrically with P-spline smoothing. This allows us to explore how the effects of (open full item for complete abstract)
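
    A varying coefficient mixed effect model of this kind can be written generically as

      y_{ij} = \beta_0(t_{ij}) + \sum_{k} \beta_k(t_{ij}) \, x_{ijk} + b_i(t_{ij}) + \varepsilon_{ij}

    with both the coefficient functions beta_k and the subject-specific deviations b_i represented by penalized spline (P-spline) expansions; this is a generic statement of the model class, not the dissertation's exact specification.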

    Committee: Linda Levin PhD (Committee Chair); Charles Ralph Buncher ScD (Committee Member); Paul Succop PhD (Committee Member); Peter Walzer MD MSc (Committee Member) Subjects: Biostatistics
  • 13. Ramaswami, Hemant A Novel Method for Accurate Evaluation of Size for Cylindrical Components

    MS, University of Cincinnati, 2010, Engineering : Mechanical Engineering

    The objective of this thesis is to develop a methodology to calculate the size of a cylindrical profile accurately per ANSI standards. The ANSI Y14.5.1M-1994 standard defines the size of a cylinder as the size of the largest ball rolling on a spine such that all points on the surface of the cylinder are external to it, or the size of the smallest ball rolling on a spine such that all points on the surface of the cylinder are internal to it. Current methods of size evaluation reduce the complexity of the spine and model it as a straight line. This thesis develops a novel methodology to evaluate the control points of the spine, modeled as a Bezier curve or an open uniform B-spline curve of pre-specified degree, based on points collected on the surface of the cylinder. This provides a quantitative measure of the size of the cylinder in accordance with ANSI standards. The formulations to evaluate the maximum inscribing spine and the minimum circumscribing spine are presented as multi-level optimization problems. The outer-level optimization identifies the optimal set of control points for the spline representing the path of the rolling ball. The inner-level optimization finds the nearest point on the curve corresponding to every point in the dataset. The optimization formulation has been used to calculate the true size of cylinders for several published, simulated, and real datasets. These results have been compared to traditional estimates of cylinder size, such as the maximum inscribed, minimum circumscribed, and least squares cylinders. The results indicate that the method presented in this research conforms better to the ANSI standards than the traditional methods. Further analysis is presented to observe the effect of sample size on the results of the algorithm. It is observed that with an increase in the sample size, the difference between the results of the presented algorithm and the traditional methods increases wit (open full item for complete abstract)
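
    The inner-level optimization is a point-to-curve projection. A minimal scipy sketch, assuming the spine is held as a parametric spline tck from scipy.interpolate.splprep (illustrative only; a robust implementation would use multiple starts, since distance to a curve can have local minima):

      import numpy as np
      from scipy.interpolate import splev
      from scipy.optimize import minimize_scalar

      def nearest_point_on_spine(tck, p):
          # Find the parameter u* minimizing the distance from point p to the spine.
          def dist2(u):
              x, y, z = splev(u, tck)
              return (x - p[0])**2 + (y - p[1])**2 + (z - p[2])**2
          res = minimize_scalar(dist2, bounds=(0.0, 1.0), method='bounded')
          return res.x, np.sqrt(res.fun)

    The outer level would then adjust the spine's control points so that the largest (or smallest) of these point-to-spine distances satisfies the inscribed- or circumscribed-ball criterion.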

    Committee: Sundararaman Anand PhD (Committee Chair); David Thompson PhD (Committee Member); Murali Sundaram PhD (Committee Member) Subjects: Mechanical Engineering
  • 14. Wu, Zhou Two Essays on Single-index Models

    PhD, University of Cincinnati, 2008, Business Administration : Quantitative Analysis

    Single-index models, in the simplest form E(y|x) = g(x^T b), generalize linear models by allowing flexible curvature captured by the unknown function g(.), while retaining the easy interpretability of linear models through the index parameter vector b that forms the linear index x^T b. In addition, compared with fully nonparametric models, single-index models avoid the “curse of dimensionality”. This dissertation consists of two essays on single-index models. The first essay concerns estimation of single-index varying coefficient models. Varying coefficient models assume that the regression coefficients vary with some threshold variables. Previous research focused on the case of a single threshold variable. It is common for the coefficients to depend on multiple threshold variables, but the resulting model is difficult to estimate. Single-index coefficient models alleviate the difficulty by modeling each coefficient as a function of an index. Existing estimation approaches employ kernel smoothing or local linear approximation of the coefficient functions (Xia and Li, 1999; Cai, Fan and Yao, 2003), which entail a heavy computational burden. Also, implementing different bandwidths for different coefficient functions, to allow different degrees of smoothness, is difficult for local approaches. We propose a penalized spline approach to estimating single-index coefficient models that not only allows different smoothness for different coefficient functions but also is computationally fast. Asymptotic theory is established under dependency. Numerical studies demonstrate the proposed approach. The second essay is on single-index quantile regression. Nonparametric quantile regression with multivariate covariates is often a difficult estimation problem due to the “curse of dimensionality”. Single-index quantile regression, where the conditional quantile is modeled by a nonparametric link function of a linear combination of covariates, can reduce the dimensionality (open full item for complete abstract)
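
    A bare-bones profile-estimation sketch of a single-index fit (my illustration with synthetic data: a smoothing spline stands in for the penalized spline of the essay, and b is identified only up to sign and scale):

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.optimize import minimize

      def profile_rss(b, X, y):
          # Residual sum of squares after smoothing y against the index X @ b.
          b = b / np.linalg.norm(b)                 # enforce ||b|| = 1
          idx = X @ b
          order = np.argsort(idx)
          # Roughness budget s ~ n * noise variance (hypothetical choice).
          g = UnivariateSpline(idx[order], y[order], k=3, s=2.0)
          return np.sum((y[order] - g(idx[order]))**2)

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 3))
      b_true = np.array([1.0, 2.0, -1.0]) / np.sqrt(6)
      y = np.sin(X @ b_true) + rng.normal(0, 0.1, 200)

      fit = minimize(profile_rss, x0=np.ones(3), args=(X, y), method='Nelder-Mead')
      b_hat = fit.x / np.linalg.norm(fit.x)         # estimated index direction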

    Committee: Yan Yu PhD (Committee Chair); Martin S. Levy PhD (Committee Member); Weihong Song PhD (Committee Member) Subjects: Statistics
  • 15. BASU, SHUBHANKAR Performance Modeling and Optimization Techniques in the Presence of Random Process Variations to Improve Parametric Yield of VLSI Circuits

    PhD, University of Cincinnati, 2008, Engineering : Computer Engineering

    As the semiconductor industry continues to follow Moore's Law of doubled device count every 18 months, it is challenged by rising uncertainties in the manufacturing process for nanometer technologies. Manufacturing defects lead to random variation in physical parameters like dopant density, critical dimensions, and oxide thickness. These physical defects manifest themselves as variations in device process parameters like the threshold voltage and effective channel length of transistors. The randomness in process parameters affects the performance of VLSI circuits, which leads to a loss in parametric yield. Conventional design methodologies, with corner-case-based analysis techniques, fail to predict the performance of circuits reliably in the presence of random process variations. Moreover, analysis techniques that detect defects in the later stages of the design cycle result in significant cost overhead due to re-spins. In recent times, VLSI computer-aided design methodologies have shifted to statistical analysis techniques for performance measurement with specific yield targets. However, the adoption of statistical techniques in commercial design flows has been limited by the complexity of their usage and the need to generate specially characterized models, which also makes them unsuitable for repeated loops during the synthesis process. In this dissertation, we present an alternate approach to model and optimize the performance of digital and analog circuits in the presence of random process variations. Our work targets a bottom-up methodology providing incremental tolerance to circuits under the impact of random process variations. The methodologies presented can be used to generate fast-evaluating, accurate macromodels to compute the bounds of performance due to the underlying variations in device parameters. The primary goal of our methodology is to capture the statistical aspects of variation at the lower levels of abstraction, while (open full item for complete abstract)
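
    The parametric-yield problem itself is easy to state with a Monte Carlo sketch (all distributions, the macromodel, and the 11 ps spec below are hypothetical stand-ins):

      import numpy as np

      rng = np.random.default_rng(7)
      N = 100_000

      # Random process variations in device parameters.
      vth  = rng.normal(0.40, 0.02, N)     # threshold voltage (V)
      leff = rng.normal(45e-9, 2e-9, N)    # effective channel length (m)

      # Stand-in performance macromodel: delay as a fitted function of (vth, leff).
      delay = 1e-12 * (10 + 120*(vth - 0.40) + 3e9*(leff - 45e-9))

      yield_frac = np.mean(delay <= 11e-12)  # fraction of samples meeting spec
      print(f"parametric yield ~ {yield_frac:.3f}")

    A macromodel that evaluates in microseconds instead of requiring a full SPICE run is what makes loops like this affordable inside synthesis.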

    Committee: Ranga Vemuri PhD (Committee Chair); Harold Carter PhD (Committee Member); Wen-Ben Jone PhD (Committee Member); Carla Purdy PhD (Committee Member); Srinivas Katkoori PhD (Committee Member) Subjects: Computer Science
  • 16. Zhang, Qi Semi-Parametric Test Based on Spline Smoothing for Genetic Association Studies Under Stratified Populations

    PhD, University of Cincinnati, 2007, Medicine : Biostatistics (Environmental Health)

    Several approaches are available to adjust for population stratification in genetic association studies. However, these methods either have low power and cannot control for population stratification when it becomes strong, or they require unrealistic assumptions. Here, I propose a semi-parametric test based on penalized spline smoothing to adjust for population stratification in genetic association studies. In my method, a genetic background variable is first derived for each individual using principal component analysis on a set of independent markers; the relationship between trait values, genotypic scores at candidate markers, and the genetic background variable is then modeled through a semi-parametric model. Penalized spline smoothing can be expressed in a linear mixed model framework, which allows such a model to be fitted using standard mixed model software and also allows the fitting of complex models. I conducted coalescent simulations, as well as simulations using HapMap data, to compare the power and type I error of this test with two existing methods, Genomic Control (GC) and Structured Association (SA). My method not only has a lower false positive rate but also has higher sensitivity to detect true association in the presence of population stratification compared with the GC and SA methods. In most cases considered, my method has higher power; and in the case of a very high level of genetic heterogeneity and greater differential prevalence, my method has substantially higher power than the other two methods.
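
    The first step, deriving the genetic background variable, is ordinary principal component analysis of the genotype matrix at the independent markers (a numpy sketch with simulated genotypes; the semi-parametric step would replace the linear background term below with a penalized spline):

      import numpy as np

      rng = np.random.default_rng(3)
      n, m = 500, 1000
      G = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 allele counts

      # Leading principal component of the column-standardized genotypes.
      Gs = (G - G.mean(0)) / (G.std(0) + 1e-12)
      U, s, Vt = np.linalg.svd(Gs, full_matrices=False)
      background = U[:, 0] * s[0]

      # Test model: trait ~ candidate genotype + f(background); linear stand-in for f.
      candidate = rng.integers(0, 3, n).astype(float)
      trait = rng.normal(size=n)
      Xd = np.column_stack([np.ones(n), candidate, background])
      beta, *_ = np.linalg.lstsq(Xd, trait, rcond=None)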

    Committee: Dr. Ranjan Deka (Advisor) Subjects: Biology, Biostatistics
  • 17. WOLFE, GLENN PERFORMANCE MACRO-MODELING TECHNIQUES FOR FAST ANALOG CIRCUIT SYNTHESIS

    PhD, University of Cincinnati, 2004, Engineering : Computer Engineering

    This work focuses on the development of accurate and efficient performance parameter macro-models for use in the synthesis of analog circuits. Once constructed, the mathematical models may be used as substitutes for full SPICE simulation, providing efficient computation of performance parameter estimates. In this thesis, we explore various modeling architectures, develop and apply two unique sampling methodologies for adaptively improving model quality, and attempt to apply the sizing rules methodology in order to perform dimensional reduction and ensure proper operation of analog circuits. To properly create an analog performance model, a training data set is needed to build the model and an independent validation data set is needed to verify its accuracy. The training and validation data sets are comprised of discretely sampled points in the design space. Various methods exist for generating these sample points. A static sampler does not take into account the shape of the function under scrutiny, whereas an adaptive sampler strives to reduce modeling error through strategic placement of costly sample points. Two unique adaptive sampling methodologies are developed and applied to various analog circuit performance metrics. It is shown experimentally that both adaptive samplers are capable of reducing maximum modeling errors for various performance metrics and analog topologies. Strategic placement of costly sample points improves model quality while reducing the time needed to create the performance models. Adaptive sampling also reduces the need for human intervention during model construction, realizing an automatic framework for sampling and modeling performance parameters. The sizing rules method and feasibility region modeling are analyzed and applied to analog performance macro-modeling in an attempt to automatically reduce the dimensionality of the design space, simplify performance parameter behavior, and ensure proper DC biasing. A feasib (open full item for complete abstract)
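
    One generic way to phrase adaptive sampling, shown only to fix ideas (the two methodologies developed in the thesis are not specified here): refit a small ensemble of surrogate models on the samples gathered so far, and spend the next costly simulation where the ensemble disagrees most.

      import numpy as np

      def next_sample(candidates, models):
          # candidates: (n_points, dim) pool; models: list of fitted surrogates.
          preds = np.stack([m(candidates) for m in models])  # (n_models, n_points)
          return candidates[np.argmax(preds.std(axis=0))]    # highest disagreement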

    Committee: Dr. Ranga Vemuri (Advisor) Subjects:
  • 18. LI, MIN TWO ESSAYS IN BAYESIAN PENALIZED SPLINES

    PhD, University of Cincinnati, 2002, Business Administration : Quantitative Analysis

    Spline curve fitting has attracted a great deal of attention in recent years. It is a powerful method for fitting nonlinear models when the true mean functions are unknown and need to be estimated. This dissertation consists of two essays in Bayesian penalized spline estimation. In the first essay, we propose Bayesian Adaptive Penalized Splines (BAPS), combining features of both penalized splines and regression splines. We first outline a hierarchical Bayesian approach to penalized splines using Markov Chain Monte Carlo for fixed locations of spline knots. We then propose Bayesian Adaptive Penalized Splines, employing a reversible jump Markov Chain Monte Carlo algorithm to adaptively and simultaneously select the number of knots, the locations of knots, and the smoothing parameter. BAPS is applied to real examples and compares favorably with competing methods in simulation studies. The second essay provides a Bayesian approach to estimating Treasury and corporate term structures with a penalized spline model. First, we estimate the Treasury term structure with a Bayesian penalized spline model, considering both normal and double exponential disturbances. We then estimate the corporate term structure by adding a spread to the estimated Treasury term structure, incorporating the knowledge of a positive credit spread into our Bayesian model as informative priors. This is the first work using a Bayesian approach in the term structure literature, and several advantages of adopting such an approach are presented.
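
    In the common truncated-power representation, the penalized spline underlying both essays can be written as

      f(x) = \beta_0 + \beta_1 x + \sum_{k=1}^{K} u_k (x - \kappa_k)_+, \qquad u_k \sim N(0, \sigma_u^2)

    where shrinkage of the knot coefficients u_k acts as the smoothing penalty; BAPS additionally places priors on K, the knot locations kappa_k, and the smoothing parameter, and explores them by reversible jump MCMC (a generic representation, not copied from the dissertation).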

    Committee: Dr. Yan Yu (Advisor) Subjects: Statistics
  • 19. Ning, Zhaohui Visualization for Runner System Design in Die Casting

    Master of Science, The Ohio State University, 2009, Industrial and Systems Engineering

    The research focuses on the development of a die casting runner system design process and the visualization of this process. It provides a simple and flexible data representation for the runner system data and a user interface in CastView to create this data. The design process includes two steps: first, creating profiles; second, creating the runner system data. Profile creation utilizes B-spline curve fitting algorithms to generate profiles with a uniform number of profile points from user-defined sketches with a non-uniform number of sketch points. Runner system surfaces are created by sweeping interpolated profiles along user-defined curve paths. To fully define runner system surfaces and ensure their smoothness, the shapes of the curve paths and the interpolation methods need to be defined. Different shapes and different interpolation methods can be applied to a sweep path. In the implementation presented here, a circle is used for the path shape and s-function interpolation is applied to the normal and planar points. With these configurations, smooth swept runner system surfaces have been illustrated.
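
    The profile-creation step, fitting a B-spline to a sketch and resampling a uniform number of profile points, maps directly onto scipy's parametric spline routines (hypothetical sketch points; note the resampling below is uniform in the spline parameter, and an arclength reparametrization would be needed for spacing that is uniform in distance):

      import numpy as np
      from scipy.interpolate import splprep, splev

      # User-defined sketch: a non-uniform set of 2-D points.
      xs = np.array([0.0, 0.8, 1.7, 2.2, 3.1, 4.0])
      ys = np.array([0.0, 0.9, 1.1, 0.7, 0.2, 0.0])

      # Cubic B-spline through the sketch (s=0 interpolates, s>0 smooths).
      tck, u = splprep([xs, ys], s=0, k=3)

      # Resample a uniform number of profile points along the curve.
      u_new = np.linspace(0, 1, 50)
      profile_x, profile_y = splev(u_new, tck)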

    Committee: R. A. Miller PhD (Advisor); Jerald Brevick PhD (Committee Member) Subjects: Industrial Engineering
  • 20. Sriramagiri, Harsha B-Spline Modelling of Non Linear Power Amplifiers based on LSNA measurements

    Master of Science, The Ohio State University, 2009, Electrical and Computer Engineering

    Non-linear RF is an extensive area of research; its importance stems from the boom in cell phones and the restrictions placed by the FCC. Due to bandwidth constraints set by the FCC, every service provider must ensure that no out-of-band components are transmitted. The core cause of such components is the power amplifier, which is generally a non-linear amplifier, such as Class E or Class F; these have excellent gain but very poor linearity. They cause intermodulation products and spectral regrowth, which can result in poor transmission. Many techniques have been developed to remove these non-linearities, such as pre-distortion, wherein the signal is distorted before being sent to the amplifier so that the non-linearities cancel. Thus, accurate models of the amplifiers are essential to predict their behavior for the purpose of pre-distortion. This study focuses on modeling the non-linearities present in the amplifier, fitting the signal in both the time and frequency domains. The modeling is based completely on B-splines, fitting the responses of a 16-tone, 64-sideband WiMAX signal at 2.65 GHz. The experimental data are obtained from the Large Signal Network Analyzer (LSNA), which recorded the reflection coefficients for all the harmonics at different power levels. The data from the LSNA are first fitted in the time domain and then in the frequency domain, and we explore the advantages and limitations of both kinds of fitting. The goal of the work is to fit the fundamental band using the B-spline model and predict the spectral regrowth. The entire fitting process has been implemented in MATLAB after suitable synchronization of the phase and delay introduced by the test equipment.
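
    A minimal sketch of the time-domain side of such a fit, least-squares B-spline approximation of one measured waveform (synthetic stand-in data; the actual work fits LSNA-measured reflection coefficients after phase and delay synchronization):

      import numpy as np
      from scipy.interpolate import make_lsq_spline

      # Stand-in for one demodulated waveform: a compressive, PA-like response.
      t = np.linspace(0, 1, 400)
      y = np.tanh(2.5 * np.sin(2 * np.pi * t))

      k = 3                                           # cubic B-splines
      interior = np.linspace(0, 1, 20)[1:-1]          # interior knots
      knots = np.r_[[0.0]*(k+1), interior, [1.0]*(k+1)]

      spl = make_lsq_spline(t, y, knots, k=k)         # least-squares fit
      resid = y - spl(t)                              # time-domain fitting error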

    Committee: Patrick Roblin Dr. Sc (Advisor); Steven Bibyk PhD (Committee Member) Subjects: Electrical Engineering