Search Results (1 - 25 of 72 Results)

Wilson, James Edward. Design techniques for first pass silicon in SOC radio transceivers
Doctor of Philosophy, The Ohio State University, 2007, Electrical Engineering

Future radio transceivers are expected to deliver much higher data rates and operate at several frequencies. In 4G wireless systems, the convergence of cellular and WLAN transceivers for VoIP will require the radio to operate in multiple RF bands and with different modulation schemes ranging from BPSK to 64- and 256-QAM OFDM. There is also the challenge of low power even as the handheld is pushed to achieve additional performance. While CMOS technology scaling and innovations in platform-based systems and Network-on-Chip (SOC and NOC) have resulted in great strides within the digital part (digital baseband/MAC), the radio part of a wireless solution remains a major bottleneck. In today’s radio design environment, a fully integrated CMOS radio requires several silicon spins before it meets all product specifications, often with relatively low yields. This results in a significant increase in NRE cost, especially considering that mask set costs increase exponentially as feature size scales down. Furthermore, additional spins could lead to missing important market windows, particularly with the decreasing life cycles of semiconductor products.

In addition to the complexity of highly integrated radio transceivers, RF performance is highly susceptible to random variations in process and operating conditions. Such variations do not scale with the process. Worst-case corner simulations often lead to over-design and increased power consumption. RF models, package models and design kits are based on certain assumptions that severely limit design space exploration. All of these factors impede first-time-right silicon.

This thesis work addresses these issues by presenting design techniques that lead to first-pass success, taking advantage of the increased integration of digital, analog and RF. By exploiting advances in the digital baseband, this dissertation demonstrates that it is possible to calibrate the noise of the analog and RF front end. Through careful design and circuit feedback, it is also possible to calibrate the other important parameters of the RF front end, such as input matching, gain and linearity.

Committee:

Mohammed El-Naggar (Advisor)

Keywords:

first pass design; first pass silicon; RF; LNA; calibration; noise calibration; input match; input match calibration

McLemore, Donald Rodney. Layered Sensing Using Master-Slave Cameras
Master of Science in Engineering (MSEgr), Wright State University, 2009, Electrical Engineering
The objective of this thesis is to demonstrate the layered sensing concept using Master-Slave cameras. The process of 2D camera calibration and several key factors that can introduce error during such calibration are described. The analysis and results are based on calibration of a pinhole-model camera system. The calibration is accomplished using OpenCV software and the results are analyzed using MATLAB software. These results are divided into intrinsic and extrinsic camera parameters. These parameters are then used to determine the position and orientation of the object in the camera coordinate system. This thesis also explores the use of two cameras as a Master-Slave system to demonstrate the layered sensing concept. The Master camera’s orientation, zoom, and distance from the Slave camera are fixed. Using the position and orientation of the object in the Master camera coordinate system, the position of the object in the Slave coordinate system is found using transformation matrices. This information is then used to determine the pan/tilt angles of the Slave camera. The Slave camera is then directed to focus on the object using servo control.
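
As a rough illustration of the pipeline this abstract describes, the sketch below calibrates a camera from checkerboard images with OpenCV and converts a point from Master-camera coordinates into Slave pan/tilt angles. It is a minimal sketch, assuming a 9x6 board, a 25 mm square size, and a known fixed Master-to-Slave transform (R_ms, t_ms); none of these values come from the thesis.

```python
import numpy as np
import cv2

def calibrate(images, board=(9, 6), square=0.025):
    """Estimate pinhole-model intrinsics from checkerboard images."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:  # assumes at least one usable view
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                           gray.shape[::-1], None, None)
    return K, dist  # intrinsic matrix and distortion coefficients

def slave_pan_tilt(p_master, R_ms, t_ms):
    """Pan/tilt that points the Slave camera at a 3-D point given in
    Master coordinates; R_ms, t_ms encode the fixed mutual geometry."""
    x, y, z = R_ms @ p_master + t_ms   # transform into Slave coordinates
    pan = np.degrees(np.arctan2(x, z))
    tilt = np.degrees(np.arctan2(-y, np.hypot(x, z)))
    return pan, tilt
```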

Committee:

Kuldip S. Rattan, PhD (Advisor); Devert Wicker, PhD (Committee Member); Doug Petkie, PhD (Committee Member)

Subjects:

Electrical Engineering

Keywords:

CAMERAS; checkerboard; calibration; camera calibration; Focal Length; zoom; Slave camera

Mannuru, Sravanthi. A Fully Automated Geometric Lens Distortion Correction Method
Master of Science (M.S.), University of Dayton, 2011, Electrical Engineering

In applications such as computer vision and robotics, camera calibration is required to correct geometric lens distortion of images. The problem with most techniques is that they require human involvement in the calibration process. This thesis proposes a new algorithm for camera calibration with no human involvement.

Typically, in the camera calibration process, an image of a calibration target (usually a checkerboard) is acquired for distortion correction. The checkerboard is used because it has known features and is easily segmented. The image of the checkerboard pattern undergoes distortion when it is captured, and this distortion may be determined by analyzing the image of the checkerboard.

The proposed process for coefficient estimation begins by segmenting the checkerboard out of an acquired image. The segmentation is done by finding the connected pixels (components), labeling the connected components, and filtering the unnecessary components out of the acquired image. The algorithm then uses Sobel edge detection to detect the vertical and horizontal edges of the checkerboard, because these lines can be used to measure the displacement of image coordinates from their ideal locations.

Next, the proposed distortion-correction model is applied to the edges of the image with a set of correction coefficients, resulting in a set of corrected images. The best-fit line (synthesized line) is then found for each observed line in each corrected image, and the squared distance between each synthesized and observed line is calculated. The average squared distance is then calculated for each corrected image. Finally, the minimum average distance across the set of corrected images is found in order to obtain the corresponding image correction coefficients.

Both synthetically generated images and natural images have been used to measure the performance of the proposed algorithm. The amount of distortion present in images before and after correction is represented graphically, and the results show that the proposed, fully automated algorithm provides results equivalent to those of other methods that require human involvement.
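
A minimal sketch of the coefficient search described above, assuming a one-parameter radial model x_u = x_d(1 + k r^2); the thesis's actual correction model and search grid may differ. Straightness of the corrected checkerboard edges is scored by the mean squared distance of edge points to their best-fit line, and the coefficient with the lowest average score wins.

```python
import numpy as np

def correct(points, k, center):
    """Undistort 2-D edge points with a single radial coefficient k."""
    d = points - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d * (1.0 + k * r2)

def line_error(points):
    """Mean squared distance from points to their best-fit line."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    normal = vt[-1]                     # direction of least variance
    return np.mean(((points - c) @ normal) ** 2)

def best_coefficient(edge_lines, center, candidates):
    """Pick the k whose corrected edges are straightest on average."""
    scores = [np.mean([line_error(correct(line, k, center))
                       for line in edge_lines]) for k in candidates]
    return candidates[int(np.argmin(scores))]
```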

Committee:

Eric Balster, PhD (Committee Chair); Frank A. Scarpino, PhD (Committee Member); Vijayan K. Asari, PhD (Committee Member)

Subjects:

Biomedical Research; Electrical Engineering

Keywords:

Radial Distortion; Radial Distortion Correction; Calibration Process; Camera Calibration

Van Hook, Richard L. A Comparison of Monocular Camera Calibration Techniques
Master of Science in Computer Engineering (MSCE), Wright State University, 2014, Computer Engineering
Extensive use of visible electro-optical (visEO) cameras for machine vision techniques shows that most camera systems produce distorted imagery. This thesis investigates and compares several of the most common techniques for correcting the distortions based on a pinhole camera model. The methods being examined include a common chessboard pattern based on (Sturm 1999), (Z. Zhang 1999), and (Z. Zhang 2000), as well as two "circleboard" patterns based on (Heikkila 2000). Additionally, camera models from the visual structure from motion (VSFM) software (Wu n.d.) are used. By comparing reprojection error from similar data sets, it can be shown that the asymmetric circleboard performs the best. Finally, a software tool is presented to assist researchers with the procedure for calibration using a well-known fiducial.
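
The comparison methodology can be sketched as follows: calibrate with each target type and compare the RMS reprojection error that cv2.calibrateCamera returns. The board sizes and the externally supplied object points (objp) are illustrative assumptions, since their layout depends on the specific target.

```python
import cv2

def detect(gray, kind, size):
    """Locate target features; returns (found, image_points)."""
    if kind == "chessboard":
        return cv2.findChessboardCorners(gray, size)
    flags = (cv2.CALIB_CB_ASYMMETRIC_GRID if kind == "asymmetric"
             else cv2.CALIB_CB_SYMMETRIC_GRID)
    return cv2.findCirclesGrid(gray, size, flags=flags)

def rms_error(grays, kind, size, objp):
    """RMS reprojection error after calibrating on one target type."""
    obj_pts, img_pts = [], []
    for gray in grays:
        found, pts = detect(gray, kind, size)
        if found:
            obj_pts.append(objp)
            img_pts.append(pts)
    rms, *_ = cv2.calibrateCamera(obj_pts, img_pts,
                                  grays[0].shape[::-1], None, None)
    return rms   # lower = better fit for that calibration pattern
```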

Committee:

Kuldip Rattan, Ph.D. (Advisor); Juan Vasquez, Ph.D. (Committee Member); Thomas Wischgoll, Ph.D. (Committee Member)

Subjects:

Computer Engineering; Computer Science; Optics; Scientific Imaging

Keywords:

Calibration; Reprojection Error; Visual Structure from Motion; OpenCV; calibration patterns

Horiki, Yasutaka. A self calibration technique for a DOA array in the presence of mutual coupling and resonant scatterers
Doctor of Philosophy, The Ohio State University, 2006, Electrical Engineering
This dissertation describes an iterative self-calibration technique for a Direction-of-Arrival (DOA) array to automatically remove the effects of mutual coupling and near-zone resonant size scatterers. The Inverse Method of Moments (IMM) is also introduced to determine the normalized Method of Moments (MM) impedance matrix and MM voltage vector for a Uniform Linear Array (ULA) given its terminal currents produced by plane waves from known directions. The IMM can be combined with an iterative technique to determine the array DOA angles with mutual coupling effects removed.
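
The IMM itself is specific to the dissertation, but the generic idea can be sketched: if terminal currents X are measured for plane waves from known directions whose ideal ULA steering vectors form A, a coupling matrix C with X ≈ CA can be estimated in the least-squares sense and later inverted to undo the coupling. This is a hedged illustration, not the author's algorithm.

```python
import numpy as np

def steering(n_elems, spacing_wl, theta_deg):
    """Ideal ULA steering vector (element spacing in wavelengths)."""
    n = np.arange(n_elems)
    return np.exp(2j * np.pi * spacing_wl * n *
                  np.sin(np.radians(theta_deg)))

def estimate_coupling(X, A):
    """Least-squares C from measured currents X (N x K) and ideal
    steering vectors A (N x K) for K known calibration directions."""
    return X @ np.linalg.pinv(A)

def decouple(C, snapshot):
    """Remove coupling from a new snapshot before DOA estimation."""
    return np.linalg.solve(C, snapshot)
```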

Committee:

Edward Newman (Advisor)

Keywords:

Direction of arrival; angle of arrival; antenna array; iterative calibration; self calibration; mutual coupling; method of moments; inverse method of moments

Hartwig, Marissa Kay. Do test items that induce overconfidence make unskilled performers unaware?
PHD, Kent State University, 2013, College of Arts and Sciences / Department of Psychology
When a person estimates their global (overall) performance on a test they just completed, low performers often overestimate their performance whereas high performers estimate more accurately or slightly underestimate. Thus, low performers have been described as "unskilled and unaware" (Kruger & Dunning, 1999). However, recent evidence (Hartwig & Dunlosky, in press) demonstrates that low performers sometimes estimate accurately. What determines whether a participant estimates accurately vs. inaccurately remains unclear. Thus, the present research asks: What might participants use as the basis for their global estimates, and can it explain the accuracy of those estimates? One intuitive possibility is that participants use their response confidence in test items as the basis of their global estimates. A simple instantiation of this idea is described by the item-frequency hypothesis, which posits that participants compute the frequency of their high-confidence responses, and this frequency serves as an estimate of their global performance. A corollary of this hypothesis is that items that produce high confidence in wrong answers (i.e., false alarms, or FAs) will contribute to global overestimates, whereas items that produce low confidence in correct answers (i.e., misses) will contribute to global underestimates. Study 1 found preliminary support for the hypothesis, because the frequency of high-confidence responses on a typical trivia test was correlated with participants' global estimates, and the imbalance of FAs vs. misses predicted the accuracy of those estimates. To evaluate the hypothesis experimentally, Studies 2 and 3 manipulated the frequencies of FAs and misses that a trivia test was expected to yield, and participants were randomly assigned to receive one of the tests. Tests designed to yield many FAs (relative to misses) produced global overestimation, tests designed to yield more misses (relative to FAs) produced underestimation, and tests designed to yield a balance of FAs and misses produced accurate estimation. Thus, the selection of test items affects global estimates and their accuracy. The imbalance of FAs and misses could not explain all individual differences in estimation accuracy, but it nonetheless was a moderate predictor of global estimation accuracy.
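
A toy computation (invented numbers, not the study's materials) makes the item-frequency hypothesis concrete: the global estimate is simply the count of high-confidence responses, so the estimate-minus-score gap equals the surplus of false alarms over misses.

```python
# Six items: per-item confidence and whether the answer was correct.
confidence = [0.9, 0.8, 0.2, 0.95, 0.3, 0.85]
correct    = [True, False, True, True, False, False]
THRESH = 0.5   # "high confidence" cutoff

global_estimate = sum(c >= THRESH for c in confidence)    # feels like 4 right
actual_score    = sum(correct)                            # really 3 right
false_alarms = sum(c >= THRESH and not ok
                   for c, ok in zip(confidence, correct))  # 2 FAs
misses       = sum(c < THRESH and ok
                   for c, ok in zip(confidence, correct))  # 1 miss
# Overestimation (1) equals the FA surplus (2 - 1 = 1):
print(global_estimate - actual_score, false_alarms - misses)
```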

Committee:

John Dunlosky (Committee Chair); Maria Zaragoza (Committee Member); Mary Beth Spitznagel (Committee Member); Christopher Was (Committee Member); Timothy Gallagher (Committee Member)

Subjects:

Psychology

Keywords:

metacognition; judgment accuracy; unskilled-and-unaware; calibration; overestimation

Bou Sleiman, Sleiman. Built-in-Self-Test and Digital Self-Calibration for Radio Frequency Integrated Circuits
Doctor of Philosophy, The Ohio State University, 2011, Electrical and Computer Engineering
The continual physical shrinking of semiconductor device dimensions is allowing for more integration between the previously segmented digital logic, memory, analog, and radio frequency domains – heralding the “More than Moore” era. Although able to meet the performance requirements for high-speed analog and RF, the devices are not guaranteed to always run at their typical sweet spot. The drifts from optimal operation are due to many factors related to the silicon process and its response to changes in voltage and temperature, or what is collectively named PVT (Process, Voltage, Temperature) variations. These variations are a problem in all the integrated domains of the chip; however, RF circuits fail, in a more disproportionate manner, at sustaining proper operation over PVT. This makes them more prone to performance degradations and loss of yield when fabricated, in contrast to digital chips that can achieve near-perfect yield. Putting both RF and digital together on a single chip, the hybrid system inherits the lower yield, negating all the integration advantages. Therefore, the RF portions, in a sense, represent the SoC’s Achilles’ heel; in essence, an overly powerful and densely integrated chip can be made useless by a smaller underperforming portion of the chip. The ultimate goal is to increase the yield of the RF blocks by actively maintaining them in their optimal operating region. This proves to be a non-trivial task, as the operating conditions of the system need to be known at all times. For complex integrated systems, full verification during fabrication testing is quite prohibitive in time and cost. A solution would be to build self-testing, and eventually self-healing, systems. Built-in-Self-Test (BiST) paradigms have already established themselves in the validation of digital blocks but are now becoming an increasingly active domain of research and development in RF. The notion of migrating RF test functionality to inside the chip brings us one step closer to cognitive-like radios. If RF blocks and systems can test for, and extract, their performance, then the ability to calibrate and cancel discrepancies can also be built into the system. Hence, Built-in-Self-Calibration (BiSC) can be layered on top of BiST to auto-correct RF impairments at the block and system levels. In this dissertation, we discuss the problems set forth by increased integration and decreased circuit robustness. We also express the requirements for building efficient true self-test mechanisms using on-chip resources, not only as value-added elements but also as necessary components for first-pass success of RF SoCs. An efficient RF sensor is presented along with the different possible built-in tests for which it can be employed. The implementation of these on-chip test strategies aids in the development of calibration techniques that leverage the strengths of the more robust parts of the system to cover the weaknesses of the others.

Committee:

Mohammed Ismail El-Naggar, PhD (Advisor); Waleed Khalil, PhD (Committee Member); Patrick Roblin, PhD (Committee Member)

Subjects:

Electrical Engineering

Keywords:

RFIC; BiST; self-calibration; self-healing; Integrated Circuits; robustness; yield

Everett, Ryan Vincent. An Improved Model-Based Methodology for Calibration of an Alternative Fueled Engine
Master of Science, The Ohio State University, 2011, Mechanical Engineering
The EcoCAR challenge is a three year competition with the goal of re-engineering a 2009 General Motors crossover utility vehicle to improve vehicle emissions and fuel economy, while maintaining drivability and consumer acceptability. Ohio State’s team has selected an extended range electric vehicle (EREV) architecture with a 1.8 L compressed natural gas (CNG) Honda engine as the auxiliary power unit. This engine was chosen because of its 12.5:1 compression ratio, which results in higher brake efficiencies than a traditional spark-ignition (SI) engine. The Honda engine is converted to run on E85 fuel, which requires the engine control software to be rewritten. The purpose of this project is to improve the engine control strategy to reduce tail-pipe emissions and increase fuel economy. This research investigates a model-based calibration methodology to develop accurate engine calibrations for several operating parameters. The methodology utilizes design of experiments (DoE) techniques for data collection and advanced data analysis tools in MATLAB to develop accurate and robust engine control calibrations. The control software developed by Ohio State to run the Honda engine on E85 fuel resulted in a peak engine brake efficiency of 41% and a vehicle that meets EPA Tier II Bin 3 emissions standards.
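
The model-based step can be sketched generically: fit a quadratic response surface to DoE data and read off the predicted optimum. The two factors, their ranges, and the efficiency values below are stand-ins, not the thesis's engine data.

```python
import numpy as np

def quad_features(X):
    """[1, x1, x2, x1^2, x2^2, x1*x2] terms for a 2-factor surface."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 ** 2, x2 ** 2, x1 * x2])

# Hypothetical 3x3 factorial: spark advance (deg) x lambda, with
# measured brake efficiency as the response.
X = np.array([[10, .90], [10, .95], [10, 1.0],
              [15, .90], [15, .95], [15, 1.0],
              [20, .90], [20, .95], [20, 1.0]], float)
y = np.array([.355, .365, .370, .375, .385, .390, .385, .395, .400])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)  # fit surface
grid = np.array([[a, l] for a in np.linspace(10, 20, 41)
                        for l in np.linspace(.90, 1.0, 21)])
best = grid[np.argmax(quad_features(grid) @ beta)]   # predicted optimum
print(best)
```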

Committee:

Giorgio Rizzoni, PhD (Advisor); Shawn Midlam-Mohler, PhD (Advisor); Yann Guezennec, PhD (Committee Member)

Subjects:

Alternative Energy; Automotive Engineering; Engineering; Mechanical Engineering

Keywords:

model-based calibration; E85 internal combustion engine; volumetric efficiency; alternative fuels; engine control; response surface; design of experiments; test minimization

Gandhi, Raju P. REDUCTION / ELIMINATION OF ERRORS IN COST ESTIMATES USING CALIBRATION – AN ALGORITHMIC APPROACH
Master of Science (MS), Ohio University, 2005, Industrial and Manufacturing Systems Engineering (Engineering)

In estimating the cost of goods or services, errors between the estimated value and the actual cost should be expected. In this thesis, the assumption that this estimation error is consistent across similar goods and services is exploited to develop a calibration method for cost estimates produced with the cost estimator software system developed at Ohio University. This document discusses the various factors that need to be considered when implementing calibration in a hierarchical cost structure using multiple cost pools. An algorithm to implement calibration is detailed along with pseudo-code. The document concludes by describing how calibration can be applied to an estimate.
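
A minimal sketch of the core assumption, with hypothetical pool names and numbers: within a cost pool, the ratio of actual to estimated cost is treated as roughly consistent, so a per-pool factor learned from history calibrates new estimates. The thesis's hierarchical, multi-pool algorithm generalizes this idea.

```python
from collections import defaultdict

# (pool, estimated, actual) records from past jobs -- hypothetical.
history = [("machining", 100.0, 118.0), ("machining", 80.0, 96.0),
           ("assembly",   50.0,  47.0), ("assembly",  60.0, 58.0)]

ratios = defaultdict(list)
for pool, estimated, actual in history:
    ratios[pool].append(actual / estimated)
factor = {pool: sum(r) / len(r) for pool, r in ratios.items()}

def calibrate(pool, estimate):
    """Scale a raw estimate by its pool's historical error factor."""
    return estimate * factor.get(pool, 1.0)

print(calibrate("machining", 200.0))   # ~238: history says we underestimate
```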

Committee:

David Koonce (Advisor)

Subjects:

Engineering, Industrial

Keywords:

Cost Estimation; Cost Estimator; Calibration; FIPER

Adu-Gyamfi, Kwame. Laboratory calibration of soil moisture, resistivity, and temperature probe - Capacitance probe
Master of Science (MS), Ohio University, 2001, Civil Engineering (Engineering)

Laboratory calibration of soil moisture, resistivity, and temperature probe - Capacitance probe.

Committee:

Gayle Mitchell (Advisor)

Subjects:

Engineering, Civil

Keywords:

Laboratory calibration; soil moisture; resistivity; temperature probe; Capacitance probe

Lee, Byungho. Methodology for rapid static and dynamic model-based engine calibration and optimization
Doctor of Philosophy, The Ohio State University, 2005, Mechanical Engineering
Over the past decade, government regulations and legislation on pollutant emissions produced by transportation, especially automotive vehicles, have become increasingly stringent. Moreover, the rise in gasoline prices in recent years and the increasing demand for large Sport Utility Vehicles (SUVs), especially by customers in the North American market, have driven the automotive industry to focus on developing advanced technologies for production engines to reduce fuel consumption and pollutant emissions. As a result of these advancements in automotive technologies, today’s Internal Combustion (IC) engines are equipped with advanced actuators and sensors, which provide more control authority than ever before, while changing how engines are operated. These additional actuator inputs (for example, direct fuel injection, variable intake systems, intake port throttling, variable valve actuation, etc.) can lead to a sizable improvement in fuel economy and emissions. However, these additional degrees of control freedom can also lead to a significant increase in engine mapping and calibration requirements during the engine design and control system development processes. For this reason, this research work proposes a comprehensive methodology that provides a systematic approach to assist the engine control system development and calibration processes in a model-based framework. This alternative approach for developing an engine control system could in principle relieve some of the experimental burden while accelerating the development and calibration processes. The proposed methodology has been demonstrated on an advanced IC engine equipped with a Variable Valve Actuation (VVA) system. First, a new approach to obtaining reliable engine maps using a validated crank-angle-resolved engine model is introduced. Then, the utility and feasibility of the model-based engine map are demonstrated by using it to develop a mean-value engine plant model and validating its outputs with vehicle data acquired during transient driving cycle tests. This type of engine model is a computer-aided tool that can provide a rapid and accurate evaluation of a prototype control algorithm by serving as a virtual engine plant with which control engineers can quickly test and validate their control algorithms via, for example, hardware-in-the-loop simulations.

Committee:

Yann Guezennec (Advisor)

Keywords:

Model-Based Engine Calibration

Anderson, Jennifer Christine. QUANTITATIVE AND QUALITATIVE INVESTIGATIONS INTO URINARY CALCULI USING INFRARED MICROSPECTROSCOPY
Doctor of Philosophy, Miami University, 2007, Chemistry
This dissertation encompasses research focused on both qualitative and quantitative techniques for the analysis of loose urinary calculi (often referred to as renal stones) as well as biopsied tissue sections containing urinary calculi material. Due to high rates of renal calculi misdiagnosis, qualitative techniques that are efficient, accurate, and free of long sample preparation are desperately needed in clinics and hospitals around the world. The techniques presented in this dissertation rely on the use of infrared molecular microspectroscopy and infrared spectral maps for unbiased, fast, and accurate methods of both urinary calculi and tissue analysis. Utilizing infrared reflectance techniques, samples of varying size, shape, thickness, and consistency are easily and quickly analyzed. Additionally, infrared reflectance techniques are non-destructive, allowing the analysis of delicate samples without their contamination or destruction. Infrared spectral maps pinpoint and visually differentiate urinary calculi components in a matter of minutes, resulting in an unbiased and accurate diagnosis. Not only are qualitative techniques and results detailed in this dissertation, but quantitative results are included as well. Using infrared attenuated total internal reflectance (ATR) techniques, detection limits of urinary calculi components are investigated. Both concentration detection limits and particle size detection limits are important in the quantification of urinary calculi components. ATR analysis techniques allow the quantitative analysis of particles as small as 10 µm in size, as well as the analysis of concentrations of urinary calculi components as low as 1% by weight. Finally, an appendix is included outlining the procedure, sample types, and results of an EPA internship lasting 900 hours. The focus of this internship was the analysis of corrosion samples from public and private drinking water systems. The main capacity of this research was confirmatory in nature, using Raman microspectroscopy to confirm or deny the presence of certain lead and copper compounds.

Committee:

Andre Sommer (Advisor)

Subjects:

Chemistry, Analytical

Keywords:

Infrared; Microspectroscopy; Renal Stones; Kidney; Calibration Curve; EPA

Zhang, Lin. PATTERN RECOGNITION METHODS FOR THE ANALYSIS OF INFRARED IMAGING DATA AND MULTIVARIATE CALIBRATION STANDARDIZATION FOR NEAR-INFRARED SPECTROSCOPY
Doctor of Philosophy (PhD), Ohio University, 2002, Chemistry (Arts and Sciences)

Automated analysis methods for infrared imaging data were investigated in two applications. The first application involved a hyperspectral Fourier transform infrared spectroscopic imaging system. The objective was to differentiate human breast cells in normal and cancer states based on infrared imaging measurements combined with pattern recognition techniques. The Fourier transform imaging system consisted of a step-scan interferometer coupled to an infrared microscope. A mercury cadmium telluride detector with 64x64 array format was used. Cluster analysis and artificial neural networks were employed to develop automated classifiers for determining cell and non-cell pixels within the image and for predicting the disease state of the cell pixels. Satisfactory results were obtained and the results show that this approach provides a promising tool to aid the pathologist in the differentiation of normal and diseased human tissues.

The second application involved a multispectral remote sensing imaging system mounted on an aircraft. The objective was to develop automated methods for the remote detection of atmospheric chemical species from airborne multispectral infrared imaging data. The imager used in this study was a multispectral infrared line scanner based on 14 spectral bands. The stack emissions from an ammonia plant within a nitrogen fertilizer facility were investigated in this study. Cluster analysis and piecewise linear discriminant analysis were used in training data selection and classifier building, respectively. Satisfactory results were obtained for the classification of image pixels into plume/non-plume and CO2/non-CO2 categories.

Multivariate calibration has been widely used in analytical chemistry for decades. However, there are many situations when a multivariate calibration model may become invalid. To avoid the overhead of a full recalibration, multivariate calibration standardization is necessary. A new multivariate calibration standardization approach is developed based on the idea of building robust calibration models. The method is based on calibration sample selection and a weighting procedure. The main advantage of this approach is that compared to standardization methods such as direct standardization (DS) or piecewise direct standardization (PDS), the identical set of samples need not be measured with both primary and secondary instruments.

In an initial study, three data sets with the same constituents collected on the same instrument over a period of six years were used. The results showed that the proposed method significantly improved prediction performance in the new situation. A second study further investigated calibration standardization across three different instruments (one primary and two secondary instruments). The performance of the proposed algorithm, DS, and PDS were compared in this study. The results showed that the proposed algorithm outperformed both DS and PDS.
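
For context, the DS baseline that the proposed method is compared against can be sketched in a few lines (this is the standard direct standardization recipe, not the dissertation's algorithm): learn a matrix F that maps secondary-instrument spectra onto the primary instrument, using transfer samples measured on both.

```python
import numpy as np

def ds_fit(S_primary, S_secondary):
    """F such that S_secondary @ F ~ S_primary; rows are transfer
    samples measured on BOTH instruments, columns are wavelengths."""
    return np.linalg.pinv(S_secondary) @ S_primary

def ds_apply(F, spectra_secondary):
    """Map new secondary-instrument spectra into primary space so the
    existing primary calibration model can be reused."""
    return spectra_secondary @ F
```

Note that the requirement DS imposes (identical samples measured on both instruments) is exactly what the proposed sample-selection-and-weighting approach avoids.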

Committee:

Gary Small (Advisor)

Subjects:

Chemistry, Analytical

Keywords:

chemometrics; multivariate calibration standardization; pattern recognition; near-infrared spectroscopy; infrared imaging

Kohli, Karan. Structural Dynamics Model Calibration and Validation of a Rectangular Steel Plate Structure
MS, University of Cincinnati, 2014, Engineering and Applied Science: Mechanical Engineering
In this fast-growing world, where technological advancements are driven by more efficient and less expensive designs, there is a significant need to adopt methods and techniques that can support this thought process. To address this requirement, Finite Element (FE) modeling techniques were introduced as a viable solution. Specifically, in the case of mechanical structures, where prototype building and physical testing require a lot of time and involve large costs, developing an FE model that can accurately predict a structure's dynamic characteristics is key to addressing the issue. However, with more reliance on FE models as true representatives of physical structures, there is a corresponding need to evaluate such models. This evaluation involves comparing the results obtained from simulating FE models with the results achieved from mathematical models and real-world test data. In other words, such models need to undergo 'verification' and 'validation' before they can be used to predict reliable results for the future. In this research, a model verification, calibration, and validation case study is performed on a rectangular steel plate structure. The case study was based on a systematic approach that attempted to follow the Guidelines for Verification and Validation (V&V) published recently by the American Society of Mechanical Engineers (ASME) and the American Institute of Aeronautics and Astronautics (AIAA). The accuracy of the model and the validation process was confirmed by modal correlation when the structure and the calibrated model were subjected to perturbed mass and constrained boundary conditions. The validation criteria were achieved using the calibrated model, and the results obtained through the validation criteria helped quantify the accuracy of the developed model under different boundary conditions.
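
The "modal correlation" used to confirm the calibrated model is commonly quantified with the Modal Assurance Criterion (MAC); the sketch below assumes that convention. Diagonal MAC values near 1 indicate that test and FE mode shapes agree.

```python
import numpy as np

def mac(phi_test, phi_fe):
    """MAC matrix between measured (n x m1) and FE (n x m2) mode
    shapes: MAC[i, j] = |phi_i^H psi_j|^2 / (|phi_i|^2 |psi_j|^2)."""
    num = np.abs(phi_test.conj().T @ phi_fe) ** 2
    den = np.outer(np.sum(np.abs(phi_test) ** 2, axis=0),
                   np.sum(np.abs(phi_fe) ** 2, axis=0))
    return num / den
```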

Committee:

Randall Allemang, Ph.D. (Committee Chair); Allyn Phillips, Ph.D. (Committee Member); Kumar Vemaganti, Ph.D. (Committee Member)

Subjects:

Mechanical Engineering

Keywords:

Model Validation; Verification and Validation; Model Calibration

Patrick, Nathan A. Evaluating Near Surface Lapse Rates Over Complex Terrain Using an Embedded Micro-Logger Sensor Network in Great Basin National Park
Master of Science, The Ohio State University, 2014, Atmospheric Sciences
The Great Basin National Park (GBNP) embedded sensor network (ESN), consisting of Lascar micro-loggers, was established in 2006 to characterize near-surface temperature and humidity. Based on the available literature, GBNP's ESN contains the densest deployment of micro-loggers (currently 15) above 3000 m in North America for a limited local area. Its primary purposes were to assess local climate conditions, evaluate how the climate may be changing, and support other research projects with meteorological data. In this work, surface temperature lapse rates and surface specific humidity lapse rates were calculated and analyzed on three different time scales (annual, seasonal and monthly) using linear regression. Furthermore, the ESN was subdivided into different geographic subsets which encompass different elevation ranges and landcover types. Results indicated that the calculated study-wide (2006-2012) mean annual temperature lapse rate, -6.0°C km⁻¹, compared favorably to the common environmental lapse rate (ELR) value of -6.5°C km⁻¹. However, surface temperature lapse rates varied considerably for different geographic subsets and time scales. Mean monthly temperature lapse rates for the entire study area varied from -3.8°C km⁻¹ in January to -7.3°C km⁻¹ in June. Additional variability was introduced when elevation zones were considered. For locations below 3000 m (all QC data), the mean monthly temperature lapse rate ranged from -3.6°C km⁻¹ in January to -9.1°C km⁻¹ in August. Perhaps more significantly, in May and in summer (JJA), surface lapse rates below 3000 m became quite steep, exceeding -9.0°C km⁻¹ and approaching the dry adiabatic lapse rate. Above 3000 m, surface mean monthly temperature lapse rates were more compressed, ranging from -4.5°C km⁻¹ in May to -7.3°C km⁻¹ in September. Throughout summer, mean monthly temperature lapse rate differences between high elevations (> 3000 m) and low elevations (< 3000 m) exceeded 3.0°C km⁻¹. Reduced differences occurred in other months, but lower elevations continued to have steeper mean monthly temperature lapse rates. These results indicated that using a common ELR (-6.5°C km⁻¹) to interpolate surface temperatures will introduce bias and error for processes using sub-annual resolution. Another objective of this study was assessing Lascar sensor drift by completing a relative sensor calibration and drift study. This was done by comparing 13 brand-new, un-deployed Lascars with other meteorological instruments and then deploying them alongside existing Lascars in a dual-deployment configuration at 10 locations within the ESN. Results indicated that overall drift was not a problem during the one-year comparison. Some locations did exhibit characteristics of minor drift, but the impact was negligible. Finally, near-surface temperature lapse rates above 3000 m were compared to free-air equivalent lapse rates derived from nearby radiosonde data. The hypothesis tested was that the upper portions of GBNP behave like the free air and that terrestrial processes have a minimal impact on the temperature structure of the surface. Results suggested that, outside the summit of Wheeler Peak, the upper elevations of GBNP did not behave like the free air, and surface temperatures were a result of terrestrial processes interacting with the atmosphere.
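
The lapse-rate computation itself is a linear regression of station temperature on station elevation, with the slope re-expressed per kilometer; the station values below are illustrative, not GBNP data.

```python
import numpy as np

elev_m = np.array([2100, 2400, 2700, 3000, 3300, 3600])  # stations
temp_c = np.array([14.2, 12.5, 10.9,  9.0,  7.4,  5.5])  # mean temps

slope, intercept = np.polyfit(elev_m, temp_c, 1)  # degC per metre
lapse_rate = slope * 1000.0                       # degC per km
print(f"{lapse_rate:.1f} degC/km")                # about -5.8 here
```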

Committee:

Bryan Mark (Advisor); Jialin Lin (Committee Member); Michael Durand (Committee Member)

Subjects:

Atmospheric Sciences; Environmental Science; Geography; Geophysical; Hydrologic Sciences; Hydrology; Meteorology; Physical Geography; Water Resource Management

Keywords:

Lapse Rates; Great Basin; Micro-Logger; Micro Logger; lascar; Specific Humidity; Surface Temperatures; Complex Terrain; mountain geography; National Park; Hydrology; Dry Adiabatic; Saturated Adiabatic; Environmental Lapse Rate; Free Air; Calibration

Messay-Kebede, Temesguen. Computationally Efficient and Robust Kinematic Calibration Methodologies and their Application to Industrial Robots
Doctor of Philosophy (Ph.D.), University of Dayton, 2014, Electrical Engineering
Robot kinematic calibration is the process of enhancing the positioning accuracy of a given manipulator and must be performed after robot manufacture and assembly or during periodic maintenance. This dissertation presents new computationally efficient and robust kinematic calibration algorithms for industrial robots that make use of partial measurements. These include a calibration method that requires the Cartesian coordinates of the calibration points (3DCAL) and another calibration technique that only requires the radial measurements from the calibration points to some reference (1DCAL). Neither method requires orientation measurements or explicit knowledge of the whereabouts of a reference frame. Contrary to most other similar works, both methods make use of a simplified version of the original Denavit-Hartenberg (DH) kinematic model. The simplified DH(-) model has not only proven to be robust and effective in calibrating industrial manipulators, but it is also favored from a computational efficiency viewpoint since it consists of comparatively fewer error parameters. We present a conceptual approach to develop a set of guidelines that need to be considered in order to properly construct the DH(-) model such that it is parametrically continuous and non-redundant. We also propose an automated method to characterize the parameters, which can be insightful in identifying redundant/irrelevant parameters and deducing the DH(-) error model of a manipulator. The method is a hybrid scheme comprised of the Simulated Annealing (SA) algorithm and a local solver/optimizer, and it conducts a statistical analysis on the estimates of a given error parameter that is indicative of its relevance. For the type of industrial robots used in this dissertation, we note that calibrating only the home position is sufficient to attain adequate results for most robotics applications. Hence, we put forward for consideration a yet simpler calibration model: the DH(-)(-) model. We employ the Trust Region (TR) method to minimize the objective functions (solving for the error parameters of the simplified error models) of both frameworks (3DCAL and 1DCAL). We also compare the performance of the proposed methods to that of a state-of-the-art commercial system (MotoCal) using the same materials, data and internationally recognized performance standards. Our experimental results suggest that our methods are more robust and yield better results than those of MotoCal.
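
The optimization step can be sketched with a trust-region least-squares solver (scipy's "trf"), mirroring the thesis's use of the TR method on Cartesian residuals (the 3DCAL setting). The two-link planar arm below is a toy stand-in for the full DH(-) error model.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_kinematics(q, err):
    """Toy 2-link planar arm; err holds the link-length error
    parameters to identify (stand-in for DH(-) error parameters)."""
    l1, l2 = 0.5 + err[0], 0.4 + err[1]
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def residuals(err, joint_sets, measured):
    pred = np.array([forward_kinematics(q, err) for q in joint_sets])
    return (pred - measured).ravel()   # Cartesian position residuals

# Simulated calibration data; in practice these come from measurement.
rng = np.random.default_rng(0)
joint_sets = rng.uniform(-np.pi, np.pi, size=(20, 2))
true_err = np.array([0.003, -0.002])
measured = np.array([forward_kinematics(q, true_err) for q in joint_sets])

sol = least_squares(residuals, x0=np.zeros(2), method="trf",
                    args=(joint_sets, measured))   # trust-region solve
print(sol.x)   # recovers ~[0.003, -0.002]
```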

Committee:

Raul Ordonez, Ph.D. (Committee Chair); Russell Hardie, Ph.D. (Committee Member); John Loomis, Ph.D. (Committee Member); Ruihua Liu, Ph.D. (Committee Member)

Subjects:

Engineering; Robotics

Keywords:

Kinematic Calibration; Optimization; Simulated Annealing; Trust Region; CompuGauge; MotoCal; Industrial Robots; Yaskawa Motoman Robotics Inc

Nowacki, Brenna M. Verification and Calibration of State-of-the-Art CMC Mechanistic Damage Model
Master of Science (M.S.), University of Dayton, 2016, Mechanical Engineering
Due to their low density, high toughness and elevated-temperature performance, Ceramic Matrix Composites (CMCs) are attractive candidates for replacing metals in many high-temperature applications, such as gas turbine engines and exhaust nozzles. While there are numerous benefits to CMCs, there are several limitations hindering their full-scale application within the aerospace industry. One significant limitation is the ability to accurately model and predict CMC damage behavior. A mechanistic approach to modeling the damage behavior in CMCs was previously developed by Structural Analytics. The damage model, CLIP (Ceramic Matrix Composite Life Prediction), is embedded in a software package that consists of an ABAQUS user-subroutine, as well as a standalone application. The current study verifies the model by calibrating it to a slurry melt-infiltrated SiC/SiC composite. A series of experimental tests were conducted at the Air Force Research Laboratory (AFRL), including monotonic tensile tests at 23°C, 800°C and 1200°C, a creep test at 1200°C and a sequentially loaded tensile test at 23°C. The results from the experimental tests were used to calibrate the damage model. The calibration was deemed successful when the model could reproduce the experimental stress-strain curves at the respective temperatures. Finally, the model was used to make predictions for intermediate temperature ranges in monotonic tension, sequentially loaded tension, and off-axis tension.

Committee:

Pinnell Margaret, Ph.D. (Advisor); Jefferson George, Ph.D. (Committee Member); Whitney Thomas, Ph.D. (Committee Member)

Subjects:

Materials Science; Mechanical Engineering

Keywords:

ceramic matrix composite; ABAQUS; CLIP; damage model; calibration; SiC-SiC; material characterization

Rajagopalan, Sai S.V. Model Based Control Design And Rapid Calibration For Air To Fuel Ratio Control Of Stoichiometric Engines
Doctor of Philosophy, The Ohio State University, 2008, Mechanical Engineering
The air-fuel ratio control problem has been the focus of research in academia ever since the advent of microprocessor-based control of automotive systems. With rising fuel costs, increased environmental consciousness, and the need to be competitive in the global market, automotive manufacturers, who have traditionally relied on extensive experimental calibration to fine-tune performance, are moving towards model-based calibration of control gains to minimize costs and incorporate advances in power-train technology in a systematic manner. While solutions presented in the open literature to date illustrate the application of a plethora of modern control techniques to this problem, calibration effort and emissions performance have not been given adequate priority. The work presented in this document bridges a gap between academia and industry by using modern advances in control theory to design a solution that significantly minimizes the calibration effort required to meet performance requirements. Through proper design of experiments, data collected from the plant is used to build mathematical models that represent the dominant dynamics observed in the intake system; the fueling, combustion and gas transport; and finally the behavior of the three-way catalyst. The structures of these models are chosen through a deep understanding of the physics involved. System identification is used to simplify the model structure. To capture the highly nonlinear, operating-condition-dependent behavior of the plant, a linear parameter varying approach is used to schedule the model coefficients through the use of linear splines on manifold pressure, engine speed and coolant temperature. The control architecture chosen utilizes a servomechanism incorporating coordinated open- and closed-loop control action to meet the desired performance goal, and it is universal in the sense that both switching-sensor and linear-sensor based systems can be easily accommodated with minimal change. A rapid-action inner loop, with pre-catalyst oxygen sensor feedback to control the engine exhaust, is used in conjunction with an outer loop, with post-catalyst oxygen sensor feedback, to control the oxygen storage in the catalyst. Calibration of the control gains is achieved through closed-loop optimization using genetic algorithms. Parallel computing is used for this optimization. Performance and stability criteria are simultaneously incorporated in the cost function to achieve the desired goals. Numerical linear matrix inequality based tests are used to guarantee stability of the closed-loop system over the entire operating region. Although the discussion in this dissertation is restricted to calibration of the inner-loop compensator, one can easily adopt the proposed techniques for the design of the outer-loop compensator, as well as for a wide variety of feedback control systems.
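
The linear-spline scheduling idea is simple to illustrate: a model coefficient is stored at a few breakpoints of an operating variable (manifold pressure here) and linearly interpolated in between. The breakpoints and values are illustrative, not the dissertation's calibration.

```python
import numpy as np

map_kpa_bp = np.array([30.0, 50.0, 70.0, 90.0])  # pressure breakpoints
coeff_bp   = np.array([0.82, 0.90, 0.95, 0.97])  # identified values

def scheduled_coeff(map_kpa):
    """Piecewise-linear (linear-spline) schedule of one coefficient;
    the full LPV model also schedules on speed and coolant temp."""
    return np.interp(map_kpa, map_kpa_bp, coeff_bp)

print(scheduled_coeff(60.0))   # 0.925, midway between the 50/70 knots
```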

Committee:

Stephen Yurkovich (Advisor)

Subjects:

Automotive Materials; Electrical Engineering; Mechanical Engineering

Keywords:

AFR control; Optimization; Catalyst Oxygen storage; Model based control; model based calibration

Hall, John J. The design, construction and control of a four-degree-of-freedom hybrid parallel/serial motion platform for the calibration of multi-axis inertial measurement units
Master of Science (MS), Ohio University, 2000, Mechanical Engineering (Engineering)

The Department of Mechanical Engineering and the Avionics Engineering Center at Ohio University are developing an electromechanical system for the calibration of an inertial measurement unit (IMU) using global positioning system (GPS) antennas. The GPS antennas and IMU are mounted to a common platform to be oriented in the angular roll, pitch, and yaw motions. Vertical motion is also included to test the systems in a vibrational manner. A four-DOF system based on the parallel Carpal Wrist has been developed as a test platform for this calibration process. High-accuracy positioning is not required from the platform, since the GPS technology provides absolute positioning data for the IMU calibration.

Committee:

Robert Williams II (Advisor)

Subjects:

Engineering, Mechanical

Keywords:

hybrid motion platform; parallel/serial motion platform; calibration of multi-axis; inertial measurement units; IMU; global positioning system; GPS antennas

Lee, Jun Seok. On-Wafer Characterization of Electromagnetic Properties of Thin-Film RF Materials
Doctor of Philosophy, The Ohio State University, 2011, Electrical and Computer Engineering
At the present time, newly developed, engineered thin-film materials, which have unique properties, are used in RF applications. Thus, it is important to analyze these materials and to characterize their properties, such as permittivity and permeability. Unfortunately, conventional methods used to characterize materials are not capable of characterizing thin-film materials. Therefore, on-wafer characterization methods using planar structures must be used for thin-film materials. Furthermore, most new, engineered materials are usually wafers consisting of thin films on a thick substrate. The first step of this study was the development of a novel, on-wafer characterization method for isotropic dielectric materials using the T-resonator method. Although the T-resonator method provides highly accurate measurement results, there is still a problem in determining the effective T-stub length, which is due to the parasitic effects. Our newly developed method uses both the resonant effects and the feed-line length of the T-resonator. In addition, performing the TRL calibration provides the exact length of the feed line, thereby minimizing the uncertainty in the measurements. As a result, our newly developed method showed more accurate measurement results than the conventional T-resonator method, which only uses the T-stub length of the T-resonator. The second step of our study was the development of a new on-wafer characterization method for isotropic, magnetic-dielectric, thin-film materials. The on-wafer measurement approach that we developed uses two microstrip transmission lines with different characteristic impedances, which allow the determination of the characteristic impedance ratio. Therefore, permittivity and permeability can be determined from the characteristic impedance ratio and the measured propagation constants. In addition, this method involves Thru-Reflect-Line (TRL) calibration, which is the most fundamental calibration technique for on-wafer measurement, and it eliminates the parasitic effects between probe tips and contact pads. The third step of this study was the development of an on-wafer characterization method for magnetic-dielectric material using T-resonators. This method allows the determination of the ratio of the characteristic impedance to the effective refractive index of the magnetic-dielectric materials at the resonant frequency points. Therefore, permittivity and permeability can be determined. Although this method does not provide continuous extractions of material properties, it provides more accurate experimental results than the transmission line methods. The last step of this research was the evaluation and assessment of an anisotropic, thin-film material. Many of the new materials being developed are anisotropic, and previous techniques developed to characterize isotropic materials cannot be used. In this step, we used microstrip line structures with a mapping technique to characterize anisotropic materials, which allowed the transfer of the anisotropic region into the isotropic region. In this study, we considered both uniaxial and biaxial anisotropic material characterization methods. Furthermore, in this step, we considered a characterization method for biaxial anisotropic material that has misalignments between the optical axes and the measurement axes. Thus, our newly developed anisotropic material characterization method can be used to determine the diagonal elements in the permittivity tensor as well as the misalignment angles between the optical axes and the measurement axes.
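
The T-resonator principle can be sketched under textbook assumptions: an open-circuited stub of length L resonates when it is an odd multiple of a quarter wavelength, so each measured resonance yields an estimate of the effective permittivity. The parasitic-length (effective T-stub length) correction that this dissertation develops is deliberately not modeled here.

```python
from scipy.constants import c

def eps_eff(f_res_hz, stub_len_m, n=1):
    """Effective permittivity from the n-th stub resonance, assuming
    f_n = (2n - 1) * c / (4 * L * sqrt(eps_eff))."""
    return ((2 * n - 1) * c / (4.0 * stub_len_m * f_res_hz)) ** 2

print(eps_eff(2.45e9, 0.012))   # ~6.5 for a 12 mm stub at 2.45 GHz
```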

Committee:

Roberto Rojas (Advisor); Patrick Roblin (Committee Member); Fernando Teixeira (Committee Member)

Subjects:

Electrical Engineering

Keywords:

on-wafer measurement; material characterization; permittivity; permeability; anisotropic material; on-wafer calibration

McElroy, Mark Allen. A Procedure for Generating Finite Element Models (FEM) of Abdominal Aortic Aneurysms with Fluid Boundary Conditions Derived from Magnetic Resonance Imaging (MRI) Velocimetry
Master of Science, The Ohio State University, 2010, Mechanical Engineering

Abdominal Aortic Aneurysms (AAAs) are localized bulges in the lower aortic artery tissue. AAAs are prone to rupture, an extremely dangerous event, in which the aorta rips open and blood is allowed to flow freely into the body’s internal cavity. The fatality rate for ruptured AAAs is over 50% so preventative surgery is the preferred method of treatment. However many AAAs never rupture and the risks involved with preventative surgery are not negligible. Clinicians therefore must decide when the risks of AAA rupture outweigh those of preventative surgery.

The current clinical metric for determining the risk of AAA rupture is the transverse diameter, with 5.5 cm the suggested maximum allowable size. However, as many as 20-30% of AAAs below this threshold rupture, and in practice the operating surgeon must account for other risk factors as well, such as blood pressure and aneurysm shape. As a result, the decision is no more than an educated guess based on a series of known risk factors. There is a clinical desire for a more reliable and comprehensive AAA rupture risk metric.

Studies have shown that maximum arterial wall Von Mises stress, calculated using patient-specific finite element (FE) models, outperforms diameter with regard to predicting AAA rupture. Modern AAA FE models employ fully coupled dynamic fluid-structure interaction (FSI) techniques in an effort to accurately measure max wall stress in-vivo and non-invasively.

Published boundary conditions (BCs) for dynamic AAA model fluid domains typically involve standard flow rate and pressure conditions being applied at the inlet and outlet of the model respectively. Our lab proposes using in-vivo blood velocity measurements from phase-encoded velocimetry MRI scans to generate patient-specific fluid BCs. A patient-specific flow rate condition is applied at the inlet matching the velocimetry data read in at the inlet. A patient-specific downstream pressure is applied at the outlet. This pressure BC is derived from an optimization routine which seeks to match the modeled and measured outlet flow rates by altering the impedance at the outlet.

To date, only one model has been run to convergence, due to a computation run time of over one month. While changes were made to the pressure condition at the outlet (a 14% increase in dynamic range) during optimization, these changes had almost no effect on the max arterial wall stress.

Committee:

Samir Ghadiali, PhD (Advisor); Simonetti Orlando, MD (Committee Member)

Subjects:

Biomedical Research; Engineering; Mechanical Engineering

Keywords:

abdominal aortic aneurysm; AAA; finite element; FE; magnetic resonance imaging; MRI; velocimetry; calibration

Elesev, Aleksandr. Robot Localization Using Inertial and RF Sensors
Master of Computer Science, Miami University, 2008, Computer Science and Systems Analysis
A mobile robot must know its position in order to operate autonomously. The process of determining the robot's position from sensor data is termed robot localization. IMUs and RF receivers are two of the many different types of sensors that can be used for navigation and localization purposes. When used independently, these sensors can achieve good accuracy when operating in certain conditions. By merging the results from multiple sensors, accuracy over a wider range of conditions can be obtained. This work proposes a technique for merging heterogeneous information from inertial and RF sensors. Since sensors have errors associated with their readings, the robot's state is represented using a probability density function (PDF). At each time step, this PDF is updated based on the RF readings and then updated again based on the IMU readings. Better localization accuracy is obtained by using the RF and inertial sensors together.
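
The update cycle described above can be sketched with a 1-D grid-based filter: the position PDF is re-weighted first by an RF range likelihood, then by an IMU-derived motion likelihood. The Gaussian noise models are illustrative, not the thesis's calibrated ones.

```python
import numpy as np

grid = np.linspace(0.0, 10.0, 201)           # candidate positions (m)
pdf = np.full_like(grid, 1.0 / len(grid))    # uniform prior

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def update(pdf, likelihood):
    post = pdf * likelihood
    return post / post.sum()                 # renormalize the PDF

beacon, rf_range = 0.0, 4.2                  # RF: range to a beacon
pdf = update(pdf, gaussian(np.abs(grid - beacon), rf_range, 0.5))

imu_prediction = 4.0                         # IMU: dead-reckoned position
pdf = update(pdf, gaussian(grid, imu_prediction, 0.8))

print(grid[np.argmax(pdf)])                  # fused estimate, ~4.1 m
```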

Committee:

Michael Zmuda, PhD (Advisor); Jade Morton, PhD (Committee Member); Valerie Cross, PhD (Committee Member)

Subjects:

Artificial Intelligence; Computer Science; Robots

Keywords:

robot localization; navigation; IMU; inertial sensors; RF; radio frequency sensors; particle filters; grid-based approach; PDF; probability density function; calibration

Leatherman, Erin Rae. Optimal predictive designs for experiments that involve computer simulators
Doctor of Philosophy, The Ohio State University, 2013, Statistics
A deterministic computer simulator is the numerical coding of a mathematical model that describes the relationship between variables in a physical system. Computer experiments use simulators as experimental tools to determine "responses" or "outputs" at a set of user-specified input design points. As mathematical descriptions have become more sophisticated, the use of deterministic simulators as experimental vehicles has become more widespread in many scientific applications. In fact, computer experiments can sometimes be used instead of traditional physical experiments when the experimental factors are too numerous to study via a physical experiment, or when a physical experiment is financially prohibitive or unethical to perform. Unfortunately, the mathematical models used for computer experiments can be expensive to evaluate and may require days or months for the simulator to make a single run. When this is the case, the simulator output is often modeled as a Gaussian Stochastic Process (GaSP) using training data collected from the simulator at a user-specified set of input design points. When it is of interest to predict simulator output over the entire input space, classical design criteria for computer experiments select designs that are space-filling; that is, the design points are selected to be well spread within the input space. This dissertation investigates an alternative, process-based design criterion for prediction which minimizes the Bayesian Integrated Mean Square Prediction Error (BIMSPE). The BIMSPE is calculated by averaging the Integrated Mean Square Prediction Error across a prior distribution for the model parameters. This dissertation uses the minimum BIMSPE criterion to find simulator designs that allow for good prediction of the simulator output. The predictive ability of the BIMSPE-optimal designs is compared to IMSPE-optimal designs and to space-filling designs using a simulation study. When computer simulators are able to describe the true mean of an associated physical experiment, physical observations can be used to calibrate the simulator model. Calibration is performed so that the simulator output is as "close as possible" to the mean physical response. This dissertation also uses the minimum BIMSPE criterion to find combined physical and simulator designs that allow for good prediction of the true mean physical response using the calibrated simulator. Using a simulation study, the predictive ability of the combined BIMSPE-optimal designs is compared to combined IMSPE-optimal designs and to classic physical and simulator designs that are used for prediction. This dissertation shows that, in both the simulator-only setting and the calibration setting, the classic designs perform much worse than the IMSPE- and BIMSPE-optimal designs.
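
From the abstract's description, the criterion can be written out as follows (our notation, a hedged reconstruction rather than the dissertation's): with the mean squared prediction error of the GaSP predictor at input x under model parameters theta,

```latex
\mathrm{IMSPE}(D;\theta) = \int_{\mathcal{X}}
    \mathbb{E}\!\left[\bigl(\hat{Y}(x; D, \theta) - Y(x)\bigr)^{2}\right] dx,
\qquad
\mathrm{BIMSPE}(D) = \int_{\Theta} \mathrm{IMSPE}(D;\theta)\,\pi(\theta)\,d\theta,
```

and the design criterion selects the design D minimizing BIMSPE(D), where pi(theta) is the prior on the model parameters.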

Committee:

Angela Dean, PhD (Advisor); Thomas Santner, PhD (Advisor); William Notz, PhD (Committee Member); Matthew Pratola, PhD (Committee Member)

Subjects:

Statistics

Keywords:

experimental design; computer experiments; prediction; calibration; computer simulator

Huang, Bo. Mining Vehicle Classifications from Archived Loop Detector Data
Master of Science, The Ohio State University, 2014, Electrical and Computer Engineering
Vehicle classification data are used in many transportation applications, including pavement design, environmental impact studies, traffic control, and traffic safety. Ohio has over 200 permanent count stations, supplemented by many more short-term count locations. Due to the high costs involved, the density of monitoring stations is still very low given the lane miles that are covered. This study leveraged the deployed detectors in the Columbus Metropolitan Freeway Management System (CMFMS) to collect and analyze classification data from critical freeways where the Ohio Department of Transportation has not been able to collect much classification data in the past due to site limitations. The CMFMS was deployed in an unconventional manner because it included an extensive fiber optic network, frontloading most of the communications costs, and rather than aggregating the data in the field, the detector stations sent all of the individual per-vehicle actuations (i.e., PVR data) to the traffic management center (TMC). The PVR data include the turn-on and turn-off time for every actuation at each detector at the given station. Our group has collected and archived all of the PVR data from the CMFMS for roughly a decade. The PVR data allow this study to reprocess the original actuations retroactively. As described in this thesis, the research undertook extensive diagnostics and cleaning to extract the vehicle classification data from detectors originally deployed for traffic operations. The work yielded length-based vehicle classification data from roughly 40 bi-directional miles of urban freeways in Columbus, Ohio over a continuous monitoring period of up to 10 years. The facilities span I-70, I-71, I-270, I-670, and SR-315, including the heavily congested inner-belt. Prior to this study, these facilities had either gone completely without vehicle classification or were only subject to infrequent, short-term counts.
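
Length-based classification from PVR actuations at a dual-loop speed trap follows a standard recipe, sketched below; the 6 m loop spacing, 1.8 m loop length, and class bins are illustrative assumptions, not the CMFMS configuration.

```python
SPACING_M, LOOP_LEN_M = 6.0, 1.8   # assumed trap geometry

def vehicle_length(on_up, off_up, on_down):
    """Speed from the upstream-to-downstream turn-on transition time,
    then effective length from the upstream on-time."""
    speed = SPACING_M / (on_down - on_up)          # m/s
    return speed * (off_up - on_up) - LOOP_LEN_M   # metres

def classify(length_m):
    if length_m < 8.5:
        return "short (passenger vehicle)"
    if length_m < 14.0:
        return "medium (single-unit truck)"
    return "long (combination truck)"

# turn-on upstream, turn-off upstream, turn-on downstream (seconds)
print(classify(vehicle_length(10.000, 10.250, 10.200)))   # ~5.7 m: short
```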

Committee:

Benjamin Coifman (Advisor); Keith Redmill (Committee Member)

Subjects:

Electrical Engineering; Engineering; Transportation

Keywords:

Loop detectors; detector mapping; unmatched transitions; transition time; loop detector calibration; pulse matching; vehicle length calculation; data quality evaluation

Alturki, Abdulrahman S. Principal Point Determination for Camera Calibration
Doctor of Philosophy (Ph.D.), University of Dayton, 2017, Electrical and Computer Engineering
Calibration is a fundamental task in computer vision, and it is critical for a growing number of applications. Identification of the principal point parameter is a significant step in calibration, because its accuracy strongly affects the other parameters and the overall accuracy of the calibration. Additionally, some camera properties require slightly different definitions of the principal point. This work explores several novel techniques for highly accurate principal point estimation, all of which utilize simple planar checkerboard targets. First, an improved and automated corner detection algorithm is described. Checkerboard image corner points are located as saddle points, and the Hough transform is applied to remove spurious points and group them into rows and columns. The lines formed from these rows and columns are used to identify vanishing points. Multiple vanishing points lie along horizon lines, and two different techniques based on horizon lines are described for estimating the principal point. It is also possible to identify the principal point using images of a pair of checkerboards, one behind the other, that are nominally perpendicular to the camera’s optical axis. This problem requires additional corner-point processing to separate the two checkerboards in the images, and corrections are developed to handle orientation errors such as small rotations and translations between the checkerboards and the image plane. Experimental results for these methods are presented, and their accuracy and possible applications are discussed.
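
One classical route from vanishing points to the principal point (the orthocenter of the triangle formed by three mutually orthogonal vanishing points) can be sketched as below; the thesis's horizon-line techniques differ in detail, so this is a hedged illustration of the geometry involved.

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of lines given in homogeneous form
    ax + by + c = 0 (one line per row)."""
    _, _, vt = np.linalg.svd(np.asarray(lines, float))
    v = vt[-1]
    return v[:2] / v[2]

def principal_point(v1, v2, v3):
    """Orthocenter of the triangle of three orthogonal vanishing
    points; each altitude passes through one vertex perpendicular to
    the opposite side."""
    def altitude(p, q, r):           # line through p, perpendicular to qr
        d = r - q
        return np.array([d[0], d[1], -d @ p])
    x = np.cross(altitude(v1, v2, v3), altitude(v2, v1, v3))
    return x[:2] / x[2]
```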

Committee:

John Loomis (Advisor)

Subjects:

Computer Science; Electrical Engineering

Keywords:

Camera Calibration; Computer Vision; Corner Detection; Hough Transform; Principal Point
