Search Results

(Total results 29)


  • 1. Aksu, Alper BENCH-TOP VALIDATION OF INTELLIGENT MOUTH GUARD

    Master of Science in Biomedical Engineering, Cleveland State University, 2013, Fenn College of Engineering

    Concussion is the signature athletics injury of the 21st century. Scientists are hard at work monitoring the effects of hard impacts on the human brain; however, existing tools and devices are inadequate to screen those effects. Hence, a new approach is required to accurately quantify peak values of head impacts and relate these values to clinical brain-health outcomes. A new head impact dosimeter, the Intelligent Mouth Guard (IMG), has been developed that can be conveniently located inside the mouth. In this study, the IMG printed circuit board (PCB), comprising four (4) high-quality shock-resistant sensors, was developed and implemented as a tri-axial impact analyzer in a mouthpiece. The bench-top validation process of the IMG was divided into theoretical uncertainty analysis of linear accelerometers, theoretical uncertainty analysis of angular rate sensors, bench-top uniaxial impact testing of linear accelerometers, and bench-top uniaxial static testing of angular rate sensors. This study also presents a method, based on National Bureau of Standards (NBS) practice, for analyzing measurement error for any component of a specialized electrical circuit and any type of data acquisition system. In the current application, the IMG PCB, used for linear acceleration, angular acceleration, and angular velocity measurements, has its sensor uncertainties quantified. The uncertainty model is branched into two parts: the bias error (B) and the random error (R). Expected measurement error types for PCB components (ADXL001 linear accelerometer, L3G4200D gyroscope) are quantified and their effects on the IMG system are computed. The uncertainty analysis presented here can serve as a guide for future in vitro and in vivo IMG validation tests. During bench-top testing, IMG linear accelerometers quantified peak linear acceleration with 98.2% accuracy and 98.0% precision. The IMG gyroscope quantified peak angular velocity with 97.0 (open full item for complete abstract)

    Committee: Adam Bartsch Ph.D. (Advisor); Murad Hizlan Ph.D. (Committee Member); Sridhar Ungarala Ph.D. (Committee Member); Majid Rashidi Ph.D. (Committee Member) Subjects: Automotive Engineering; Biomedical Engineering; Electrical Engineering; Engineering; Mechanical Engineering
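The bias/random decomposition in the abstract above follows standard measurement-uncertainty practice: elemental bias limits and random standard errors are each combined root-sum-square, then merged with a coverage factor. A minimal sketch (the function name and sample values are illustrative, not from the thesis):

```python
import math

def combined_uncertainty(bias_errors, random_errors, t=2.0):
    """Root-sum-square combination of elemental bias and random errors.

    bias_errors   : elemental systematic (bias) error limits, B_i
    random_errors : elemental random standard errors, S_i
    t             : coverage factor (~2 for 95% confidence, large samples)
    """
    B = math.sqrt(sum(b * b for b in bias_errors))    # overall bias limit
    S = math.sqrt(sum(s * s for s in random_errors))  # overall random error
    return math.sqrt(B * B + (t * S) ** 2)            # overall uncertainty

u = combined_uncertainty([0.5, 0.3], [0.2, 0.1], t=2.0)
```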
  • 2. Zhang, Chi Uncertainty Quantification Using Simulation-based and Simulation-free methods with Active Learning Approaches

    Doctor of Philosophy, The Ohio State University, 2022, Civil Engineering

    Uncertainty quantification is important in many engineering and scientific domains, as uncertainties of both aleatory and epistemic types are ubiquitous and inevitable, since complete knowledge cannot be achieved. The probability of failure quantifies the probability of a system failing to meet a specific performance requirement. It is a vital measure of performance when uncertainties are considered, and it can facilitate design optimization and decision making for critical infrastructure systems. The computational costs of uncertainty quantification are often prohibitive due to the nature of multi-query analysis and expensive numerical models. Surrogate models can be used to facilitate reliability analysis; Kriging is among the most popular surrogate models for reliability analysis due to its capability of providing uncertainty information. How best to utilize the simulation data to construct the Kriging model is a primary research topic in the reliability domain. This dissertation offers the following novel contributions to this research topic:

    • A novel methodology for adaptive Kriging reliability methods is proposed. It considers the global impact of adding new training points and focuses on reducing the error in the most effective manner.
    • An effective multi-fidelity reliability method is proposed. The information source and training points can be selected simultaneously to achieve optimal construction of the surrogate model.
    • A two-phase approach for reliability updating with adaptive Kriging is proposed. The error of the posterior failure probability introduced by the Kriging model is quantified.
    • The adaptive Kriging method is integrated with value-of-information analysis, and a knowledge-sharing scheme is developed to enhance training efficiency.

    While surrogate models such as Kriging substantially reduce the computational cost of multi-query analyses, they still require costly simulations of complex computational models.
(open full item for complete abstract)

    Committee: Abdollah Shafieezadeh (Advisor); Halil Sezen (Committee Member); Jieun Hur (Committee Member) Subjects: Civil Engineering
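The adaptive Kriging idea summarized above can be sketched with a toy one-dimensional limit state: a Gaussian-process surrogate is trained on a few evaluations, and the classical U learning function picks the Monte Carlo sample most likely to be misclassified. This is a generic AK-MCS-style sketch, not the dissertation's proposed global-impact method; all numbers and the limit state are illustrative:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential correlation between 1-D point sets."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xtr, ytr, Xte, ls=1.0, jitter=1e-6):
    """Posterior mean and standard deviation of a simple Kriging model."""
    K = rbf(Xtr, Xtr, ls) + jitter * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr, ls)
    Kinv = np.linalg.inv(K)
    mu = Ks @ (Kinv @ ytr)
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)  # diag(Ks Kinv Ks^T)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def g(x):                                  # limit state: failure when g(x) < 0
    return 1.5 - x ** 2

rng = np.random.default_rng(0)
X_mc = rng.normal(0.0, 1.0, 5000)          # Monte Carlo population
Xtr = np.array([-3.0, -1.5, -0.75, 0.0, 0.75, 1.5, 3.0])  # initial design
for _ in range(10):                        # active-learning loop
    mu, sd = gp_posterior(Xtr, g(Xtr), X_mc)
    U = np.abs(mu) / sd                    # U learning function
    Xtr = np.append(Xtr, X_mc[np.argmin(U)])  # point most at risk of misclassification

mu, _ = gp_posterior(Xtr, g(Xtr), X_mc)
pf_hat = float(np.mean(mu < 0.0))          # surrogate-based failure probability
pf_true = float(np.mean(g(X_mc) < 0.0))    # brute-force reference on same samples
```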
  • 3. Alsaqqa, Obada Fuzzy Time-Delay Model in Fault-Tree Analysis for Critical Path Method

    Master of Science, The Ohio State University, 2015, Civil Engineering

    Construction projects are always expected to be delayed, but the likelihood of a delay varies between projects because of the particular circumstances and schedule of each project. It is usually left to the scheduler to estimate these future circumstances when preparing the schedule and determining the duration of the project. However, the Critical Path Method (CPM) schedule does not indicate the factors that are assumed to contribute to the likelihood of delay. These deterministic durations, apart from the relationships between the activities, are the dominant contributor to the critical path in the CPM calculation. Risk management focuses on the processes that are considered critical, although delay may emerge from non-critical paths. In this study, a new fuzzy model is proposed to provide a subjective assessment of the likelihood of delay for activities in different periods. Using this model, the scheduler's assessments of the likelihood of delay for each activity can be combined to determine the likelihood of a project delay. This process utilizes fuzzy logic and fault-tree analysis and is then combined with the CPM schedule of the project. The result is a fuzzy fault-tree that shows the potential delay of the project and its contributing paths. Applying this method to a sample project, the results show that the risk of delay comes not only from critical paths but also from non-critical paths. Consequently, the CPM schedule duration can be reevaluated such that the project can be rescheduled to account for the new findings and, at the very least, the risk of delay can be accounted for.

    Committee: Fabian Tan (Advisor); Tarunjit Butalia (Committee Member); Rachel Kajfez (Committee Member) Subjects: Civil Engineering; Engineering; Industrial Engineering; Logic; Management; Mathematics; Operations Research
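A fault-tree OR gate over activity delay events, evaluated on triangular fuzzy probabilities, can be sketched as below. The component-wise triangular approximation and the sample likelihoods are illustrative assumptions, not the thesis's calibrated model:

```python
def fuzzy_or(events):
    """OR gate on triangular fuzzy probabilities (a, m, b) with a <= m <= b.

    Applies 1 - prod(1 - p) component-wise, a common triangular
    approximation for fault-tree gates (valid because the OR formula
    is monotone increasing in every input probability).
    """
    out = [1.0, 1.0, 1.0]
    for (a, m, b) in events:
        out[0] *= (1.0 - a)
        out[1] *= (1.0 - m)
        out[2] *= (1.0 - b)
    return (1.0 - out[0], 1.0 - out[1], 1.0 - out[2])

# activities on one path, each with a fuzzy delay likelihood
path = [(0.05, 0.10, 0.20), (0.02, 0.05, 0.10)]
top = fuzzy_or(path)   # fuzzy likelihood that the path is delayed
```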
  • 4. Srinivasan, Soundararajan Integrating computational auditory scene analysis and automatic speech recognition

    Doctor of Philosophy, The Ohio State University, 2006, Biomedical Engineering

    Speech perception studies indicate that the robustness of human speech recognition is primarily due to our ability to segregate a target sound source from other interferences. This perceptual process of auditory scene analysis (ASA) is of two types, primitive and schema-driven. This dissertation investigates several aspects of integrating computational ASA (CASA) and automatic speech recognition (ASR). While bottom-up CASA is used as a front-end for ASR to improve its robustness, ASR is used to provide top-down information to enhance primitive segregation. Listeners are able to restore masked phonemes by utilizing lexical context. We present a schema-based model for phonemic restoration. The model employs missing-data ASR to decode masked speech and activates word templates via dynamic time warping. A systematic evaluation shows that the model restores both voiced and unvoiced phonemes with high spectral quality. Missing-data ASR requires a binary mask from bottom-up CASA that identifies speech-dominant time-frequency regions of a noisy mixture. We propose a two-pass system that performs segregation and recognition in tandem. First, an n-best lattice, consistent with bottom-up speech separation, is generated. Second, the lattice is re-scored using a model-based hypothesis test to improve mask estimation and recognition accuracy concurrently. By combining CASA and ASR, we present a model that simulates listeners' ability to attend to a target speaker when speech is degraded by energetic and informational masking. Missing-data ASR is used to account for energetic masking, and the output degradation of CASA is used to model informational masking. The model successfully simulates several quantitative aspects of listener performance. The degradation in the output of CASA-based front-ends leads to uncertain ASR inputs. We estimate feature uncertainties in the spectral domain and transform them into the cepstral domain via nonlinear regression.
The estimated uncertainty substantially im (open full item for complete abstract)

    Committee: DeLiang Wang (Advisor) Subjects:
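The binary mask mentioned above marks time-frequency units where the target speech dominates the interference. A minimal sketch of an ideal binary mask (the toy energies and the 0 dB local criterion are assumptions for illustration):

```python
import numpy as np

def ideal_binary_mask(speech_energy, noise_energy, lc_db=0.0):
    """1 where the local speech-to-noise ratio exceeds lc_db, else 0."""
    snr_db = 10.0 * np.log10(speech_energy / np.maximum(noise_energy, 1e-12))
    return (snr_db > lc_db).astype(int)

# toy 2x2 time-frequency energy grids (rows = time frames, cols = channels)
speech = np.array([[10.0, 0.1], [5.0, 2.0]])
noise  = np.array([[1.0,  1.0], [1.0, 4.0]])
mask = ideal_binary_mask(speech, noise)   # speech-dominant units
```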
  • 5. Shang, Shen Stochastic Material Characterization of Heterogeneous Media with Randomly Distributed Material Properties

    Doctor of Philosophy, University of Akron, 2012, Civil Engineering

    The characterization of heterogeneous media with randomly distributed material properties has long been a challenging problem in computational mechanics. In reality, no material is homogeneous and deterministic in nature, and it is well known that randomness in the microstructures and properties of materials can significantly influence the scatter of structural response at larger scales. Therefore, stochastic characterization of heterogeneous materials has increasingly received attention in various engineering and science fields. Two major challenges need to be addressed: 1) developing an efficient modeling technique to discretize the material uncertainty in the stochastic domain, and 2) developing a robust and general inverse identification computational framework that can estimate parameters related to material uncertainties. In this dissertation, both challenges are addressed by proposing a robust inverse analysis framework that can estimate parameters of material constitutive models based on a set of limited global boundary measurements and combining the framework with a general stochastic finite element analysis tool. Finally, a new stochastic inverse analysis framework is proposed, with the novel capability of modeling effects of spatial variability of both linear and nonlinear material properties on macroscopic material and structural response. By inversely identifying statistical parameters (e.g. spatial mean, spatial variance, spatial correlation length, and random variables) related to the spatial randomness of material properties, it allows for generating statistically equivalent realizations of random distributions of linear and nonlinear material properties and their application to the development of probabilistic structural models. First, a robust inverse identification framework, called the Self-Optimizing Inverse Method (Self-OPTIM), has been developed.
Unli (open full item for complete abstract)

    Committee: Gun Jin Yun Dr. (Advisor); Wieslaw Binienda Dr. (Committee Member); Ernian Pan Dr. (Committee Member); Xiaosheng Gao Dr. (Committee Member); Kevin Kreider Dr. (Committee Member) Subjects: Civil Engineering; Engineering; Experiments; Materials Science; Mathematics
  • 6. Walker, David Effect of Thruster Uncertainties on a Deep Space Mission Using Electric Propulsion

    Master of Sciences (Engineering), Case Western Reserve University, 2024, EMC - Aerospace Engineering

    As deep space missions expand in scope and distance, the efficiency of propulsion systems becomes paramount. This thesis analyzes the impact of small measurement errors in the thrust profiles of Hall-effect thrusters, a common type of electric propulsor known for increased efficiency compared to traditional chemical propulsion despite lower thrust. Due to their prolonged operational times, these errors compound, affecting trajectory and mission success. Through analysis of the AEPS Hall-effect thruster prototype, designed for the Gateway space station of NASA's Artemis Program, using curve fitting and a Monte Carlo simulation, we assess the effects of these errors on an example mission to Alpha Centauri. Results show that plasma dynamics cause the majority of the error but produce minimal trajectory deviation and propellant loss. This reinforces electric propulsion's suitability for long-distance space travel. This work informs spacecraft mission design, providing valuable insights into fuel efficiency and system selection, and builds upon prior research at NASA Glenn Research Center's Electric Propulsion and Power Laboratory.

    Committee: Bryan Schmidt (Committee Chair); John Yim (Other); Richard Bachmann (Committee Member); Paul Barnhart (Committee Member) Subjects: Aerospace Engineering; Engineering; Mechanical Engineering
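The Monte Carlo error-propagation approach described above can be sketched in miniature: perturb the nominal thrust with a small relative measurement error and observe the dispersion of the resulting delta-v. The constant-mass model and all numbers below are illustrative assumptions, not the AEPS analysis:

```python
import numpy as np

def delta_v(thrust_n, mass_kg, burn_s):
    """Constant-mass approximation: dv = (F/m) * t (illustrative only)."""
    return thrust_n / mass_kg * burn_s

rng = np.random.default_rng(42)
n = 100_000
nominal_thrust = 0.6                                         # N, Hall-thruster order
thrust = nominal_thrust * (1.0 + rng.normal(0.0, 0.01, n))   # 1% thrust error
dv = delta_v(thrust, mass_kg=1200.0, burn_s=3.0e7)           # ~1-year burn
spread = float(dv.std() / dv.mean())                         # relative dv dispersion
```

A 1% thrust error maps directly to roughly a 1% delta-v dispersion in this linear toy model; the thesis's point is that even small dispersions matter once they accumulate over very long burns.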
  • 7. Rownak, Md Ragib Uncertainty in Multi-Objective Optimization: Weighted Entropy and Partition-Based Uncertainty Reduction

    Master of Science, The Ohio State University, 2023, Mechanical Engineering

    The optimal solutions of a system designed to satisfy multiple objectives, i.e. the selected combinations of decision variables, are found on the non-dominated Pareto fronts (NDPFs). In this thesis, multi-objective optimization problems and the concept of NDPFs for such problems are discussed. A case study of a practical application (development of a sensor selection method for online monitoring (OLM) systems) is presented to study a multi-objective optimization problem and verify the effectiveness and performance of the sensor selection method in generating the optimal cost-benefit sensor deployment scheme for OLM systems for advanced nuclear reactors. In the study, several important capabilities of the OLM system, such as the capability of observing various states of the target system, the capability of fault detection and discrimination, and the capability of fault prognostics, as well as various characteristics of sensors, such as their functionalities, integrities, reliabilities, and costs, are taken into account when generating the optimal sensor deployment scheme. As the solver of the multi-objective optimization problem, the Non-dominated Sorting Genetic Algorithm (NSGA-II) is used, which outputs a series of sensor deployment solutions, including the numbers, types, and deployment positions of the sensors required by the OLM system. However, uncertainty is not considered in the OLM systems case study, even though decision variables are often subject to uncertainty, which can significantly affect the NDPF of an optimization problem. How uncertainty propagates and affects the distribution of the NDPFs is studied in this thesis to understand the significance of uncertainty in the decision variables. The required conditions and an analytical expression for the probability that a solution will be in the NDPF are developed for multiple objective functions with multiple decision variables.
Measure (open full item for complete abstract)

    Committee: Tunc Aldemir (Committee Member); Carol Smidts (Advisor) Subjects: Mechanical Engineering
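A non-dominated Pareto front (NDPF) for a minimization problem can be extracted with a brute-force dominance check, sketched below. NSGA-II uses a faster non-dominated sorting procedure; this toy version only illustrates the dominance relation itself:

```python
def pareto_front(points):
    """Return the non-dominated subset, minimizing every objective."""
    def dominates(q, p):
        # q dominates p if q is no worse in every objective and differs,
        # i.e. strictly better in at least one objective
        return all(qi <= pi for qi, pi in zip(q, p)) and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(pts)   # (3,4) and (5,5) are dominated
```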
  • 8. Odunola, Tolulope Towards Robust Project Design: Avoiding Pitfalls in Cost Benefit Analysis amidst Climate Uncertainty

    MS, University of Cincinnati, 2022, Engineering and Applied Science: Environmental Engineering

    The world's infrastructure needs have been rising steadily as different countries battle several socio-economic factors that place heavy demand on infrastructure. The urgent need for reliable infrastructure is especially pronounced in developing nations, mostly concentrated in Africa, Asia, Latin America, and the Caribbean. Project planners in international development and foreign aid agencies often use Cost Benefit Analysis (CBA) for the economic evaluation of proposed projects. However, several concerns have been raised about the danger of making important investment decisions based on a single metric, the Economic Rate of Return (ERR), which does not sufficiently account for project uncertainties, including climate change uncertainty. This study demonstrates the sensitivity of a project's ERR to climate change by conducting a climate risk assessment for a prototypical project and illustrating how variations in climate affect the computed ERR. Many studies have explored how to integrate uncertainties into the project development process, including attempts at Robust Cost Benefit Analysis and the use of techniques for Decision Making under Deep Uncertainty (DMDU) and Eco-engineering Decision Scaling (EEDS). This study lends a voice to the call for more robust approaches to project design that help guard against failure in the presence of a myriad of uncertainties. Recommendations on a robust approach to project design, based on the case study illustration, are also presented.

    Committee: Patrick Ray Ph.D. (Committee Member); Drew McAvoy Ph.D. (Committee Member); Casey Brown PhD (Committee Member) Subjects: Environmental Engineering
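The sensitivity of the ERR to climate-driven benefit changes can be illustrated by recomputing the internal rate of return under a scenario that scales project benefits. The cash flows and the 20% benefit reduction below are stylized assumptions, not the thesis's case study:

```python
def npv(rate, cashflows):
    """Net present value of cashflows indexed by year (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.9, hi=10.0, tol=1e-8):
    """Internal (economic) rate of return by bisection on NPV.

    Assumes a conventional profile (one sign change), so NPV is
    monotone decreasing in the rate over the bracket.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cashflows) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

base = [-100.0] + [20.0] * 10            # stylized project: cost, then benefits
dry  = [-100.0] + [20.0 * 0.8] * 10      # climate scenario cuts benefits by 20%
err_base, err_dry = irr(base), irr(dry)  # a 20% benefit cut moves the ERR a lot
```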
  • 9. Horiguchi, Akira Bayesian Additive Regression Trees: Sensitivity Analysis and Multiobjective Optimization

    Doctor of Philosophy, The Ohio State University, 2020, Statistics

    As computing power grows, computer experiments have become an increasingly popular approach to study the relationship between the inputs and resulting outputs of a computational model. The most popular statistical model in computer experiments is the Gaussian Process model. However, the Bayesian Additive Regression Trees (BART) model can better handle the explosive increase in the quantity of available data. This thesis considers two problems in the design and analysis of computer experiments. The first problem is computing sensitivity indices for BART. The second problem is estimating the Pareto Front and Set of a multiobjective computer simulator using BART. To solve the first problem, we derive closed-form expressions of sensitivity indices for BART and establish a relationship between these indices and the count heuristic commonly used to measure input variable activity in BART. These expressions are exact and do not require integral approximation methods. We then empirically assess the performance of these BART-based sensitivity indices in capturing a function's sensitivity indices on several test functions. We compare this performance to that of the count heuristic and of the Treed Gaussian Process model whose sensitivity indices can be approximated. For the comparison to counts, we propose a novel ranking method suited for this input variable activity setting. To solve the second problem, we introduce a BART model with multidimensional outputs and provide an algorithm to find the exact Pareto Front and Set of the function that results from a trained multiple-output BART model. We also introduce two approaches of quantifying the uncertainty of these estimates. We then empirically compare these two uncertainty quantification approaches to each other on several test functions. For this comparison, we propose two metrics that capture certain desirable properties of a Pareto Front or Set estimate.

    Committee: Matthew Pratola (Advisor); Thomas Santner (Advisor); Radu Herbei (Committee Member); Roshan Joseph (Committee Member); Mark Pitt (Other) Subjects: Statistics
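The closed-form BART sensitivity indices above are specific to the trained model, but the Sobol indices they target can be estimated generically by pick-freeze Monte Carlo. A sketch on an additive test function with known first-order indices (0.2 and 0.8); the estimator follows the common Saltelli form, and everything here is illustrative rather than the thesis's derivation:

```python
import numpy as np

def first_order_sobol(f, d, n=200_000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for f defined on the unit hypercube [0, 1]^d."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                          # freeze coordinate i from A
        S[i] = np.mean(fA * (f(ABi) - fB)) / var     # Saltelli-style estimator
    return S

def f(X):
    # additive test function: V1 = 1/12, V2 = 4/12, so S = [0.2, 0.8]
    return X[:, 0] + 2.0 * X[:, 1]

S = first_order_sobol(f, 2)
```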
  • 10. Yuan, Mengfei Machine Learning-Based Reduced-Order Modeling and Uncertainty Quantification for "Structure-Property" Relations for ICME Applications

    Doctor of Philosophy, The Ohio State University, 2019, Materials Science and Engineering

    The design framework for complex materials property and processing models within Integrated Computational Materials Engineering (ICME) is often hindered by expensive computational cost. The ultimate goal of ICME is to develop data-driven, materials-based tools for the concurrent optimization of material systems while improving the deployment of innovative materials in real-world products. Reduced-order, fast-acting tools are essential for both bottom-up property prediction and the top-down model calibrations employed in modern material design applications. Additionally, reduced-order modeling requires formal uncertainty quantification (UQ) from the processing stages all the way down to manufacturing and component design. The goal of this thesis is to introduce a machine learning-based, reduced-order crystal plasticity model for face-centered cubic (FCC) polycrystalline materials. The implementation was founded upon Open Citrination, an open-source materials informatics platform. Case studies for both bottom-up property prediction and top-down optimization of model parameters are demonstrated within this work. The proposed reduced-order model is used to correctly approximate the plastic stress-strain curves and the texture evolution under a range of deformation conditions and strain rates specific to a material. The inverted pathway is applied to quickly calibrate the optimal crystal plasticity hardening parameters given the macroscale stress-strain responses and evolving texture under certain processing conditions. A visco-plastic self-consistent (VPSC) method is used to create the training and validation datasets. The description of the material texture is given through a dimension reduction technique implemented by principal component analysis (PCA). The microstructures of engineering materials typically involve an intricate hierarchical crystallography, morphology, and composition.
Therefore, an accurate, virtual representation (open full item for complete abstract)

    Committee: Stephen Niezgoda (Advisor); Yunzhi Wang (Committee Member); Michael Groeber (Committee Member); Maryam Ghazisaeidi (Committee Member) Subjects: Materials Science
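The PCA-based dimension reduction of texture descriptors mentioned above can be sketched via the SVD of the centered data matrix; the synthetic rank-2 data below stand in for texture features and are purely illustrative:

```python
import numpy as np

def pca(X, k):
    """Project rows of X onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                          # reduced-order representation
    explained = (s[:k] ** 2).sum() / (s ** 2).sum() # variance fraction captured
    return scores, explained

rng = np.random.default_rng(3)
# 100 samples of a 20-dimensional descriptor that secretly lives on a 2-D subspace
X = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 20))
scores, frac = pca(X, 2)   # two components recover essentially all variance
```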
  • 11. Savka, Andriy TIMING OF UNCERTAINTY SHOCKS AND FIRMS' INVESTMENT DECISIONS: MIXED FREQUENCY ANALYSIS

    MA, Kent State University, 2018, College of Business and Entrepreneurship, Ambassador Crawford / Department of Economics

    Economic uncertainty and its effect on economic activity have become a popular topic in recent economic literature. Although the impact of economic uncertainty on macroeconomic indicators receives most of the attention, its influence on firm-level decisions remains less clear and requires more study. Recent developments in econometric methodology allow analysis of data sampled at different frequencies and support conclusions about the specific effects of uncertainty based on the timing of shock occurrence. Building on the available knowledge of uncertainty measures, our empirical study investigates the effect of macroeconomic and financial uncertainty on firms' decisions regarding the allocation of financial resources toward capital expenditures. Employing a mixed-frequency vector autoregressive (MF-VAR) model, we examine how firms' responses differ with the timing of uncertainty shocks. Firm-level data are aggregated by sector classification, which allows us to track industry-specific variations in responses and to interpret them in terms of industry competitiveness and other industry characteristics. The thesis is organized as follows. A literature review of relevant recent publications is followed by a discussion of the analytical framework, including the MF-VAR model. The report of the empirical analysis includes a description of the data sample and the main empirical results. The thesis concludes with a summary of findings and suggestions for research questions to further understanding of the topic.

    Committee: Dandan Liu (Advisor); Michael Ellis (Committee Member); Jeremiah Harris (Committee Member) Subjects: Economics
  • 12. Fageehi, Yahya SIMULATION-BASED OPTIMIZATION FOR COMPLEX SYSTEMS WITH SUPPLY AND DEMAND UNCERTAINTY

    Doctor of Philosophy, University of Akron, 2018, Engineering

    The Hunger Relief Food Bank is a non-profit organization collecting, organizing, and channeling food to front-line agencies that share its mission. Food banks in general act as warehouse depots reliant on donations, distributing food to achieve their goal of ending hunger. The biggest challenge faced by food banks, besides matching the supply of funds and donated food with demand, is managing and improving operations while coping with uncertainty in supply and demand. Critical processes and logistical issues are the main foci for food bank performance, as integration is yet to be achieved. To address this, the researcher developed several data-analytical models (including descriptive, explanatory, and forecasting [predictive] models) to provide deeper insight into critical operations and non-traditional supply chain issues influencing food bank performance; first, to fully understand the system dynamics of food bank operations; then, to overcome the uncertainty associated with the system; and finally, to manage and improve operations. Through data-mining techniques, the system dynamics of food banks were characterized and useful information was generated. Understanding the patterns and availability of donated food and the frequency of orders helps food bank organizations effectively plan and manage the storage and equitable distribution of food in a sustainable way. Moreover, we explore several predictive models to estimate the quantities of both in-kind food donations and demand, to be used to overcome the supply and demand uncertainty experienced by food banks. In addition, Lean Six Sigma methodology was used as a framework to identify opportunities for improvement while eliminating waste associated with food bank processes. Simulation-based optimization (prescriptive) models were implemented to investigate operations and to reengineer processes.
Similarly, incorporating uncertainty in the developed system, enabled realistic system analysis and established optimum sc (open full item for complete abstract)

    Committee: Shengyong Wang (Advisor); Naw Mimoto (Committee Member); Yilmaz Sozer (Committee Member); Ling Chen (Committee Member) Subjects: Evolution and Development; Industrial Engineering; Mathematics; Mechanical Engineering
  • 13. Biswas, Ayan Uncertainty and Error Analysis in the Visualization of Multidimensional and Ensemble Data Sets

    Doctor of Philosophy, The Ohio State University, 2016, Computer Science and Engineering

    Analysis and quantification of uncertainty have become an integral part of modern data analysis and visualization frameworks. Varied amounts of uncertainty are introduced throughout the different stages of the visualization pipeline. When visualizing scientific data sets, it is now imperative to provide an estimate of the associated uncertainty so that users can readily assess the reliability of the visualization tools. Quantification of uncertainty is non-trivial for scalar data sets, and this problem becomes even harder when handling multivariate and vector data sets. In this dissertation, several techniques are presented that identify, utilize, and quantify uncertainty for multi-dimensional data sets. These techniques can be broadly classified into two groups: a) analysis of the existence of relationships and features, and b) identification and analysis of error in flow visualization tools. The first category of studies uses multivariate and ensemble datasets to analyze relationship uncertainties. The second category primarily uses vector fields to demonstrate streamline and stream-surface error analysis. In the analysis stage, we first present an information-theoretic framework for exploring uncertainty in the relationships of multivariate datasets. We show that, in a multivariate system, variables can show interdependence on each other, and an information-theoretic distance can be effectively used to find a hierarchical grouping of these variables. Using information content as the importance measure, salient variables are identified to start the variable exploration process. Specific mutual information is used for classifying the isosurfaces of one variable such that they reveal uncertainty regarding the other selected variables. Feedback from ocean scientists establishes the superiority of this system over existing techniques.
From multivariate relationships, next we discuss the uncertainty in the rel (open full item for complete abstract)

    Committee: Han-Wei Shen Dr. (Advisor); Raghu Machiraju Dr. (Committee Member); Huamin Wang Dr. (Committee Member) Subjects: Computer Engineering; Computer Science
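An information-theoretic distance between variables, of the kind used above for hierarchical grouping, can be sketched with histogram estimates of mutual information. The specific normalization (1 - I/H(X,Y), a normalized variation of information) and the bin count are assumptions, not necessarily the dissertation's definition:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly unnormalized-free) pmf array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mi_and_distance(x, y, bins=16):
    """Histogram estimates of I(X;Y) and D = 1 - I(X;Y)/H(X,Y),
    so D is near 0 for strongly dependent pairs and near 1 for
    independent pairs."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    H_xy = entropy(pxy.ravel())
    I = entropy(px) + entropy(py) - H_xy
    return I, 1.0 - I / H_xy

rng = np.random.default_rng(7)
x = rng.standard_normal(20_000)
I_dep, D_dep = mi_and_distance(x, x + 0.1 * rng.standard_normal(20_000))
I_ind, D_ind = mi_and_distance(x, rng.standard_normal(20_000))
```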
  • 14. Kalathil, Robins Assessment of Uncertainty in Core Body Temperature due to Variability in Tissue Parameters

    MS, University of Cincinnati, 2016, Engineering and Applied Science: Mechanical Engineering

    Background and objective. Firefighters often perform strenuous muscular activity in hot and hostile environments while wearing heavy personal protective equipment (PPE). The combination of these factors (stressors) during firefighting leads to a buildup of heat stress and, consequently, elevates body temperature. This causes several adverse effects on the firefighter, including muscular fatigue and increased cardiovascular strain. Therefore, in order to avoid adverse health conditions due to uncompensable heat stress, the prediction and monitoring of the thermal response of firefighters is critical. Computational analysis can be used as an effective tool for predicting the thermal response, as such a technique allows testing for variability of input parameters such as tissue properties and environmental conditions. However, numerical prediction of the thermal response of a firefighter requires a reasonable estimate of input parameters. The variability associated with these parameters needs to be evaluated in the numerical study to obtain reliable results. Specifically, tissue properties are known to vary between individuals and to influence the prediction of thermal response. Further, measurement of the tissue properties of each firefighter is not practical. Therefore, in this study, we developed a whole-body computational model to evaluate the effect of variability (uncertainty) in tissue parameters on the thermal response of a firefighter during firefighting. Methods. Modifications were made to an existing human whole-body computational model, developed in our lab, to conduct transient thermal analysis for a firefighting scenario. In conjunction with nominal (baseline) tissue parameters obtained from the literature and physiologic conditions from a firefighting drill, the Pennes bioheat and energy balance equations were solved to obtain the core body temperature, Tc_N, of a firefighter.
Subsequently, the uncertainty, u, in core body tempe (open full item for complete abstract)

    Committee: Rupak Banerjee Ph.D P.E. (Committee Chair); Amit Bhattacharya Ph.D. (Committee Member); Jay Kim Ph.D. (Committee Member) Subjects: Engineering
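Pennes' bioheat equation, which the study above solves for core body temperature, can be sketched with a 1-D explicit finite-difference scheme. The tissue properties, boundary conditions, and grid below are order-of-magnitude assumptions, not the whole-body model's calibrated values:

```python
import numpy as np

def pennes_1d(T0, dx, dt, steps, k=0.5, rho=1050.0, c=3600.0,
              w=0.0005, rho_b=1050.0, c_b=3800.0, T_art=37.0, q_met=700.0):
    """Explicit finite-difference integration of Pennes' bioheat equation:
    rho*c*dT/dt = k*d2T/dx2 + w*rho_b*c_b*(T_art - T) + q_met.
    Perfusion pulls tissue toward arterial temperature; metabolism heats it.
    """
    T = T0.copy()
    for _ in range(steps):
        d2T = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx ** 2
        d2T[0] = d2T[-1] = 0.0        # insulated ends (toy boundary condition)
        T += dt / (rho * c) * (k * d2T + w * rho_b * c_b * (T_art - T) + q_met)
    return T

# 50 nodes of tissue starting at arterial temperature; 600 s of metabolic heating
T = pennes_1d(np.full(50, 37.0), dx=0.002, dt=1.0, steps=600)
```

With these values the explicit scheme is stable (dt*k/(rho*c*dx^2) is well under 0.5), and the metabolic source drives a fraction-of-a-degree rise that perfusion eventually caps.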
  • 15. Chen, Chun-Ming Data Summarization for Large Time-varying Flow Visualization and Analysis

    Doctor of Philosophy, The Ohio State University, 2016, Computer Science and Engineering

    The rapid growth of computing power has expedited scientific simulations, which can now generate data in unprecedented quality and quantity. However, this advancement has not been mirrored in I/O performance, so scientific research faces great challenges in visualizing and analyzing large-scale simulation results. Among areas of scientific research, fluid flow analysis plays an important role in many disciplines, such as aerospace, climate modeling, and medical applications. The data-intensive computation required for fluid flow visualization makes it difficult to devise efficient algorithms and frameworks for flow analysis. First, to analyze a time-varying flow field, pathline visualization is typically used to reveal particle trajectories in the flow. Pathline computation, however, has an irregular data access pattern that complicates out-of-core computation when the flow data are too large to fit in main memory; strategies for modeling the access pattern and improving spatial and temporal data locality are needed. Second, to avoid tremendous I/O latency, simulated flow fields are typically down-sampled when they are stored, which inevitably affects the accuracy of the derived pathlines. Error reduction and modeling therefore become important to enable uncertainty visualization for better decision making. This dissertation addresses the above challenges through data summarization approaches that efficiently process large data into succinct representations to facilitate flow analysis and visualization. First, a graph modeling approach is employed to encode the data access pattern of pathline computation, from which a cache-oblivious file layout algorithm and a work scheduling algorithm are proposed to optimize disk caching during out-of-core pathline visualization. Second, an incremental algorithm is devised that fits streaming time series of flow fields to higher-order polynomials and estimates errors in a compact distribution model. 
Th (open full item for complete abstract)
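The core idea of the second contribution — summarizing a streaming time series into polynomial coefficients without storing the samples — can be sketched as an incremental least-squares fit that only accumulates normal-equation sums. This is a minimal illustration of the general technique, not the dissertation's exact algorithm; the class and variable names are hypothetical.

```python
import numpy as np

class IncrementalPolyFit:
    """Fit a degree-d polynomial to a streaming time series by accumulating
    the normal-equation sums; memory is O(d^2) regardless of stream length."""
    def __init__(self, degree):
        self.d = degree
        self.XtX = np.zeros((degree + 1, degree + 1))  # sum of phi * phi^T
        self.Xty = np.zeros(degree + 1)                # sum of phi * y

    def update(self, t, y):
        # Basis vector [1, t, t^2, ..., t^d] for this sample.
        phi = np.array([t ** k for k in range(self.d + 1)])
        self.XtX += np.outer(phi, phi)
        self.Xty += phi * y

    def coeffs(self):
        # Solve the accumulated normal equations for the coefficients.
        return np.linalg.solve(self.XtX, self.Xty)

# Stream samples of y = 2 + 3t one at a time; no sample is retained.
fit = IncrementalPolyFit(degree=1)
for t in np.linspace(0.0, 1.0, 50):
    fit.update(t, 2.0 + 3.0 * t)
c = fit.coeffs()  # recovers [2, 3] up to floating-point error
```

The residuals of such a fit could then be summarized in a compact error distribution, which is what enables the uncertainty visualization the abstract describes.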

    Committee: Han-Wei Shen (Advisor); Rephael Wenger (Committee Member); Jen-Ping Chen (Committee Member) Subjects: Computer Science
  • 16. Carper, Jayme Verification and Validation of a Transient Heat Exchanger Model

    Master of Science in Engineering (MSEgr), Wright State University, 2015, Renewable and Clean Energy

    A statistics-based verification and validation process is applied to the transient modeling of a shell-and-tube heat exchanger. A generic model of a heat exchanger was developed from first principles as a sub-system of a larger thermal system model. This model was originally created without any experimental data, as such data were not readily available. To provide the data necessary to apply the validation process, a thermal emulator was designed and built that allowed control of all system inputs to the heat exchanger while also providing the instrumentation to record all required data. A wide test matrix was chosen to fully encompass the expected operational envelope of the heat exchanger. The focus during collection of experimental data was minimizing uncertainty, as these uncertainties are amplified when propagated through the validation process. The validation process encompasses sensitivity and uncertainty analyses, uncertainty propagation, verification, and validation. Once these steps were completed using a set of non-ideal experimental data, the uncertainty in the transient heat exchanger model was quantified. This manuscript proposes a way to complete the validation process without replicate data sets by utilizing known information about the physical process. At the completion of the process, both uncertainties and model-form error are quantified for the system outputs, and a statistical validation metric is applied. These outputs help to define whether or not the model captures the physical process to a satisfactory degree, while also highlighting avenues for improvement if the uncertainty is deemed too large for the intended application.
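The uncertainty-propagation step described above can be illustrated with a simple Monte Carlo sweep through a textbook steady-state effectiveness-NTU model of a counterflow heat exchanger. All means and standard deviations below are illustrative assumptions, not values from the thesis, and the thesis's actual model is transient and more detailed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000  # number of Monte Carlo samples

# Hypothetical input uncertainties (normal distributions; illustrative only).
ua     = rng.normal(500.0, 25.0, n)   # overall conductance UA, W/K
c_min  = rng.normal(200.0, 5.0, n)    # minimum capacity rate, W/K
c_max  = rng.normal(400.0, 10.0, n)   # maximum capacity rate, W/K
t_hot  = rng.normal(80.0, 0.5, n)     # hot-side inlet temperature, degC
t_cold = rng.normal(20.0, 0.5, n)     # cold-side inlet temperature, degC

# Counterflow effectiveness-NTU relation (standard textbook form, Cr < 1).
ntu = ua / c_min
cr  = c_min / c_max
eps = (1.0 - np.exp(-ntu * (1.0 - cr))) / (1.0 - cr * np.exp(-ntu * (1.0 - cr)))

# Propagated output: heat duty and its uncertainty.
q = eps * c_min * (t_hot - t_cold)    # W
q_mean, q_std = q.mean(), q.std()
```

Comparing the propagated output distribution against measured data (with its own uncertainty) is what the statistical validation metric in the abstract formalizes.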

    Committee: Rory Roberts Ph.D. (Advisor); Scott Thomas Ph.D. (Committee Member); J. Mitch Wolff Ph.D. (Committee Member) Subjects: Engineering; Mechanical Engineering
  • 17. Kim, Sei Jin Three Essays on the Implications of Environmental Policy on Nutrient Outputs in Agricultural Watersheds and the Heterogeneous Global Timber Model with Uncertainty Analysis

    Doctor of Philosophy, The Ohio State University, 2015, Agricultural, Environmental and Developmental Economics

    This dissertation consists of three chapters: the implications of environmental policy on nutrient outputs in agricultural watersheds; an assessment of the effects of global wood biomass demand projections on forests using the Global Timber Model (GTM) with heterogeneous products in the forestry sector; and an analysis of whether forest-based bioenergy is carbon neutral, using Monte Carlo analysis with the GTM. The first chapter examines whether federally sponsored voluntary environmental programs to reduce phosphorus pollution from agriculture have had any impact on water quality outcomes. Using daily observations on nutrient emissions taken over 37 years in two Great Lakes tributaries, we estimate an econometric model of phosphorus emissions. Phosphorus emissions are the most important contributor to harmful algal blooms, which have recently caused significant health concerns. Our results indicate that these voluntary programs have had very little effect on phosphorus outputs. In contrast, we show that an input tax could be effective in reducing phosphorus pollution and, consequently, the likelihood of future harmful algal blooms. The second chapter uses the GTM to analyze global biomass demand projection scenarios. The existing GTM literature assumes a single homogeneous wood product and therefore misses the implications of diverse uses of forests. In this chapter, the model maximizes the present value of net social welfare derived from harvesting and managing the world's forests and assumes that the timber market consists of two heterogeneous goods: saw-timber and pulpwood. A functioning market for cellulosic biomass does not yet exist; however, we assume that either type of wood is an available feedstock for production of cellulosic bioenergy on the global scale, and that it can be substituted for the purposes of making ethanol or other energy, such as electricity and heat. 
A baseli (open full item for complete abstract)

    Committee: Brent Sohngen (Advisor); Ian Sheldon (Committee Member); Abdoul Sam (Committee Member) Subjects: Environmental Economics
  • 18. Mackey, Jon Thermoelectric Energy Conversion: Advanced Thermoelectric Analysis and Materials Development

    Doctor of Philosophy, University of Akron, 2015, Mechanical Engineering

    Thermoelectric materials exhibit a significant coupling between thermal and electrical transport, and devices made from them can convert between thermal and electrical energy. System reliability is extremely high, but widespread use of the technology is hindered by low conversion efficiency. To increase the practicality of thermoelectric devices, improvements are required in both (i) device design and (ii) thermoelectric materials. The advanced thermoelectric analysis developed in this work provides general guidelines for device design by introducing a new set of design factors: the Device Design Factor, Fin Factor, Inductance Factor, and Thermal Diffusivity Factor. The advanced analysis is applied to two material systems developed in this work. The first is a composite of WSi2 precipitates in a Si/Ge matrix, investigated through both solidification techniques and powder processing. The system has a 30% higher figure of merit, a material parameter related to conversion efficiency, than traditional zone-leveled Si/Ge. The second is a novel quaternary CoxNi4-xSb12-ySny skutterudite, found to achieve both n- and p-type conduction by tuning the Co level.
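The figure of merit mentioned in this abstract is the standard dimensionless quantity ZT = S²σT/κ, which bounds the conversion efficiency of a thermoelectric generator. The sketch below uses generic, Si/Ge-like property values chosen for illustration only, not measurements from the dissertation.

```python
import math

def figure_of_merit(seebeck, elec_cond, thermal_cond, temp):
    """Dimensionless figure of merit ZT = S^2 * sigma * T / kappa."""
    return seebeck ** 2 * elec_cond * temp / thermal_cond

def max_efficiency(t_hot, t_cold, zt_avg):
    """Standard maximum efficiency of a thermoelectric generator:
    eta = (dT/Th) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th)."""
    carnot = (t_hot - t_cold) / t_hot
    a = math.sqrt(1.0 + zt_avg)
    return carnot * (a - 1.0) / (a + t_cold / t_hot)

# Illustrative property values (order-of-magnitude Si/Ge-like numbers):
# S = 250 uV/K, sigma = 5e4 S/m, kappa = 4 W/m-K, T = 900 K.
zt  = figure_of_merit(seebeck=250e-6, elec_cond=5e4, thermal_cond=4.0, temp=900.0)
eta = max_efficiency(t_hot=900.0, t_cold=300.0, zt_avg=zt)
```

A 30% increase in ZT, as reported for the WSi2/Si-Ge composite, feeds directly into `max_efficiency` through `zt_avg`, which is why figure-of-merit gains matter for device practicality.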

    Committee: Celal Batur Dr. (Advisor); Alp Sehirlioglu Dr. (Committee Member); Minel Braun Dr. (Committee Member); Guo-Xiang Wang Dr. (Committee Member); Jerry Young Dr. (Committee Member); Bob Viellette Dr. (Committee Member) Subjects: Aerospace Materials; Materials Science; Mechanical Engineering
  • 19. Parker, Jason Approximate Message Passing Algorithms for Generalized Bilinear Inference

    Doctor of Philosophy, The Ohio State University, 2014, Electrical and Computer Engineering

    Recent developments in compressive sensing (CS) combined with increasing demands for effective high-dimensional inference techniques across a variety of disciplines have motivated extensive research into algorithms exploiting various notions of parsimony, including sparsity and low-rank constraints. In this dissertation, we extend the generalized approximate message passing (GAMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of CS, to handle several classes of bilinear inference problems. First, we consider a general form of noisy CS where there is uncertainty in the measurement matrix as well as in the measurements. Matrix uncertainty is motivated by practical cases in which there are imperfections or unknown calibration parameters in the signal acquisition hardware. While previous work has focused on analyzing and extending classical CS algorithms like the LASSO and Dantzig selector for this problem setting, we propose a new algorithm called Matrix Uncertain GAMP (MU-GAMP) whose goal is minimization of mean-squared error of the signal estimates in the presence of these uncertainties, without attempting to estimate the uncertain measurement matrix itself. Next, we extend GAMP to the generalized-bilinear case, in which the measurement matrix is estimated jointly with the signals of interest, enabling its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. We derive this Bilinear GAMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. 
In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assu (open full item for complete abstract)
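The matrix-uncertain compressive sensing setting that motivates MU-GAMP can be written as y = (A + E)x + w, where only the nominal matrix A is known. The sketch below sets up this measurement model and shows that, even for an oracle estimator that knows the true support, the unknown perturbation E acts as extra signal-dependent noise. It is an illustration of the problem setting only, not an implementation of MU-GAMP; all dimensions and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 100, 10  # signal length, measurements, sparsity

# k-sparse signal with random support and Gaussian nonzeros.
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(0.0, 1.0, k)

A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # known nominal matrix
E = rng.normal(0.0, 0.05 / np.sqrt(m), (m, n))  # unknown perturbation
w = rng.normal(0.0, 0.01, m)                    # additive measurement noise
y = (A + E) @ x + w                             # matrix-uncertain CS model

# Oracle least squares on the true support using only the nominal A:
# the mismatch E @ x behaves like additional, signal-dependent noise.
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
mse = np.mean((x_hat - x) ** 2)
```

MU-GAMP targets exactly this residual error floor: it minimizes the mean-squared error of the signal estimate while accounting for the statistics of E, without estimating E itself.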

    Committee: Philip Schniter (Advisor); Lee Potter (Committee Member); Emre Ertin (Committee Member) Subjects: Electrical Engineering
  • 20. Dingus, Rebecca Recognizing Uncertainty and Managing Appropriately: How Should Sales Managers Do It?

    PHD, Kent State University, 2014, College of Business and Entrepreneurship, Ambassador Crawford / Department of Marketing and Entrepreneurship

    This dissertation explores the effects of sales managers' behaviors on sales force performance, given various situations that sales forces face. Using a structure-conduct-performance framework, the most appropriate behaviors for sales managers to engage in are determined by assessing performance relative to sales managers' conduct in the presence of varied structural factors. Assuming that a sales manager's conduct varies based on the structural (i.e., situational) variables facing a sales force, a transaction cost economics framework is used to identify the structural variables of transaction-specific assets and uncertainty (both external and internal). As these structural conditions vary, the appropriateness of particular sales manager behaviors also changes. Sales manager conduct is considered with respect to (1) the sales manager's control (behavior-based, outcome-based) of the sales force, (2) the sales manager's trust in the sales force, and (3) the sales manager's adaptability to the sales force and related situations. Unique conceptualizations are provided for both control (challenging its traditional single-continuum view) and adaptability of the sales manager (extending beyond adaptive selling behavior). The sales force is the unit of analysis and, accordingly, performance is assessed as sales force performance. Thirteen hypotheses are formed to predict relationships between the structure, conduct, and performance variables. Using self-assessed data from sales managers, these hypotheses are empirically tested. A survey created by adapting scales from the literature assesses the structural conditions sales managers are facing, the behaviors in which they engage, and how their sales forces perform. Participating sales managers were recruited through two university sales centers and through social ties. Respondents in the sample represent large, reputable firms in several different countries. 
The data was analyzed using structural equation modeling (SEM) with several competi (open full item for complete abstract)

    Committee: Michael Hu (Advisor) Subjects: Marketing