Search Results

(Total results 20)


  • 1. Shen, Zuchao Optimal Sample Allocation in Multilevel Experiments

    PhD, University of Cincinnati, 2019, Education, Criminal Justice, and Human Services: Educational Studies

    Multilevel experiments have been widely used in education and social sciences to evaluate causal effects of interventions. Two key considerations in designing experimental studies are statistical power and the minimal use of resources. Optimal design framework simultaneously addresses both considerations. This dissertation extends previous optimal design frameworks by developing a more general optimal sample allocation framework that allows sampling costs to vary across both levels of the hierarchy and treatment conditions while relaxing the proportion of units assigned to the treatment condition. The dissertation includes five chapters. Chapter one introduces the studies and their context. Chapter two develops optimal sampling extensions that allow sampling costs to vary across hierarchical levels and treatment conditions for two- and three-level cluster-randomized trials. Chapter three further extends these developments to two- and three-level multisite randomized trials. Chapter four further extends these developments to four-level experiments. The structure of each main chapter (chapters two to four) includes the review of previous frameworks, the development of a more general framework, the demonstration of the utility of the proposed framework by comparing with previous frameworks, and a sensitivity analysis of the proposed framework against the misspecification of design parameter values. The results show that the proposed framework and solutions can frequently identify designs with more statistical precision than previous frameworks even when some parameters are constrained due to immutable practical concerns. The results also suggest that the gains in statistical precision and design efficiency identified by the proposed framework are fairly robust to misspecifications of the cost structure and the values of incidental design parameters (e.g., intraclass correlation coefficient). By using the proposed framework, researchers can plan studies using few (open full item for complete abstract)

    Committee: Benjamin Kelcey Ph.D. (Committee Chair); George Richardson Ph.D. (Committee Member); Christopher Swoboda Ph.D. (Committee Member) Subjects: School Administration
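
A minimal sketch of the kind of allocation rule entry 1 builds on: for a two-level cluster-randomized trial with cluster-level sampling cost C_cluster, person-level cost C_person, and intraclass correlation rho, earlier optimal design work gives the budget-optimal within-cluster sample size as n* = sqrt((C_cluster / C_person) * (1 - rho) / rho). The Python below is illustrative only; the cost figures and ICC are hypothetical, not values from the dissertation, which generalizes this setup to costs that differ by treatment condition.

# Illustrative sketch (not from the dissertation): optimal within-cluster
# sample size for a two-level cluster-randomized trial under a fixed budget.
import math

def optimal_cluster_size(cost_cluster, cost_person, icc):
    """n* = sqrt((C_cluster / C_person) * (1 - icc) / icc)."""
    return math.sqrt((cost_cluster / cost_person) * (1.0 - icc) / icc)

def clusters_for_budget(budget, cost_cluster, cost_person, n_per_cluster):
    """Clusters affordable when each cluster costs C_cluster + n * C_person."""
    return budget / (cost_cluster + n_per_cluster * cost_person)

if __name__ == "__main__":
    icc = 0.20                           # hypothetical intraclass correlation
    c_cluster, c_person = 300.0, 10.0    # hypothetical sampling costs
    n_star = optimal_cluster_size(c_cluster, c_person, icc)
    J = clusters_for_budget(20000.0, c_cluster, c_person, n_star)
    print(f"optimal cluster size ~ {n_star:.1f}, affordable clusters ~ {J:.1f}")
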
  • 2. Kim, Soo-Il Essays on the temporal insensitivity, optimal bid design and generalized estimation models in the contingent valuation study

    Doctor of Philosophy, The Ohio State University, 2004, Agricultural, Environmental and Development Economics

    This dissertation addresses three issues in dichotomous choice contingent valuation: the temporal structure of willingness to pay, practical guidelines for survey design, and a generalized estimation method. The first essay proposes temporal willingness to pay (TWTP) as an alternative definition of the present value of willingness to pay. In a contingent valuation survey, a respondent compares TWTP with the present value of a randomly assigned cost, which enables a test of the consistency of the respondent's valuation with respect to payment schemes. Using a sequential test, the insensitivity of TWTP is tested on data from oyster reef restoration programs in the Chesapeake Bay. The test shows that TWTP is insensitive to the offered payment schedule and to the length of the stream of benefits of the project, which implies consistent willingness to pay for the environmental project. However, discount rates estimated from the data vary significantly across project lengths and the time span between offered payment schedules. The second essay suggests a practical alternative to existing optimal or robust bid designs in contingent valuation, named the uniform design, which draws the cost assigned to each respondent from a predetermined uniform distribution. Analytics and simulations show that the uniform design has a lower bound of efficiency at 84 percent of the D-optimum. Simulations demonstrate that the uniform design outperforms optimal designs when initial information is poor and outperforms robust designs when the true values of the parameters are known. The third essay challenges the theoretical and technical background of the simple logit model. The standard logit model in contingent valuation assumes an i.i.d. error distribution between the initial and proposed states. Relaxing this restrictive assumption requires a generalized estimation technique that utilizes a Gumbel mixed model. Estimation results show that correlation between two (open full item for complete abstract)

    Committee: Timothy Haab (Advisor) Subjects: Economics, Agricultural
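
A minimal sketch of the D-efficiency comparison underlying entry 2's uniform bid design, assuming a two-parameter binary logit willingness-to-pay model; the bid range, parameter values, and reference design below are hypothetical and are not the dissertation's analysis, which establishes the 84 percent lower bound analytically.

# Illustrative sketch (not the dissertation's code): D-efficiency of a set of
# contingent-valuation bid amounts under a two-parameter binary logit model,
# relative to a reference design, using the Fisher information matrix.
import numpy as np

def logit_information(bids, alpha, beta):
    """Fisher information for P(yes) = 1 / (1 + exp(-(alpha - beta * bid)))."""
    bids = np.asarray(bids, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(alpha - beta * bids)))
    w = p * (1.0 - p)
    X = np.column_stack([np.ones_like(bids), -bids])   # columns for (alpha, beta)
    return (X * w[:, None]).T @ X

def d_efficiency(bids, ref_bids, alpha, beta):
    """(det I(bids) / det I(ref)) ** (1 / k) with k = 2 parameters."""
    d1 = np.linalg.det(logit_information(bids, alpha, beta))
    d0 = np.linalg.det(logit_information(ref_bids, alpha, beta))
    return (d1 / d0) ** 0.5

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    alpha, beta = 2.0, 0.05                          # hypothetical true parameters
    uniform_bids = rng.uniform(5, 80, size=200)      # "uniform design" bid draws
    two_point = np.repeat([20.0, 60.0], 100)         # simple two-point reference
    eff = d_efficiency(uniform_bids, two_point, alpha, beta)
    print(f"D-efficiency of uniform bids vs reference design: {eff:.2f}")
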
  • 3. Nakka, Sai Krishna Sumanth Co-design of Hybrid-Electric Propulsion System for Aircraft using Simultaneous Multidisciplinary Dynamic System Design Optimization

    MS, University of Cincinnati, 2020, Engineering and Applied Science: Mechanical Engineering

    A key challenge in the design of hybrid-electric propulsion systems (HEPS) for aircraft is the complexity involved in efficiently sizing the components and coordinating the control strategy between the multiple power sources. Handling this challenge effectively requires combined optimal design and control (co-design) methods that integrate energy management optimization with vehicle sizing. Although some studies have explored such methods, they have done so in a computationally intensive nested formulation with limited depth in the design and control modeling. This thesis addresses these issues by posing the system design problem as a simultaneous formulation of the Multidisciplinary Dynamic System Design Optimization (MDSDO) co-design strategy. The simultaneous formulation generally offers superior computational performance, and the MDSDO method solves co-design problems with a balanced treatment of design- and control-related variables. We apply this method to hybrid-electric aircraft propulsion system design with the objective of determining the optimal propulsion component designs and supervisory control strategies for a fixed mission profile such that total energy consumption is minimized. The hybrid configuration is compared to a conventional reference aircraft, the MQ-9 Reaper, on the basis of system efficiency. The individual powertrain components are mathematically modeled using physically meaningful design variables, and the mission analysis is carried out through simulation. In addition, a parametric study on battery energy density is presented to explore the near-term viability of HEPS for aircraft.

    Committee: Michael Alexander-Ramos Ph.D. (Committee Chair); Manish Kumar Ph.D. (Committee Member); Mark Turner Sc.D. (Committee Member) Subjects: Mechanical Engineering
  • 4. Cox, Kyle Increasing the Feasibility of Multilevel Studies through Design Improvements and Analytic Advancements

    PhD, University of Cincinnati, 2019, Education, Criminal Justice, and Human Services: Educational Studies

    Multilevel study designs are well suited for research in hierarchically structured educational settings. However, this structure, limited resources, and complex theories of teaching and learning limit the ability of educational researchers to feasibly conduct adequate studies. This three-article dissertation increases the feasibility of multilevel studies through improvements in study design and advancements in analytic approaches. The totality of this work expands the capacity of educational researchers to conduct multilevel studies. First, I extend the partial posterior predictive distribution method (p3 method) to test multilevel mediation. A variety of inferential tests are available for single and multilevel mediation but most come with notable limitations that balance tradeoffs between power and Type I error. The p3 method is a contemporary resampling-based composite approach specifically suited for complex null hypotheses. I develop the p3 method and investigate its performance within the context of two-level cluster-randomized multilevel mediation studies. The p3 method performed well relative to other mediation tests because it provides a more judicious balance of the Type I error rate and power. The method serves as a powerful alternative tool for researchers investigating multilevel mediation. Next, I investigate the robustness of statistical power under an optimal sampling framework to misspecified parameter values in cluster-randomized designs with cluster- or individual-level mediators. When planning cluster-randomized studies probing mediation, effective and efficient sample allocation is governed by several parameters. In the design stage, these parameters are typically approximated using information from prior research and these approximations are likely to deviate from the true values eventually realized in the study. The results suggest that estimates of statistical power are robust to misspecified parameter values across a variet (open full item for complete abstract)

    Committee: Benjamin Kelcey Ph.D. (Committee Chair); Amy Farley Ph.D. (Committee Member); Jessaca Spybrook Ph.D. (Committee Member); Christopher Swoboda Ph.D. (Committee Member) Subjects: Psychological Tests
  • 5. Bahg, Giwon Adaptive Design Optimization in Functional MRI Experiments

    Master of Arts, The Ohio State University, 2018, Psychology

    Efficient data collection is one of the most important goals to be pursued in cognitive neuroimaging studies because of the exceptionally high cost of data acquisition. Design optimization methods have been developed in cognitive science to address this problem, but most of them lack generalizability because their functionality tends to rely on a specific type of cognitive model (e.g., psychometric functions) or research paradigm (e.g., task-to-region mapping). In addition, traditional optimal design methods fail to exploit neural and behavioral data simultaneously, which is essential for providing an integrative explanation of human cognition. As one possible solution, we propose an implementation of Adaptive Design Optimization (ADO; Cavagnaro, Myung, Pitt, & Kujala, 2010) in model-based functional MRI (fMRI) experiments using a Joint Modeling Framework (B. M. Turner, Forstmann, et al., 2013). First, we introduce a general architecture for fMRI-based ADO and discuss practical considerations in real-world applications. Second, three simulation studies show that fMRI-based ADO estimates parameters more accurately and precisely than conventional, randomized experimental designs. Third, a real-time fMRI experiment validates the performance of fMRI-based ADO in a real-world setting. The results suggest that ADO performs better than randomized designs in terms of accuracy, but the unbalanced designs proposed by ADO may inflate the variability of trial-wise estimates of neural activation and therefore of model parameters. Lastly, we discuss the limitations, further developments, and applications of fMRI-based ADO.

    Committee: Brandon Turner (Advisor); Jay Myung (Committee Member); Zhong-Lin Lu (Committee Member) Subjects: Psychology
  • 6. Hegana, Ashenafi Low Temperature Waste Energy Harvesting by Shape Memory Alloy Actuator

    Doctor of Philosophy, University of Akron, 2016, Mechanical Engineering

    Waste heat energy harvesting is one of the techniques used to reduce CO2 emissions and improve the efficiency of power generation, oil mining, and other industrial processes. Nearly 90% of waste heat is considered low-grade (< 230°C) and is unsuitable for traditional waste heat recovery techniques. Thus, a non-continuous SMA energy harvesting prototype (EHP) to convert low-grade heat into electricity is presented in this research. We first demonstrate the feasibility of the EHP, which uses non-continuous shape memory alloy (SMA) actuators to convert waste heat energy to electricity. Both linear and spring-shaped SMA wires made of NiTi alloy are used to evaluate the energy harvesting capability of the EHP. The experimental results show that the EHP can generate oscillatory shaft rotation; a test in which the generator was connected to the main shaft through a gearbox produced nearly 10 V and 2.5 rad of shaft rotation in 0.3 s. It was also found that the energy conversion factor of the helical spring SMA actuator was higher than that of the linear SMA actuator. Next, energy storage on both super-capacitors and micro-capacitors by the EHP with the helical spring SMA wire was explored. Using a full-wave rectifier circuit, the average steady-state energy stored across a 6 F capacitor per 2.5 s of operation was 7.4 mJ, greater by factors of 4, 2, and 2 than the energy stored across the 1.4 mF, 2.2 mF, and 3.3 mF capacitors, respectively. A cyclic heating and cooling experiment was developed to evaluate the relationship between SMA stress and SMA temperature and to validate the hysteretic behavior of the SMA actuator under thermal loading. A four-parameter sigmoidal Weibull model provided the best fit to the experimental SMA stress-temperature data. The test results also showed the hysteretic behavior of SMA wires induced by cyclic thermal loading, confirming the functionality of the presented energy harvesting device using SMA actuators. Then, heat trans (open full item for complete abstract)

    Committee: Erik Engeberg Dr. (Advisor); Celal Batur Dr. (Committee Co-Chair); D. Dane Quinn Dr. (Committee Member); Hariharan Subramaniya I Dr. (Committee Member); Alper Buldum Dr. (Committee Member) Subjects: Automotive Engineering; Energy; Engineering; Materials Science; Mechanical Engineering; Mechanics
  • 7. Al Kader, Abedallah Design and Analysis of Facility Location Experiments Applied to Alcohol-Based Hand Sanitizers

    Doctor of Philosophy, The Ohio State University, 2024, Industrial and Systems Engineering

    This dissertation introduces a groundbreaking data-driven approach to optimize the placement and quantity of facilities relevant across various industries and supply chains, such as warehouses, charging stations, and hand sanitizer dispensers. Addressing a critical challenge in operational efficiency, this research seeks to significantly impact industry practices, public health, and environmental sustainability. In our preliminary analysis working with a major provider of Alcohol-Based Hand Sanitizer dispensers, we discovered that up to 80% of dispenser facilities are underutilized, underscoring a widespread issue in resource allocation with profound implications for manufacturers and public health. This dissertation aims to rectify such inefficiencies by proposing the first-ever data-driven methodology for facility placement, rooted in optimally designed experiments and optimally placed facilities. We investigate relevant optimal experimental design, a key component of scientific research that ensures precise data collection while minimizing costs. This research pioneers the development of efficient optimal experimental design methods for the types of linear models relevant to facility location problems, which we argue are generalized linear models that address period-specific response cross-correlations. Time periods in which facilities are configured are termed a type of “whole plot” in that responses at the facilities correlate within the period. The related experimental planning leads to a new type of optimal split-plot design that accounts for terms including “network effects” associated with the specific configuration of facilities, including co-location. In our split-plot formulation, we also include “market presence” effects relating to the total number of facilities within a local region. The coordinate-exchange algorithm, initially proposed by Meyer and Nachtsheim in 2005, is enhanced to cater to scenarios where the proximity of physical locations influences d (open full item for complete abstract)

    Committee: Theodore Allen (Advisor); Susan O'Hara (Committee Member); Samantha Krening (Committee Member) Subjects: Industrial Engineering
  • 8. Hafez, Mhd Ammar AN IMPROVED POLYNOMIAL CHAOS EXPANSION BASED RESPONSE SURFACE METHOD AND ITS APPLICATIONS ON FRAME AND SPRING ENGINEERING BASED STRUCTURES

    Doctor of Philosophy in Engineering, Cleveland State University, 2022, Washkewicz College of Engineering

    In engineering fields, computational models provide a tool that can simulate a real-world response and enhance our understanding of physical phenomena. However, such models are often computationally expensive, with multiple sources of uncertainty related to the model's inputs and assumptions. For example, the literature indicates that ligament material properties and insertion site locations have a significant effect on the performance of knee joint models, which makes addressing the uncertainty related to them a crucial step toward making the computational model more representative of reality. However, previous sensitivity studies were limited by the computational expense of the models. The high computational expense of sensitivity analysis can be addressed by performing the analysis with a reduced number of model runs or by creating an inexpensive surrogate model. Both approaches are addressed in this work through the use of polynomial chaos expansion (PCE)-based surrogate models and design of experiments (DoE). The objectives of this dissertation were therefore to: (1) provide guidelines for the use of PCE-based models and investigate their efficiency for non-linear problems, and (2) utilize PCE- and DoE-based tools to introduce efficient sensitivity analysis approaches to the field of knee mechanics. To achieve these objectives, a frame structure was used for the first aim, and a rigid-body computational model of two knee specimens was used for the second aim. Our results showed that, for PCE-based surrogate models, once the recommended number of samples is used, increasing the PCE order produces more accurate surrogate models. This conclusion was reflected in the R2 values realized for three highly non-linear functions (0.9998, 0.9996, and 0.9125, respectively). Our results also showed that the use of PCE- and DoE-based sensitivity analyses resulted in practically identical results with significant savings in the computational cost of sensitivity an (open full item for complete abstract)

    Committee: Jason Halloran (Advisor); Lutful Khan (Committee Member); Daniel Munther (Committee Member); Josiah Sam Owusu-Danquah (Committee Member); Stephen Duffy (Committee Member) Subjects: Biomechanics; Biomedical Engineering; Biomedical Research; Civil Engineering
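
A minimal sketch of a PCE-based surrogate of the sort described in entry 8, assuming standard-normal inputs, a probabilists' Hermite basis, and a toy test function; it is not the dissertation's frame or knee models, only an illustration of fitting the expansion by least squares and checking R2 on held-out samples.

# Illustrative sketch (not the dissertation's models): a polynomial chaos
# expansion (PCE) surrogate for a toy function of two standard-normal inputs,
# fitted by least squares with probabilists' Hermite polynomials.
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def pce_basis(X, degree):
    """Total-degree Hermite basis columns for 2-D standard-normal inputs."""
    cols = []
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            ci = np.zeros(i + 1); ci[i] = 1.0
            cj = np.zeros(j + 1); cj[j] = 1.0
            cols.append(hermeval(X[:, 0], ci) * hermeval(X[:, 1], cj))
    return np.column_stack(cols)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((400, 2))                  # training samples
    y = np.exp(0.3 * X[:, 0]) + X[:, 1] ** 3           # toy nonlinear response
    coef, *_ = np.linalg.lstsq(pce_basis(X, degree=4), y, rcond=None)
    Xt = rng.standard_normal((1000, 2))                # held-out samples
    yt = np.exp(0.3 * Xt[:, 0]) + Xt[:, 1] ** 3
    pred = pce_basis(Xt, degree=4) @ coef
    r2 = 1.0 - np.sum((yt - pred) ** 2) / np.sum((yt - yt.mean()) ** 2)
    print(f"surrogate R^2 on held-out samples: {r2:.4f}")
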
  • 9. Xie, Yanli Design, Analysis, and Misspecification Sensitivity of Partially and Fully Nested Multisite Cluster-Randomized Designs

    PhD, University of Cincinnati, 2022, Education, Criminal Justice, and Human Services: Educational Studies

    The purpose of this dissertation is to develop principles and strategies for and identify limitations of multisite cluster randomized trials in the context of partially and fully nested designs. In the first study, I develop principles of estimation, sampling variability, and inference for studies that leverage multisite designs within the context of partially nested structures. In the second study, I further advance multisite partially nested designs by developing expressions to predict the statistical power with which these types of designs can detect main effects and their heterogeneity across sites, as well as expressions to identify optimal sampling plans. In the third study, I examine the impact (and sensitivity) of misspecifying the site-by-treatment variance parameter value on the power and efficiency of multisite cluster-randomized designs with fully nested data.

    Committee: Benjamin Kelcey Ph.D. (Committee Member); Christopher Swoboda Ph.D. (Committee Member); Youn Seon Lim Ph.D. (Committee Member); Ying Guo Ph.D. (Committee Member) Subjects: Educational Evaluation
  • 10. Chang Cheng, Jorge Probing Human Category Structures with Synthetic Photorealistic Stimuli

    Doctor of Philosophy, The Ohio State University, 2022, Psychology

    Formal theories of human categorization have been a focus of research in psychology and bolster more than a century's worth of studies. In this process, many theories have been proposed, evaluated, and compared through the development of computational models. An assumption shared among these studies is the existence of a psychological stimulus space that encodes stimuli. Traditionally, the field has modeled this space through low-dimensional artificial stimuli designed for the laboratory setting. While this paradigm grants a high degree of control over experiments, the ecological validity of this assumption is a major point of criticism. After all, stimuli in the natural world are often complex and unlikely to be fully represented in a low-dimensional space. Thus, studies in the field must address this representational gap for results to be generalizable. In the present thesis, we study this representational gap by reexamining the debate between exemplar and prototype models of categorization in an experiment that utilizes highly realistic stimuli and a high-dimensional stimulus space. In doing so, we propose a framework for experimentation based on using Generative Adversarial Networks (GANs) to model the psychological feature space. This framework also showcases several techniques that address the unique challenges of utilizing high-dimensional stimuli that have prevented their adoption in the past. We employed our framework in two experiments comparing prototype and exemplar models in different settings and found a consistent advantage for prototype models contrary to the dominant view in the field. We then theorize an explanation for this advantage by discussing the effects of increasing the dimensionality of the feature space on each type of model, arguing that prototype models are more robust in these scenarios. These observations suggest that previously found advantages for exemplar models might have been an artifact of using low-dimensional stimuli. The (open full item for complete abstract)

    Committee: Jay Myung (Advisor); Keith Redmill (Committee Member); Brandon Turner (Committee Member); Mark Pitt (Advisor) Subjects: Cognitive Psychology; Computer Science; Psychobiology
  • 11. Kedari, Sayali Ravindra Bayesian Learning in Computational Rheology: Applications to Soft Tissues and Polymers

    PhD, University of Cincinnati, 2022, Engineering and Applied Science: Mechanical Engineering

    The characterization of the viscoelastic response of polymers and soft tissue is significant in several areas like biomedical engineering and material processing. Computational modeling of and experimentation with such rheological materials are complex due to their time- and temperature-dependent nature. There is a critical need for reliable computational models that account for the inherent variability in experimental data. Rheological constitutive models characterize the viscoelastic response of materials using discrete or continuous relaxation spectra. The spectra are characterized by parameters (time constants, elastic and shear moduli) representing the relaxation process, and contain information about the molecular structure of polymers and soft materials. We can estimate the viscoelastic model parameters from stress relaxation experiments, but several challenges exist. For discrete spectra, a challenge is a finite number of model parameters. In the case of continuous spectra, challenges include ill-posedness, the curse of dimensionality, parameter identifiability, as well as correlated and heteroscedastic data. For modeling temperature-dependent material behavior, the challenges include model complexity and limited availability of experimental data. To address these challenges, we aim to develop rigorous stochastic approaches. Bayesian methods offer a rigorous stochastic foundation and have gained significant interest due to the increasing availability of computational resources. The specific aims of this work are: Aim 1: Employ an information-theoretic approach, namely Fisher information, to develop criteria for evaluating experimental data to obtain an ideal range of parameters. Aim 2: Develop a hierarchical Bayesian (HB) approach to L2 regularization for inferring continuous spectra while considering heteroscedasticity. This approach applies to the general linear inverse problems involving regularization, and is not limited to (open full item for complete abstract)

    Committee: Kumar Vemaganti Ph.D. (Committee Member); Gowtham Atluri Ph.D. (Committee Member); Sandeep Madireddy Ph.D. (Committee Member); Manish Kumar Ph.D. (Committee Member); Woo Kyun Kim Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 12. Fahim, Muhammad Qaisar Co-optimization of design and control of electrified vehicles using coordination schemes

    Master of Science, The Ohio State University, 2022, Mechanical Engineering

    An efficient simulation framework for co-optimization of design and control is fundamental in the development phase of hybrid electric vehicles to achieve the best system-level improvements in energy efficiency and emissions. Coordination schemes for co-optimization have been widely investigated in the literature, but only for a limited number and nature of design and control variables. In this study, a decomposition-based coordination scheme capable of handling multi-time-scale, time-variant, and time-invariant (discrete and continuous) variables, with the ability to handle each sub-problem with a different solver, is not only demonstrated but also compared with a simultaneous scheme in terms of solution optimality and computational performance. The two coordination schemes are used to co-optimize the energy management strategy and component sizing for a series hybrid truck. In addition, multiple objectives are weighted in the cost function: fuel consumption, battery size, and tailpipe pollutant emissions. Results show that the simultaneous scheme is computationally less expensive for simple problems, but it becomes computationally inefficient with increasing problem complexity, with the additional drawback of not being able to handle integer-valued dynamic variables. The decomposition-based scheme can solve such problems, but with a more complex problem formulation. Results show that the decomposition-based scheme not only gives a 14% improvement in computational performance, but the optimality of its solution is also comparable with the simultaneous scheme. Compared to dynamic optimization alone, co-optimization yields up to a 3.7% improvement in average genset operating efficiency. Moreover, the fuel consumption for dynamic optimization was 2.5 kg, which is reduced to 1.6 kg with co-optimization and further reduced to 1.5 kg by adding engine on/off control.

    Committee: Qadeer Ahmed (Advisor); Shawn Midlam-Mohler (Committee Member); Manfredi Villani (Other) Subjects: Aerospace Engineering; Automotive Engineering; Electrical Engineering; Mechanical Engineering; Robotics
  • 13. Amoussougbo, Thibaut Combined Design and Control Optimization of Autonomous Plug-In Hybrid Electric Vehicle Powertrains

    MS, University of Cincinnati, 2021, Engineering and Applied Science: Mechanical Engineering

    A major emphasis within the automotive industry today is autonomous driving. Many recent studies in this area deal with the development of real-time optimal control strategies to improve overall vehicle energy efficiency. Although such research is critically important, it overlooks the potential need to reevaluate the design of an autonomous vehicle itself, especially as it relates to the powertrain. Failing to thoroughly examine the impact of autonomous driving on vehicle powertrain design could limit the potential opportunities to augment the energy-efficiency gains from optimal powertrain control (power demand) strategies. Therefore, this thesis addresses this situation by investigating the impact of autonomous driving on the design (sizing) and control strategies (energy management + power demand) of a plug-in hybrid-electric vehicle (PHEV) powertrain. In particular, a dynamic optimization method known as multidisciplinary dynamic system design optimization (MDSDO) is used to formulate and solve a combined optimal design and control optimization (or control co-design) problem for an autonomously-driven PHEV powertrain under two simulation conditions: in the first, only an autonomous driving cycle represented by a hypothetical lead (HL) duty cycle is considered, whereas the second also includes acceleration and all-electric range (AER) performance along with the HL duty cycle in order to generate an overall powertrain design solution. The optimal solutions for both simulation conditions are then compared to those corresponding to a control co-design problem for a human-driven PHEV powertrain, with the results indicating that autonomous driving does indeed have a significant impact on both powertrain design and control. Therefore, this implies a compelling need to reevaluate current powertrain design conventions when developing autonomous vehicles.

    Committee: Michael Alexander-Ramos Ph.D. (Committee Chair); Manish Kumar Ph.D. (Committee Member); David Thompson Ph.D. (Committee Member) Subjects: Engineering
  • 14. Vishwanath, Aashrith Large-scale Numerical Optimization for Comprehensive HEV Energy Management - A Three-step Approach

    Master of Science, The Ohio State University, 0, Electrical and Computer Engineering

    The transportation sector is transitioning from conventional engine vehicles to hybrid electric vehicles (HEVs) because of environmental concerns such as global warming. HEVs are an attractive option today because they help reduce the use of fossil fuels without much compromise on vehicle range, since they offer extra degrees of freedom to operate the vehicle in electric mode, engine mode, or both. This calls for optimizing the HEV powertrain. As part of this research, we present a more realistic approach that considers a large state space with complex dynamics and interactions between multiple sub-systems. A P2 parallel hybrid powertrain of a class 6 pick-up and delivery truck is considered as the case-study problem. The problem involves 13 states and 4 control levers. Some of these variables are discrete and some vary continuously with time; some have slow dynamics, like temperature, while others have fast dynamics, like battery state of charge, which makes it a stiff system. The use of lookup tables, interpolations, and conditional formulations further exacerbates the complexity of the problem. Optimizing all these variables together is very challenging for a single solver; hence, a novel three-step approach is presented and used to solve the case-study problem. It uses a pseudospectral method (PSC) for handling real-valued variables and for accurate state estimation, and dynamic programming (DP) for optimizing the integer-valued variables. We present three scenarios for the case-study problem in which fuel consumption alone is minimized, emissions alone are minimized, and, lastly, a combination of fuel and emissions is minimized. The computation time for this large problem is only on the order of 50-80 minutes using the three-step approach. The fuel-minimization case has the least fuel consumption and the highest emissions, and vice versa for the emissions-minimization case. The fuel & emissions pr (open full item for complete abstract)

    Committee: Qadeer Ahmed (Advisor); Vadim Utkin (Committee Member) Subjects: Aerospace Engineering; Automotive Engineering; Electrical Engineering; Mechanical Engineering; Robotics
  • 15. Handford, Matthew Simulating human-prosthesis interaction and informing robotic prosthesis design using metabolic optimization

    Doctor of Philosophy, The Ohio State University, 2018, Mechanical Engineering

    Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations, which track experimentally determined non-amputee walking kinematics, we can instead explicitly model the human-prosthesis interaction to produce a prediction of the user's walking kinematics. To accomplish this, we use large-scale trajectory optimization on a muscle-driven multi-body model of an amputee with a robotic prosthesis to obtain metabolic energy-minimizing walking gaits. While this computational framework can be applied to a wide range of passive or biomechatronic prosthetic, exoskeletal, and assistive devices, here, we focus on unilateral ankle-foot prostheses. We use this optimization to determine optimized prosthesis controllers by minimizing a weighted sum of human metabolic and prosthesis costs and develop Pareto optimal curves between human metabolic and prosthesis cost with various prostheses masses and at various speeds. We also use this optimization to obtain trends in the energetics and kinematics for various net prosthesis work rates produced by given prosthesis feedback controllers. We find that the net metabolic rate has a roughly quadratic relationship with the net prosthesis work rate. This simulation predicts that metabolic rate could be reduced below that of a non-amputee, although such gaits are highly asymmetric and not seen in experiments with amputees. Walking simulations with bilateral symmetry in kinematics or ground reaction forces have higher metabolic rates than asymmetric gaits, suggesting a potential reason for asymmetries in amputee walking. Our findings suggest that a computational framework such as one presented here could augment the experimental approaches to prosthesis design iterations, although quantitatively accurate predictions of experiments from simulation remains an open probl (open full item for complete abstract)

    Committee: Manoj Srinivasan (Advisor); Steve Collins (Committee Member); Kiran D'Souza (Committee Member); Rob Siston (Committee Member) Subjects: Mechanical Engineering
  • 16. Nguyen, Huong Near-optimal designs for Gaussian Process regression models

    Doctor of Philosophy, The Ohio State University, 2018, Statistics

    The Gaussian process (GP) regression model is a popular modeling framework in both spatial statistics and computer experiments. Our main goal is to find suitable designs for GP model predictions of outputs at unobserved locations; this goal can be interpreted as finding optimal designs that minimize the integrated mean squared prediction error (iMSPE) criterion function among all feasible designs. For most problems, there is no analytic solution to the minimization problem, and the minimization step is carried out by stochastic optimization algorithms. These algorithms are configured to search through the set of all feasible designs. At the conclusion of the search, the best design found is recommended. Although the recommended design is unlikely to be a true optimal design, it is expected to be close to one. Nevertheless, the current interpretation of the designs recommended by these algorithms does not include the uncertainty or the risks associated with this assumption (how close the recommended design is to a true optimal design). Most critically, we do not have a direct answer to important questions regarding the quality of the recommended designs (for example, whether the recommended design is significantly better than other feasible designs). In some cases, we even find that, depending on the specifications of the problem and the optimization algorithm employed, the recommended designs can be undesirable. In this dissertation, we propose a new design generation and selection framework centering around a near-optimal design (NOD) concept. This new framework considers the natural range of iMSPE among all feasible designs and, subsequently, reports the uncertainty about the quality of the recommended designs. Furthermore, by recognizing the sub-optimality of the majority of designs recommended by a number of stochastic optimization algorithms, we can implement more specific requirements about the minimal acceptable design quality, alleviating (open full item for complete abstract)

    Committee: Peter Craigmile (Advisor); Matthew Pratola (Advisor); Oksana Chkrebtii (Committee Member); Notz Bill (Committee Member) Subjects: Statistics
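
A minimal sketch of the iMSPE criterion named in entry 16, assuming a zero-mean Gaussian process with a known squared-exponential kernel on [0, 1]; the designs and hyperparameters are hypothetical, and the dissertation's near-optimal design framework goes well beyond this single-criterion evaluation.

# Illustrative sketch (not the dissertation's framework): the iMSPE criterion
# for a candidate design under a zero-mean GP with a known squared-exponential
# kernel, approximated by averaging the MSPE over a dense 1-D grid.
import numpy as np

def sqexp_kernel(a, b, variance=1.0, lengthscale=0.2):
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def imspe(design, grid, noise=1e-6):
    """Average mean-square prediction error of the GP over the grid."""
    K = sqexp_kernel(design, design) + noise * np.eye(len(design))
    Kxg = sqexp_kernel(design, grid)
    prior_var = sqexp_kernel(grid, grid).diagonal()
    # MSPE(x) = k(x, x) - k(x, X) K^{-1} k(X, x)
    mspe = prior_var - np.sum(Kxg * np.linalg.solve(K, Kxg), axis=0)
    return mspe.mean()

if __name__ == "__main__":
    grid = np.linspace(0.0, 1.0, 501)
    even = np.linspace(0.0, 1.0, 8)                                 # evenly spaced design
    clustered = np.sort(np.random.default_rng(2).uniform(0.3, 0.7, 8))  # clustered design
    print(f"iMSPE, evenly spaced design: {imspe(even, grid):.4f}")
    print(f"iMSPE, clustered design:     {imspe(clustered, grid):.4f}")
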
  • 17. Debnath, Jayanta Development of Scheduling, Path Planning and Resource Management Algorithms for Robotic Fully-automated and Multi-story Parking Structure

    Master of Science, University of Toledo, 2016, Electrical Engineering

    This thesis demonstrates the development of a complete suite of path planning, elevator scheduling, and resource allocation algorithms to manage multiple concurrent storage and retrieval requests, in real time and in a dynamic context, for vehicles loaded onto robotic carts in a robotic, fully automated, multi-story, driving-free parking structure. The objective is to utilize the available parking spaces across the floors of a structure without driving lanes at a rate of at least 80% in all cases while keeping customer waiting times at a minimum. Path search and planning employ the incremental informed search algorithm D* Lite with domain-specific heuristics and the uninformed search algorithm Uniform Cost Search in a completely automated framework. An optimization algorithm based on nested partitions and a genetic algorithm is adapted for scheduling a group of elevators in the multi-story parking structure environment. A small percentage of parking spots are reserved as “blank cells” to facilitate movement of roller beds carrying a vehicle to its storage or retrieval destination. Resource allocation and management are accomplished using statistical models employing queueing theory for structural resources such as blank cells and elevators while minimizing customer waiting time. Lower bounds on the number of elevators needed for a specific floor count and number of parking spaces per floor are derived using statistical modeling. Multiple vehicles may move between parking spaces simultaneously on roller bed pallets traveling along tracks mounted on the surface of each storage cell. A software simulator based on multi-threaded Java code and the Unified Modeling Language was developed to perform empirical testing and validation of the performance of the proposed integration framework for the set of path search, elevator scheduling and resource management algorithm (open full item for complete abstract)

    Committee: Gursel Serpen (Committee Chair); Kevin Xu (Committee Member); Ahmad Javaid (Committee Member) Subjects: Artificial Intelligence
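
A minimal sketch of Uniform Cost Search, the uninformed search algorithm named in entry 17, on a toy occupancy grid; the grid, unit move costs, and 4-connected neighborhood are hypothetical stand-ins for the thesis's parking-structure layout and its D* Lite planner.

# Illustrative sketch (not the thesis code): uniform cost search on a small
# grid with blocked cells, expanding nodes in order of path cost via a heap.
import heapq

def uniform_cost_search(grid, start, goal):
    """Return (cost, path) for 4-connected unit-cost moves, or (None, [])."""
    frontier = [(0, start, [start])]
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if cost > best.get(node, float("inf")):
            continue                      # stale entry, already found cheaper
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
    return None, []

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]                 # 1 = occupied cell, 0 = free cell
    print(uniform_cost_search(grid, (0, 0), (2, 0)))
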
  • 18. Gunbatar, Yakup Nonlinear Adaptive Control and Guidance for Unstart Recovery for a Generic Hypersonic Vehicle

    Doctor of Philosophy, The Ohio State University, 2014, Electrical and Computer Engineering

    This work presents the development of an integrated flight controller for a generic model of a hypersonic air-breathing vehicle. The flight control architecture comprises a guidance and trajectory planning module and a nonlinear inner-loop adaptive controller. The emphasis of the controller design is on achieving stable tracking of suitable reference trajectories in the presence of a specific engine fault (inlet unstart), in which sudden and drastic changes in the vehicle aerodynamics and engine performance occur. First, the equations of motion of the vehicle for a rigid-body model, taking the rotation of the Earth into account, are provided. Aerodynamic forces and moments and engine data are provided in lookup-table format. This comprehensive model is used for simulations and verification of the control strategies. Then, a simplified control-oriented model is developed for the purpose of control design and stability analysis. The design of the guidance and nonlinear adaptive control algorithms is first carried out on a longitudinal version of the vehicle dynamics. The design is verified in a simulation study aiming at testing the robustness of the inner-loop controller under significant model uncertainty and engine failures. At the same time, the guidance system provides reference trajectories to maximize the vehicle's endurance, which is cast as an optimal control problem. The design is then extended to tackle the significantly more challenging case of the 6-degree-of-freedom (6-DOF) vehicle dynamics. For the full 6-DOF case, the adaptive nonlinear flight controller is tested on more challenging maneuvers, where values of the flight path and bank angles exceed the nominal range defined for the vehicle. Simulation studies show stable operation of the closed-loop system in nominal operating conditions, unstart conditions, and during transition from sustained scramjet propulsion to engine failure mode.

    Committee: Andrea Serrani Prof. (Advisor); Umit Ozguner Prof. (Committee Member); Zhang Wei Prof. (Committee Member) Subjects: Aerospace Engineering; Computer Engineering; Electrical Engineering; Engineering
  • 19. Taslim, Cenny Multi-Stage Experimental Planning and Analysis for Forward-Inverse Regression Applied to Genetic Network Modeling

    Doctor of Philosophy, The Ohio State University, 2008, Industrial and Systems Engineering

    This dissertation proposes methods for steady-state linear system identification for both forward cases, in which prediction of outputs for new inputs is desired, and inverse cases, in which prediction of which inputs fostered the measured outputs is needed. Special attention is given to genetic network modeling applications. Inverse prediction matters here because it allows one to predict the effective genetic perturbation associated with a new target drug compound or therapy. The primary application addressed in this dissertation is motivated by our ongoing contributions related to Down syndrome, which affects approximately 1 out of every 800 children. First, single-shot experimentation and analysis to develop network models is considered. The discussion focuses on linear models because of the relevance of equilibrium conditions and the typical scarcity of perturbation data. Yet, deviations from linear systems modeling assumptions are also considered. For system identification, we propose forward network identification regression (FNIR) and experimental planning involving simultaneously perturbing more than a single gene concentration using D-optimal designs. The proposed methods are compared with alternatives using simulation and data sets motivated by the SOS pathway for Escherichia coli bacteria. Findings include that optimal experimental planning can improve the sensitivity, specificity, and efficiency of the process of deriving genetic networks. In addition, topics for further research are suggested, including the need to develop more numerically stable analysis methods, improved diagnostic procedures, and sequential design and analysis procedures. Next, multi-stage design and analysis procedures are proposed for experimentation in which both forward and inverse predictions are relevant. Methods are proposed to derive desirable experimental plans for the next batch of tests based on both space filling and D-optimality. The space-filling designs are intended to support both linea (open full item for complete abstract)

    Committee: Theodore Allen PhD (Committee Chair); Mario Lauria PhD (Committee Co-Chair); Clark Mount-Campbell PhD (Committee Member); Hakan Ferhatosmanoglu PhD (Committee Member) Subjects: Bioinformatics; Biostatistics; Engineering; Operations Research; Statistics
  • 20. Huang, Deng Experimental planning and sequential kriging optimization using variable fidelity data

    Doctor of Philosophy, The Ohio State University, 2005, Industrial and Systems Engineering

    Engineers in many industries routinely need to improve product or process designs using data from the field, lab, and computer experiments. This research seeks to develop experimental planning and optimization schemes using data from multiple experimental sources. We use the term "fidelity" to refer to the extent to which a surrogate experimental system can reproduce results of the system of interest. For experimental planning, we present perhaps the first optimal designs for variable fidelity experimentation, using an extension of the Expected Integrated Mean Squared Error (EIMSE) criterion, where the Generalized Least Squares (GLS) method was used to generate the predictions. Numerical tests are used to compare the method's performance with alternatives and to investigate the robustness of the incorporated assumptions. The method is applied to automotive engine valve heat treatment process design, in which real-world data were mixed with data from two types of computer simulations. Sequential Kriging Optimization (SKO) is a method developed in recent years for solving expensive black-box problems. We propose an extension of the SKO method, named Multiple Fidelity Sequential Kriging Optimization (MFSKO), in which surrogate systems are exploited to reduce the total evaluation cost. As a pre-step to MFSKO, we extended SKO to address stochastic black-box systems. Empirical studies showed that SKO compared favorably with alternatives in terms of consistency in finding global optima and efficiency as measured by the number of evaluations. Also, in the presence of noise, the new expected improvement criterion achieves the desired balance between the need for global and local searches. In the proposed MFSKO method, data on all experimental systems are integrated to build a kriging meta-model that provides a global prediction of the system of interest and a measure of prediction uncertainty. The location and fidelity level of the next evaluation are selected by maximizing an augmented e (open full item for complete abstract)

    Committee: Richard Miller (Advisor) Subjects:
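
A minimal sketch of the classic noise-free expected improvement criterion that Sequential Kriging Optimization builds on (entry 20); the dissertation's augmented criterion for stochastic and multiple-fidelity settings extends this, and the predictive means and standard errors below are hypothetical.

# Illustrative sketch (not the dissertation's augmented criterion): the
# classic noise-free expected improvement used in sequential kriging
# optimization, computed from a kriging predictor's mean and standard error.
import math

def expected_improvement(mu, sigma, f_min):
    """EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z), z = (f_min - mu) / sigma,
    for minimization with predictive mean mu and standard deviation sigma."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    return (f_min - mu) * Phi + sigma * phi

if __name__ == "__main__":
    # Hypothetical kriging predictions (x, mean, std. error) at three candidates.
    candidates = [(0.2, 1.1, 0.4), (0.5, 0.9, 0.1), (0.8, 1.3, 0.8)]
    f_min = 1.0                                               # best observed value
    for x, mu, sigma in candidates:
        print(f"x = {x:.1f}: EI = {expected_improvement(mu, sigma, f_min):.4f}")
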