# Search Results (1 - 25 of 386 Results)


A Comparative Analysis of an Interior-point Method and a Sequential Quadratic Programming Method for the Markowitz Portfolio Management Problem
BA, Oberlin College, 2016, Mathematics
In this paper, I give a brief introduction to the general optimization problem and to the convex optimization problem. The portfolio selection problem, a typical convex optimization problem, can be solved in polynomial time. However, when the number of available stocks in the portfolio becomes large, the running times of different polynomial-time methods can differ significantly. In this paper, I perform a comparative analysis of two such methods and discuss their characteristics and differences.
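
As a concrete illustration of the portfolio selection problem, the two-asset minimum-variance case has a closed-form solution; the sketch below uses hypothetical variances and covariance, not data from the paper:

```python
# Two-asset Markowitz minimum-variance portfolio (hypothetical numbers).
# With weights w1 + w2 = 1, portfolio variance is minimized at
#   w1 = (var2 - cov12) / (var1 + var2 - 2*cov12).
def min_variance_weights(var1, var2, cov12):
    """Return (w1, w2) minimizing portfolio variance subject to w1 + w2 = 1."""
    w1 = (var2 - cov12) / (var1 + var2 - 2.0 * cov12)
    return w1, 1.0 - w1

def portfolio_variance(w1, w2, var1, var2, cov12):
    return w1 * w1 * var1 + w2 * w2 * var2 + 2.0 * w1 * w2 * cov12

# hypothetical assets: variances 0.04 and 0.09, covariance 0.01
w1, w2 = min_variance_weights(0.04, 0.09, 0.01)
```

For larger portfolios there is no such closed form, which is where the interior-point and sequential quadratic programming methods compared in the paper come in.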

#### Subjects:

Applied Mathematics; Industrial Engineering; Mathematics; Operations Research

#### Keywords:

optimization;interior-point method;portfolio optimization;convex optimization;sequential quadratic programming;

Master of Science in Electrical Engineering (MSEE), Wright State University, 2017, Electrical Engineering
For travelling over terrain that is both uneven and discontinuous, legged robots have an advantage over wheeled and tracked vehicles. The robot used in this thesis is a simulated hexapod with 3 degrees of freedom per leg. The main aim is to reduce the energy consumption of the system during walking by attaching a passive linear spring to each leg, which aids the motors and reduces the torque required while walking. First, the ideal stiffness and mounting coordinates of the spring are found using a gradient-based algorithm called Simultaneous Perturbation Stochastic Approximation (SPSA) on flat terrain, using data from a single walking step. Motor load is approximated by computing the torque impulse, the sum of the absolute value of the torque output for each joint during walking. Once the ideal spring and mount are found, the motor loading of the robot with the spring attached is observed and compared on three different terrains against the original loading without the spring. The analysis is performed on a single middle leg of the robot, which supports the highest load when the alternating tripod gait is used. The obtained spring stiffness and mounting locations are then applied to the other legs to compute the overall energy savings of the system. Through this work, the torque impulse was decreased by 14% on uneven terrain.
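
The SPSA idea, estimating a gradient from just two function evaluations per iteration via a random simultaneous perturbation, can be sketched as follows; the quadratic objective is a stand-in for the torque-impulse function, and the gain constants are common illustrative defaults, not the thesis's settings:

```python
import random

def spsa_minimize(f, x0, a=0.1, c=0.1, n_iter=2000, seed=0):
    """Simultaneous Perturbation Stochastic Approximation (SPSA):
    every coordinate is perturbed at once by a random +/-1 step, so the
    gradient estimate costs two evaluations regardless of dimension."""
    rng = random.Random(seed)
    x = list(x0)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                  # standard gain decay exponents
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in x]
        xp = [xi + ck * di for xi, di in zip(x, delta)]
        xm = [xi - ck * di for xi, di in zip(x, delta)]
        g = (f(xp) - f(xm)) / (2.0 * ck)     # scalar difference quotient
        x = [xi - ak * g / di for xi, di in zip(x, delta)]
    return x

# toy stand-in for the torque-impulse objective, minimum at (2, -1)
quad = lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2
xstar = spsa_minimize(quad, [0.0, 0.0])
```

In expectation each update step equals an ordinary gradient step, which is why SPSA behaves like stochastic gradient descent while needing no analytic gradient.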

#### Committee:

Luther Palmer, III., Ph.D. (Advisor); Zach Fuchs, Ph.D. (Committee Member); Xiaodong Zhang, Ph.D. (Committee Member)

#### Subjects:

Electrical Engineering; Robotics

#### Keywords:

Legged robots; Energy optimization in legged robots; Optimization using SPSA; Gradient based optimization; Spring placement on a hexapod; Energy cost; Torque distribution

Aero-Structural Optimization of a 5 MW Wind Turbine Rotor
Master of Science, The Ohio State University, 2012, Aero/Astro Engineering
A 5 MW wind turbine rotor blade based on the NREL 5 MW Reference Turbine is optimized for maximum efficiency and minimum flapwise hub bending moment. Eighty-three total design variables are considered, encompassing: airfoil shapes modeled by Bezier curves, defined near the root, mid-span, and tip; chord and twist distributions; and the amount of bend-twist coupling in the blade. Optimization is achieved with a genetic algorithm. A relatively new method requiring significantly less computation than finite element analysis is used to plan and predict the bend-twist coupling behavior of the rotor. Airfoil performance is predicted with XFOIL, and wind turbine simulations are performed in FAST. The objective function is cost of energy (COE), defined as rotor cost (\$) divided by AEP (MWh/yr), where AEP is annual energy production. Reductions in flapwise bending loads and blade surface area are assumed to correspond to decreases in rotor cost due to material savings. As a result of the optimization, hub flapwise bending loads and blade surface area are each reduced by about 15%, without any decrease in AEP, yielding a 6.8% reduction in COE.
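
A genetic algorithm of the kind used here can be sketched as follows; the operators (tournament selection, blend crossover, Gaussian mutation) and the toy two-variable objective are illustrative stand-ins for the actual 83-variable COE model:

```python
import random

def genetic_minimize(f, bounds, pop_size=30, n_gen=60, seed=1):
    """Tiny elitist genetic algorithm over real-valued design variables:
    tournament selection, blend crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(n_gen):
        nxt = [list(best)]                       # elitism: keep the best
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=f)  # tournament of 3
            p2 = min(rng.sample(pop, 3), key=f)
            w = rng.random()
            child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < 0.2:           # mutation rate
                    child[i] += rng.gauss(0.0, 0.1 * (hi - lo))
                child[i] = min(hi, max(lo, child[i]))
            nxt.append(child)
        pop = nxt
        best = min(pop, key=f)
    return best

# stand-in objective with known minimum at (0.5, 0.5); in the thesis the
# evaluation would instead call XFOIL/FAST to compute COE
obj = lambda v: (v[0] - 0.5) ** 2 + (v[1] - 0.5) ** 2
best = genetic_minimize(obj, [(0.0, 1.0), (0.0, 1.0)])
```

Because each fitness evaluation in the real problem involves aerodynamic simulation, population size and generation count dominate the computational budget.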

#### Committee:

Jack McNamara, PhD (Advisor); Mei Zhuang, PhD (Committee Member)

#### Subjects:

Aerospace Engineering

#### Keywords:

wind turbine optimization; bend-twist coupling; airfoil optimization; genetic algorithms; aerostructural optimization

Design of Thermal Structures using Topology Optimization
Doctor of Philosophy (PhD), Wright State University, 2014, Engineering PhD
The design of structures subjected to elevated temperature environments has long been an important area of study in the aerospace industry. This is especially true in the modern day, where new problems related to embedded-engine aircraft and high-temperature exhaust-washed structures present structural design challenges not found in past applications. In this work, the response of a class of thermal structures characterized by significant amounts of restrained expansion, to which exhaust-washed structures belong, is studied. To address the complex design challenges that become evident in these investigations, structural topology optimization is applied due to its unique ability to identify optimal material layout. Since conventional methods for topology optimization fail to generate effective designs in the presence of thermoelastic effects, new formulations for thermoelastic topology optimization are demonstrated. These include techniques for addressing the amount of reaction loading generated by a structural concept and methods for incorporating stress-based design criteria in topology optimization problems with design-dependent thermal loading. Taken together, the developments in this work provide a design technique in which stresses can be treated directly in thermal structures by identifying the proper arrangement of structural components in a thermal environment.

#### Committee:

Ramana Grandhi, Ph.D. (Advisor); Scott Thomas, Ph.D. (Committee Member); Ha-Rok Bae, Ph.D. (Committee Member); Jack McNamara, Ph.D. (Committee Member); Edward Alyanak, Ph.D. (Committee Member)

#### Subjects:

Aerospace Engineering; Mechanical Engineering

#### Keywords:

topology optimization; thermal structures; MDO; optimization; structural optimization

A Method for Simulation Optimization with Applications in Robust Process Design and Locating Supply Chain Operations
Doctor of Philosophy, The Ohio State University, 2002, Industrial and Systems Engineering

This dissertation contains the first proof of convergence of a genetic algorithm in the context of stochastic optimization. The class of stochastic optimization problems includes formulations in which the objective is an expected value, which can be evaluated using Monte Carlo methods. Growing computer power combined with methods presented here and elsewhere makes feasible the solution of many stochastic optimization problems with applications ranging from process design to facility location.

The dissertation also describes the proposed stochastic optimization method, which combines a sequential ranking and selection procedure with an elitist genetic algorithm. A batching procedure is included to ensure that batch means of solutions achieve approximate normality. Under the normality assumption, the proposed method is proven to converge in the long run, identifying and maintaining solutions with objective values within an acceptable difference, D, of the global optimum with probability greater than an acceptable probability, P*. Computational results illustrate that the proposed algorithm achieves promising performance compared with alternatives for a variety of problems with minimal changes.
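
The batching step can be sketched as follows, assuming the simulation output arrives as a flat list of (possibly correlated) observations; batch sizes and counts here are illustrative:

```python
def batch_means(samples, n_batches):
    """Group simulation output into contiguous batches and return the
    batch means. For large enough batches the means of correlated output
    are approximately normal, which is what the ranking-and-selection
    procedure relies on."""
    m = len(samples) // n_batches          # observations per batch
    return [sum(samples[i * m:(i + 1) * m]) / m for i in range(n_batches)]

means = batch_means(list(range(10)), 5)    # -> [0.5, 2.5, 4.5, 6.5, 8.5]
```

Selection procedures then compare candidate solutions using these batch means rather than raw, autocorrelated output.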

The first application is stochastic optimization for “robust” engineering process design decision making. By robust, we mean designs that maximize the expected utility, taking into account the variation of “noise factors”.

A methodology for robust process design is presented based on direct minimization of the expected loss in some cases using the proposed optimization heuristics. The proposed methods are compared with alternatives including methods based on Taguchi’s signal-to-noise ratios. Several formulations of the loss are explored. The method is illustrated through its application to the design of robotic gas metal arc-welding parameter settings.

The second application is a simulation optimization method applied to decision making about where to locate facilities and how to transport products in a supply chain. This problem is shown to be a stochastic generalized assignment problem for which a bound is presented. We also propose a genetic algorithm, for cases in which bounds are available, that offers the possibility of stopping while guaranteeing that a solution with objective value within an acceptable difference, Δ, of the optimal value is found with probability greater than P*.

#### Keywords:

Genetic Algorithms; Stochastic Optimization; Monte Carlo Methods; Time Non-homogenous Markov Process; Taguchi Methods; Signal-to-Noise Ratio; Parameter Design; Multicriterion Optimization; Global Supply Chain Modeling and Optimization; Facility Location

Ant Colony Optimization Technique to Solve Min-Max Multi-Depot Vehicle Routing Problem
MS, University of Cincinnati, 2011, Engineering and Applied Science: Mechanical Engineering
This research focuses on solving the min-max Multi-Depot Vehicle Routing Problem (MDVRP) using a swarm-intelligence algorithm called ant colony optimization. A traditional MDVRP tries to minimize the total distance travelled by all the vehicles to all customer locations. The min-max MDVRP, on the other hand, tries to minimize the maximum distance travelled by any single vehicle. The algorithm developed is an extension of the Single Depot Vehicle Routing Problem (SDVRP) algorithm developed by Bullnheimer et al. in 1997, which is based on ant colony optimization. In the SDVRP, all vehicles start from and return to a single depot, and the solution aims to find vehicle tours that visit every customer location exactly once while minimizing the total distance travelled. Building upon the SDVRP algorithm, this study first develops an algorithm for the min-max variant of the SDVRP, in which the maximum distance travelled by any vehicle is minimized. The algorithm is then extended to the multi-depot variant of the problem, in which vehicles can start from multiple depots and must return to their depot of origin after visiting a set of customer locations. The min-max multi-depot vehicle routing problem thus minimizes the maximum distance travelled by any vehicle when vehicles start from multiple depots and each customer location is visited at least once. This problem is of particular significance in time-critical applications such as emergency response in large-scale disasters and server-client network latency. The proposed algorithm uses an equitable region-partitioning approach that assigns customer locations to depots so that the MDVRP is reduced to SDVRPs. A background study on swarm-intelligence-based optimization techniques, region-partitioning methods, approximation algorithms, and other optimization techniques is included in this research.
The proposed method has been implemented in Matlab to obtain solutions for the min-max MDVRP with any number of vehicles and customer locations. A comparative study evaluates the proposed algorithm's performance against an algorithm currently available in the literature, in terms of solution quality and the time taken to reach the solution. Based on an extensive simulation study, the ant colony optimization technique proposed in this thesis is shown to produce better solutions than the existing method.
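
The core ant colony mechanics, tour construction with probability proportional to pheromone times inverse distance, then evaporation and reinforcement along the best tour, can be sketched for a single-vehicle tour; the parameters and the elitist deposit rule below are illustrative, not those of the thesis:

```python
import random

def aco_tour(dist, n_ants=20, n_iter=50, rho=0.5, seed=2):
    """Minimal ant colony optimization for one closed tour over all
    cities: ants pick the next city with probability proportional to
    pheromone / distance; pheromone evaporates each iteration and is
    deposited along the best tour found so far."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]
    tour_len = lambda t: sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            tour, left = [0], set(range(1, n))
            while left:
                i = tour[-1]
                ws = [(j, tau[i][j] / dist[i][j]) for j in left]
                r = rng.random() * sum(w for _, w in ws)
                for j, w in ws:          # roulette-wheel selection
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                left.remove(j)
            L = tour_len(tour)
            if L < best_len:
                best, best_len = tour, L
        for i in range(n):               # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for i in range(n):               # elitist deposit on best tour
            a, b = best[i], best[(i + 1) % n]
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best, best_len
```

The min-max multi-depot version replaces the single-tour objective with the longest tour across vehicles, after the region-partitioning step assigns customers to depots.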

#### Committee:

Manish Kumar, PhD (Committee Chair); Sundararaman Anand, PhD (Committee Member); Kelly Cohen, PhD (Committee Member)

#### Subjects:

Mechanics

#### Keywords:

Ant Colony Optimization;Min-Max Multi Depot Vehicle Routing Problem;Optimization Techniques;Vehicle Routing Problem;Min-Max Single Depot Vehicle Routing Problem;Combinatorial Optimization Problems;

Risk-Averse and Distributionally Robust Optimization: Methodology and Applications
Doctor of Philosophy, The Ohio State University, 2018, Industrial and Systems Engineering
Many decision-making problems arising in science, engineering, and business involve uncertainties. One way to address these problems is stochastic optimization. A crucial task when building stochastic optimization models is quantifying a probability distribution to represent the uncertainty. Most often, partial information about the uncertainty is available through historical data. In such circumstances, classical stochastic optimization models rely on approximating the underlying probability distribution. However, in many real-world applications, the underlying probability distribution cannot be accurately determined, even when historical data are available. This distributional ambiguity may lead to highly suboptimal decisions. An alternative approach is distributionally robust stochastic optimization (DRSO for short), which assumes the underlying probability distribution is unknown but lies in an ambiguity set of distributions. Many existing studies on DRSO focus on how to construct the ambiguity set and how to transform the resulting DRSO into equivalent (well-studied) models such as mixed-integer programming and semidefinite programming. This dissertation, however, addresses more fundamental questions, in a different manner than the literature. An overarching question that motivates most of this dissertation is: which scenarios/uncertainties are critical to a stochastic optimization problem? A major contribution of this dissertation is a precise mathematical definition of what is meant by a critical scenario and an investigation of how to identify them for DRSO. To the best of our knowledge, we are the first to introduce the notion of effective and ineffective scenarios for DRSO.
This dissertation considers DRSOs for which the ambiguity set contains all probability distributions that are not far---in the sense of the so-called total variation distance---from a nominal distribution (which may be obtained from data). It then identifies effective scenarios for two classes of DRSO problems formed via the total variation distance: (1) a class of convex stochastic optimization problems with a discrete sample space and (2) a class of inventory problems with a continuous sample space. Both classes of DRSO problems have equivalent risk-averse optimization problems that lay the foundation for identifying effective scenarios. We elaborate on how effective scenarios, along with other notions, can be used to choose an appropriate size for the ambiguity set of distributions. Then, we devise customized algorithms to solve DRSOs formed via the total variation distance. Moreover, we survey existing algorithms for a risk-averse optimization problem closely related to those induced by the studied DRSO problems, and we propose new variations. Finally, to highlight the practical relevance of our findings, we apply our modeling, theoretical, and computational results to problems arising in the environment, energy, healthcare, and finance.
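
A sketch of the DRSO formulation with a total variation ambiguity set, in generic notation (x the decision, ξ the random scenario, ε the ambiguity-set size, P̂ the nominal distribution on a discrete sample space Ω):

```latex
\min_{x \in X} \; \sup_{P \in \mathcal{P}} \; \mathbb{E}_{P}\bigl[ h(x,\xi) \bigr],
\qquad
\mathcal{P} = \bigl\{ P : d_{\mathrm{TV}}(P, \hat{P}) \le \epsilon \bigr\},
\qquad
d_{\mathrm{TV}}(P, \hat{P}) = \tfrac{1}{2} \sum_{\omega \in \Omega} \bigl| p_{\omega} - \hat{p}_{\omega} \bigr|.
```

The inner supremum is what makes the model robust: nature may reweight scenario probabilities within the total variation ball, and the effective scenarios studied in the dissertation are, informally, those whose removal changes this worst-case value.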

#### Committee:

Guzin Bayraksan, PhD (Advisor); Antonio Conejo, PhD (Committee Member); David Sivakoff, PhD (Committee Member)

#### Subjects:

Industrial Engineering; Operations Research

#### Keywords:

Decision-Making under Uncertainty, Mathematical Programming, Stochastic Optimization, Risk-Averse Optimization, Distributionally Robust Optimization

Analysis of Algorithms for Star Bicoloring and Related Problems
Doctor of Philosophy (PhD), Ohio University, 2015, Computer Science (Engineering and Technology)
This dissertation considers certain graph-theoretic combinatorial problems which have direct application to the efficient computation of derivative matrices (“Jacobians”) arising in many scientific computing applications. Specifically, we analyze algorithms for Star Bicoloring and establish several analytical results. We establish complexity-theoretic lower bounds on the approximability of algorithms for Star Bicoloring, showing that no such polynomial-time algorithm can achieve an approximation ratio of O(N^(1/3−ε)) for any ε > 0 unless P = NP. We establish the first algorithm (ASBC) for Star Bicoloring with a known approximation upper bound, showing that ASBC is an O(N^(2/3)) polynomial-time approximation algorithm. Extending these results, we design a generic framework for greedy Star Bicoloring and implement several specific methods for comparison. General analysis techniques are developed and applied both to algorithms from the literature (CDC, Hossain and Steihaug, 1998 [1]) and to those developed as part of the framework. We provide numerous approximability results, including the first approximation analysis for the CDC algorithm, showing that CDC is an O(N^(3/4)) approximation algorithm. Finally, we observe that all algorithms within this generic framework produce a restricted class of star bicolorings that we refer to as Distance-2 Independent Set Colorings (D2ISC). We establish the relationship between Star Bicoloring and D2ISC. In particular, we show that these two notions are not equivalent, that D2ISC is NP-complete, and that it cannot be approximated to within O(N^(1/3−ε)) for any ε > 0 unless P = NP.
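
The greedy flavor of these methods can be illustrated with a simpler relative: greedy one-sided column coloring for Jacobian compression (structurally orthogonal columns share a color), not the full bicoloring analyzed in the dissertation. The sparsity pattern encoding, one set of column indices per row, is an assumption of this sketch:

```python
def greedy_column_coloring(pattern):
    """Greedy coloring of Jacobian columns: two columns may share a color
    only if no row has nonzeros in both (structural orthogonality), so the
    Jacobian is recoverable from one directional derivative per color.
    `pattern` is a list of sets, one set of nonzero column indices per row."""
    n_cols = max((max(r, default=-1) for r in pattern), default=-1) + 1
    rows_of = [set() for _ in range(n_cols)]
    for i, r in enumerate(pattern):
        for j in r:
            rows_of[j].add(i)
    color = [-1] * n_cols
    for j in range(n_cols):
        used = {color[k] for k in range(j) if rows_of[j] & rows_of[k]}
        c = 0
        while c in used:                  # smallest color not in conflict
            c += 1
        color[j] = c
    return color

# tridiagonal-like pattern: 3 rows, 4 columns
colors = greedy_column_coloring([{0, 1}, {1, 2}, {2, 3}])
```

Here four columns compress into two colors, i.e. two finite-difference (or AD) evaluations instead of four; star bicoloring generalizes this by coloring rows and columns simultaneously.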

#### Committee:

David Juedes, Ph.D. (Advisor); Razvan Bunescu, Ph.D. (Committee Member); Frank Drews, Ph.D. (Committee Member); Cynthia Marling, Ph.D. (Committee Member); Sergio Lopez, Ph.D. (Committee Member); Howard Dewald, Ph.D. (Committee Member)

#### Subjects:

Applied Mathematics; Computer Science

#### Keywords:

star bicoloring; acyclic bicoloring; Jacobian matrix computation; approximation algorithms; greedy star bicoloring; Jacobian matrix optimization; greedy optimization methods

Novel Computational Methods for the Reliability Evaluation of Composite Power Systems using Computational Intelligence and High Performance Computing Techniques
Doctor of Philosophy in Engineering, University of Toledo, 2012, College of Engineering

The probabilistic reliability evaluation of power systems is a complex and highly dimensional problem that often requires a large amount of computational resources, particularly processing power and time. The complexity of this problem is only increasing with the advent of the smart grid and its accompanying technologies, such as plug-in hybrid electric vehicles (PHEVs). Such technologies, while they add convenience, intelligence, and reduce environmental impacts, also add dynamic and stochastic loads that challenge the current reliability and security of the power grid.

One method that is often used to evaluate the reliability of power systems is Monte Carlo simulation (MCS). As the complexity and dimensionality of a power system grows, MCS requires more and more resources leading to longer computational times. Multiple methods have previously been developed that aid in reducing the computational resources necessary for MCS in order to achieve a more efficient and timely convergence while continuing to accurately assess the reliability of a given system. Examples include analytical state space decomposition, population based metaheuristic algorithms (PBMs), and the use of high performance computing (HPC).

To address these issues, this dissertation focuses on improving the performance of algorithms used to examine the reliability of composite power systems through the use of computational intelligence (CI) and HPC, while also investigating the impact of PHEVs on the power grid at the composite and distribution levels. Contributions include: the development and exploration of three variations of a new hybrid algorithm, intelligent state space pruning (ISSP), which combines PBMs with non-sequential MCS to intelligently decompose, or prune, a given state space and improve computational efficiency; an evaluation of Latin hypercube sampling and low-discrepancy sequences in place of MCS; the use of serial and parallel support vector machines for state classification when evaluating power system reliability using MCS; an investigation of the impact that PHEVs will have when integrated into the power grid at the distribution level; and the development of a new model for the probabilistic evaluation of composite system reliability that includes one of the key smart grid technologies, PHEVs.
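
The non-sequential MCS baseline that these pruning and sampling methods accelerate can be sketched as follows; the three-unit system, outage rates, and load level are hypothetical:

```python
import random

def mcs_lolp(capacities, outage_probs, load, n_samples=20000, seed=3):
    """Non-sequential Monte Carlo reliability estimate: sample each
    generating unit up/down from its forced outage rate, independently
    per state, and count states whose surviving capacity cannot serve
    the load (loss-of-load probability, LOLP)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        avail = sum(c for c, q in zip(capacities, outage_probs)
                    if rng.random() >= q)     # unit survives with prob 1 - q
        if avail < load:
            failures += 1
    return failures / n_samples

# hypothetical 3 x 100 MW system, 5% outage rate each, 250 MW load:
# losing any single unit causes loss of load, so LOLP = 1 - 0.95**3
lolp = mcs_lolp([100, 100, 100], [0.05, 0.05, 0.05], load=250)
```

State space pruning attacks exactly this loop: most sampled states are trivially successful, and removing them from the sampling space concentrates effort on the rare failure states that drive the estimate's variance.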

#### Committee:

Lingfeng Wang, Ph.D. (Committee Chair); Mansoor Alam, Ph.D. (Committee Co-Chair); Jackson Carvalho, Ph.D. (Committee Member); Vijay Devabhaktuni, Ph.D. (Committee Member); Mohsin Jamali, Ph.D. (Committee Member); Weiqing Sun, Ph.D. (Committee Member)

#### Subjects:

Artificial Intelligence; Computer Science; Electrical Engineering

#### Keywords:

State space pruning; Monte Carlo simulation; Power system reliability; Plug-in hybrid electric vehicle; Ant colony optimization; Artificial immune system; Genetic algorithm; Particle swarm optimization; Latin hypercube sampling; Low discrepancy sequences

Compiler Techniques for Transformation Verification, Energy Efficiency and Cache Modeling
Doctor of Philosophy, The Ohio State University, 2018, Computer Science and Engineering
Performance has been the focus of computer systems for decades, from the era of Moore's Law to today's parallel computers. Compiler optimizations improve performance by generating code that uses hardware components (e.g., the cache) efficiently. However, modern systems, such as large-scale systems, require not only performance but also resilience and energy efficiency, a growing concern in both industry and academia. Errors within applications, especially those that escape detection and result in silent data corruption, are extremely problematic. Thus, to improve the resilience of applications, error detection and vulnerability characterization techniques are an important step toward fault-tolerant applications. Compiler transformations, which restructure programs to improve performance by leveraging data locality and parallelism, are often complex and may contain bugs that lead to errors in transformed programs. It is therefore essential to guarantee correctness; however, current approaches suffer from various problems, such as the limited set of transformations supported or their space complexity. This dissertation presents a novel approach that performs dynamic verification by inserting lightweight checker code to detect transformation errors. The errors, if any exist, are exposed by executing the checker-inserted transformed program. Energy efficiency is of increasing importance in scenarios ranging from battery-operated devices to data centers striving for lower energy costs. Dynamic voltage and frequency scaling (DVFS) adapts CPU power consumption by modifying processor frequency to improve energy efficiency. Typical DVFS approaches rely on default strategies, such as reacting to CPU load at runtime to adapt frequency, which have inherent limitations because of processor-specific and application-specific effects.
This dissertation develops a novel compile-time characterization to select the frequency and number of CPU cores to use, which provides significant additional benefits over the runtime approach. Cache memory, one of the most fundamental components of modern processors, has a significant impact on the performance of current computer systems. Compiler optimizations that aim to use the cache efficiently and reduce data movement are often based on very approximate cost models, due to the lack of precise modeling of the hierarchical cache. The challenge of accurately modeling cache misses has made trace-based simulation the method of choice. This dissertation takes a fundamentally different approach for polyhedral programs, developing a closed-form solution for modeling the misses of a set-associative cache by leveraging the power of polyhedral analysis. This solution can inform program transformation choices at compile time to optimize cache misses. In sum, the dissertation advances compiler technology to verify program transformations, reduce energy costs, and effectively model cache behavior.
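
For contrast with the closed-form polyhedral model, the trace-based simulation it replaces can be sketched as an LRU set-associative cache model; the geometry below (4 sets, 2-way, 16-byte lines) is illustrative:

```python
from collections import OrderedDict

def simulate_cache(addresses, n_sets=4, assoc=2, line_bytes=16):
    """Trace-based model of a set-associative LRU cache: each byte address
    maps to line addr // line_bytes, which maps to set line % n_sets; each
    set keeps the `assoc` most recently used lines. Returns miss count."""
    sets = [OrderedDict() for _ in range(n_sets)]
    misses = 0
    for addr in addresses:
        line = addr // line_bytes
        s = sets[line % n_sets]
        if line in s:
            s.move_to_end(line)           # LRU update on a hit
        else:
            misses += 1
            s[line] = True
            if len(s) > assoc:
                s.popitem(last=False)     # evict least recently used line
    return misses

# sequential scan of 64 4-byte elements: 16 distinct lines, 1 miss each
misses = simulate_cache([4 * i for i in range(64)])
```

This is accurate but requires replaying every access; the closed-form approach instead counts misses symbolically over the polyhedral iteration domain, without ever generating the trace.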

#### Committee:

Ponnuswamy Sadayappan (Advisor); Gagan Agrawal (Committee Member); Radu Teodorescu (Committee Member); Louis-Noel Pouchet (Committee Member); Sriram Krishnamoorthy (Committee Member)

#### Subjects:

Computer Science

#### Keywords:

compiler optimization; polyhedral compilation; program verification; energy optimization; DVFS; cache modeling; vulnerability analysis

A Design and Optimization Methodology for Multi-Variable Systems
Master of Science, The Ohio State University, 2015, Mechanical Engineering
The automotive industry is under pressure to improve the fuel economy of the average light-duty vehicle [1]. To meet the need for improved fuel efficiency, vehicle manufacturers are researching technologies that can improve the efficiency of the internal combustion engine. While many approaches are being investigated, exhaust waste heat recovery is now a viable option with today's technology. To produce the best possible waste heat recovery system, a design and optimization methodology is needed for multi-variable systems. This thesis proposes such a design and optimization methodology to assist in the application of advanced systems. The methodology was originally developed for an Organic Rankine Cycle exhaust gas heat recovery system for automotive applications. Several examples of utilizing this design methodology are included in this thesis.

#### Committee:

Marcello Canova, PhD (Advisor); Shawn Midlam-Mohler, PhD (Committee Member)

#### Subjects:

Mechanical Engineering

#### Keywords:

system; optimize; optimization; multi-variable; multivariable; rankine cycle; particle swarm optimization

Evolutionary Optimization for Safe Navigation of an Autonomous Robot in Cluttered Dynamic Unknown Environments
Master of Science in Electrical Engineering, Cleveland State University, 2018, Washkewicz College of Engineering
We present a path planning approach based on probabilistic methods for a robot navigating a cluttered, dynamic, unknown environment. The map contains both moving dynamic obstacles and static obstacles. The robot has no prior information about them, but must navigate the map from a known starting point and safely reach a known target point. The only prior information the robot has is the locations of the starting and target points; it relies on sensor data to learn about its surroundings. Our method is compared to the D* Lite algorithm and results are presented. In the last section, the parameters of the robot are optimized using biogeography-based optimization (BBO), an efficient multivariable optimizer, and the optimization is shown to significantly improve robot navigation performance. In this thesis, we show that evolutionary optimization methods like BBO can reduce both the risk of collision and the navigation time by about 25%. The resulting low risk of collision indicates safe navigation by the robot, which leads to the conclusion that this is a feasible method for real-world robots.
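
A minimal sketch of biogeography-based optimization, assuming rank-proportional immigration rates and a simple two-way tournament as a stand-in for the emigration roulette; the toy quadratic replaces the robot-navigation cost function:

```python
import random

def bbo_minimize(f, bounds, n_islands=20, n_gen=60, seed=4):
    """Minimal biogeography-based optimization (BBO): candidate solutions
    are 'islands' ranked by cost; poorly ranked islands immigrate decision
    variables from better-ranked ones, and a small mutation keeps diversity."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_islands)]
    for _ in range(n_gen):
        pop.sort(key=f)                        # best (lowest cost) first
        elite = [list(p) for p in pop[:2]]     # remember two elites
        for rank in range(n_islands):
            lam = rank / (n_islands - 1)       # immigration rate: worse -> higher
            for d in range(dim):
                if rng.random() < lam:
                    # emigration source: two-way tournament favoring good ranks
                    src = min(rng.randrange(n_islands), rng.randrange(n_islands))
                    pop[rank][d] = pop[src][d]
                if rng.random() < 0.05:        # mutation
                    lo, hi = bounds[d]
                    pop[rank][d] = rng.uniform(lo, hi)
        pop[-2:] = elite                       # elitism: restore the best two
    return min(pop, key=f)

# toy stand-in for the navigation cost, minimum at (-1, 2)
obj = lambda v: (v[0] + 1.0) ** 2 + (v[1] - 2.0) ** 2
best = bbo_minimize(obj, [(-5.0, 5.0), (-5.0, 5.0)])
```

Unlike crossover-based GAs, BBO migration copies whole decision variables between islands, which is why the rank-based immigration/emigration rates are its defining ingredient.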

#### Committee:

Dan Simon (Advisor); Lili Dong (Committee Member); Mohammad S. Shirazi (Committee Member)

#### Subjects:

Electrical Engineering

#### Keywords:

Evolutionary; Optimization; Safe Navigation; Robot; BBO; biogeography based optimization;

Topology Optimization of Engine Exhaust-Washed Structures
Doctor of Philosophy (PhD), Wright State University, 2006, Engineering PhD

#### Subjects:

Engineering, Mechanical

#### Keywords:

Optimization; Topology Optimization; Thermal Structures

Wind Turbine Airfoil Optimization by Particle Swarm Method
Master of Sciences, Case Western Reserve University, 2011, EMC - Mechanical Engineering

The two-dimensional shape of a wind turbine blade was optimized by means of particle swarm optimization. Following blade element theory, the lift coefficient Cl and drag coefficient Cd were used as objective functions. To compute the objective functions, the flow field around each airfoil was calculated with the Re-Normalization Group (RNG) k-ε model. Airfoil shapes were defined by a modified PARSEC method with 10 parameters.

Two optimization cases were conducted, with maximum thickness limited to 10% and 20% of the chord length, respectively. In both cases, the Reynolds number was set at 2.0×10^6, the design condition of the S809 airfoil. The S809 is a well-known wind turbine airfoil for which extensive experimental data are available. The angle of attack for the optimization was set at 5.13 deg., the mount angle of the S809. Non-dominated solutions obtained in this research were compared with the performance of the S809 at several angles of attack.

The results of the optimization showed that 1) the maximum thickness of an airfoil strongly influences its performance, and 2) the non-dominated solutions form a gradual trade-off, implying that many airfoil shapes could be considered optimal. The resulting shapes along this Pareto front showed higher performance than the existing blade section (i.e., the NREL S809) under certain conditions.
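
The particle swarm machinery behind the study can be sketched as follows; the inertia and acceleration coefficients are common textbook defaults, and the quadratic objective is a stand-in for the CFD-evaluated Cl/Cd objectives:

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iter=100, seed=5):
    """Basic particle swarm optimization: each particle's velocity blends
    inertia (0.7) with random pulls toward its personal best and the
    swarm's global best (cognitive/social weights 1.5)."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    gbest = min(pbest, key=f)
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                lo, hi = bounds[d]
                v[i][d] = (0.7 * v[i][d]
                           + 1.5 * rng.random() * (pbest[i][d] - x[i][d])
                           + 1.5 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            if f(x[i]) < f(pbest[i]):
                pbest[i] = list(x[i])
        gbest = min(pbest, key=f)
    return gbest

# stand-in objective (real design variables would be PARSEC parameters
# and the evaluation a CFD run), minimum at (0.3, 0.1)
obj = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.1) ** 2
best = pso_minimize(obj, [(0.0, 1.0), (0.0, 1.0)])
```

For the two-objective Cl/Cd problem in the thesis, this single-objective loop would be wrapped in a multi-objective scheme that maintains the non-dominated archive forming the Pareto front.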

#### Committee:

James S. T'ien, PhD (Committee Chair); Meng-Sing Liou, PhD (Committee Member); J. Iwan D. Alexander, PhD (Committee Member)

#### Subjects:

Aerospace Materials; Ecology; Energy; Engineering; Fluid Dynamics; Mechanical Engineering

#### Keywords:

wind; wind turbine; airfoil; optimization; particle swarm; particle swarm optimization; PSO; S809;PARSEC;

Transportation Route Optimization for the State of Ohio’s Inland Waterway System: A Case Study for Mid-Ohio River Valley Region
Master of Science (MS), Ohio University, 2015, Civil Engineering (Engineering and Technology)
Within the transportation-engineering field, transportation optimization is a paramount concept for minimizing costs and maximizing efficiency. This research focuses on the intermodal transportation environment involving truck and barge transportation modes, with particular emphasis on the Mid-Ohio River Valley Region. The study's primary objective was to optimize minor port locations within this region, using container-on-barge shipping to alleviate highway traffic congestion. To determine the optimal minor port locations, a mixed-integer programming (MIP) model was developed to minimize the transportation costs and the fixed costs associated with opening each potential port. In addition to the mixed-integer model, a new heuristic model was developed specifically for this problem. The heuristic model produced solutions similar to those of the mathematical model, allowing it to serve as a solution methodology for transportation route optimization. The model considered warehousing freight transferred from the major ports in Cincinnati, OH and Huntington, WV to the Heartland Corridor intermodal terminals of Belpre, OH, South Point, OH, and Wellsville, OH. The study considers various potential port locations with different capacities. The optimization results indicated that ports should be opened in three locations: Proctorville, OH, Ripley, OH, and Ironton, OH. Additionally, according to the sensitivity analysis, the Proctorville, OH minor port facility provided the highest total cost savings because of its critical location for intermodal transportation. This research is unique in developing and implementing optimization approaches to solve a real-life intermodal transportation problem in the Ohio River's Inland Waterway System.
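
The flavor of such a heuristic can be illustrated with a greedy add procedure for the underlying facility-location structure; the cost data below are hypothetical, and this is a sketch of the general idea rather than the thesis's specific heuristic:

```python
def greedy_port_selection(fixed_cost, ship_cost, demands):
    """Greedy add heuristic for an uncapacitated facility-location view of
    the port problem: repeatedly open the candidate port that most reduces
    total cost (fixed opening costs plus cheapest-port transport for each
    demand), stopping when no addition improves the cost."""
    n_ports = len(fixed_cost)
    open_ports = set()

    def total_cost(ports):
        if not ports:
            return float("inf")
        return (sum(fixed_cost[p] for p in ports)
                + sum(d * min(ship_cost[p][k] for p in ports)
                      for k, d in enumerate(demands)))

    while True:
        cur = total_cost(open_ports)
        cand = min((p for p in range(n_ports) if p not in open_ports),
                   key=lambda p: total_cost(open_ports | {p}),
                   default=None)
        if cand is None or total_cost(open_ports | {cand}) >= cur:
            return open_ports, cur
        open_ports.add(cand)

# hypothetical instance: 2 candidate ports, 2 demand points, unit demands;
# each port is cheap for one demand point, so opening both pays off
open_ports, cost = greedy_port_selection(
    [10.0, 10.0], [[1.0, 100.0], [100.0, 1.0]], [1.0, 1.0])
```

The MIP model would certify optimality of such a configuration; the heuristic's value, as in the thesis, is reaching comparable solutions much faster on larger instances.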

#### Committee:

Deborah McAvoy (Advisor); Naik Bhaven (Committee Member); Benjamin Sperry (Committee Member); Natalie Kruse (Committee Member)

#### Subjects:

Transportation; Transportation Planning

#### Keywords:

Intermodal Transportation Optimization; Barge and Truck Transportation; Transportation Optimization in Ohio River; Mixed Integer Mathematical Model; Heuristics

Generalizability of Predictive Performance Optimizer Predictions Across Learning Task Type
Master of Science (MS), Wright State University, 2016, Human Factors and Industrial/Organizational Psychology MS

#### Committee:

Gary Burns, Ph.D. (Advisor); Nathan Bowling, Ph.D. (Committee Member); Tiffany Jastrzembski, Ph.D. (Committee Member); Glenn Gunzelmann, Ph.D. (Committee Member)

#### Subjects:

Behavioral Sciences; Cognitive Psychology; Psychology

#### Keywords:

training optimization; cognitive model; predictive performance optimizer; task type; learning; forgetting; parameter generalizability; training optimization scheduling;

Price-Based Distributed Optimization in Large-Scale Networked Systems
PhD, University of Cincinnati, Engineering and Applied Science: Mechanical Engineering
This work develops distributed optimization methods for large-scale networked systems. Advances in technological fields such as networking, communication, and computing have enabled networks that are massively large-scale in nature. One of the important challenges in these networked systems is evaluating the optimal operating point of the system. The problem is challenging because of its high dimensionality, the distributed nature of resources, the lack of global information, and the dynamic operation of most of these systems. The inadequacy of traditional centralized optimization techniques in addressing these issues has prompted researchers to investigate distributed optimization techniques. This research focuses on carrying out global optimization in a distributed fashion by decomposing the overall optimization problem into sub-problems that exchange limited information over the network. Inspired by price-based mechanisms, the research develops two methods. First, a distributed optimization method combining dual decomposition with dual-variable updates in the subgradient direction is developed for several classes of resource allocation problems. Although this method is easy to implement, it has drawbacks. To address some of them, a Newton-based distributed interior-point optimization method is developed in this dissertation. The proposed approach, which is iterative, focuses on generating feasible solutions at each iteration and on mechanisms that demand less communication. The convergence and rate of convergence of both the primal and dual variables are analyzed on a benchmark Network Utility Maximization (NUM) problem, followed by numerical simulation results.
A comparative study between the proposed distributed and centralized optimization methods is also provided. The proposed distributed optimization techniques are applied to real-world systems such as optimal power allocation in the Smart Grid and utility maximization in cloud computing systems. Both problems belong to the class of large-scale complex network problems. In power grids, the challenges are compounded by the nature of the decision variables, coupling effects in the network, global system constraints, the uncertainty of renewable power generation, and the large-scale distributed nature of the problem. In cloud computing, resources such as memory, processing, and bandwidth must be allocated to a large number of users to maximize the users' quality of experience. Finally, the research develops a stochastic distributed optimization method for problems with multi-modal cost functions. Unlike in the unimodal case, widely practiced gradient descent methods fail to reach the global optimum when multi-modal cost functions are considered. This dissertation develops a stochastic distributed optimization method that exploits noise-based solution updates to keep the algorithm from converging to local optima. The method is applied to the Network Utility Maximization problem with multi-modal cost functions and is compared with a Genetic Algorithm.
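The first method, dual decomposition with subgradient price updates, can be sketched on a one-link NUM toy problem. The weights, capacity, initial price, and step size below are illustrative, not taken from the dissertation:

```python
# One shared link of capacity C, three users with log utilities w_i*log(x_i).
w = [1.0, 2.0, 3.0]
C = 6.0
lam = 0.5     # dual variable: the "price" of link capacity
step = 0.05

for _ in range(2000):
    # Dual decomposition: each user separately maximizes
    # w_i*log(x_i) - lam*x_i, whose solution is x_i = w_i / lam.
    x = [wi / lam for wi in w]
    # The link raises the price when demand exceeds capacity and lowers
    # it otherwise: a subgradient step on the dual variable.
    lam = max(1e-6, lam + step * (sum(x) - C))

# At the optimum the capacity is split in proportion to the weights.
print([round(xi, 2) for xi in x])
```

Each user's subproblem needs only the current price, not the other users' rates, which is what makes the scheme distributed; the communication-heaviness of repeated price broadcasts is one of the drawbacks the Newton-based method targets.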

#### Committee:

Manish Kumar, Ph.D. (Committee Chair); Sundararaman Anand, Ph.D. (Committee Member); Kelly Cohen, Ph.D. (Committee Member); David Thompson, Ph.D. (Committee Member)

#### Subjects:

Mechanics

#### Keywords:

Distributed Optimization;Mathematical Optimization;Multi-Agent Systems;Market-Based Methods;Large-Scale Networked Systems

CONCURRENT LINEAR OPTIMIZATION MODEL FOR DESIGN AND MANUFACTURING TOLERANCES WITH PROCESS AND MACHINE SELECTION INCORPORATING SCRAP RATES AND MACHINE BREAKDOWN
MS, University of Cincinnati, 2002, Engineering : Industrial Engineering
Tolerance is a key design tool. Allocating correct tolerances is very important, as it directly affects the cost, functionality, and quality of the product. Within process planning, some work has dealt with optimizing design and manufacturing tolerances together with process (and machine) selection. These works formulate the problem as a non-linear cost optimization model and solve it with intelligent search techniques such as genetic algorithms. The current work formulates a model for the concurrent optimization of design and machining tolerances while taking shop-floor parameters such as machine loading, machine breakdown, and process scrap rates into account. Piecewise-linear cost-tolerance and time-tolerance curves are used to formulate the problem as a Mixed Integer Program. Stochastic shop-floor parameters such as machine breakdown and process scrap rates are included in the model. These enhancements can be viewed as a distinct positive step toward integrating the process planning and scheduling functions on the shop floor. The linear optimization model is solved using the branch-and-bound method. Moreover, using the standard branch-and-bound algorithm allows the structure of the problem to be analyzed through a sensitivity study.
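The piecewise-linear cost-tolerance curves that keep the model linear amount to interpolation between breakpoints. A minimal sketch, with made-up breakpoint data:

```python
# Hypothetical cost-tolerance breakpoints: (tolerance, cost per part).
# Cost falls as the allowed tolerance loosens.
breakpoints = [(0.01, 100.0), (0.02, 60.0), (0.05, 30.0), (0.10, 20.0)]

def cost(tol):
    """Interpolate machining cost linearly between adjacent breakpoints."""
    if tol <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, c0), (t1, c1) in zip(breakpoints, breakpoints[1:]):
        if tol <= t1:
            return c0 + (c1 - c0) * (tol - t0) / (t1 - t0)
    return breakpoints[-1][1]  # looser than the last breakpoint

print(cost(0.035))  # about 45: halfway along the 0.02-0.05 segment
```

In the MIP itself, each linear segment gets a binary selection variable so a standard branch-and-bound solver can handle the curve, which is what enables the sensitivity study.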

#### Subjects:

Engineering, Industrial

#### Keywords:

concurrent optimization of tolerances; linear optimization; process planning with process & machine selection; machine breakdown; scrap rates of processes

THREE ESSAYS ON PRODUCTION AND INVENTORY MANAGEMENT

This dissertation consists of three essays that address issues in production and inventory management.

The first essay focuses on inventory management. We study a fixed-reorder-interval, order-up-to (R, nT) inventory replenishment policy in a two-stage serial system with stochastic demand at the lower stage. We develop a simulation-based optimization procedure to estimate the long-run average cost and optimal parameter values. The numerical results show that the (R, nT) policy is, on average, 4.4% more expensive than the continuous-review (r, nQ) policy (5.8% more than its lower bounds). The cost difference is much smaller when the setup cost at the upstream stage and the demand rate are larger. The (R, nT) costs are relatively insensitive to the choice of reorder interval, T, provided the best corresponding order-up-to level, R, is selected.
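A single-stage sketch of the simulation-based search looks as follows; the cost rates and demand distribution are illustrative, lead time is zero, and the upstream stage is omitted entirely:

```python
import random

# Every T periods the inventory position is raised to the order-up-to
# level R; we estimate long-run average holding + backorder cost.
def avg_cost(R, T=4, periods=20_000, h=1.0, b=9.0, seed=0):
    rng = random.Random(seed)
    inv, total = R, 0.0
    for t in range(periods):
        if t % T == 0:
            inv = R                   # periodic review: order up to R
        inv -= rng.randint(0, 4)      # stochastic demand, mean 2 per period
        total += h * max(inv, 0) + b * max(-inv, 0)
    return total / periods

# A grid search over R stands in for the optimization procedure.
best_R = min(range(5, 25), key=avg_cost)
print(best_R, round(avg_cost(best_R), 2))
```

The two-stage serial system in the essay adds an upstream echelon and nests T as a multiple of a base period, but the estimate-then-search structure is the same.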

The second essay deals with production scheduling. We consider the computationally hard, re-entrant-flow cyclic scheduling problem studied by Graves et al. (1983) and Roundy (1992). We present two problem formulations to minimize job flow time (work-in-process) given a target cycle length (throughput). We describe an efficient optimization method and a new ImproveAlignment (IA) heuristic. Numerical experiments indicate that the proposed optimization method was significantly faster than CPLEX-8.0 and solved 40% more test instances to optimality within the specified run-time and memory limits. The proposed IA heuristic quickly produced solutions that were, on average, (i) 22% better than those from the heuristic of Graves et al. and (ii) within 14% of optimal.

The third essay focuses on resource planning. We examine a single end-product, discrete-time inventory replenishment problem in a material requirements planning (MRP) environment with demand uncertainty and supply capacity limits on replenishment orders. We develop a simulation-based optimization approach and two novel heuristics. We also evaluate the traditional MRP and safety stock approaches for this problem. Computational experiments show that the two novel heuristics perform very well (on average within 0.06% and 0.66% of optimal, respectively); the traditional MRP and safety stock approaches incur costs that are, on average, 45% and 12.05% above optimal, respectively. We also provide managerial insights on the effects of different input factors.

#### Keywords:

Multi-Echelon Inventory; Stochastic Demand; Heuristics; Periodic-Review; Cyclic Scheduling; Flow-Time Optimization; Binary Integer Programming; Material Requirements Planning; Demand Uncertainty; Simulation based optimization

An Optimization-Based Treatment Planner for Gamma Knife Radiosurgery
Doctor of Philosophy, Case Western Reserve University, 2005, Systems and Control Engineering
This research addresses the planning of Gamma Knife radiotherapy, which is an alternative to surgery for treating a variety of brain abnormalities. The principal aim of this work is to develop an automated planning system that makes it simpler, less time-consuming, and hopefully more effective for clinical personnel to develop treatment plans. Currently, treatment planning is a time-consuming task that involves an iterative process of shot selection, placement, and adjustment. Our goal is to replace the iterative part of the planning with an optimization-based real-time planner. Our strategy is to: (1) automate initial shot selection and placement using a combined process of skeletonization and bin-covering, (2) optimize the exposure time for each shot to improve target coverage while minimizing toxicity to the surrounding tissues, and (3) fine-tune the shot configuration by adjusting shot locations and adding or deleting shots to further improve the balance between target coverage and normal tissue toxicity. The efficiency and effectiveness of our approach derive from (1) the use of skeletonization and bin-covering to provide a good starting point for the development of the plan, (2) the easy-to-solve linear fractional program that explicitly accounts for the dual objectives of maximizing target coverage while minimizing toxicity, at the same time accounting for dose renormalization, and (3) the fine-tuning step that explicitly accounts for shot overlapping, dose renormalization, and target shape, making it possible to determine hot spots and estimate the effects of shot movement, addition, and/or deletion. The planning system has been implemented on a Windows-based platform and tested using clinical cases from the standard Gamma Knife treatment model as well as the automatic positioning system (APS) model. The planner consistently produces, in 1-2 minutes, plans with dose conformity comparable to manual plans normally created in 1-4 hours.

#### Subjects:

Engineering, System Science

#### Keywords:

Optimization; Gamma Knife; Multi-objective optimization; Fractional Programming; Mixed-Integer Nonlinear Programming; Bin-Covering; Re-normalization; Shot overlapping

A Sequential Design for Approximating the Pareto Front using the Expected Pareto Improvement Function
Doctor of Philosophy, The Ohio State University, 2009, Statistics
We propose a methodology for the simultaneous optimization of multiple goal functions evaluated by a numerically intensive computer model. In a black box multiobjective problem, the goal is to identify a set of compromise solutions that provide a minimally sufficient representation of the Pareto front in the most efficient manner. To reduce the computational overhead, we adopt a surrogate-guided approach where we perform optimization sequentially via improvement. Our algorithm relies on a multivariate Gaussian process emulator which uses a novel multiobjective improvement criterion called the expected Pareto improvement function to guide the sampling of points in the Pareto efficient region. We show that the algorithm is capable of approximating the Pareto front within a computational budget.
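The end product of such a search is a non-dominated set. A minimal sketch of extracting the Pareto front (minimizing both objectives) from a batch of evaluated points, with arbitrary sample data:

```python
# Extract the non-dominated (Pareto) set from a list of evaluated
# objective vectors, minimizing every objective.
def pareto_front(points):
    """A point is kept unless some other point is <= in every objective."""
    front = []
    for p in points:
        dominated = any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 3.0)]
print(pareto_front(pts))  # (3.0, 4.0) and (2.5, 3.0) are dominated
```

The expected Pareto improvement criterion in the dissertation goes further: it uses the Gaussian process emulator's predictive distribution to score candidate inputs by how much they are expected to extend this front before running the expensive simulation.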

#### Committee:

Thomas Santner, PhD (Advisor); Peter Craigmile, PhD (Committee Member); William Notz, PhD (Committee Member)

#### Subjects:

Statistics

#### Keywords:

Multiobjective optimization; simulation-based optimization; Pareto optimality; multivariate emulation; Gaussian process; expected improvement

Inverse Modeling: Theory and Engineering Examples
Master of Science, University of Toledo, 2015, Mechanical Engineering
Over the last two decades, inverse problems have become increasingly popular due to their widespread applications. This popularity continually demands that designers find alternative methods for solving inverse problems that are both efficient and accurate. Using techniques that are highly accurate and of low computational cost is of the highest priority. This thesis presents a method for solving inverse problems through Artificial Neural Network (ANN) theory. It also presents a method for applying the Grey Wolf Optimizer (GWO) algorithm, a recent optimization method that has demonstrated strong results, to inverse problems. Both methods are then compared with traditional methods such as Particle Swarm Optimization (PSO) and Markov Chain Monte Carlo (MCMC). Four classical engineering design problems are used to compare the four methods' performance. The results from the engineering design problems show that the GWO outperforms the other methods in terms of efficiency and accuracy. The error is comparable between the proposed ANN method and the PSO method, while the latter has better computational efficiency.
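A minimal GWO sketch on a sphere test function conveys the idea: the three best wolves (alpha, beta, delta) guide the rest, and an exploration coefficient decays over the iterations. The population size, bounds, and iteration count here are arbitrary choices, not the thesis's settings:

```python
import random

def gwo(f, dim=2, n_wolves=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best = min(wolves, key=f)[:]
    for t in range(iters):
        wolves.sort(key=f)
        if f(wolves[0]) < f(best):
            best = wolves[0][:]          # keep the best solution seen so far
        # Copy the three leaders so mid-sweep moves do not change them.
        alpha, beta, delta = (w[:] for w in wolves[:3])
        a = 2.0 * (1 - t / iters)        # exploration coefficient decays 2 -> 0
        for w in wolves:
            for d in range(dim):
                x = 0.0
                # Each wolf moves toward a blend of the leaders' positions.
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3.0))
    return min(wolves + [best], key=f)

best = gwo(lambda x: sum(xi * xi for xi in x))   # minimize a 2-D sphere
print(best)
```

For the inverse problems in the thesis, `f` would instead measure the mismatch between the model's output at a candidate input and the observed response.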

#### Committee:

Efstratios Nikolaidis, PhD (Committee Chair); Vijay Devabhaktuni, PhD (Committee Co-Chair); Matthew Franchetti, PhD (Committee Member); Mehdi Pourazady, PhD (Committee Member)

#### Subjects:

Mechanical Engineering

#### Keywords:

Inverse Problems; Artificial Neural Networks; Grey Wolf Optimizer; Particle Swarm Optimization; Multi-Modal Optimization

A Metamodel based Multiple Criteria Optimization via Simulation Method for Polymer Processing
Doctor of Philosophy, The Ohio State University, 2012, Industrial and Systems Engineering

Polymers have been gradually replacing metals in many applications due to their versatility. Nowadays, many consumer products, such as computers and automobile components, rely on the technology and production of polymer companies. To ensure the sustainability of these companies, it is important to design reliable processes. Advanced computer simulation tools have been developed to analyze and improve the processing of plastic parts. Yet the difficulty of optimizing polymer processes is that the performance measures (objectives) involved usually conflict: the best processing conditions for one performance measure are usually not the best for others. This thesis proposes an Optimization via Simulation method that considers multiple performance measures simultaneously. The method approximates a set of Pareto solutions without having to evaluate a large number of simulations. To accomplish this, design of experiments, metamodeling, Data Envelopment Analysis, and Pareto optimality are combined. The method is first tested on well-known multiobjective optimization test problems and then illustrated with several injection molding case studies. It is also compared with an alternative metamodel-based multiple-criteria simulation optimization method. The performance of the method is evaluated in light of the quality of the obtained Pareto frontier as well as the number of simulation runs required to obtain it. The quality of the approximated Pareto frontier is assessed by the percentage of solutions dominated by the frontier, using the hypervolume indicator.

In addition to the test problems and the simulation-based case studies, we applied the optimization method to two case studies where only experimental data was used. In these cases we molded the parts under study and evaluated the values of the performance measures from the actual molded parts.

At the end, we present some preliminary ideas on how to build process windows. A process window is the range of the controllable variables at which one needs to operate the process to obtain the best compromises between the performance measures.

#### Committee:

Jose Castro, PhD (Advisor); Simge Kucukyavuz, PhD (Committee Member); Thomas Santner, PhD (Committee Member); Mauricio Cabrera-Rios, PhD (Committee Member)

#### Subjects:

Industrial Engineering; Polymers

#### Keywords:

simulation optimization;multiple criteria optimization; injection molding

Distance-Based Optimization of 48V Mild-Hybrid Electric Vehicle
Master of Science, The Ohio State University, 2018, Mechanical Engineering
This thesis discusses the implementation of a Dynamic Programming algorithm that solves the velocity and powertrain control optimization of a 48V mild-HEV over multiple custom driving scenarios. The results are compared to a baseline forward-looking vehicle model by observing the effects of both optimal velocity and optimal energy management. The biggest differences in energy management occur during regeneration events: when the road ahead is known, most energy regeneration happens at stopping events, whereas the more conservative online controller uses the engine to regenerate the battery throughout the route. The computation time of the Dynamic Programming optimization is found to be tenfold the actual travel time, so it could not be implemented as a real-time controller as is. A method is implemented that reduces the system's control inputs to a single one by handling the torque split inside the Dynamic Programming with an Equivalent Consumption Minimization Strategy. The results are slightly sub-optimal, and some change in the velocity optimization is observed. With this new method, the computation time is halved.
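The distance-based DP idea can be sketched on a toy problem: choose a speed at each discretized distance step to minimize a fuel proxy under a trip-time budget. All numbers below (speeds, fuel model, budget) are illustrative, not the thesis's vehicle model:

```python
from functools import lru_cache

# Hypothetical setup: a 1 km trip split into ten 100 m steps, a choice of
# speed per step, a quadratic fuel proxy, and a 70 s trip-time budget.
speeds = (10.0, 20.0, 25.0)                 # m/s options at each step
fuel = {v: 0.02 * v * v for v in speeds}    # fuel per step grows with speed
n_steps, time_budget = 10, 70.0

@lru_cache(maxsize=None)
def solve(step=0, t_left=time_budget):
    """Minimum total fuel from `step` onward with `t_left` seconds remaining."""
    if step == n_steps:
        return 0.0
    best = float("inf")
    for v in speeds:
        dt = 100.0 / v                      # time to cover one step at speed v
        if dt <= t_left:                    # prune choices that blow the budget
            best = min(best, fuel[v] + solve(step + 1, t_left - dt))
    return best

print(solve())  # the DP trades slow, cheap steps against the time budget
```

The thesis's state additionally carries battery state of charge and powertrain mode, which is exactly what makes the full problem expensive and motivates collapsing the torque split into an ECMS inner step.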

#### Committee:

Marcello Canova, Dr. (Advisor); Giorgio Rizzoni, Dr. (Committee Member); Levent Guvenc, Dr. (Committee Member)

#### Subjects:

Mechanical Engineering

#### Keywords:

Mild-HEV;Dynamic Programming; ECMS; Velocity Optimization; Powertrain Controls Optimization

LARGE SCALE LINEAR OPTIMIZATION FOR WIRELESS COMMUNICATION SYSTEMS
Master of Science, The Ohio State University, 2017, Mathematics
Linear programming has many applications in the domain of wireless communication. Many problems in this field involve a very large number of variables and constraints and therefore fit the framework of large-scale linear programming. Advances in computing over the past decade allow us to routinely solve linear programs in thousands of variables and constraints using specialized methods from large-scale linear programming. Many software packages implement such methods, e.g. AMPL, GAMS, and MATLAB. This dissertation gives a concise survey of linear programming fundamentals with a focus on techniques for large-scale linear programming problems in the context of wireless communication. The dissertation explains some of these techniques, in particular the delayed column generation method and the decomposition method, and draws on examples from the active field of wireless communication. It concludes with concrete examples of how to use various software packages to solve large-scale linear programming problems stemming from our examples in wireless communication.

#### Committee:

Ghaith Hiary (Advisor); Facundo Memoli (Committee Member)

#### Subjects:

Mathematics

#### Keywords:

Linear Optimization; Wireless Communication; Linear Programming; Large Scale Optimization; AMPL; GAMS; MATLAB