Search Results

(Total results 628)

  • 1. Muralidharan, Vaishali Logic Encryption Using Dynamic Keys

    MS, University of Cincinnati, 2020, Engineering and Applied Science: Computer Engineering

    Many logic encryption methods have been proposed for thwarting supply chain threats. Logic locking techniques can hide the design functionality from miscreants along the VLSI supply chain and hence thwart IC reverse engineering as well as other attacks. Traditional logic encryption strategies lock the original design with key inputs (known only to the designer), which prevent the circuit from functioning as required unless the correct key values are applied. Applying incorrect key values leads to functional corruption of the circuit outputs. These encryption strategies were susceptible to a Boolean satisfiability based attack (abbreviated as SAT attack), which could iteratively eliminate incorrect classes of keys to quickly discover the correct key values. Subsequently, encryption schemes based on point functions and cyclic methods were introduced to defeat the SAT attack. Recently, logic encryption for sequential circuits has been proposed, utilizing the dynamic nature of sequential circuits to increase security. However, such schemes may be susceptible to sequential satisfiability or incremental bounded model checking based attacks. In this thesis, a novel sequential logic locking technique is proposed that necessitates the application of dynamically changing key values for the correct operation of the locked design. This technique works contrary to previous methods, which use fixed key values to corrupt the payload. Applying a logic value from the correct dynamic key sequence as a static key would still corrupt the signal in the circuit, forcing the attacker to spend more time finding the complete dynamic sequence for larger circuits. A technique to generate dynamic keys from traditional static memory using concepts based on pseudo-random generators (Linear Feedback Shift Registers and Cellular Automata) is presented in this thesis.
Various sequential circuits are obfuscated to evaluate the effectiveness of this scheme against oracle-based (open full item for complete abstract)

    Committee: Ranganadha Vemuri Ph.D. (Committee Chair); Wen-Ben Jone Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Engineering
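
The dynamic key generation described in the abstract above can be sketched with a small Fibonacci LFSR. The width, seed, and tap positions here are illustrative assumptions, not values from the thesis.

```python
# Sketch of a dynamic key stream generated from a Linear Feedback Shift
# Register (LFSR), one of the pseudo-random generators the abstract mentions.
# Width, taps, and seed are invented for illustration.

def lfsr_keystream(seed, taps, width, n_steps):
    """Yield n_steps key words from a Fibonacci LFSR of the given width."""
    state = seed & ((1 << width) - 1)
    keys = []
    for _ in range(n_steps):
        keys.append(state)
        # XOR the tapped bits to form the feedback bit.
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return keys

keys = lfsr_keystream(seed=0b1011, taps=(3, 2), width=4, n_steps=8)
```

Applying any single word of `keys` statically would still corrupt the locked design; correct operation requires the whole sequence, which is the property the thesis exploits.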
  • 2. Kim, Woojin The Persistent Topology of Dynamic Data

    Doctor of Philosophy, The Ohio State University, 2020, Mathematics

    We refine the theoretical foundations of Topological Data Analysis (TDA) for the multiscale analysis of dynamic topology, such as dynamic metric spaces or dynamic networks. Motivations include, but are not limited to, the characterization of flocking or swarming behavior of animals and social networks in the human sphere. In order to quantify the differences of such dynamics, we also generalize the Gromov-Hausdorff distance. We not only examine the resulting metric geometry, but also find practical algorithms for approximating those novel distances. To establish our results, we primarily exploit concepts from algebraic topology, metric geometry, combinatorics, and category theory, blending these with ideas of persistence in TDA. More specifically, the main achievements of this thesis include (a) The development of stable and informative invariants that encode spatiotemporal topological features of dynamic metric spaces and dynamic networks, (b) The establishment of a comparison framework for dynamic metric spaces or dynamic networks by extending the Gromov-Hausdorff distance or the Gromov-Wasserstein distance, (c) Generalization of the erosion distance by Patel for quantifying within polynomial time the difference between dynamic metric spaces or more generally multiparameter persistence modules, and (d) Extension of the notion of persistence diagram for summarizing persistence modules over posets, which often arise from dynamic data, from a standpoint of combinatorics and category theory.

    Committee: Facundo Mémoli (Advisor) Subjects: Mathematics
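
As a toy illustration of the Gromov-Hausdorff comparisons mentioned above, the classical lower bound d_GH(X, Y) >= |diam(X) - diam(Y)| / 2 can be computed for two small point sets; the sets themselves are hypothetical.

```python
# Minimal illustration of one classical lower bound on the Gromov-Hausdorff
# distance between finite metric spaces:
#   d_GH(X, Y) >= |diam(X) - diam(Y)| / 2
# The point sets below are invented examples.
import itertools
import math

def diameter(points):
    """Largest pairwise Euclidean distance in a finite point set."""
    return max(math.dist(p, q) for p, q in itertools.combinations(points, 2))

X = [(0, 0), (1, 0), (0, 1)]
Y = [(0, 0), (3, 0)]
gh_lower_bound = abs(diameter(X) - diameter(Y)) / 2
```

Computing d_GH itself is hard in general, which is why the thesis develops approximable variants; cheap bounds like this one are the usual starting point.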
  • 3. Jankovsky, Zachary Development of Computational and Data Processing Tools for ADAPT to Assist Dynamic Probabilistic Risk Assessment

    Doctor of Philosophy, The Ohio State University, 2018, Nuclear Engineering

    Dynamic Probabilistic Risk Assessment (DPRA) methodologies are those that explicitly account for time when modeling the interactions between elements of a system. DPRA methodologies, and the Dynamic Event Tree (DET) methodology specifically, can offer significant benefits over traditional Probabilistic Risk Assessment (PRA) for certain systems and transients. The introduction of time-dependence allows uncertainties to be resolved in the ordering of events and in the difference in impact between an event occurring earlier versus later in the transient. A DET may be generated in a more mechanistic way than a traditional event tree, as events are added according to system conditions represented by the system simulator rather than by the judgment of the analyst. However, challenges remain both in adequately addressing the relevant physical phenomena and in the analysis of results. The goal of this work is to advance the maturity of the ADAPT DET driver by introducing new computational and data processing tools which allow more advanced insights to be gleaned from DPRA. The first new tool is a platform for combining multiple simulators to generate a single DET in ADAPT. The platform allows for more detailed handling of complex phenomena, since a more general code can be run for the entire transient, or separate codes can handle different temporal phases of a transient under a single DET. While DETs have been produced using multiple simulators before, this new approach allows any number of simulators to be arranged in a desired transition scheme. In a case study, an overall pressurized water reactor Interfacing Systems Loss of Coolant Accident (ISLOCA) progression was tracked using MELCOR while specific uncertainties of radiation dose to operators taking action within a building were resolved using RADTRAD. The second tool addresses the current lack of tools to gather insights from DETs.
Traditional Importance Measures (IMs) are not ideal for dyn (open full item for complete abstract)

    Committee: Tunc Aldemir PhD (Advisor); Carol Smidts PhD (Committee Member); Marat Khafizov PhD (Committee Member); Matthew Denman PhD (Committee Member) Subjects: Nuclear Engineering
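
The dynamic event tree generation the abstract describes can be caricatured as splitting each scenario's probability at branching points and pruning negligible branches. The events, probabilities, and truncation threshold below are invented.

```python
# Toy sketch of dynamic event tree (DET) expansion: each scenario carries a
# probability; branching points split it among outcomes; branches below a
# truncation probability are pruned.  Events and probabilities are made up.

def expand_det(branchings, truncation=1e-3):
    """branchings: list of {outcome: probability} dicts applied in sequence."""
    scenarios = [((), 1.0)]
    for split in branchings:
        new = []
        for path, p in scenarios:
            for outcome, q in split.items():
                pq = p * q
                if pq >= truncation:          # prune negligible branches
                    new.append((path + (outcome,), pq))
        scenarios = new
    return scenarios

dets = expand_det([{"valve_opens": 0.9, "valve_stuck": 0.1},
                   {"pump_on": 0.5, "pump_off": 0.5}])
```

In a real DET driver such as ADAPT, the branchings come from simulator state rather than a fixed list, but the bookkeeping of path probabilities is the same.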
  • 4. Regatti, Jayanth Dynamic Routing for Fuel Optimization in Autonomous Vehicles

    Master of Science, The Ohio State University, 2018, Electrical and Computer Engineering

    We consider the problem of minimizing fuel consumption in autonomous vehicles and pose it from the point of view of developing algorithms to route the vehicle along a fuel-efficient path in the presence of complete information. We first adopt a dynamic model of traffic flow and, using this model, formulate the optimization problem as a stochastic shortest path problem in an infinite-horizon general-state MDP. The special structure of the problem enables us to prove the existence of optimal and epsilon-optimal policies. In addition, we developed a software simulator, CATS, that can aid research in autonomous transportation and cyber-security. As a microscopic traffic simulator, CATS offers several advantages in testing algorithms and logging fine-grained details in a simulation environment that mimics a real traffic scenario. We currently support features such as fully connected V2V and V2X communication, traffic light switching, and varying driver behaviors. In addition to offering what a traditional traffic simulator offers, CATS can be customized in many ways, such as altering the driving algorithm of the vehicles. This software can be used as a Monte Carlo simulator to train a vehicle to learn driving policies, as well as to train a traffic controller to learn optimal switching policies. CATS can also aid game-theoretic research in cybersecurity due to its V2V and V2X modules, and can be integrated with modern machine learning frameworks, thereby making deployment of machine learning algorithms more accessible to transportation and connected vehicle researchers. The CATS API, developed in Python, provides an easy-to-use interface that lets researchers get started with minimal programming knowledge. This software is currently under development and is also envisioned as an educational tool to introduce autonomous vehicles in classrooms.

    Committee: Abhishek Gupta (Advisor); Levent Guvenc (Committee Member) Subjects: Electrical Engineering; Operations Research; Transportation Planning
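
A stochastic shortest path problem of the kind posed above can be solved on a small MDP by value iteration. The states, fuel costs, and transitions below are a made-up sketch, not the thesis's traffic model.

```python
# Sketch of solving a stochastic shortest path problem by value iteration.
# The tiny MDP (states, costs, transitions) is invented for illustration.

def value_iteration(states, actions, P, cost, goal, tol=1e-8):
    """P[s][a] = list of (next_state, prob); cost[s][a] = expected stage cost."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                continue  # the goal is absorbing and cost-free
            best = min(cost[s][a] + sum(p * V[t] for t, p in P[s][a])
                       for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

states = ["s0", "s1", "goal"]
actions = {"s0": ["fast", "slow"], "s1": ["go"]}
P = {"s0": {"fast": [("goal", 1.0)], "slow": [("s1", 1.0)]},
     "s1": {"go": [("goal", 1.0)]}}
cost = {"s0": {"fast": 2.0, "slow": 1.0}, "s1": {"go": 0.5}}
V = value_iteration(states, actions, P, cost, goal="goal")
```

Here the "slow" detour through s1 costs 1.0 + 0.5 = 1.5 fuel versus 2.0 for the direct edge, so the optimal policy routes through s1.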
  • 5. Harianto, Jonny The effect of manufacturing errors on predicted dynamic factors of spur gear

    Master of Science, The Ohio State University, 1995, Mechanical Engineering

    This thesis studies the effect of manufacturing errors on predicted dynamic factors of spur gears. Three types of dynamic factors are defined and studied: dynamic load factors, dynamic tooth force factors, and dynamic bending moment factors. Three different computer programs for predicting dynamic factors are introduced: a MATLAB forced vibration analysis using a six degree of freedom model, a multi-degree of freedom Dynamic Transmission Error Program (DYTEM) that uses a six degree of freedom model, and the Geared Rotor Dynamics Program (GRD) that uses a finite element method. After comparing the three programs' results with experimental data provided by NASA (National Aeronautics and Space Administration), the DYTEM program is used for dynamic factor prediction. The effects of different profile tolerances for AGMA quality 10, 12, and 14 gears are presented.

    Committee: Donald Houser (Advisor) Subjects:
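
A dynamic factor of the kind studied above is the ratio of a dynamic response quantity to its static counterpart; the tooth loads below are invented numbers, not NASA data.

```python
# A dynamic load factor compares the peak dynamic tooth load predicted by a
# vibration model with the static transmitted load.  Values are invented.

def dynamic_load_factor(peak_dynamic_load, static_load):
    """Dynamic load factor: peak dynamic tooth load over static tooth load."""
    return peak_dynamic_load / static_load

kv = dynamic_load_factor(peak_dynamic_load=1350.0, static_load=1000.0)
```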
  • 6. Adekoya, Oluwaseun A Comparative Study Between Dynamic Programming and Model Predictive Control for Closed-Loop Control

    MS, University of Cincinnati, 2024, Engineering and Applied Science: Mechanical Engineering

    The development of dynamic systems (both the physical plant and the control system) in a sequential manner often results in sub-optimal solutions, whereas combined physical and control system design methodologies have been observed to yield optimal ones. The interest in obtaining closed-loop solutions at reasonable computational cost motivates the topic of this work: a comparison of two of the most popular methods employed for this purpose, Model Predictive Control and Dynamic Programming. If the primary requirement is real-time control with a need to handle constraints dynamically, Model Predictive Control (MPC) is the more practical choice. If the problem allows for offline computation and requires globally optimal solutions, and the state and action spaces are not extremely large, Dynamic Programming (DP) may be more practical. This work studies both methods with respect to accuracy, type of closed-loop feedback solutions, and computational efficiency. Both methods are incorporated within a nested control co-design formulation. To validate the accuracy of both techniques, their practical application is demonstrated through case studies involving a single-link manipulator, a single pendulum-type crane, and a quarter-car suspension system. Each case study includes a model description, problem formulation, and results obtained using both MPC and DP techniques. The findings highlight the effectiveness of nested formulations with feedback methods in achieving optimal control co-design, with comprehensive assessments of each approach.

    Committee: Michael Alexander-Ramos Ph.D. (Committee Chair); Manish Kumar Ph.D. (Committee Member); David Thompson Ph.D. (Committee Member) Subjects: Mechanical Engineering
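
The receding-horizon idea behind MPC, one of the two methods compared above, can be sketched on a toy scalar system: at each step a short input sequence is optimized, only its first input is applied, and the problem is re-solved. The dynamics, costs, and control grid below are illustrative, not from the thesis's case studies.

```python
# Rough sketch of receding-horizon MPC on the toy system x' = x + u with a
# quadratic cost, optimized by brute force over a small input grid.
import itertools

def mpc_step(x, horizon, u_grid):
    """Pick the first input of the best input sequence over the horizon."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(u_grid, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            cost += xk * xk + u * u      # quadratic stage cost
            xk = xk + u                  # toy linear dynamics
        cost += xk * xk                  # terminal cost
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]

x = 2.0
for _ in range(6):                       # closed loop: re-plan at every step
    x = x + mpc_step(x, horizon=3, u_grid=[-1.0, -0.5, 0.0, 0.5, 1.0])
```

DP would instead tabulate an optimal feedback law offline over the whole state space; MPC trades that global table for repeated short-horizon optimization online.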
  • 7. Rickman, William Surrogate Markov Models for Validation and Comparative Analysis of Proper Orthogonal Decomposition and Dynamic Mode Decomposition Reduced Order Models

    Master of Science, Miami University, 2025, Mechanical and Manufacturing Engineering

    Reduced order modeling (ROM) methods, such as those based upon Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD), offer data-based turbulence modeling with potential applications for flow control. While these models are often cheaper than numerical approaches, their results require validation with source data. Within the literature, the metrics and standards used to validate these models are often inconsistent. Chabot (2014) produced a data-driven framework for validating these ROMs that used surrogate Markov models (SMMs) to compare how the system dynamics evolved rather than how any single metric evolved. These SMMs were constructed by clustering the flow data into different states of suitably similar flow fields, and the Markov model then mapped how likely each state was to transition into another. While this method was successful, some uncertainty persisted in how the outlier states within this clustering scheme were determined. Additionally, the study only examined the application of this procedure to POD-Galerkin ROMs. This study aims to tie the outlier state determination directly to the models' parent data. The study will also apply this procedure to ROMs generated from DMD to investigate how this framework's effectiveness carries over to different classes of ROMs.

    Committee: Edgar Caraballo (Advisor); Andrew Sommers (Committee Member); Mehdi Zanjani (Committee Member) Subjects: Aerospace Engineering; Fluid Dynamics; Mathematics; Mechanical Engineering; Statistics
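
The surrogate Markov model construction described above reduces, in caricature, to estimating a row-stochastic transition matrix from a sequence of cluster labels. The label sequence here is invented; in practice it would come from clustering flow snapshots.

```python
# Sketch of a surrogate Markov model: label each flow snapshot with a cluster
# (state), then estimate a transition matrix from consecutive labels.
from collections import Counter

def transition_matrix(labels, n_states):
    """Row-stochastic matrix of empirical state-to-state transition rates."""
    counts = Counter(zip(labels, labels[1:]))
    T = [[0.0] * n_states for _ in range(n_states)]
    for (i, j), c in counts.items():
        T[i][j] = c
    for row in T:
        total = sum(row)
        if total > 0:
            for j in range(n_states):
                row[j] /= total
    return T

T = transition_matrix([0, 0, 1, 2, 1, 0, 1, 2, 2, 0], n_states=3)
```

Two ROMs can then be compared by how closely their transition matrices match that of the parent data, rather than by any single scalar metric.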
  • 8. Vaddi, Pavan Kumar Abnormal Event Detection and Classification, and Dynamic Probabilistic Risk Assessment for Industrial Cyber Security Applications

    Doctor of Philosophy, The Ohio State University, 2024, Mechanical Engineering

    The increasing adaptation of nuclear power plants to incorporate software-based components along with digital communication networks in their operation has resulted in improved control, automation, monitoring, and diagnostics, while simultaneously opening those power plants to a new dimension of threats: cyber-attacks. In contrast to traditional information technology systems, cyber-attacks on cyber-physical systems such as nuclear power plants can impact the physical world and may put the safe operation of the plant in jeopardy while endangering many lives. In addition, incorrect diagnosis of a cyber-attack as a component malfunction that elicits a wrong response from the plant operator can further deteriorate the system. Hence, identifying and correctly classifying an abnormal event as either a safety event or a cyber-attack is of utmost importance. Additionally, it is essential to systematically estimate the risks associated with cyber-attacks on nuclear power plants. Two objectives are proposed in this research: 1. Develop an abnormal event classifier that detects and classifies an abnormal event as either a safety event caused by component failures or a cyber-attack. 2. Develop a dynamic probabilistic risk assessment framework for cybersecurity risk analysis in nuclear power plants. For objective 1, a dynamic Bayesian network-based classifier that uses both the physical behavior (sensor data) and the network data of a cyber-physical system simultaneously to detect and classify abnormal events is built. Bayesian network structures to infer the states of hardware components, software components, and communication devices such as network switches are defined. A hardware-in-the-loop experimental system was set up with two tanks in conjunction with a full scope nuclear power plant simulator, and a set of experiments was designed to successfully test the proposed classifier. 
    For objective 2, the framework of Dynamic Probabilistic Risk Assessment (DPRA) is exte (open full item for complete abstract)

    Committee: Carol Smidts (Advisor); Abdollah Shafieezadeh (Committee Member); Marat Khafizov (Committee Member); Tunc Aldemir (Committee Member) Subjects: Mechanical Engineering
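
The classification idea in objective 1 (fusing physical sensor evidence with network evidence to discriminate component failures from cyber-attacks) can be caricatured with a naive Bayes update; the priors and likelihoods below are invented, not from the dissertation's Bayesian network.

```python
# Toy sketch: combine sensor and network observations with Bayes' rule to
# decide between "component failure" and "cyber-attack".  All numbers are
# invented for illustration.

def posterior(prior, likelihoods, evidence):
    """Naive-Bayes posterior over hypotheses given independent observations."""
    scores = {h: prior[h] for h in prior}
    for h in scores:
        for obs in evidence:
            scores[h] *= likelihoods[h][obs]
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

p = posterior(
    prior={"failure": 0.5, "attack": 0.5},
    likelihoods={"failure": {"sensor_anomaly": 0.9, "odd_traffic": 0.1},
                 "attack": {"sensor_anomaly": 0.8, "odd_traffic": 0.7}},
    evidence=["sensor_anomaly", "odd_traffic"],
)
```

A sensor anomaly alone favors neither hypothesis strongly; it is the joint observation with anomalous network traffic that tips the posterior toward "attack", which is the intuition behind using both data sources simultaneously.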
  • 9. Dodo, Obed Macromolecular Engineering and Applications of Advanced Dynamic Polymers and their Nanocomposites

    Doctor of Philosophy, Miami University, 2023, Chemistry and Biochemistry

    In the future, well-engineered and optimized flexible electronic devices will be woven into everyday accessories such as clothes, furniture, and safety and healthcare monitoring devices. Dynamic polymer nanocomposites (DPNs) are an excellent class of materials with huge potential for the future of flexible electronics. DPNs are achieved through macromolecular engineering of dynamic polymers enhanced with electrically conductive nanofillers, yielding nanocomposites with self-healing capabilities enabled via dynamic chemical linkages. Integration of multiple types of dynamic linkages into one polymer network is challenging and not well understood, especially in the design and fabrication of DPNs. This dissertation presents facile methods for synthesizing flexible, healable, conductive, recyclable, and thermoresponsive DPNs using three dynamic chemistries playing distinct roles. Dynamic hydrogen bonds account for material flexibility and recycling character. Thiol-Michael exchange accounts for thermoresponsive properties. The Diels-Alder reaction leads to covalent bonding between the polymer matrix and the nanofiller. Overall, the presence of multiple types of orthogonal dynamic bonds provided a solution to the trade-off between enhanced mechanical performance and material elongation in DPNs. Efficient reinforcement was achieved using <1 wt.% carbon nanotubes (CNT) as nanofillers. Increased mechanical strength, electrical conductivity, and re-processability were achieved all while maintaining material flexibility and extensibility, highlighting the strong promise of these DPNs in the rapidly growing field of flexible, compliant electrodes. Additionally, structure-property relationships highlighting the impact of network architecture, chain length, cross-link density, and CNT loading are explored. 
Controlled addition of CNT as nanofiller produces electrically conductive and mechanically enhanced DPNs with demonstrated application in the regulation of current flow towards a (open full item for complete abstract)

    Committee: Dominik Konkolewicz d.konkolewicz@miamioh.edu (Advisor) Subjects: Chemistry; Materials Science; Nanoscience; Organic Chemistry; Physical Chemistry
  • 10. Cortez, Alán Path Planning with Dynamic Obstacles and Resource Constraints

    Master of Science, The Ohio State University, 2022, Mechanical Engineering

    This thesis considers path planning with resource constraints and dynamic obstacles for an unmanned aerial vehicle (UAV), modeled as a Dubins agent. Incorporating dynamic obstacles and resource constraints into the path planning problem yields a model more representative of what an agent will encounter during its missions. A resource constraint is a path-dependent load variable, such as noise or probability of detection by radar, that accrues as the agent moves toward the goal position. Here, the resource constraint represents the total cumulative noise disturbance created by the UAV along its trajectory during flight in or around an urban area. By prescribing a maximum allowable cumulative noise disturbance, the UAV can be constrained to follow appropriate noise regulations while navigating an urban environment. Hazards such as storms, turbulence, and ice are important to consider during the path planning stage as dynamic obstacles because these hazards can damage or destroy the UAV if encountered. These weather hazards evolve on a time scale within the duration of the UAV's flight, and their dynamics can be recorded and learned. Incorporating these obstacles into the path planner allows the agent to forecast how the weather will change and react accordingly. Obstacles such as other aerial vehicles, which are handled by detect-and-avoid methods, are not considered in this thesis. Dynamic obstacles can pose a challenge, as the computation time and storage needed increase the longer it takes to find a path. Ice data from the Aviation Weather Service was employed to create training data sets for learning dynamic weather (ice) phenomena. Dynamic Mode Decomposition (DMD) was used to learn and propagate the evolution of ice conditions at flight level. These obstacles were integrated into a Hybrid A* path planner with backtracking to handle resource constraints. 
It is shown through numerical simulations that the agent is able to navi (open full item for complete abstract)

    Committee: Mrinal Kumar (Advisor); Ayonga Hereid (Committee Member) Subjects: Mechanical Engineering
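
Dynamic Mode Decomposition, which the thesis above uses to learn and propagate ice conditions, can be sketched in its standard exact-DMD form. The snapshot data below is generated from a known toy linear system rather than weather data, so the recovered eigenvalues can be checked.

```python
# Exact DMD sketch: fit a best-fit linear operator to snapshot pairs via the
# SVD and read off its eigenvalues and modes.  Data comes from a known toy
# matrix A, not from ice fields, so the result is verifiable.
import numpy as np

def dmd(X, Xprime, r):
    """Return DMD eigenvalues and modes from snapshot matrices X -> Xprime."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]
    Atilde = U.conj().T @ Xprime @ V / s          # projected operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Xprime @ V / s @ W                    # exact DMD modes
    return eigvals, modes

A = np.array([[0.9, 0.1], [0.0, 0.8]])            # "true" dynamics
x = np.random.default_rng(0).standard_normal(2)
snapshots = [x]
for _ in range(9):
    x = A @ x
    snapshots.append(x)
D = np.column_stack(snapshots)
eigvals, modes = dmd(D[:, :-1], D[:, 1:], r=2)    # recovers eig(A) = {0.9, 0.8}
```

Once the eigenvalues and modes are known, the state can be propagated forward in time without re-running a simulator, which is what makes DMD attractive for forecasting obstacle evolution inside a planner.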
  • 11. Kim, Minhae Essays in Industrial Organization and Econometrics

    Doctor of Philosophy, The Ohio State University, 2022, Economics

    This dissertation uses econometric methods to introduce an estimator and develop models to estimate the effect of the internet on bank branches. Chapter 1 introduces the nested pseudo likelihood estimator to estimate dynamic discrete choice models in continuous time. Chapter 2 uses this estimator to estimate the impact of internet penetration on brick-and-mortar bank branches. In Chapter 3, I provide additional evidence on the effect of the internet in the banking industry by examining the effect of the Community Connect Broadband Grant Program, which helps rural areas to establish broadband service, on bank branches. In Chapter 1, we introduce a sequential estimator for continuous time dynamic discrete choice models (single-agent models and games) by adapting the nested pseudo likelihood (NPL) estimator of Aguirregabiria and Mira (2002; 2007), developed for discrete time models with discrete time data, to the continuous time case with data sampled either discretely (i.e., uniformly-spaced snapshot data) or continuously. We establish conditions for consistency and asymptotic normality of the estimator, a local convergence condition, and, for single agent models, a zero Jacobian property assuring local convergence. We carry out a series of Monte Carlo experiments using an entry-exit game with five heterogeneous firms to confirm the large-sample properties and demonstrate finite-sample bias reduction via iteration. In our simulations we show that the convergence issues documented for the NPL estimator in discrete time models are less likely to affect comparable continuous-time models. We also show that there can be large bias in economically-relevant parameters, such as the competitive effect and entry cost, from estimating a misspecified discrete time model when in fact the data generating process is a continuous time model. Chapter 2 examines the effect of the internet on market structure and consumer welfare in the US retail banking industry. 
The internet is (open full item for complete abstract)

    Committee: Jason Blevins (Advisor); Adam Dearing (Committee Member); Matthew Weinberg (Committee Co-Chair) Subjects: Economics
  • 12. Rajakumar Deshpande, Shreshta Predictive Energy Optimization in Connected and Automated Vehicles using Approximate Dynamic Programming

    Doctor of Philosophy, The Ohio State University, 2021, Mechanical Engineering

    Global CO2 emissions regulations, in conjunction with increasing customer demands, are requiring significant improvements in vehicle energy (or fuel) efficiency. In this drive to reduce fuel consumption, improvements in the powertrain (or propulsion system) continue to be a major area of focus, particularly the shift to higher levels of electrification. A next step in the evolution of improving fuel efficiency is to have the propulsion system controller make use of vehicle-level information. In this context, Connected and Automated Vehicle (CAV) technologies offer the potential for enhancing vehicle fuel efficiency as well as improving vehicle safety and comfort by leveraging information from advanced mapping and location, and Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication. The focus of this thesis is to develop Dynamic Programming (DP) and Approximate Dynamic Programming (ADP) based approaches that combine the energy-saving potentials of powertrain electrification and CAV technologies, and further compound them. In this work, an ADP-based scheme is used to jointly optimize the vehicle velocity and energy management strategy of an electrified CAV over real-world driving routes. This predictive control framework uses preview information from the route and environment to achieve significant fuel efficiency improvements even in the presence of variabilities (such as driver aggressiveness and varying traffic signal information). The controller was then implemented and tested in a demonstration vehicle at a proving ground facility over reconstructed route scenarios. Further, this thesis explores approaches to reducing the computational complexity of optimization methods based on Dynamic Programming, which can restrict their use in many real-time applications. To this end, two sub-optimal methodologies are proposed. 
One of them, the integrated DP-ECMS (Dynamic Programming-Equivalent Consumption Minimization Strategy) method embeds a heur (open full item for complete abstract)

    Committee: Marcello Canova PhD (Advisor); Abhishek Gupta PhD (Committee Member); Chris Atkinson PhD (Committee Member); Giorgio Rizzoni PhD (Committee Member) Subjects: Automotive Engineering; Mechanical Engineering
  • 13. Roemer, Jake Practical High-Coverage Sound Predictive Race Detection

    Doctor of Philosophy, The Ohio State University, 2019, Computer Science and Engineering

    Data races pose a fundamental issue plaguing the reliability of parallel software. A shared-memory program has a data race if two conflicting memory accesses (accesses to the same memory location by different threads, where at least one access is a write) can execute consecutively (with no interleaving events). Modern shared-memory programming languages, including C++ and Java, provide undefined or ill-defined semantics for executions with data races, so in the presence of data races, shared-memory programs are vulnerable to fatal crashes, data corruption, and other unexpected errors. Data races manifest nondeterministically, complicating the task of writing and diagnosing shared-memory programs. Existing research offers a variety of program analyses to detect and report data races to developers. Most notably, predictive dynamic analyses detect data races that can occur in executions other than the observed execution. Predictive analysis defines the set of necessary orderings between events, while ensuring the same behavior as the observed execution, to detect races that the industry-standard happens-before analysis cannot detect. The existing techniques are sound (report only true races) but often miss races because they are impractical to run extensively on large programs, due to performing costly reasoning about reordered thread interleavings. Alternative partial-order-based predictive techniques that scale to large programs fail to detect all races knowable from the observed execution because of the conservative ordering between events that preserves a valid reordered execution, which these techniques require to remain sound. The partial-order-based predictive techniques remain impractical compared to industry-standard race detectors. A major contributing factor to predictive analyses' poor performance is the metadata required to weaken the observed ordering. 
The metadata tracks conflicting critical sections and is a vital component of predictive analysis tha (open full item for complete abstract)

    Committee: Michael Bond (Advisor); Atanas Rountev (Committee Member); Paul Sivilotti (Committee Member) Subjects: Computer Engineering; Computer Science
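
The industry-standard happens-before analysis the abstract contrasts with predictive techniques is commonly implemented with vector clocks: two conflicting accesses race if neither access's clock is ordered before the other's. A minimal sketch (the clocks are invented):

```python
# Vector-clock happens-before check: conflicting accesses race when they are
# unordered (concurrent) under the vector-clock partial order.

def happens_before(c1, c2):
    """c1 -> c2 iff c1 <= c2 pointwise and c1 != c2."""
    return all(a <= b for a, b in zip(c1, c2)) and c1 != c2

def races(access1, access2):
    """Conflicting accesses race when neither happens before the other."""
    return (not happens_before(access1, access2)
            and not happens_before(access2, access1))

# Write by thread 0 at clock (1, 0) vs. write by thread 1 at clock (0, 1):
unordered = races((1, 0), (0, 1))   # concurrent accesses: a data race
ordered = races((1, 0), (2, 1))     # (1, 0) happens before (2, 1): no race
```

Predictive analyses go further by asking whether a valid reordering of the observed events could make two ordered accesses consecutive, which is where the costly reasoning the abstract mentions comes in.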
  • 14. Trazkovich, Alex Effect of Copolymer Sequence on Mechanical Properties of Polymer Nanocomposites from Molecular Dynamics Simulations

    Doctor of Philosophy, The Ohio State University, 2019, Chemical Engineering

    When incorporated into polymers, nanoparticles are known to modify the structure and dynamics of nearby polymer chains. Because nanoparticles have a high surface area to volume ratio, the properties of the polymer-nanoparticle interphase region can have a significant effect on the overall composite properties even at relatively low nanoparticle loading. In this work, we study the polymer-nanoparticle interphase region using molecular dynamics simulations, and we analyze the impact of a nanoparticle on local structure, dynamics, and viscoelastic properties. Of particular interest here is a class of systems which consists of nanoparticles incorporated into two-component copolymers where one component of the copolymer interacts more favorably with the nanoparticle than the other. In these systems, modifying the particular copolymer sequence may modify the interphase properties, and composite properties may therefore be adjusted even while maintaining the same overall monomer ratio. These systems have been the subject of several simulation studies focused on nanoparticle dispersion and assembly; however, relatively little simulation work has focused specifically on the impact of copolymer sequence on properties of the copolymer-nanoparticle interphase. We simulate a simple nanocomposite consisting of a single spherical nanoparticle surrounded by coarse-grained polymer chains. The polymers are composed of two different monomer types that differ only in their interaction strengths with the nanoparticle. By studying a series of regular multiblock copolymers with adjustable block length as well as a random copolymer, we examine the effect of copolymer sequence blockiness on the structure as well as the end-to-end vector autocorrelation, bond vector autocorrelation, and self-intermediate scattering function relaxation times as a function of distance from the nanoparticle surface. 
We find that, depending on block length, blocky copolymers can have faster or slower inte (open full item for complete abstract)

    Committee: Lisa Hall (Advisor); Isamu Kusaka (Committee Member); Kurt Koelling (Committee Member); Yiying Wu (Committee Member) Subjects: Chemical Engineering
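
One of the quantities analyzed above, the end-to-end vector autocorrelation, is the time-origin-averaged dot product of end-to-end vectors normalized by its zero-lag value. The short synthetic trajectory below stands in for molecular dynamics output.

```python
# Sketch of an end-to-end vector autocorrelation:
#   C(lag) = <R(t) . R(t + lag)> / <R . R>, averaged over time origins t.
# The trajectory of end-to-end vectors is synthetic, not simulation data.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def end_to_end_autocorrelation(R, lag):
    """Average R(t) . R(t + lag) over origins, normalized by the mean-square."""
    n = len(R) - lag
    num = sum(dot(R[t], R[t + lag]) for t in range(n)) / n
    den = sum(dot(r, r) for r in R) / len(R)
    return num / den

R = [(1.0, 0.0), (0.8, 0.6), (0.0, 1.0), (-0.6, 0.8)]
c0 = end_to_end_autocorrelation(R, 0)
c1 = end_to_end_autocorrelation(R, 1)
```

The lag at which this correlation decays gives the relaxation time; computing it as a function of distance from the nanoparticle surface is how interphase dynamics are resolved in studies like the one above.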
  • 15. Lee, Ji Hyun Development of a Tool to Assist the Nuclear Power Plant Operator in Declaring a State of Emergency Based on the Use of Dynamic Event Trees and Deep Learning Tools

    Doctor of Philosophy, The Ohio State University, 2018, Nuclear Engineering

    Safety is the most important requirement in nuclear power plant operation. An approach to developing a real-time operator support tool (OST) for declaring a site emergency is proposed in this study. The temporal behavior of the early stages of a severe accident can be used to project the likelihood of different levels of offsite release of radionuclides based on the results of accident simulations with severe accident codes. Depending on the severity of the accident and the potential magnitude of the release of radioactive material to the environment, an offsite emergency response such as evacuation or sheltering may be warranted. The approach is based on the simulation of possible nuclear power plant (NPP) behavior following an initiating event and projects the likelihood of different levels of offsite release of radionuclides from the plant using deep learning (DL) techniques. Two convolutional neural network (CNN) models are implemented to classify possible scenarios under two different labels. Training of the DL process is accomplished using results of a large number of scenarios generated with the ADAPT/MELCOR/RASCAL computer codes to simulate the variety of possible consequences following a station blackout event involving the loss of all AC power for a large pressurized water reactor. The ability of the model to predict the likelihood of different levels of consequences is assessed using a separate test set of MELCOR/RASCAL calculations. The data used for training and testing were obtained previously from the Ph.D. dissertation work performed by Dr. Douglas Osborn. The OST is illustrated for a station blackout event in a pressurized water reactor for possible offsite dose outcomes at: 1) the 2-mile area, 2) the 10-mile area, 3) the 2-mile boundary, and 4) the 10-mile boundary, which are being considered as key locations for emergency response planning 4 days after release starts. 
Also, two meteorological conditions, historical and standard meteorology, a (open full item for complete abstract)
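The classification step described in this abstract can be illustrated with a minimal sketch: a tiny 1D convolutional network (in NumPy) mapping a multichannel time series of plant parameters to class probabilities. The layer sizes, the four input channels, and the three release-level categories are illustrative assumptions, not the thesis's actual ADAPT/MELCOR/RASCAL-trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    # x: (channels, time), w: (filters, channels, kernel), b: (filters,)
    f, c, k = w.shape
    t = x.shape[1] - k + 1
    out = np.empty((f, t))
    for i in range(t):
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1])) + b
    return out

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, params):
    w1, b1, w2, b2 = params
    h = relu(conv1d(x, w1, b1))       # temporal features
    pooled = h.mean(axis=1)           # global average pooling over time
    return softmax(w2 @ pooled + b2)  # class probabilities

# Toy input: 4 plant parameters sampled at 64 time points, classified
# into 3 hypothetical release-level categories (untrained random weights).
x = rng.standard_normal((4, 64))
params = (rng.standard_normal((8, 4, 5)) * 0.1, np.zeros(8),
          rng.standard_normal((3, 8)) * 0.1, np.zeros(3))
probs = forward(x, params)
print(probs)
```

A real OST model would be trained on the simulated accident scenarios; this untrained forward pass only shows the shape of the computation.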

    Committee: Tunc Aldemir PhD (Advisor); Alper Yilmaz PhD (Advisor); Richard Denning PhD (Committee Member); Carol Smidts PhD (Committee Member) Subjects: Computer Engineering; Nuclear Engineering
  • 16. Sherbaf Behtash, Mohammad A Decomposition-based Multidisciplinary Dynamic System Design Optimization Algorithm for Large-Scale Dynamic System Co-Design

    MS, University of Cincinnati, 2018, Engineering and Applied Science: Mechanical Engineering

    Dynamic systems incorporating physical plants and control systems should be designed in an integrated way to yield desirable and feasible solutions. Conventionally, these systems are designed sequentially, which often fails to produce system-level optimal solutions. In contrast, combined physical and control system design (co-design) methods are able to manage the interactions between the physical artifact and the control part and consequently yield superior optimal solutions. Small- to moderate-scale dynamic systems can be addressed effectively using existing co-design methods; nonetheless, these methods can be impractical, and sometimes impossible, to apply to large-scale dynamic systems, which may prevent us from determining the optimal solution. This work addresses this issue by developing a new algorithm that combines decomposition-based optimization with a co-design method to optimize large-scale dynamic systems. Specifically, the new formulation applies a decomposition-based optimization strategy known as Analytical Target Cascading (ATC) to a co-design method known as Multidisciplinary Dynamic System Design Optimization (MDSDO) for the co-design of a representative large-scale dynamic system consisting of a plug-in hybrid-electric vehicle (PHEV) powertrain. Moreover, since many dynamic systems may contain several time-dependent linking variables among their subsystems, a new consistency measure for the management of such variables is also proposed. To validate the accuracy of the presented method, the PHEV powertrain co-design problem has been studied with both the simultaneous and ATC methods; results from the case studies indicate the new optimization formulation's ability to find the system-level optimal solution.

    Committee: Michael Alexander-Ramos Ph.D. (Committee Chair); Sam Anand Ph.D. (Committee Member); Manish Kumar Ph.D. (Committee Member) Subjects: Mechanical Engineering
  • 17. Deng, Guodong SYNTHESIS AND CHARACTERIZATION OF IONICALLY CROSS-LINKED NETWORKS THROUGH THE USE OF ION-PAIR COMONOMERS

    Doctor of Philosophy, University of Akron, 2018, Polymer Engineering

    Structurally dynamic polymer networks, which can break and reform under external stimuli (e.g., heat, mechanical stress, light, etc.), have emerged as a useful approach to fabricate stimuli-responsive polymers (SRPs). One general route to introduce dynamic bonds is through non-covalent supramolecular interactions. Ion-pairing is another potential non-covalent dynamic bond, distinct from other interactions such as hydrogen bonding and metal-ligand interactions. In this study, we demonstrate a new route to prepare structurally dynamic polymer networks by direct copolymerization of organic ion-pair comonomers (IPCs) with n-butyl acrylate (BA). A series of these copolymers was prepared, and their rheological, mechanical, thermal, and morphological behaviors were comprehensively studied to understand the structure-property relationships. Firstly, a phosphonium-based IPC was synthesized and copolymerized with BA to prepare polyampholyte ionomers. Evidence of microphase separation of the ion-pairs to produce vitrified ion-rich domains acting as physical cross-links was found in the polyampholyte ionomers by rheological and atomic force microscopy measurements. Comparison to analogous cationic and anionic ionomers with pendant counter-ions demonstrated the strong impact of direct ion-pair cross-linking on the material's viscoelastic properties. Characterization of the corresponding polyelectrolytes showed a ca. 125 °C increase in the glass transition temperature (Tg) from the cationic to the polyampholytic polyelectrolyte. This elevated Tg allowed the vitrification of the ion-rich domains at ambient temperatures in the polyampholyte networks over a range of ion-pair concentrations. An increase in molecular weight could lead to highly entangled samples, which displayed improved mechanical properties. 
Secondly, the copolymers of the ion-pair co-monomer and cationic monomer were found to effectively plasticize the synthesized polyampholytic polyelectrolyte and tune t (open full item for complete abstract)

    Committee: Kevin Cavicchi Dr. (Advisor); Nicole Zacharia Dr. (Committee Member); Mark Soucek Dr. (Committee Chair); Matthew Becker Dr. (Committee Member); Yi Pang (Committee Member) Subjects: Materials Science; Polymer Chemistry; Polymers
  • 18. Nordbeck, Patric Dynamics of Affordance Actualization

    MA, University of Cincinnati, 2017, Arts and Sciences: Psychology

    The actualization of action possibilities (or affordances) can often be accomplished in numerous, equifinal ways. For instance, an individual could walk over to a rubbish bin to drop an item in, or throw the piece of rubbish into the bin from some distance away. The aim of the current study was to investigate the behavioral dynamics arising from such metastable task-spaces using a ball-to-bin transportation task. Participants were instructed to transport balls from a starting area to a bin located 9 meters away. The time interval between the sequential presentation of 52 balls was manipulated as a control parameter and was expected to push participants through different behavioral modes (i.e., the degree to which participants modulated the distance moved prior to throwing or dropping the ball into the target bin). As expected, the results revealed a large degree of within- and between-participant variability in task actualization. To better understand how this behavioral variability emerged as a function of task constraints, a two-parameter task manifold was devised using the Cusp Catastrophe Model. Simulations demonstrated that this two-parameter state manifold not only effectively captures the wide range of participant behaviors observed, but also explains how these behaviors are an emergent consequence of under-constrained task goals.
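The Cusp Catastrophe Model has a standard canonical form, dx/dt = -(x^3 + b*x + a), with two control parameters a and b; counting its real equilibria shows the monostable versus bistable regimes that make it suitable for modeling abrupt shifts between behavioral modes. The particular parameter values below are illustrative, not fitted to the study's data:

```python
import numpy as np

def equilibria(a, b):
    """Real equilibria of the canonical cusp dx/dt = -(x**3 + b*x + a),
    found as real roots of the cubic x**3 + b*x + a = 0."""
    roots = np.roots([1.0, 0.0, b, a])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Monostable region (b > 0): a single equilibrium (one behavioral mode).
mono = equilibria(a=0.1, b=1.0)
# Bistable region (b < 0, |a| small): two stable states plus an
# unstable one between them (two coexisting behavioral modes).
bi = equilibria(a=0.1, b=-2.0)
print(len(mono), len(bi))
```

Sweeping a (and crossing the fold lines of the cusp surface) makes the system jump discontinuously between the two stable branches, which is the qualitative signature the task manifold exploits.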

    Committee: Anthony Chemero Ph.D. (Committee Chair); Rachel Kallen Ph.D. (Committee Member); Michael Richardson Ph.D. (Committee Member) Subjects: Psychology
  • 19. Van Winkle, Scott Dynamic Bandwidth and Laser Scaling for CPU-GPU Heterogenous Network-on-Chip Architectures

    Master of Science (MS), Ohio University, 2017, Electrical Engineering & Computer Science (Engineering and Technology)

    As the relentless quest for higher throughput and lower energy cost continues in heterogeneous multicores, there is a strong demand for energy-efficient and high-performance Network-on-Chip (NoC) architectures. Heterogeneous architectures that can simultaneously utilize both the serialized nature of the CPU and the thread-level parallelism of the GPU are gaining traction in the industry. A critical issue with heterogeneous architectures is finding an optimal way to utilize shared resources such as the last-level cache (LLC) and the NoC without hindering the performance of either the CPU or the GPU cores. Photonic interconnects are a disruptive technology solution that has the potential to increase bandwidth, reduce latency, and improve energy efficiency over traditional metallic interconnects. In this thesis, we propose a CPU-GPU heterogeneous architecture called SHARP (Shared Heterogenous Architecture with Reconfigurable Photonic Network-on-Chip) that combines CPU and GPU cores around the same router. The SHARP architecture is designed as a Single-Writer Multiple-Reader (SWMR) crossbar with reservation-assist to connect the CPU/GPU cores. The architecture consists of 32 CPU cores and 64 GPU computational units. As network traffic exhibits temporal and spatial fluctuations due to application behavior, SHARP can dynamically reallocate bandwidth and thereby adapt to application demands. In this thesis, we propose to dynamically reallocate bandwidth and reduce power consumption by evaluating buffer utilization. While buffer utilization is a reactive technique that deals with fluctuations in application demands, we also propose a proactive technique wherein we use machine learning (ML) to optimize bandwidth and power consumption. In the ML approach, instead of predicting buffer utilization, we predict the number of packets that will be generated by the heterogeneous cluster. 
Simulation results were evaluated using the PARSEC 2.1 and SPLASH2 benchmark suites for the CPU and Ope (open full item for complete abstract)
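The reactive, buffer-utilization-driven reallocation described in the abstract can be caricatured in a few lines: each router cluster keeps a guaranteed floor of photonic channels, and the spare channels are split in proportion to current buffer occupancy. The channel count, floor, and proportional rule are assumptions for illustration, not SHARP's actual policy:

```python
def reallocate(buffer_util, total_channels=64, floor=1):
    """Toy reactive bandwidth reallocation: guarantee each cluster a
    floor of channels, split the remainder proportionally to its
    buffer utilization, and give rounding leftovers to the busiest."""
    n = len(buffer_util)
    spare = total_channels - floor * n
    total = sum(buffer_util) or 1.0   # avoid division by zero when idle
    alloc = [floor + int(spare * u / total) for u in buffer_util]
    busiest = max(range(n), key=lambda i: buffer_util[i])
    alloc[busiest] += total_channels - sum(alloc)  # assign leftovers
    return alloc

util = [0.9, 0.1, 0.4, 0.2]   # fraction of buffer slots occupied per cluster
alloc = reallocate(util)
print(alloc, sum(alloc))
```

The proactive ML variant would replace the measured `buffer_util` with predicted per-cluster packet counts, but the allocation step could remain the same.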

    Committee: Avinash Kodi (Committee Chair); Savas Kaya (Committee Member); Harshavardhan Chenji (Committee Member); Eric Stinaff (Committee Member) Subjects: Computer Engineering; Electrical Engineering
  • 20. Hall, Brenton Using the Non-Uniform Dynamic Mode Decomposition to Reduce the Storage Required for PDE Simulations

    Master of Mathematical Sciences, The Ohio State University, 2017, Mathematical Sciences

    Partial differential equation (PDE) simulations can produce large amounts of data that are very slow to transfer. Many model reduction techniques have been proposed and utilized over the past three decades. Two popular techniques, Proper Orthogonal Decomposition and Dynamic Mode Decomposition, have some hindrances. Non-Uniform Dynamic Mode Decomposition (NU-DMD), introduced in 2015 by Gueniat et al., overcomes some of these hindrances. In this thesis, the mathematics of the NU-DMD are explained in detail, and three versions of the NU-DMD algorithm are outlined. Furthermore, different numerical experiments were performed on the NU-DMD to ascertain its behavior with respect to errors, memory usage, and computational efficiency. It was shown that the NU-DMD could reduce an advection-diffusion simulation to 6.0075% of its original memory storage size. The NU-DMD was also applied to a computational fluid dynamics simulation of a NASA single-stage compressor rotor, which resulted in a reduced model of the simulation (using only three of the five simulation variables) that used only about 4.67% of the full simulation's storage with an overall average percent error of 8.90%. It was concluded that the NU-DMD, if used appropriately, could reduce a model that uses 400 GB of memory to one that uses as little as 18.67 GB with less than 9% error. Further conclusions were made about how to best implement the NU-DMD.
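For context, the plain (uniformly sampled) exact DMD that NU-DMD generalizes can be sketched with NumPy on synthetic rank-3 linear data: build snapshot matrices, form a reduced operator via a truncated SVD, and keep only the modes, eigenvalues, and amplitudes instead of the full snapshot set. The dimensions and the storage-ratio accounting below are illustrative, not the thesis's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshots from a linear system z_{k+1} = A z_k embedded in a
# 200-dimensional state, standing in for PDE simulation output.
n, m, r = 200, 40, 3                        # state dim, snapshots, rank
A = np.diag([0.99, 0.95, 0.90])
basis, _ = np.linalg.qr(rng.standard_normal((n, r)))
states = np.empty((n, m))
z = rng.standard_normal(r)
for k in range(m):
    states[:, k] = basis @ z
    z = A @ z

# Exact DMD: shifted snapshot pair, rank-r truncated SVD, reduced operator.
X, Y = states[:, :-1], states[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)
eigvals, W = np.linalg.eig(Atilde)
modes = Y @ Vh.T @ np.diag(1.0 / s) @ W     # DMD modes

# Reconstruct snapshot k = 10 from modes, eigenvalues, and amplitudes b.
b = np.linalg.lstsq(modes, states[:, 0], rcond=None)[0]
recon = np.real(modes @ (b * eigvals**10))
err = np.linalg.norm(recon - states[:, 10]) / np.linalg.norm(states[:, 10])

# Rough storage comparison: reduced model elements vs. full snapshot matrix.
ratio = (modes.size + eigvals.size + b.size) / states.size
print(err, ratio)
```

On noiseless rank-3 data the reconstruction is exact to numerical precision while storing well under a tenth of the snapshot elements; NU-DMD extends this idea to non-uniformly sampled snapshots.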

    Committee: Ching-Shan Chou (Advisor); Jen-Ping Chen (Committee Member) Subjects: Aerospace Engineering; Applied Mathematics; Computer Science; Mathematics; Mechanical Engineering