Search Results

(Total results 40)

  • 1. Chhabra, Harpreet Singh Python API to post-process CFD data

    MS, University of Cincinnati, 2023, Engineering and Applied Science: Aerospace Engineering

    A growing interest in finding new ways to solve Computational Fluid Dynamics (CFD) problems has motivated researchers to employ a data science approach for numerical analysis. One approach to mitigating the time- and memory-related issues of data science is the use of machine learning (ML) algorithms. As Python is the go-to language for ML development, there arises a need for data visualization within the same environment: in-line visualization can simplify examining the data before and after modeling. To accelerate and simplify this data processing, the Gas Turbine Simulation Laboratory (GTSL) at the University of Cincinnati (UC) initiated this project to support data interaction and manipulation within the Python environment. A variety of algorithms for reading and writing files, converting file formats, extracting data, splitting blocks, in-line visualization, and computing variables from CFD data were rewritten to be more efficient. The object-oriented nature of the code reduces the complexity of the Application Programming Interface (API) and eases the addition of classes and modules without compromising performance. Another advantage of using Python is that most of its modules are compatible with the commonly used operating systems, making the API portable and user-friendly. A further way to optimize time and memory is parallel computing. Through Curricular Practical Training (CPT) with an internship at Altair Engineering Inc., the power of the Message Passing Interface (MPI) was explored. One strategy used by the Altair Flux solver to divide the global domain for parallel solving is Domain Decomposition (DDM). DDM splits the domain into homogeneous regions based on their physics, and Flux assigns each region to a processor for parallel solving. However, the performance of parallel computing is limited by the slowest processor, and the regions formed will not necessarily be of (open full item for complete abstract)

    Committee: Paul Orkwis Ph.D. (Committee Chair); Daniel Cuppoletti Ph.D. (Committee Member); Shaaban Abdallah Ph.D. (Committee Member) Subjects: Aerospace Engineering
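
    The object-oriented pattern this abstract describes can be illustrated with a minimal sketch. This is not the GTSL API: the Block class, the stored variables, and the synthetic data are assumptions standing in for a block parsed from a PLOT3D/CGNS-style file.

```python
import numpy as np
import matplotlib.pyplot as plt

class Block:
    """One structured CFD block: mesh coordinates plus a solution array."""
    def __init__(self, x, y, q):
        self.x, self.y, self.q = x, y, q  # x, y: (ni, nj); q: (ni, nj, nvars)

    def velocity_magnitude(self):
        # Assumes q stores (rho, rho*u, rho*v) per node -- illustrative only.
        rho, ru, rv = self.q[..., 0], self.q[..., 1], self.q[..., 2]
        return np.hypot(ru, rv) / rho

    def contour(self, values, title=""):
        # In-line visualization in the same Python session as the ML workflow.
        plt.contourf(self.x, self.y, values, levels=30)
        plt.title(title)
        plt.colorbar()
        plt.show()

# Synthetic data standing in for a block read from a grid/solution file:
x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 40), indexing="ij")
q = np.stack([np.ones_like(x), np.sin(np.pi * x), np.cos(np.pi * y)], axis=-1)
blk = Block(x, y, q)
blk.contour(blk.velocity_magnitude(), title="Velocity magnitude")
```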
  • 2. Scyphers, Madeline Bayesian Optimization for Anything (BOA): An Open-Source Framework for Accessible, User-Friendly Bayesian Optimization

    Master of Science, The Ohio State University, 2024, Environmental Science

    I introduce Bayesian Optimization for Anything (BOA), a high-level BO framework and model-wrapping toolkit that presents a novel approach to simplifying Bayesian Optimization (BO), with the goal of making it more accessible and user-friendly, particularly for those with limited expertise in the field. BOA addresses common barriers to implementing BO, focusing on ease of use, reducing the need for deep domain knowledge, and cutting down on extensive coding requirements. A notable feature of BOA is its language-agnostic architecture: using JSON serialization, BOA facilitates communication between different programming languages, enabling a wide range of users to integrate BOA with their existing models, regardless of the programming language used, through a simple and easy-to-use interface. This feature enhances BOA's applicability, allowing for broader use across fields and by a wider audience. I highlight BOA's application through several real-world examples. BOA has been successfully employed in a high-dimensional (184-parameter) optimization of the Soil & Water Assessment Tool (SWAT+) model, demonstrating its capability for parallel optimization as well as for non-parallel models such as SWAT+. I also employed BOA in a multi-objective optimization of the FETCH3.14 model. These case studies illustrate BOA's effectiveness in addressing complex optimization challenges in diverse scenarios.

    Committee: Gil Bohrer (Advisor); James Stagge (Committee Member); Joel Paulson (Committee Member) Subjects: Artificial Intelligence; Computer Science; Environmental Engineering; Environmental Studies; Statistics
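
    A minimal sketch of the JSON-serialization pattern the abstract describes: the optimizer writes candidate parameters as JSON, launches the user's model (in any language) as a subprocess, and reads the objective value back as JSON. The file names and CLI convention here are assumptions for illustration, not BOA's actual interface.

```python
import json
import subprocess
import tempfile
from pathlib import Path

def evaluate_model(params: dict, model_cmd: list) -> float:
    """Serialize parameters, run the external model, read the objective back."""
    with tempfile.TemporaryDirectory() as tmp:
        inp, out = Path(tmp) / "params.json", Path(tmp) / "result.json"
        inp.write_text(json.dumps(params))
        # The model executable reads params.json and writes result.json;
        # it can be written in R, Fortran, C, or anything else.
        subprocess.run([*model_cmd, str(inp), str(out)], check=True)
        return json.loads(out.read_text())["objective"]

# Hypothetical usage with an R-based hydrology model:
# score = evaluate_model({"cn2": 0.3, "esco": 1.2}, ["Rscript", "run_model.R"])
```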
  • 3. Behler, Joshua Assessing Python Bindings of C Libraries with Respect to Python Idiomatic Conformance

    MS, Kent State University, 2023, College of Arts and Sciences / Department of Computer Science

    In the development of Python bindings of C libraries, developers often mirror the styling and practices of C code in the Python module. These modules do not integrate well into standard Python code, causing a clash of coding idioms and introducing the complexities of low-level C features not present in Python. Such complexities include manual memory management, procedural code in an object-oriented paradigm, and a lack of integration with standard Python features. The thesis proposes a set of Python binding idioms to serve as guidelines when developing a binding for a C library. These idioms are based on related literature that defines 21 Pythonic idioms for use in general coding to improve a program's readability and performance. The 11 idioms defined here are: Iteration, Context Managing, Casting, Printability, Mapping Structures to Classes, Maintaining Free Functions, Destructors, Raising Errors, Docstrings, Annotations, and Selective Importing. Alongside these, four standard rules for method, class, and variable naming are also defined. These idioms and naming standards are designed to help a Python programmer understand and use a binding module by improving how Python interacts with the C library. A number of these idioms are identifiable in a Python binding via built-in Python tools such as dir. To measure how well current Python bindings use these idioms, a set of 7 Python bindings is analyzed to detect whether these idioms are present in a given binding. For the idioms that cannot be completely verified through the Python library alone, the C library that the binding wraps is also analyzed and then compared to the Python binding for verification. The Python bindings chosen are all official bindings developed by or promoted by the original authors of the C libraries. Additionally, two commonly accepted Pythonic modules are analyzed for comparison. As a case study these idioms are applied to the development of the pylibsr (open full item for complete abstract)

    Committee: Jonathan Maletic (Advisor); Qiang Guan (Committee Member); Mikhail Nesterenko (Committee Member); Gregory DeLozier (Committee Member) Subjects: Computer Science
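
    Several of the listed idioms can be shown in one sketch. The C library (libexample.so) and its ex_* functions are hypothetical; the point is the Pythonic surface, where the user gets context management, iteration, printability, and raised exceptions instead of manual open/free calls and C return codes.

```python
import ctypes

class CFileHandle:
    """Pythonic facade over a hypothetical C handle, illustrating the
    Context Managing, Iteration, Printability, Raising Errors, and
    Destructors idioms (no manual free() calls by the user)."""

    def __init__(self, path: str):
        self._lib = ctypes.CDLL("libexample.so")       # hypothetical C library
        self._handle = self._lib.ex_open(path.encode())
        if not self._handle:
            raise OSError(f"ex_open failed for {path!r}")  # raise, don't return codes

    def __enter__(self):                               # idiom: context managing
        return self

    def __exit__(self, *exc):
        self.close()

    def __iter__(self):                                # idiom: iteration over records
        while (rec := self._lib.ex_next(self._handle)):
            yield rec

    def __repr__(self):                                # idiom: printability
        return f"<CFileHandle handle={self._handle:#x}>"

    def close(self):                                   # destructor logic hidden from callers
        if self._handle:
            self._lib.ex_close(self._handle)
            self._handle = None

# with CFileHandle("data.bin") as f:
#     for record in f:
#         ...
```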
  • 4. Marapakala, Shiva Machine Learning Based Average Pressure Coefficient Prediction for ISOLATED High-Rise Buildings

    Master of Science in Mechanical Engineering, Cleveland State University, 2023, Washkewicz College of Engineering

    In structural design, the distribution of wind-induced pressure exerted on structures is crucial. The pressure distribution for a particular building is often determined by scale-model tests in boundary layer wind tunnels (BLWTs). Experiments with BLWTs must be done for all combinations of building shapes and wind factors of interest. Because this procedure can be time- and resource-intensive, resource or physical testing restrictions may limit the acquisition of needed data. A trustworthy method to cyber-enhance data-collecting operations in BLWTs is therefore sought. This research analyzes how machine learning approaches may improve traditional BLWT modeling to increase the information obtained from tests while proportionally lowering the work needed to complete them. The more general question centers on how a machine learning-enhanced method ultimately leads to approaches that learn as data are collected and subsequently optimize the execution of experiments to shorten the time needed to complete user-specified objectives. Three different machine learning models, namely support vector regressors (SVR), gradient boosting regression trees (GBRT), and feed-forward deep neural networks (DNN), were used to predict the surface-averaged mean pressure coefficients cp on isolated high-rise buildings. The models were trained to predict average cp for missing wind angles and for varying building dimensions. Both global and local approaches to training the models were used and compared. Tokyo Polytechnic University's Aerodynamic Database for Isolated High-rise Buildings was used to train all the models in this study. Local and global prediction approaches were used for the DNN and GBRT models, and no considerable difference was found between them. The DNN model showed the best accuracy (R² > 99%, MSE < 1.5%) among the models used, for both missing angles and missing dimensions, and the other two models also showed high accuracy (R² > 97%, MSE < 4%).

    Committee: Navid Goudarzi (Committee Chair); Prabaha Sikder (Committee Member); Mustafa Usta (Committee Member) Subjects: Artificial Intelligence; Design; Engineering; Urban Planning
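
    A minimal sketch of one of the three regressors (GBRT, via scikit-learn) on synthetic stand-in data; the actual study trains on the TPU aerodynamic database with wind angle and building dimensions as inputs. The feature definitions and coefficients below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in for the TPU database: wind angle plus two aspect
# ratios as features; surface-averaged mean cp as the target.
rng = np.random.default_rng(0)
X = rng.uniform([0, 1, 1], [90, 5, 5], size=(500, 3))   # angle, D/B, H/B (illustrative)
y = -0.5 + 0.01 * X[:, 0] - 0.1 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.02, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05).fit(X_tr, y_tr)

pred = model.predict(X_te)   # e.g., cp at wind angles held out of training
print(f"R2 = {r2_score(y_te, pred):.3f}, MSE = {mean_squared_error(y_te, pred):.4f}")
```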
  • 5. Raffai, Akosh Evaporation and condensation from epiphytes during saturated conditions in a maritime live-oak forest (Georgia, USA).

    Master of Science in Mechanical Engineering, Cleveland State University, 2023, Washkewicz College of Engineering

    Most of the rain that falls over land falls over forests, which cover approximately one-third of the global land surface. Canopy-rainfall interactions alter the rainwater supply to the surface, exerting significant immediate and wide-ranging impacts on hydrological, ecological, and societal systems. All storm-related hydrological processes are affected by the relative rates at which canopy surfaces retain, evaporate, and redistribute rain. Many forest canopies host a community of plants called epiphytes that are generally capable of storing and evaporating substantial water. Epiphytes are comparatively under-researched regarding their role in rainfall partitioning compared to bark and leaves. Skidaway Island in Savannah, Georgia, has a forest canopy that hosts an epiphyte community consisting primarily of three groups (Tillandsia, Pleopeltis, and detritus) on a single host tree species, Quercus virginiana (southern live oak). The objective of this research was to determine the amount of time the study epiphytes were saturated, the amount of rainfall evaporated by the epiphytes, and the amount of condensation received by the epiphytes. Saturation time was found to have a positive relationship with Pleopeltis and detritus biomass and a negative relationship with Tillandsia biomass. An indirect positive relationship was also observed between Pleopeltis and detritus biomass and evaporation and condensation amounts.

    Committee: John Van Stan (Advisor); Yong Tao (Committee Chair); Michael Gallagher (Committee Member) Subjects: Environmental Science; Mechanical Engineering
  • 6. Oerther, Catie Analyzing the Need for Nonprofits in the Housing Sector: A Predictive Model Based on Location

    Bachelor of Arts, Wittenberg University, 2023, Computer Science

    This paper presents a study on developing a program that assists nonprofit organizations in determining the ideal location for building their facilities based on community needs, thus maximizing their potential for success. The study highlights the importance of location in the success of nonprofit organizations and the challenges they face in identifying suitable areas for their operations. The paper reviews existing literature on nonprofit organizations, location analysis, and data analysis techniques, and proposes a methodology for developing the program. The methodology involves data collection and analysis and uses machine learning algorithms to predict community needs. The program provides a user-friendly interface for nonprofit organizations to access and analyze the data and offers recommendations for suitable locations based on their criteria. The study concludes that the proposed program can be a valuable tool for nonprofit organizations to make informed decisions about their location and to maximize their potential for success in serving their communities.

    Committee: Tyler Highlander (Advisor); Adam Parker (Committee Member); Kevin Steidel (Committee Member) Subjects: Business Administration; Computer Science; Geography; Management; Operations Research; Social Work
  • 7. Koch, Johnathan Applying Computational Resources to the Down-Arrow Problem

    Master of Science in Mathematics, Youngstown State University, 2023, Department of Mathematics and Statistics

    A graph G is said to arrow a graph H, written G→H, if every red-blue edge coloring of G contains a monochromatic copy of H. The down-arrow Ramsey set reports all subgraphs H of a graph G for which G→H. Formally, the down-arrow Ramsey set of a graph G is ↓G := {H ⊆ G : G→H}. Calculating this set by way of scientific computing is computationally prohibitive with the resources commonly available to graph theorists and other academics. Using existing research into complete graphs, the down-arrow Ramsey sets for small complete graphs (Kn for 2 ≤ n ≤ 7) can be generated quickly. For larger complete graphs (Kn for 8 ≤ n ≤ 11), specific pre-processing steps are leveraged, in addition to existing data sets, to speed up calculations. Presented is work on the development of a Python script to generate the down-arrow Ramsey set of a graph through efficient memory management and parallel computing methodologies. The down-arrow generator is used to report new results on complete graphs as well as complete bipartite graphs and assorted other graphs.

    Committee: Alexis Byers PhD (Advisor); Alina Lazar PhD (Committee Member); Anita O'Mellan PhD (Committee Member) Subjects: Computer Science; Mathematics
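
    The arrowing definition translates directly into a brute-force check, which also makes clear why the computation is prohibitive: the number of red-blue colorings grows as 2^|E(G)|, hence the thesis's pre-processing and parallelization. A small sketch using networkx (not the thesis's script):

```python
from itertools import product
import networkx as nx

def arrows(G: nx.Graph, H: nx.Graph) -> bool:
    """Brute-force test of G -> H: every red/blue edge coloring of G must
    contain a monochromatic subgraph isomorphic to H. Exponential in the
    number of edges, so only feasible for very small graphs."""
    edges = list(G.edges())
    for coloring in product((0, 1), repeat=len(edges)):
        for color in (0, 1):
            mono = nx.Graph([e for e, c in zip(edges, coloring) if c == color])
            matcher = nx.algorithms.isomorphism.GraphMatcher(mono, H)
            if matcher.subgraph_is_monomorphic():
                break
        else:
            return False        # this coloring has no monochromatic H
    return True

# K5 does not arrow K3 (since R(3,3) = 6), while K6 does:
print(arrows(nx.complete_graph(5), nx.complete_graph(3)))  # False
print(arrows(nx.complete_graph(6), nx.complete_graph(3)))  # True
```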
  • 8. Bossenbroek, Jamie Automatic Proteoform Detection in Top-Down Mass Spectrometry

    Master of Science, The Ohio State University, 2023, Computer Science and Engineering

    Top-down mass spectrometry (MS) is a valuable tool for analyzing intact proteins. An important part of protein analysis is identifying proteins with varying structure, or proteoforms. Mass spectrometry data is represented as a list of peaks, each with an m/z, retention time, and intensity value. Many tools have been developed to automatically analyze MS data and identify proteoform features by grouping peaks into isotopic envelopes. Here, we present a new top-down MS proteoform detection tool that groups peaks with similar m/z values into vertical lines and then groups vertical lines into isotopic envelopes based on their retention times, intensity distribution, and spacing along the m/z axis. The algorithm takes advantage of localized noise removal to remove high-intensity noise peaks and improve analysis time. It uses the Averagine model to estimate a theoretical distribution when grouping vertical lines into envelopes. Finally, a neural network scores identified features by comparing the peaks in each experimental and theoretical envelope pair. The tool demonstrated high consistency when tested across multiple replicates. It was also benchmarked against the TopFD proteoform detection tool as a measure of validity. The comparison showed that the tool was most successful in identifying higher-intensity isotopic envelopes. It correctly distinguished overlapping isotopic envelopes but struggled in regions with many high-intensity peaks. Future refinements, such as grouping isotopic envelopes into envelope collections, can be added to improve feature identification accuracy.

    Committee: Xia Ning (Advisor); Raghu Machiraju (Committee Member); Xiaowen Liu (Committee Member) Subjects: Biomedical Research; Computer Science
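
    The first grouping stage described above (peaks with similar m/z values into "vertical lines") can be sketched as a single pass over m/z-sorted peaks with a ppm tolerance. The Peak layout and the tolerance value are assumptions for illustration, not the tool's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Peak:
    mz: float
    rt: float         # retention time
    intensity: float

def group_vertical_lines(peaks: list, tol_ppm: float = 10.0) -> list:
    """Sort peaks by m/z and merge consecutive peaks within a ppm
    tolerance into 'vertical lines'."""
    peaks = sorted(peaks, key=lambda p: p.mz)
    lines = []
    for p in peaks:
        if lines and abs(p.mz - lines[-1][-1].mz) / p.mz * 1e6 <= tol_ppm:
            lines[-1].append(p)     # same line: m/z within tolerance
        else:
            lines.append([p])       # start a new vertical line
    return lines

data = [Peak(800.4001, 12.1, 3e5), Peak(800.4002, 12.4, 2e5), Peak(800.7350, 12.2, 1e5)]
for line in group_vertical_lines(data):
    print([p.mz for p in line])     # [[800.4001, 800.4002], [800.735]]
```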
  • 9. Al Ismaeel, Abdulmoez Life Cycle Assessment of Concrete Parking Structures to Enhance Durability and Structural Performance

    Doctor of Philosophy, The Ohio State University, 2023, Civil Engineering

    The main objective of this study is to provide designers, manufacturers, and owners of new parking facilities with best practices and design choices that consider lifecycle costs and extreme loading scenarios for several selected parking structures in Ohio. To achieve this overall goal, an interactive tool was developed in Python to perform lifecycle cost analysis while considering parameters such as joint sealant, flange-to-flange connectors, and general repairs due to corrosion after environmental exposure. Snow load effects were also investigated for the case in which a plow pushes all the uniform snow accumulated on top of the roof slabs of thirteen parking garage structures to the corners or edges. Furthermore, the additional live load that could come from large numbers of driverless cars on cast-in-place and precast concrete parking structures was investigated. In this dissertation, a lifecycle assessment methodology is proposed for cast-in-place and precast concrete parking structures to identify and address durability and structural performance issues, with the objective of answering these specific questions: (1) how to perform overall lifecycle assessment of parking structures, (2) how to assess the performance of double-tee beam flange-to-flange connections and joint leakage, and (3) how to investigate a parking structure's ability to carry unexpected loads. The author had access to design, repair, and maintenance data from several existing concrete parking structures. Historical maintenance and repair records were used to assess the impact of design changes intended to improve durability and structural performance. The interactive Python tool performs lifecycle cost analysis considering parameters including joint sealants, flange-to-flange connectors, periodic damage repairs, and general maintenance due to environmental exposure. The new program also evaluates the fatigue stress conditions considering the design li (open full item for complete abstract)

    Committee: Halil Sezen (Advisor); Abdollah Shafieezadeh (Committee Member); Jieun Hur (Committee Member) Subjects: Civil Engineering
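
    The core arithmetic of such a lifecycle cost analysis is discounting scheduled repairs and maintenance to present value. A minimal sketch with hypothetical cost items and rates (not the dissertation's tool):

```python
def present_value(cost: float, year: int, discount_rate: float) -> float:
    return cost / (1 + discount_rate) ** year

def lifecycle_cost(initial: float, repairs: dict, annual_maintenance: float,
                   years: int, discount_rate: float = 0.03) -> float:
    """Initial construction + discounted scheduled repairs + discounted
    annual maintenance over the service life."""
    total = initial
    total += sum(present_value(c, yr, discount_rate) for yr, c in repairs.items())
    total += sum(present_value(annual_maintenance, yr, discount_rate)
                 for yr in range(1, years + 1))
    return total

# e.g., joint sealant replacement at year 10, connector repair at year 20
# (all amounts illustrative):
print(round(lifecycle_cost(5_000_000, {10: 150_000, 20: 300_000}, 40_000, 50)))
```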
  • 10. Shaik, Salma Analyzing Crime Dynamics and Investigating the Great American Crime Decline

    Doctor of Philosophy, University of Toledo, 2022, Industrial Engineering

    The main objectives of this dissertation are to investigate the effects of arrests and officers on the Great American Crime Decline (GACD), estimate short-term and long-term effects of arrests and policing officers on major crimes, and identify the causal directions between crime, arrests, and officers. Statistical and econometric models such as fixed-effects Poisson regression, panel ARDL estimation, and panel Granger causality testing are employed. To avoid spurious regression, tests for cross-section dependency, unit roots, slope homogeneity, and co-integration are conducted to identify the best modeling approaches for effect estimation and causality detection. Data from various sources, such as the U.S. Census Bureau, the FBI, the Vera Institute of Justice, and ICPSR, were collected and prepared. To carry out a fine-grained analysis, policing agencies were divided into groups based on population. The dataset for the GACD study consisted of 1,778 policing agencies from 1990-1999. Arrests for violent, property, disorder, and drug sale and possession offenses, along with police officers, were the predictors, while incarceration served as the control variable. For the causality study, data on 1,553 policing agencies from 1974-2020 were gathered, with violent and property arrests and officers as the independent variables. Results of the GACD study reveal that, across all agencies, drug possession and disorder arrests and officers had a deterrence effect on crime, mostly on property crime. Interestingly, officers had a significant deterrence effect on both violent and property crimes only in very large and large agencies. Also, property crimes started to decline at least three years earlier than violent crimes. It can be insightful to further examine this delay to understand whether property crimes have any effect on violent crimes. From the second study it was observed that both short-term and long-term significant relationships exist between arrests and crime across all agencies. Granger te (open full item for complete abstract)

    Committee: Matthew Franchetti Dr. (Committee Chair); Ahalapitiya Jayatissa Dr. (Committee Member); Yue Zhang Dr. (Committee Member); Benjamin George Dr. (Committee Member); Alex Spivak Dr. (Committee Member) Subjects: Criminology; Industrial Engineering; Statistics
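
    The per-unit building block of panel Granger causality testing can be sketched with statsmodels; the panel-wide procedure aggregates such tests across agencies, which statsmodels does not provide directly. The series below are synthetic stand-ins for one agency's annual crime and arrest counts.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-in for one agency's annual series (1974-2020), with the
# crime series depending on lagged arrests by construction:
rng = np.random.default_rng(1)
years = 47
arrests = rng.normal(size=years).cumsum()
crime = 0.6 * np.roll(arrests, 2) + rng.normal(scale=0.5, size=years)

df = pd.DataFrame({"crime": crime, "arrests": arrests})
# Column order matters: this tests whether the 2nd column ("arrests")
# Granger-causes the 1st ("crime"), up to the chosen maximum lag.
results = grangercausalitytests(df[["crime", "arrests"]], maxlag=3)
```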
  • 11. McKinsey, Vince Statistical Analysis of Specific Secondary Circuit Effect under Fault Insertion in 22 nm FD-SOI Technology Node

    Master of Science, The Ohio State University, 2021, Electrical and Computer Engineering

    Hardware Trust and Assurance (HT&A) is the study of securing hardware from faults and security threats, in much the same way as software security does for software. However, because hardware is, by definition, built from physical components, it is often cost-prohibitive to patch out threats and bugs after the hardware is implemented and deployed, as such fixes require physically replacing or modifying it. This paper explores secondary circuit effect metrics for HT&A, in particular Switching Rate Activity (SRA). It does so by reviewing previous use cases of SRA and then showing how SRA can be used statistically to confirm the existence of, and locate, a fault in a design. The goal is to work toward more statistical analysis of secondary circuit effects, such as power or netlists. Using Xcelium, Genus, GlobalFoundries' 22 nm fully-depleted silicon-on-insulator node, a prime-number-based testbench, and a carry-multiplexed adder, it was shown that a statistically measurable difference exists in the SRAs of a design when a fault is introduced. With a p-value of 1.03×10⁻¹⁴ for the means hypothesis test and a p-value of 0.407 for the variance test, the chosen faults did in fact change the behavior of the design when viewed through an SRA lens. In addition, this paper shows that, given the measurable SRA statistical difference of faults and the fact that faults will not propagate their effects throughout a design when viewed from an SRA perspective, a path exists to find and locate faults from the SRA effects alone.

    Committee: Steven Bibyk (Advisor); Ayman Fayed (Committee Member) Subjects: Electrical Engineering
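
    The two statistical comparisons reported above (a means hypothesis test and a variance test) can be sketched with scipy on synthetic SRA samples; the numbers are illustrative, not the paper's measurements.

```python
import numpy as np
from scipy import stats

# Illustrative per-net switching-rate activity for a fault-free and a
# fault-injected run of the same design (synthetic values):
rng = np.random.default_rng(7)
sra_clean = rng.normal(loc=0.30, scale=0.05, size=200)
sra_faulty = rng.normal(loc=0.27, scale=0.05, size=200)

t_stat, p_mean = stats.ttest_ind(sra_clean, sra_faulty)   # means hypothesis test
w_stat, p_var = stats.levene(sra_clean, sra_faulty)       # variance test

print(f"means test p = {p_mean:.3g}, variance test p = {p_var:.3g}")
# A tiny means-test p (the paper reports ~1e-14) flags a behavioral change
# under fault insertion; a large variance-test p (~0.4) does not.
```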
  • 12. Pattiam Giriprakash, Pavithran Systemic Identification of Radiomic Features Resilient to Batch Effects and Acquisition Variations for Diagnosis of Active Crohn's Disease on CT Enterography

    Master of Science in Biomedical Engineering, Cleveland State University, 2021, Washkewicz College of Engineering

    The use of radiomics to extract high-dimensional features from radiographic imaging and quantify subtle changes in tissue structure and heterogeneity has shown great potential for disease diagnosis and prognosis. However, radiomic features are known to be impacted by acquisition-related changes (e.g., dose and reconstruction variations in CT scans) as well as technical variations between cohorts (i.e., batch effects due to varying dosage and tube currents). Using features that are not resilient to such imaging variations can result in poor performance of downstream radiomics classifier models. In this study, we present a framework to systematically identify radiomic features that are resilient to both batch effects and acquisition differences, and to evaluate the impact of such variations on radiomic model performance. We demonstrate the utility of our approach in the context of distinguishing active Crohn's disease (CD) from healthy controls using a unique cohort of 164 CTE scans accrued from a single institution, which included (a) batch effects due to variations in effective dosage and tube current, as well as (b) scans simultaneously acquired at multiple doses and reconstructions (3 variations per patient). Our framework systematically evaluates the impact of acquisition variations (based on feature robustness to explicit dose/acquisition changes) and batch effects (based on feature stability to implicit dosage/current variations). Resilient radiomic features identified after accounting for both types of variations yielded the best random forest classifier performance across both discovery (AUC = 0.819 ± 0.043) and validation (AUC = 0.787) cohorts when using full-dose images; they were also found to be significantly more generalizable than features not optimized for such variations (AUC = 0.419 in validation). This subset of robust and stable (resilient) radiomic features also maintained its performance when evaluate (open full item for complete abstract)

    Committee: Satish E. Viswanath (Committee Chair); Hongkai Yu (Committee Member); Moo-Yeal Lee (Advisor) Subjects: Biology; Biomedical Engineering; Biomedical Research; Medical Imaging; Radiology
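
    A simplified stand-in for the stability screening idea: with each patient scanned at several doses/reconstructions, keep only features whose variation across those repeat acquisitions is small. The coefficient-of-variation criterion and the threshold here are assumptions, not the paper's exact framework.

```python
import numpy as np

def resilient_features(feats, names, cv_threshold=0.10):
    """Keep features whose mean coefficient of variation across repeated
    acquisitions of the same patient stays below a threshold.
    feats has shape (patients, acquisitions, features)."""
    mean = feats.mean(axis=1)                       # per-patient mean over repeats
    std = feats.std(axis=1)
    cv = np.abs(std / (mean + 1e-12)).mean(axis=0)  # average CV per feature
    return [n for n, c in zip(names, cv) if c < cv_threshold]

# Synthetic example: 40 patients x 3 acquisitions x 5 features, where
# features f2 and f4 are deliberately unstable across acquisitions.
rng = np.random.default_rng(3)
base = rng.normal(1.0, 0.3, size=(40, 1, 5))
feats = base + rng.normal(0, [0.01, 0.5, 0.02, 0.4, 0.05], size=(40, 3, 5))
print(resilient_features(feats, ["f1", "f2", "f3", "f4", "f5"]))
```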
  • 13. Baliu-Rodriguez, David Advancing the Methods for Qualitative and Quantitative Analyses of Microcystins in Water and Biological Tissues using Liquid Chromatography and High-Resolution Mass Spectrometry

    Doctor of Philosophy, University of Toledo, 2021, Chemistry

    Microcystins (MCs) are secondary metabolites generated by cyanobacteria, which can be present in drinking water sources during harmful algal blooms (HABs). MCs are potent liver toxins that inhibit the function of protein phosphatases 1 and 2A (PP1 and PP2A) by binding to the enzymes' active sites. Large MC doses lead to acute liver failure, but prolonged exposure to low levels of MCs may be more prevalent and pernicious. The effects of such exposure in humans are not well understood and are generally extrapolated from animal models. Further complicating their study, over 275 MC congeners have been discovered, though many remain unidentified. This dissertation advances MC research by describing novel extraction and data analysis methods coupled with liquid chromatography-mass spectrometry (LC-MS) for quantitative and qualitative analyses of MCs. A method was developed to extract and quantify MCs from mouse liver with limits of quantification (LOQs) lower than previously reported. MCs were extracted from 40-mg liver samples using 85:15 (v:v) CH₃CN:H₂O containing 200 mM ZnSO₄ and 1% formic acid. Solid-phase extraction with a C18 cartridge was used for sample cleanup. MCs were detected and quantified using LC-Orbitrap-MS with simultaneous MS/MS detection of the 135.08 m/z fragment from the conserved Adda amino acid for structural confirmation. The method was used to extract six MCs (MC-LR, MC-RR, MC-YR, MC-LA, MC-LF, and MC-LW) and the MC-LR cysteine adduct (MC-LR-Cys), which can be created from MC-LR in vivo by the glutathione detoxification pathway, from spiked liver tissue. Matrix-matched internal standard calibration curves were constructed for each MC (R² ≥ 0.993), with LOQs between 0.25 ng per g of liver tissue (ng/g) and 0.75 ng/g for MC-LR, MC-RR, MC-YR, MC-LA, and MC-LR-Cys, and 2.5 ng/g for MC-LF and MC-LW. The protocol was applied to extract and quantify MC-LR and MC-LR-Cys from the liver of mice that had been gavaged with 50 μg or 100 μg of MC-LR per kg body (open full item for complete abstract)

    Committee: Dragan Isailovic (Advisor); Jon Kirchhoff (Committee Member); David Kennedy (Committee Member); Peter Andreana (Committee Member) Subjects: Analytical Chemistry; Biochemistry
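
    The quantification step rests on matrix-matched internal-standard calibration: fit the analyte-to-internal-standard response ratio against spiked concentration, then invert the fit for unknowns. A minimal sketch with illustrative numbers (not the dissertation's data):

```python
import numpy as np

conc = np.array([0.25, 0.75, 2.5, 7.5, 25.0])          # ng/g spiked into liver matrix
ratio = np.array([0.012, 0.037, 0.121, 0.372, 1.240])  # peak-area ratio to internal std

# Linear calibration fit and its coefficient of determination:
slope, intercept = np.polyfit(conc, ratio, 1)
pred = slope * conc + intercept
ss_res = np.sum((ratio - pred) ** 2)
ss_tot = np.sum((ratio - ratio.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.4f}")

# Quantify an unknown sample by inverting the calibration:
unknown_ratio = 0.20
print(f"estimated concentration = {(unknown_ratio - intercept) / slope:.2f} ng/g")
```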
  • 14. Natarajan, Keerthana Integrating Machine Learning with Web Application to Predict Diabetes

    MS, University of Cincinnati, 2021, Education, Criminal Justice, and Human Services: Information Technology

    Diabetes is one of the leading causes of death in the world. Diabetes occurs when the blood glucose level in the body is too high. Over time, high blood glucose leads to heart disease, stroke, and eye and foot problems. To prevent these effects, early detection is required, enabling proper medical treatment and lifestyle changes. With the rise of machine learning, we can predict whether a patient has diabetes. Furthermore, we integrate the trained model into a web application that generates predictions in real time from factors associated with diabetes such as body mass index (BMI), age, and insulin. In this paper, we use the Pima Indian dataset, originally from the National Institute of Diabetes and Digestive and Kidney Diseases, to design a diabetes prediction model using machine learning. The proposed system is a soft-voting ensemble classifier; the most accurate algorithm was used for making predictions. The model is implemented in Python, deployed to the web using Flask (a Python web framework) hosted on GCP, and takes inputs from the user to make predictions. The soft-voting ensemble classifier performs better than the other classifiers, with an accuracy of 91.55%, which is promising relative to the other classification models in the literature for this problem.

    Committee: Nelly Elsayed Ph.D. (Committee Chair); Bilal Gonen Ph.D. (Committee Member); M. Murat Ozer Ph.D. (Committee Member) Subjects: Computer Science
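
    A minimal sketch of a soft-voting ensemble served through Flask. The member estimators, the file path, and the request format are assumptions for illustration; the thesis's actual ensemble composition is not specified in this abstract.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assumes the Pima CSV with the usual 8 features plus an 'Outcome' label:
df = pd.read_csv("pima_diabetes.csv")                   # hypothetical path
X, y = df.drop(columns="Outcome"), df["Outcome"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Soft voting averages the member models' predicted class probabilities.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("nb", GaussianNB())],
    voting="soft",
).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))

# Minimal Flask endpoint wrapping the trained model:
from flask import Flask, request, jsonify
app = Flask(__name__)

@app.post("/predict")
def predict():
    features = pd.DataFrame([request.get_json()])       # e.g. {"Glucose": 120, ...}
    return jsonify(diabetic=int(ensemble.predict(features)[0]))
```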
  • 15. Fadeel, Abdalsalam Development and Application of a Computational Modeling Scheme for Periodic Lattice Structures

    Doctor of Philosophy (PhD), Wright State University, 2021, Engineering PhD

    Sandwich structures are widely used in aerospace, marine, and other applications due to their light weight, strength, and strain-energy absorption capability. The cores of sandwich structures are typically fabricated from high-strength cellular materials such as aluminum and titanium alloys, polymer foams, or honeycombs. Lattice cell structures (LCS) of different configurations, such as body-centered cubic (BCC), tetrahedron, and pyramidal, are being investigated as core material due to their design freedom and periodic nature. With the recent advent of additive manufacturing (AM), new research is being pursued in designing and developing application-specific LCS configurations. However, experimental investigation of LCS is costly in time and materials. Therefore, in this dissertation, finite element models are developed using ABAQUS and validated against previous experimental results to design application-specific LCS. First, an efficient and user-friendly tool called the Lattice Structure Designer (LSD) was developed as an ABAQUS GUI plug-in using Python scripting. The tool can create the lattice models, define the materials, define the geometry, define the boundary conditions, apply loads, and submit the jobs to perform the computational analysis. The same tool can access the database files and calculate additional outputs. This ABAQUS plug-in has effectively helped capture responses beyond the plasticity levels and the failure mechanisms of the lattice structure. In this research, three lattice types are considered: body-centered cubic (BCC), tetrahedron with horizontal struts (TetH), and pyramidal (Pyr). These models are used to understand the failure mechanisms and the relation between post-yielding deformations and lattice topology. All of these configurations were tested under compression in the z direction under quasi-static condition (open full item for complete abstract)

    Committee: Ahsan Mian Ph.D. (Advisor); Raghavan Srinivasan Ph.D. (Committee Member); Daniel Young Ph.D. (Committee Member); Joy Gockel Ph.D. (Committee Member); Golam Newaz Ph.D. (Committee Member) Subjects: Aerospace Engineering; Aerospace Materials; Engineering; Mechanical Engineering; Polymers
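
    The geometry-generation step such a plug-in performs can be sketched in plain Python: a BCC unit cell connects its center node to its eight corners. The ABAQUS-specific API calls (part, material, load, and job definitions) are omitted; this is not the LSD's code.

```python
import itertools
import numpy as np

def bcc_struts(nx: int, ny: int, nz: int, cell: float = 1.0):
    """Generate strut endpoints for an nx x ny x nz array of body-centered
    cubic (BCC) unit cells: each cell's center connects to its 8 corners.
    The returned (center, corner) pairs would be fed into the FE preprocessor
    as beam elements."""
    struts = []
    for i, j, k in itertools.product(range(nx), range(ny), range(nz)):
        origin = np.array([i, j, k], dtype=float) * cell
        center = origin + cell / 2
        for corner in itertools.product((0, 1), repeat=3):
            struts.append((center, origin + np.array(corner) * cell))
    return struts

lattice = bcc_struts(2, 2, 2)
print(len(lattice), "struts")   # 8 cells x 8 struts = 64
```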
  • 16. Alvarado, Chance The Effects of University Testing Regimes on the Burden of COVID-19

    Master of Mathematical Sciences, The Ohio State University, 2021, Mathematics

    Universities across the globe have adopted numerous intervention strategies in an attempt to curtail the spread of COVID-19 and keep students, faculty, and staff safe. A common strategy employed for on-campus student populations is frequent surveillance testing leading to isolation, contact tracing, and quarantine of suspected exposures. Limiting the spread of infection within the on-campus student body without placing undue burden on quarantine and isolation resources is a key question for university leadership. By examining the interplay between test frequency and test sensitivity under varying forces of infection, it is shown that, in the absence of vaccination, frequent testing must be employed to limit the spread of infection while also minimizing the strain on quarantine and isolation resources. Testing twice weekly using a test of high or low sensitivity for R0 values in the range of 2 to 3 generally contains the spread of COVID-19 without placing undue burden on quarantine and isolation resources. It is important to note that testing regimes that yield low quarantine and isolation burden do not necessarily lead to sufficient mitigation of infection in a large university population.

    Committee: Joseph Tien Dr. (Advisor); Grzegorz Rempala Dr. (Advisor) Subjects: Applied Mathematics; Epidemiology; Public Health
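
    A deliberately simplified discrete-time model shows the trade-off the thesis examines: detection outflow from the infectious pool scales with test sensitivity divided by the testing interval. All parameter values below are illustrative assumptions, not the thesis's calibrated model.

```python
def campus_outbreak(r0=2.5, test_interval_days=3.5, sensitivity=0.9,
                    infectious_days=7, population=10_000, initial=10, days=120):
    """Toy SIR with surveillance testing: each day roughly 1/test_interval
    of infectious students is tested and, if detected, moved to isolation."""
    beta = r0 / infectious_days
    s, i, isolated = population - initial, float(initial), 0.0
    peak_isolated = 0.0
    for _ in range(days):
        new_inf = beta * s * i / population
        detected = (sensitivity / test_interval_days) * i
        recovering = i / infectious_days
        leaving_iso = isolated / 10          # ~10-day isolation stay
        s -= new_inf
        i += new_inf - detected - recovering
        isolated += detected - leaving_iso
        peak_isolated = max(peak_isolated, isolated)
    return population - s, peak_isolated

for interval in (1, 3.5, 7):
    total, peak = campus_outbreak(test_interval_days=interval)
    print(f"test every {interval} days: {total:.0f} infected, peak isolation {peak:.0f}")
```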
  • 17. Neupane, Babita Exploring forms of masonry vaults built without centering

    MS, Kent State University, 2020, College of Architecture and Environmental Design

    Inspired by historical construction techniques and contemporary form-finding methods, as well as practices in designing and constructing masonry vaults, this thesis explores forms that can be built without centering during construction. It examines the literature on the geometries, construction techniques, and methods used to generate masonry arches, vaults, and domes without centering or with minimal supports. The main idea rests on self-supporting courses, in which each masonry unit satisfies its own equilibrium condition. Based on the principles of equilibrium, this research develops algorithms for two cases: bricks not bonded with mortar and bricks bonded with mortar. Algorithms to generate an arch subjected to additional vertical load have also been investigated for both cases. Moreover, using the Python component in Grasshopper, a tool implementing the proposed algorithms is developed. The tool is used for parametric design to create new forms of masonry vaults that could be built without centering.

    Committee: Rui Liu Dr. (Advisor); Elwin C. Robison Dr. (Committee Member); Qindan Huang Dr. (Committee Member) Subjects: Architectural; Architecture; Design
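
    For the unbonded (mortarless) case, the per-unit equilibrium idea reduces in 2D to the classic corbelling rule: each course may overhang the one below only so far that the center of mass of everything above stays over the support. A toy sketch of that rule (not the thesis's Grasshopper tool):

```python
def max_corbel_overhangs(n_units: int, unit_length: float = 1.0):
    """Mortarless corbelling: course k from the top may overhang the course
    below by unit_length / (2k), so the combined center of mass of all units
    above remains over the support -- the harmonic-series rule. Real vaults
    add the mortar-bonded case, applied loads, and 3D geometry."""
    return [unit_length / (2 * k) for k in range(1, n_units)]

overhangs = max_corbel_overhangs(6)
print([round(o, 3) for o in overhangs])   # [0.5, 0.25, 0.167, 0.125, 0.1]
print("total reach:", round(sum(overhangs), 3))
```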
  • 18. Kondapalli, Swetha An Approach To Cluster And Benchmark Regional Emergency Medical Service Agencies

    Master of Science in Industrial and Human Factors Engineering (MSIHE) , Wright State University, 2020, Industrial and Human Factors Engineering

    Emergency Medical Service (EMS) providers are the first responders for an injured patient in the field. Their assessment of patient injuries and determination of an appropriate hospital play a critical role in patient outcomes. A majority of US states have established a state-level governing body (e.g., an EMS Division) responsible for developing and maintaining a robust EMS system throughout the state. Such divisions develop standards, accredit EMS agencies, oversee the trauma system, and support new initiatives through grants and training. To do so, however, these divisions require data that enables them to first understand the similarities between existing EMS agencies in the state in terms of their resources and activities. Benchmarking agencies against similar peer groups could then reveal best practices among top performers in terms of patient outcomes. While limited qualitative data exists in the literature from surveys of EMS personnel about their working environment, training, and stress, what is lacking is a quantitative approach that can compare and contrast EMS agencies across a comprehensive set of factors and enable benchmarking. Our study fills this gap by proposing a data-driven approach to cluster EMS agencies (at the county level) and subsequently benchmark them against their peers using two patient safety performance measures: under-triage (UT) and over-triage (OT). The study was conducted in three phases: data collection, clustering, and benchmarking. We first obtained data on trauma-specific capabilities, volume, and performance improvement activities. This data was collected by our collaborating team of health services researchers through a survey of over 300 EMS agencies in the state of Ohio. To estimate UT and OT, we used 6,002 de-identified patient records from 2012 made available by the state of Ohio's EMS Division. All data were aggregated at the county level. We then used several clustering methods to group counties us (open full item for complete abstract)

    Committee: Pratik J. Parikh Ph.D. (Advisor); Subhashini Ganapathy Ph.D. (Committee Member); Corrine Mowrey Ph.D. (Committee Member) Subjects: Computer Science; Industrial Engineering; Statistics
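
    A minimal sketch of the clustering phase: standardize county-level features, then compare candidate cluster counts (k-means with silhouette scores here; the study compares several clustering methods). The feature matrix is a synthetic stand-in for the survey data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic county-level features: trauma capabilities, call volume, and
# performance-improvement activity scores, drawn around three group means.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 0.5, size=(30, 3)) for m in (0.0, 2.0, 4.0)])
X_scaled = StandardScaler().fit_transform(X)

# Pick k by silhouette score; UT/OT benchmarking then happens within clusters.
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_scaled)
    print(k, round(silhouette_score(X_scaled, labels), 3))
```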
  • 19. Ramanayaka Mudiyanselage, Asanga Data Engineering and Failure Prediction for Hard Drive S.M.A.R.T. Data

    Master of Science (MS), Bowling Green State University, 2020, Computer Science

    Failing hard drives within data centers can be costly, but predicting the failure of these devices is difficult because they are designed to be reliable and, as such, do not typically fail often or quickly. Due to this design goal, any data set that records hard drive failures tends to be highly imbalanced, containing many more records of hard drives that continue to function than of those that fail. Accordingly, this study focuses on predicting the failure of hard drives using the S.M.A.R.T. data records of the entire Backblaze data set, covering multiple years of data beginning in 2013. To perform this analysis, a data engineering process is developed for collecting, combining, and cleaning the data set before various resampling algorithms, machine learning algorithms, and distributed and high-performance computing techniques are applied to achieve proper feature selection and prediction. In addition, the data is divided on a per-manufacturer basis, which improves prediction performance.

    Committee: Robert Green Ph.D. (Advisor); Robert Dyer Ph.D. (Committee Member); Yan Wu Ph.D. (Committee Member) Subjects: Computer Science
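
    A minimal sketch of the class-imbalance handling: undersample the healthy-drive majority before training a classifier, then evaluate on the untouched test split. The synthetic features, the 10:1 ratio, and the random forest are illustrative assumptions; the thesis compares several resampling and learning algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for S.M.A.R.T. records: failures are a rare positive class.
rng = np.random.default_rng(9)
X = rng.normal(size=(100_000, 8))
y = (X[:, 0] + X[:, 3] + rng.normal(scale=2.0, size=100_000) > 6.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Undersample the majority (healthy) class to a 10:1 ratio in training only:
fail = np.flatnonzero(y_tr == 1)
ok = rng.choice(np.flatnonzero(y_tr == 0), size=10 * len(fail), replace=False)
idx = np.concatenate([fail, ok])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[idx], y_tr[idx])
print(classification_report(y_te, clf.predict(X_te), digits=3))
```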
  • 20. Fernandes Dias, Claudio Driver's Safety Analyzer: Sobriety, Drowsiness, Tiredness, and Focus

    Master of Science in Engineering, Youngstown State University, 2020, Department of Electrical and Computer Engineering

    The Driver's Safety Analyzer was designed after extensive research on the subject revealed the need for it. A car's safety system, as a whole, is developed to protect the driver from accidents. Broken into parts, it comprises, first, the control system that prevents the car from leaving its lane when another car is in its blind spot; second, the system that automatically stops the car when approaching another vehicle; and third, the systems that protect the driver during an accident, such as airbags. However, many car accidents are caused by driver irresponsibility. According to psychologists, human beings are, on average, overconfident: they assume they are more capable than the ordinary person. This behavior pushes people toward poor decisions such as driving after drinking alcohol, driving without sleep, texting while driving, and driving for long stretches without resting. The Driver's Safety Analyzer is intended to address this overconfidence problem. Using a microprocessor, a microcomputer, and software, the system monitors the driver's behavior and controls the environment to prevent the driver from making these poor decisions.

    Committee: Coskun Bayrak PhD (Advisor); Frank Li PhD (Advisor); Faramarz Mossayebi PhD (Committee Member) Subjects: Computer Engineering; Computer Science; Electrical Engineering; Engineering; Experiments; Systems Design