Search Results

(Total results 185)


  • 1. Zhang, Chi Uncertainty Quantification Using Simulation-Based and Simulation-Free Methods with Active Learning Approaches

    Doctor of Philosophy, The Ohio State University, 2022, Civil Engineering

    Uncertainty quantification is important in many engineering and scientific domains, as uncertainties, of both aleatory and epistemic types, are ubiquitous and inevitable since complete knowledge cannot be achieved. The probability of failure quantifies the likelihood of a system failing to meet a specific performance requirement. It is a vital measure of performance when uncertainties are considered, and it can facilitate design optimization and decision making for critical infrastructure systems. The computational costs of uncertainty quantification are often prohibitive due to the nature of multi-query analysis and expensive numerical models. Surrogate models can be used to facilitate the reliability analysis. Kriging is among the most popular surrogate models for reliability analysis due to its capability of providing uncertainty information. How to best utilize the simulation data to construct the Kriging model is a primary research topic in the reliability domain. This dissertation offers the following novel contributions to this research topic:

    • A novel methodology for adaptive Kriging reliability methods is proposed. It considers the global impact of adding new training points and focuses on reducing the error in the most effective manner.
    • An effective multi-fidelity reliability method is proposed. The information source and training points can be selected simultaneously to achieve optimal construction of the surrogate model.
    • A two-phase approach for reliability updating with adaptive Kriging is proposed. The error of the posterior failure probability introduced by the Kriging model is quantified.
    • The adaptive Kriging method is integrated with value-of-information analysis, and a knowledge-sharing scheme is developed to enhance the training efficiency.

    While surrogate models such as Kriging substantially reduce the computational cost of multi-query analyses, they still require costly simulations of complex computational models. (open full item for complete abstract)

    Committee: Abdollah Shafieezadeh (Advisor); Halil Sezen (Committee Member); Jieun Hur (Committee Member) Subjects: Civil Engineering
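
The adaptive Kriging loop that entry 1 builds on can be illustrated with a minimal AK-MCS-style sketch: a Gaussian process surrogate of the performance function is enriched at the Monte Carlo candidate with the highest misclassification risk. This is a generic illustration under assumptions (a toy limit state g(x) = 3 - x1 - x2, scikit-learn's GaussianProcessRegressor standing in for the Kriging model, and the common U learning function), not the dissertation's proposed global-impact criterion.

```python
# Minimal AK-MCS-style adaptive Kriging sketch (toy example, not the
# dissertation's method). Assumes numpy and scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
g = lambda x: 3.0 - x[:, 0] - x[:, 1]          # toy limit state: failure if g < 0
X_mc = rng.normal(size=(10_000, 2))            # Monte Carlo population

X_tr = rng.normal(size=(12, 2))                # initial design of experiments
y_tr = g(X_tr)

gp = GaussianProcessRegressor(kernel=1.0 * RBF(1.0), normalize_y=True)
for _ in range(30):                            # enrichment loop
    gp.fit(X_tr, y_tr)
    mu, sigma = gp.predict(X_mc, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)  # U learning function
    if U.min() >= 2.0:                         # common stopping criterion
        break
    best = np.argmin(U)                        # most misclassification-prone point
    X_tr = np.vstack([X_tr, X_mc[best]])
    y_tr = np.append(y_tr, g(X_mc[best:best + 1]))

pf = np.mean(gp.predict(X_mc) < 0)             # surrogate-based failure probability
print(f"estimated Pf = {pf:.4f}")
```
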
  • 2. Dallakoti, Pramish Structural Reliability Study of Highway Bridge Girders Based on AASHTO LRFD Bridge Design Specifications

    Master of Science, University of Toledo, 2020, Civil Engineering

    In structural reliability analysis, load and resistance factors are calibrated using various methodologies. Proper calibration of the load and resistance factors is essential to achieve design consistency and a desirable reliability index. The reliability index thus calculated is a measure of the reliability of the structural design. A closed-form solution is an elementary method for determining the reliability index for simple load and resistance cases. In some basic cases, such as when both load and resistance are normally or lognormally distributed, exact solutions are obtained. In practice, however, loads typically follow a normal distribution while resistance follows a lognormal distribution. In such cases, more rigorous and advanced calibration techniques, such as the Rackwitz-Fiessler procedure and the Monte Carlo method, are used to calculate the safety index. In this study, the computer-based Monte Carlo method was used to calculate the safety (reliability) index. The objective of this study is to describe such methodologies for determining the reliability of structural designs. Reliability analyses are performed to calibrate load and resistance factors using the AASHTO LRFD bridge design specifications for reinforced concrete T-beam bridge girders. Reliability indices are calculated for three-span continuous bridges with equal span lengths. Both exterior and interior girders are studied to understand the effect of loads and their trends. The reliability analysis suggests that bending moment governs over shear. Similarly, there is a significant increase in the contribution of dead load relative to live load for bending and shear as span length increases. Systematic variations of the load and resistance parameters are performed to investigate the change in the reliability index. Various graphs of reliability index versus sets of statistical and design parameters are plotted for the parametric study. Separate graphs are plotted to (open full item for complete abstract)

    Committee: Douglas Nims (Committee Chair); Luis Mata (Committee Member); Alex Spivak (Committee Member) Subjects: Civil Engineering
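
The Monte Carlo reliability-index calculation described in entry 2 reduces to sampling a limit state R - Q and inverting the failure probability through the standard normal CDF. A minimal sketch follows; the means and COVs are illustrative placeholders, not AASHTO-calibrated statistics.

```python
# Monte Carlo estimate of the reliability (safety) index for a single
# limit state R - Q, with normal load and lognormal resistance.
# The means/COVs below are illustrative, not AASHTO values.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1_000_000

mu_Q, cov_Q = 1.0, 0.18                        # load Q ~ Normal
mu_R, cov_R = 1.6, 0.12                        # resistance R ~ Lognormal

Q = rng.normal(mu_Q, cov_Q * mu_Q, n)
sigma_ln = np.sqrt(np.log(1 + cov_R**2))       # lognormal parameters from mean/COV
mu_ln = np.log(mu_R) - 0.5 * sigma_ln**2
R = rng.lognormal(mu_ln, sigma_ln, n)

pf = np.mean(R - Q < 0)                        # probability of failure
beta = -norm.ppf(pf)                           # reliability index
print(f"Pf = {pf:.2e}, beta = {beta:.2f}")
```
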
  • 3. Zhu, Shengnan 4H-SiC Power MOSFETs Design and Reliability

    Doctor of Philosophy, The Ohio State University, 2023, Electrical and Computer Engineering

    The adoption of 4H-SiC power MOSFETs in automotive applications is on the rise, driven primarily by performance and reliability improvements. This dissertation presents eleven 650 V SiC power MOSFET designs, which were fabricated by X-Fab on two 6-in SiC wafers and packaged for characterization. We evaluate the effects of the JFET region and layout topology designs on the static and dynamic performance and reliability of the 650 V SiC power MOSFETs. In addition, research on gate oxide reliability, including gate leakage current and gate oxide lifetime prediction, is conducted for commercially available planar and trench SiC power MOSFETs from various vendors. The JFET region design variations used in the 650 V SiC power MOSFETs include variations in JFET width and doping concentration. Packaged devices undergo typical I-V and C-V characterizations and double-pulse tests. Results show that a narrow JFET region with increased JFET doping concentration achieves low specific ON-resistance (Ron,sp) and reduced gate-drain capacitance (Cgd). Off-state TCAD simulation and short-circuit measurements demonstrate that a smaller JFET region benefits device reliability by better shielding the gate oxide during high-temperature reverse bias (HTRB) stress and increases the short-circuit withstand time (SCWT). We propose a new cell topology (Dodecagonal cell, or Dod cell for short) for planar SiC MOSFETs. The Dod cell structure features a twelve-sided P+ region with an ohmic contact located on top, surrounded by six hexagonal poly-Si gate regions connected by poly-Si bars. The hexagonal JFET regions are placed inside the gate regions. Similar to the previously proposed Octagonal (Oct) cell, the Dod cell is designed with minimum JFET regions to achieve low Cgd and is suitable for high-frequency switching conditions. Compared with the Oct cell, the new Dod cell reduces Ron,sp by optimizing the geometry, resulting in improved static performan (open full item for complete abstract)

    Committee: Anant Agarwal (Advisor) Subjects: Electrical Engineering; Engineering; Solid State Physics
  • 4. Sherbaf Behtash, Mohammad Reliability-Based Formulations for Simulation-Based Control Co-Design

    PhD, University of Cincinnati, 2022, Engineering and Applied Science: Mechanical Engineering

    Combined plant and control design (control co-design, or CCD) methods are generally used to address the synergistic coupling between the plant and control parts of a dynamic system. Recently, reliability-based design optimization (RBDO) principles have been used within CCD to address the design of stochastic dynamic systems. However, since the new reliability-based CCD (RBCCD) algorithms use all-at-once (AAO) formulations of CCD, only most-probable-point (MPP) methods can be used as a reliability analysis technique. This is a limitation as the use of such methods for highly-nonlinear RBCCD problems introduces solution error that could lead to system failure. A multidisciplinary feasible (MDF) formulation for RBCCD problems would eliminate this issue as the dynamic equality constraints would be satisfied through forward simulation. Since the RBCCD problem structure would be similar to traditional RBDO problems, any accurate reliability analysis method could be used. Therefore, in this work, a novel reliability-based MDF formulation of multidisciplinary dynamic system design optimization (RB-MDF-MDSDO) has been proposed for RBCCD. To quantify the uncertainty propagation, an accurate reliability analysis method using generalized polynomial chaos (gPC) expansions has been proposed. The effectiveness of the RB-MDF-MDSDO formulation and the proposed reliability analysis method are established via two test problems. The performance of the gPC method relative to the current state of the art, MPP methods, is relatively unknown for RBCCD applications. Specifically, the only known information pertains to RBDO applications, where the gPC expansion method is generally known to be more accurate, but also computationally more expensive than the MPP methods. Therefore, to benchmark the performance of the gPC expansion method against MPP methods, the first-ever double-loop and single-loop MPP-based formulations of RB-MDF-MDSDO are developed, and their solution accuracy and e (open full item for complete abstract)

    Committee: Michael Alexander-Ramos Ph.D. (Committee Member); David Thompson Ph.D. (Committee Member); Manish Kumar Ph.D. (Committee Member); Sam Anand Ph.D. (Committee Member) Subjects: Engineering
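
The generalized polynomial chaos (gPC) reliability analysis entry 4 proposes can be sketched in its simplest regression form: fit a Hermite polynomial expansion of the limit state in standard normal variables by least squares, then estimate the failure probability by sampling the cheap surrogate. The limit state below is a toy function, not one of the RB-MDF-MDSDO test problems.

```python
# Minimal gPC reliability sketch: least-squares probabilists' Hermite
# expansion of a toy limit state, then Monte Carlo on the surrogate.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)
g = lambda x: 3.0 - x[:, 0] ** 2 / 4 - x[:, 1]   # toy limit state (failure: g < 0)

# Total-degree-3 Hermite basis in two standard normal variables
X = rng.normal(size=(400, 2))                    # regression (training) points
V1, V2 = hermevander(X[:, 0], 3), hermevander(X[:, 1], 3)
idx = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
Phi = np.column_stack([V1[:, i] * V2[:, j] for i, j in idx])

coef, *_ = np.linalg.lstsq(Phi, g(X), rcond=None)  # least-squares gPC coefficients

# Evaluate the surrogate on a large Monte Carlo sample
Xs = rng.normal(size=(200_000, 2))
W1, W2 = hermevander(Xs[:, 0], 3), hermevander(Xs[:, 1], 3)
g_hat = np.column_stack([W1[:, i] * W2[:, j] for i, j in idx]) @ coef
print(f"gPC-estimated Pf = {np.mean(g_hat < 0):.4f}")
```
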
  • 5. Thapa, Badal Reliability Analysis of Linear Dynamic Systems by Importance Sampling-Separable Monte Carlo Technique

    Master of Science, University of Toledo, 2020, Mechanical Engineering

    For many problems, especially nonlinear systems, the reliability assessment must be done in the time domain. Monte Carlo simulation (MCS) can accurately assess the reliability of a system; however, its computational cost is prohibitive for complex dynamic systems. The Importance Sampling (IS) method is more efficient than standard MCS for reliability assessment. It has been applied to dynamic systems when the excitation is defined by a Power Spectral Density (PSD) function. The central idea of the IS method is to generate sample time histories using a sampling PSD and to introduce a likelihood ratio for each replication, giving an unbiased estimator of the probability of failure. Another method more efficient than MCS for the reliability assessment of dynamic systems is the Separable Monte Carlo (SMC) method. This method has been applied to linear dynamic systems as follows: frequencies are drawn from the PSD of the excitation, the system responses to each frequency are calculated, and both are stored in a database. The stored frequencies and their respective responses are then chosen randomly, with replacement, for each replication to find the system response to the linear combination of the respective sinusoidal functions. Therefore, SMC can assess the reliability of the system with a proper database. The size of the database depends on the shape of the PSD function and the complexity of the system. This research proposes a new method combining IS with SMC to assess the reliability of linear dynamic systems. In this method, a database formed using a sampling PSD is used to estimate the reliability of the system for the true spectrum. The proposed method is more efficient than either the IS or SMC method individually, in terms of both computational time and accuracy. The proposed method is demonstrated using a 10-bar truss.

    Committee: Mohammad Elahinia (Committee Chair); Mahdi Norouzi (Committee Co-Chair); Shawn P. Capser (Committee Member) Subjects: Mechanical Engineering
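
The likelihood-ratio reweighting at the core of entry 5's IS-SMC combination is easiest to see in a static toy problem: sample from a density shifted toward the failure region and weight each replication by f(x)/q(x). This sketch is one scalar random variable, not a PSD-driven time-history simulation.

```python
# Importance sampling with a likelihood ratio: draw from a shifted
# sampling density q, reweight by f/q for an unbiased Pf estimator.
# Static toy limit state, not the paper's dynamic-system setting.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
beta_true = 3.0
g = lambda x: beta_true - x                    # failure if g(x) < 0, x ~ N(0,1)

n = 20_000
shift = 3.0                                    # sampling density centred near failure
x = rng.normal(shift, 1.0, n)                  # draws from q = N(shift, 1)
w = norm.pdf(x) / norm.pdf(x, loc=shift)       # likelihood ratio f/q
pf_is = np.mean((g(x) < 0) * w)                # unbiased IS estimator

print(f"IS Pf = {pf_is:.2e} (exact {norm.cdf(-beta_true):.2e})")
```
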
  • 6. Wang, Zeyu Reliability Analysis and Updating with Meta-models: An Adaptive Kriging-Based Approach

    Doctor of Philosophy, The Ohio State University, 2019, Civil Engineering

    Uncertainties are ubiquitous in various engineering and science fields. Examples include the analysis and design of structures and infrastructure systems against natural or manmade hazards, rocket and satellite design in aerospace engineering, and safety analysis in nuclear engineering. To enhance the performance of these systems, actions taken by designers and decision-makers should be directed toward a set of performance objectives with higher reliability or resilience. Moreover, as sensing technologies mature and become more cost-efficient, allowing their implementation at large scales, information about the state of the built and natural environments is becoming more available. This information can be leveraged to reevaluate or update forecasts of the performance of these systems and to enhance confidence in those forecasts. Analysis of the new information can therefore lead to more effective risk-informed decisions. Uncertainty Quantification (UQ) techniques such as reliability analysis and updating can help with quantitative assessment and real-time updates of infrastructure performance through the estimation of the probability of failing to meet one or a set of objectives. State-of-the-art techniques based on surrogate models, such as Kriging, open new avenues for reliability analysis by adaptively learning the shape of the limit state and substituting the originally time-consuming performance function with the estimated one. However, (I) unnecessary training, (II) the lack of an accuracy measure for the failure probability estimate, (III) the high computational demand of high-dimensional problems, and (IV) the lack of capability to perform reliability analysis with real-time updating remain significant challenges. To address these limitations, this study offers the following novel contributions: - A methodology called Reliability analysis through Error rate-based Adaptive Kriging (REAK) is proposed to significantl (open full item for complete abstract)

    Committee: Abdollah Shafieezadeh (Advisor); Halil Sezen (Committee Member); Alper Yilmaz (Committee Member); Jieun Hur (Committee Member) Subjects: Civil Engineering; Engineering; Mechanical Engineering; Operations Research
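
Challenge (II) above, the lack of an accuracy measure for the failure probability estimate, can be illustrated by bounding Pf with the Kriging variance itself: Monte Carlo points whose predicted sign is uncertain define a band around the estimate. This is a generic pointwise-confidence sketch; REAK derives tighter error-rate-based bounds.

```python
# Bounding the surrogate-based failure probability with the GP variance:
# points the surrogate cannot classify confidently give Pf bounds.
# Generic sketch only, not REAK's error-rate derivation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
g = lambda x: 3.0 - x[:, 0] - x[:, 1]         # toy limit state (failure: g < 0)
X_tr = rng.normal(size=(40, 2))
gp = GaussianProcessRegressor(kernel=1.0 * RBF(1.0), normalize_y=True)
gp.fit(X_tr, g(X_tr))

X_mc = rng.normal(size=(50_000, 2))
mu, sigma = gp.predict(X_mc, return_std=True)
k = 1.96                                      # ~95% pointwise confidence
sure_fail = mu + k * sigma < 0                # fails even at the upper bound
maybe_fail = mu - k * sigma < 0               # fails at the lower bound
pf_lo, pf_hi = sure_fail.mean(), maybe_fail.mean()
print(f"Pf in [{pf_lo:.4f}, {pf_hi:.4f}], point estimate {np.mean(mu < 0):.4f}")
```
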
  • 7. Lockhart, Jonathan Software Development Process and Reliability Quantification for Safety Critical Embedded Systems Design

    PhD, University of Cincinnati, 2019, Engineering and Applied Science: Electrical Engineering

    Embedded systems are at the forefront of everyday life, being utilized in smart devices such as cell phones and internet of things (IoT) devices around the home, as well as in the latest components in aerospace and automobile technology. Reliance on these devices is critical to the day-to-day operations of society, and they are required to be secure and reliable to maintain the safety of those who depend on them. Among these devices, trusted safety-critical embedded systems are rigorously designed for security and reliability so they can be counted on to perform their assigned responsibilities with a low probability of failure, as such a failure could, in the worst-case scenario, cost people their lives. Trusted embedded systems are often developed in hardware, using the latest field programmable gate arrays (FPGAs) and integrated circuits (ICs), as hardware development has a long-established process for producing high-quality, fault-tolerant systems and reporting performance in a standardized way. The processes utilized are mature and repeatable, which shows that the reliability of these systems is consistent. These systems have increased in complexity, performing more and more tasks with each incremental increase in hardware performance. Unfortunately, the end of Moore's Law is coming, and though the development of new architectures and techniques has delayed it, a shift in design is required to continue increasing the complexity of trusted embedded systems. Software is being looked to as the way to continue the trend of complex, high-performance systems, but it suffers from its utilization in modern, agile development environments and from the use of unreliable reliability metrics. Therefore, software is not currently always suitable for integration into safety-critical systems, and it requires a new, encompassing development procedure that utilizes techniques and metrics that allow it to be used in hardware/software solutions. This dissertati (open full item for complete abstract)

    Committee: Carla Purdy Ph.D. (Committee Chair); Wen-Ben Jone Ph.D. (Committee Member); Daniel M. Peairs Ph.D. (Committee Member); Ranganadha Vemuri Ph.D. (Committee Member); Philip Wilsey Ph.D. (Committee Member) Subjects: Computer Engineering
  • 8. Awad, Mohammad Assessment of Pseudo-Continuous Arterial Spin Labeling (pCASL) Inter-Session Reliability in the Quantification of Cerebral Perfusion

    Master of Science (MS), Wright State University, 2019, Anatomy

    Arterial spin labeling (ASL) is a magnetic resonance imaging (MRI) technique for measuring cerebral blood flow (CBF) in a completely non-ionizing and non-invasive fashion. ASL is useful in perfusion studies of healthy adult and pediatric subjects, individuals who need multiple follow-ups, and patients with various cerebrovascular diseases, where changes in CBF can be used as an indicator of tissue viability. We used a variation of the ASL technique known as pseudo-continuous ASL (pCASL). This form of ASL is the clinical standard (Alsop et al., 2015). However, it is not well documented that pCASL is reliable between sessions spanning days to weeks. In this study, we assessed the inter-session reliability of CBF through the use of the pCASL technique. We hypothesized that the pCASL technique can be used to quantify CBF measurements across 24-hour and 48-hour periods. Subjects included 15 healthy, active-duty Air Force military personnel recruited by Wright-Patterson Air Force Base from a larger experiment. Of the 15 subjects scanned on day 1 and day 2, 2 did not return for scanning on the third day. All participants were scanned in three identical evening sessions separated by 24 hours. MR imaging was conducted on a 3T MRI scanner with a 24-channel head coil. Each of the three days began with a baseline imaging scan, followed by sham transcranial direct current stimulation (tDCS) and another identical imaging session. MRI acquisition included a 12-min resting-state functional MRI (fMRI), three task fMRI scans, a T1-weighted MRI, diffusion tensor imaging (DTI), magnetic resonance spectroscopy (MRS) imaging, and resting pCASL. Our work shows only the baseline imaging from each day and the resting pCASL results. Quantitative CBF maps were computed from the raw pCASL data using proton density maps and a single-compartment perfusion model through the clinical processing pipeline on the MRI. These CBF maps w (open full item for complete abstract)

    Committee: Kathrin Engisch Ph.D. (Committee Chair); Matthew Sherwood Ph.D. (Committee Co-Chair); David Ladle Ph.D. (Committee Member) Subjects: Anatomy and Physiology
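
The single-compartment perfusion model entry 8 refers to has a closed form in the consensus recommendation of Alsop et al. (2015). A sketch follows; the parameter values are typical 3T defaults (label duration, post-labeling delay, blood T1, labeling efficiency), assumed here rather than taken from the study's clinical pipeline.

```python
# Single-compartment pCASL CBF quantification in the consensus form of
# Alsop et al. (2015). Parameter values are typical 3T defaults, not
# necessarily those used by the study's processing pipeline.
import numpy as np

def cbf_pcasl(dM, PD, pld=1.8, tau=1.8, t1b=1.65, alpha=0.85, lam=0.9):
    """CBF in mL/100 g/min from the control-label difference dM and the
    proton density PD (same units); pld, tau, t1b in seconds."""
    num = 6000.0 * lam * dM * np.exp(pld / t1b)
    den = 2.0 * alpha * t1b * PD * (1.0 - np.exp(-tau / t1b))
    return num / den

# Example: a voxel with a 1% control-label signal difference
print(f"CBF = {cbf_pcasl(dM=0.01, PD=1.0):.1f} mL/100g/min")
```
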
  • 9. Miran, Seyedeh Azadeh RELIABILITY-BASED MANAGEMENT OF BURIED PIPELINES CONSIDERING EXTERNAL CORROSION DEFECTS

    Master of Science in Engineering, University of Akron, 2016, Engineering

    Corrosion is one of the main deterioration mechanisms that degrade energy pipeline integrity, as pipelines transport corrosive fluids or gases and interact with corrosive environments. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. To ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed approach is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of external corrosion defects. The dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and they can account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models, considering prevailing uncertainties, where three failure modes, namely small leak, large leak, and rupture, are considered. The performance of the pipeline is evaluated through the failure probability per km (called a sub-system), where each sub-system is considered a series system of detected and newly generated defects within that sub-sys (open full item for complete abstract)

    Committee: Qindan Huang Dr (Advisor); Qindan Huang Dr (Committee Chair); Ping Yi Dr (Committee Member); Shengyong Wang Dr (Committee Member) Subjects: Civil Engineering; Engineering; Industrial Engineering
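
The Bayesian updating step in entry 9 can be sketched with a minimal Metropolis sampler for a power-law depth growth model d(t) = a(t - t0)^b. The ILI data below are synthetic, the error model is a simple independent Gaussian, and flat priors are assumed; the thesis uses a richer model with depth-length dependence and matched/non-matched defects.

```python
# Minimal Metropolis sketch of Bayesian updating for a power-law
# corrosion growth model d(t) = a*(t - t0)**b from ILI depth readings.
# Synthetic data and a simple Gaussian error model for illustration.
import numpy as np

rng = np.random.default_rng(5)
t0, t_obs = 0.0, np.array([5.0, 10.0, 15.0])       # inspection years
a_true, b_true, noise = 0.4, 0.8, 0.05
d_obs = a_true * (t_obs - t0) ** b_true + rng.normal(0, noise, 3)

def log_post(theta):
    a, b = theta
    if a <= 0 or not 0 < b < 2:                    # flat priors: a > 0, 0 < b < 2
        return -np.inf
    resid = d_obs - a * (t_obs - t0) ** b
    return -0.5 * np.sum(resid**2) / noise**2      # Gaussian likelihood

theta, lp, chain = np.array([0.5, 0.5]), None, []
lp = log_post(theta)
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.05, 2)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5_000:])                    # discard burn-in
print("posterior mean (a, b):", chain.mean(axis=0))
```
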
  • 10. Li, Xiangfei Reliability Assessment for Complex Systems Using Multi-level, Multi-type Reliability Data and Maximum Likelihood Method

    Master of Science (MS), Ohio University, 2014, Industrial and Systems Engineering (Engineering and Technology)

    This thesis develops a methodology to assess the reliability of complex systems using multi-level, multi-type reliability data. Binomial (pass/fail), life testing, and accelerated life testing data at the component, subsystem, and system levels are integrated to estimate the system reliability using the maximum likelihood method. MATLAB is used to simulate multiple data sets at the component, subsystem, and system levels, and to solve the partial derivatives of the log-likelihood function with the simulated data. A Fisher information matrix is developed to analyze the confidence intervals of the parameters and reliability functions.

    Committee: Tao Yuan (Advisor); Diana Schwercha (Committee Member); Gary Weckman (Committee Member); John D Stowe (Committee Member) Subjects: Industrial Engineering
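
The data-integration idea in entry 10 is visible even in a one-component example: a single log-likelihood sums a term for exact exponential failure times and a binomial term for pass/fail demonstration results, and the observed Fisher information gives a confidence interval. The data below are illustrative, not the thesis's multi-level system model.

```python
# MLE combining two data types for one exponential component: exact
# failure times plus binomial pass/fail results from a demonstration
# test, with a CI from the observed Fisher information. Toy data.
import numpy as np
from scipy.optimize import minimize_scalar

times = np.array([120.0, 340.0, 95.0, 210.0])   # exact failure times (h)
m, k, t0 = 20, 17, 100.0                        # 17 of 20 units survived a 100 h test

def nll(lam):                                   # negative log-likelihood
    if lam <= 0:
        return np.inf
    ll = len(times) * np.log(lam) - lam * times.sum()               # life-test term
    ll += k * (-lam * t0) + (m - k) * np.log1p(-np.exp(-lam * t0))  # pass/fail term
    return -ll

res = minimize_scalar(nll, bounds=(1e-6, 1.0), method="bounded")
lam, h = res.x, 1e-6
fisher = (nll(lam + h) - 2 * nll(lam) + nll(lam - h)) / h**2  # observed information
se = 1.0 / np.sqrt(fisher)
print(f"lambda_hat = {lam:.5f}/h, 95% CI = ({lam - 1.96*se:.5f}, {lam + 1.96*se:.5f})")
```
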
  • 11. King, Jason Risk Quantification and Reliability Based Design Optimization in Reusable Launch Vehicles

    Master of Science in Engineering (MSEgr), Wright State University, 2010, Mechanical Engineering

    Due to the inherent natural variability of parameters in reusable launch vehicles, designs that make no use of a reliability or safety index may be unreliable and vulnerable to vehicle failures. Generally, little is known in preliminary air vehicle design regarding design variable uncertainties, consequently requiring a technique that can quantify epistemic uncertainty. Evidence Theory is employed to accomplish this task, resulting in reliability bounds of belief and plausibility. Due to the discontinuous nature of the belief and plausibility functions, it is necessary to implement a continuous function, known as the plausibility decision, to calculate sensitivities that can be used in a gradient-based reliability-based design optimization algorithm. This research develops a new plausibility decision approximation that calculates sensitivities with respect to uncertain design variables without introducing extra computational cost or numerical integration. The new metric is demonstrated in a sensitivity analysis as well as a reliability-based design optimization of the aeroelastic flutter reliability of a reusable launch vehicle's wing.

    Committee: Ramana Grandhi PhD (Advisor); Scott Thomas PhD (Committee Member); Ronald Taylor PhD (Committee Member); Ramana Grandhi PhD (Committee Member) Subjects: Aerospace Engineering; Design; Engineering; Mechanical Engineering
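
The belief/plausibility bounds entry 11 mentions come from Dempster-Shafer evidence theory: a basic probability assignment (BPA) distributes mass over focal elements (intervals), belief sums the mass of elements entirely inside the failure set, and plausibility sums the mass of elements that merely intersect it. The numbers below are toy values, not the launch-vehicle flutter problem.

```python
# Belief and plausibility bounds on a failure event under evidence
# theory, with failure defined as x < threshold. Toy BPA for illustration.

# Focal elements (intervals of an uncertain parameter) and BPA masses (sum to 1)
focal = [((0.8, 1.1), 0.3), ((1.0, 1.4), 0.5), ((1.2, 1.6), 0.2)]
threshold = 1.15                                  # failure event: x < 1.15

# Belief: mass of focal elements entirely inside the failure set
bel = sum(m for (lo, hi), m in focal if hi < threshold)
# Plausibility: mass of focal elements that intersect the failure set
pl = sum(m for (lo, hi), m in focal if lo < threshold)
print(f"{bel:.2f} = Bel(fail) <= Pf <= Pl(fail) = {pl:.2f}")
```
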
  • 12. Schilling, Walter A Cost Effective Methodology for Quantitative Evaluation of Software Reliability using Static Analysis

    Doctor of Philosophy in Engineering, University of Toledo, 2007, Electrical Engineering

    Software reliability represents an increasing risk to overall system reliability. As systems have become larger and more complex, mission-critical and safety-critical systems have increasingly had functionality controlled exclusively through software. This change has shifted the root cause of system failure from hardware to software. Market forces have encouraged projects to reuse existing software as well as to purchase COTS solutions. This has made the use of existing reliability models difficult. Traditional software reliability models require significant testing data to be collected during development. If this data is not collected in a disciplined manner or is not made available to software engineers, these modeling techniques cannot be applied. It is imperative that practical reliability modeling techniques be developed to address these issues. This dissertation puts forth a practical method for estimating software reliability. The proposed software reliability model combines static analysis of existing source code modules, functional testing with execution path capture, and a series of Bayesian Belief Networks. Static analysis is used to detect faults within the source code which may lead to failure. Code coverage is used to determine which paths within the source code are executed, as well as the execution rate. The Bayesian Belief Networks combine these parameters and estimate the reliability for each method. A second series of Bayesian Belief Networks then combines the data for each method to determine the overall reliability of the system. To use this model, the SOSART tool is developed. This tool serves as a reliability modeling tool and a bug-finding meta-tool suitable for comparing the results of different static analysis tools. Verification of the model is presented through multiple experimental instances. Validation is first demonstrated through application to a series of Open Source software packages. A second validation (open full item for complete abstract)

    Committee: Mansoor Alam (Advisor) Subjects:
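
The combination step entry 12 describes, per-method fault evidence weighted by how often each method executes, can be caricatured in a few lines. This is a crude stand-in for the dissertation's two-level Bayesian Belief Networks: the per-warning failure probability and the multiplicative model form are assumptions for illustration only.

```python
# Crude stand-in for the dissertation's two-level BBN: per-method
# failure likelihood from static-analysis findings, combined into a
# system estimate with execution-profile weights. Illustrative only.

# (method, severity-weighted static-analysis warnings, fraction of executions)
methods = [("parse", 3.0, 0.50), ("update", 1.0, 0.35), ("report", 0.0, 0.15)]

P_FAIL_PER_WARNING = 0.002          # assumed per-warning failure probability

r_sys = 1.0
for name, warnings, exec_frac in methods:
    r_method = (1.0 - P_FAIL_PER_WARNING) ** warnings   # per-call reliability
    r_sys *= r_method ** exec_frac                      # weight by execution rate
    print(f"{name:7s} R = {r_method:.5f}")
print(f"system  R = {r_sys:.5f}")
```
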
  • 13. Miller, Timothy Architectural Solutions for Low-power, Low-voltage, and Unreliable Silicon Devices

    Doctor of Philosophy, The Ohio State University, 2012, Computer Science and Engineering

    In the past several years, technology scaling has reached an impasse, where performance has become limited not by transistor switching delays but by hard limits on power consumption brought on by limits on power delivery, cooling and battery capacities. Although transistors have continued to scale down in size, power density has increased substantially. In the future, it may become impractical to power an entire chip at nominal voltage. The main tool designers have to avoid this power wall is to lower supply voltage, but this combines with the increasing effects of process variation to make semiconductors slower and less reliable. We propose several solutions to these problems. For logic faults, we provide a tunable reliability target, where the tradeoff between reliability and energy efficiency can be adjusted dynamically for the system, application, and environment. For faults in memories, we develop a new, low-latency forward error correction technique that is a practical solution to the high bit cell failure rate of caches at low voltage. As voltage is lowered, performance is reduced both by generally increasing transistor delay and also by amplifying the effects of process variation; we mitigate the effects of variation through the use of dual voltage supplies and clock dividers. For efficiency, we propose two dual-voltage and dual-frequency techniques for increasing performance of unbalanced workloads. For reliability, we propose an intelligent processor wake-up schedule to eliminate voltage emergencies that can arise from sudden increases in current demand, particularly those associated with common synchronization primitives.

    Committee: Radu Teodorescu (Advisor); Xiaodong Zhang (Committee Member); Dhabaleswar Panda (Committee Member) Subjects: Computer Engineering; Computer Science; Energy
  • 14. Raffle, Holly Assessment and Reporting of Intercoder Reliability in Published Meta-Analyses Related to Preschool Through Grade 12 Education

    Doctor of Philosophy (PhD), Ohio University, 2006, Educational Research and Evaluation (Education)

    This work focused on issues related to data collection in meta-analysis, specifically the coding process by which coders extract data from primary studies. This work used content analysis to evaluate 118 meta-analyses reported in 99 journal articles from the educational literature to establish how educational researchers assess and report intercoder reliability. The principal investigator provides detailed information regarding the assessment of intercoder reliability using an item-by-item approach and a generalizability theory framework for the instrument used in the study. The principal investigator explored six topics related to the assessment and reporting of intercoder reliability: the use of meta-analysis in education, coders and coding strategy, coder training procedures, the code book, addressing intercoder reliability, and reporting intercoder reliability. Although the literature review established that researchers perceive coder unreliability as a threat to the internal validity of a study, of the 118 meta-analyses in the study, authors reported intercoder reliability analyses less than half (37.3%) of the time. The principal investigator presents recommendations for best practice in intercoder reliability analysis and reporting based on the study methodology and research findings.

    Committee: Gordon Brooks (Advisor) Subjects: Education, General
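
Entry 14 assesses intercoder reliability item by item within a generalizability-theory framework; the most common pairwise statistics it contrasts with, percent agreement and Cohen's kappa, are simple to compute. A sketch with made-up codes for two coders follows.

```python
# Percent agreement and Cohen's kappa for two coders on one item.
# Synthetic codes; the study itself uses a generalizability-theory
# analysis, and kappa is shown only as the common pairwise statistic.
from collections import Counter

coder_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
coder_b = ["yes", "no", "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]

n = len(coder_a)
p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n   # percent agreement

# Chance agreement from each coder's marginal category proportions
ca, cb = Counter(coder_a), Counter(coder_b)
p_exp = sum((ca[c] / n) * (cb[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"agreement = {p_obs:.2f}, kappa = {kappa:.2f}")
```
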
  • 15. Kaplar, Mary Lying Happily Ever After: Altruistic White Lies, Positive Illusions, and Relationship Satisfaction

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2006, Psychology

    Although some lies can damage relationships, we hypothesized that altruistic white lies (i.e., lies of minimal importance told to protect another) may benefit romantic relationships by buffering individuals against the potentially damaging effects of hurtful, albeit relatively minor, information. Positive relationship illusions (e.g., believing your relationship is more immune than others' relationships to conflict and divorce) have been shown to be positively associated with relationship satisfaction. We hypothesized that altruistic white lies may help create positive illusions within relationships. In order to evaluate the potential links between altruistic white lies, positive relationship illusions, and relationship satisfaction, we created and validated a new scale called the Lying In Amorous Relationships Scale (LIARS). This scale assesses individual differences in attitudes toward telling altruistically motivated white lies to a romantic relationship partner. In a series of three studies we assessed the factor structure, internal consistency, test-retest reliability, and construct validity of the LIARS, as well as whether favorable attitudes toward altruistic white lies are positively correlated with positive illusions and relationship satisfaction. The results of Study 1 indicated that the LIARS is a reliable, unidimensional scale that is best conceptualized as a single factor. In Study 2 the LIARS demonstrated good discriminant validity with measures of academic achievement and locus of control, as well as good predictive validity with behavioral intentions to tell one's partner an altruistic white lie in response to a variety of scenarios. As predicted, LIARS scores also differed as a function of participants' marital status and affiliation with the university. Contrary to predictions, the LIARS scores of men and women did not differ. Additionally, the LIARS did not demonstrate convergent validity with measures of empathic concern and perspective taking. S (open full item for complete abstract)

    Committee: Anne Gordon (Advisor) Subjects:
  • 16. Chenoweth, Diane Analysis of raters' comments on dimensions of interviewer trustworthiness

    Master of Arts, The Ohio State University, 1971, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 17. Bilikam, John Systems reliability coupling models and the estimation of coupling parameters

    Master of Science, The Ohio State University, 1968, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 18. Elsea, Braya Reliability and Validity of the Athlete Self-Efficacy Scale in U.S. Collegiate Athletes

    Master of Arts, Marietta College, 2024, Psychology

    Bandura's self-efficacy theory (1977) is a widely researched topic. Self-efficacy can positively impact different aspects of people's lives. More specifically, self-efficacy has been found to improve cognitive performance, leading to better overall physical performance in athletes across various sports. However, these studies do not use a consistent measurement of self-efficacy. The Athlete Self-Efficacy Scale (ASES), developed by Kocak (2020) in Turkey, is the only such scale that provides an English version; it was found to be reliable and valid with a sample of Turkish athlete participants. The current study evaluated the reliability and validity of the ASES in an American population. Data from 165 collegiate athletes were collected across a variety of team and individual sports. Cronbach's alpha reliability coefficients, a Confirmatory Factor Analysis, and an Exploratory Factor Analysis were conducted to examine the scale and its four underlying factors (Sport Discipline Efficacy, Psychological Efficacy, Professional Thought Efficacy, Personality Efficacy). The current study found that the ASES was reliable when all the scales were included, but only two of the factors exceeded the value recommended by the original researchers to support reliability: Sport Discipline Efficacy had a coefficient of 0.780 (>0.70) and Professional Thought Efficacy had a coefficient of 0.720 (>0.70). Fit indices provided by the Confirmatory Factor Analysis suggest the four-factor model is at least a good fit. The Exploratory Factor Analysis suggested a three-factor model for the scale and also suggested removing two items (Questions 6 and 13). The results of the study could be explained by the smaller sample size, the language translation done by the original researchers, and the difference in level of competition across the two samples. It is re (open full item for complete abstract)

    Committee: Alicia Doerflinger Ph.D. (Committee Member); Ryan May Ph.D. (Advisor) Subjects: Psychological Tests; Psychology
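
Cronbach's alpha, the reliability coefficient entry 18 reports, is computed directly from an items-by-respondents score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A generic sketch with synthetic data (not the ASES responses) follows.

```python
# Cronbach's alpha from a (respondents x items) score matrix.
# Synthetic correlated items for illustration, not the ASES data.
import numpy as np

rng = np.random.default_rng(6)
true_score = rng.normal(size=(165, 1))                      # shared trait
items = true_score + rng.normal(scale=0.8, size=(165, 8))   # 8 correlated items

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```
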
  • 19. Zhao, Zhenlin Reliability Enhancement of Distribution Grid through Grid-Forming Inverter-Based Resources

    Master of Science, The Ohio State University, 2024, Electrical and Computer Engineering

    With the gradually increasing penetration of distributed generation, these resources have a significant impact on the power system. Distributed generation is connected to the grid through inverters and is therefore referred to as Inverter-Based Resources (IBRs). From the perspective of control strategy, IBRs are mainly classified into two modes: the Grid-Following (GFL) mode and the Grid-Forming (GFM) mode. Of the two, Grid-Forming (GFM) control is regarded as a promising solution for integrating large-scale power electronic inverters into modern power grids due to its grid-friendly dynamics, particularly its voltage-source behavior. Grid-Forming IBRs can also operate in different modes: grid-connected mode and islanded mode. However, they have different operating characteristics compared to traditional synchronous generation, particularly in terms of fault response due to their current limits, and thus increasingly affect the system's stable and reliable operation. This raises the question of how GFM inverters can enhance the reliability of power systems, a question that so far lacks detailed quantitative analysis. Therefore, in this thesis, a Grid-Forming IBR is modeled, and an actual distribution network model is used as the test system to comprehensively evaluate the impact of Grid-Forming IBRs on power system reliability under various operating conditions. Furthermore, by incorporating the concept of dynamic boundaries, the system boundaries during islanded operation are adaptively modified to maximize the effectiveness of the IBR. Considering the current limit of the inverters, this approach ensures that they can perform optimally under various conditions. Ultimately, a detailed analysis and summary of the reliability enhancement provided by GFM inverters is presented.

    Committee: Mahesh Illindala (Advisor) Subjects: Electrical Engineering
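
Entry 19's abstract does not name its reliability metrics, but the standard distribution-level indices, SAIFI and SAIDI, show the kind of quantity an islanding-capable GFM IBR improves by carrying part of the load through upstream faults. The feeder, event list, and islanding capability below are all assumed for illustration.

```python
# SAIFI/SAIDI for a toy feeder with and without an islanding-capable
# GFM IBR. Illustrative numbers; the thesis's test system and metrics
# may differ.

# (customers_interrupted, outage_hours) per interruption event in a year
events_no_ibr = [(1200, 2.0), (300, 0.5), (1200, 4.0)]
# With GFM islanding, the IBR is assumed to carry 800 of the 1200
# customers through the two upstream faults
events_with_ibr = [(400, 2.0), (300, 0.5), (400, 4.0)]
N_CUSTOMERS = 1500

def saifi_saidi(events, n_cust):
    saifi = sum(ci for ci, _ in events) / n_cust            # interruptions/customer
    saidi = sum(ci * hrs for ci, hrs in events) / n_cust    # outage hours/customer
    return saifi, saidi

for label, ev in [("without IBR", events_no_ibr), ("with GFM IBR", events_with_ibr)]:
    s_f, s_d = saifi_saidi(ev, N_CUSTOMERS)
    print(f"{label:13s} SAIFI = {s_f:.2f}, SAIDI = {s_d:.2f} h")
```
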
  • 20. Kim, Hyeong Jun Energy storage operational modeling to maximize arbitrage value and improve reliability

    Doctor of Philosophy, The Ohio State University, 2024, Industrial and Systems Engineering

    Energy storage is widely used to respond to the uncertain balance of electricity supply and demand and to prepare for contingencies. Among the many purposes of energy storage, this dissertation focuses on arbitrage trading, peak load shifting, and frequency regulation. In the first part, a two-stage stochastic programming model is introduced to schedule energy storage devices and maximize arbitrage profits for the storage operator. The model considers adjustments that depend on the uncertain price of the real-time electricity market when the decision in the day-ahead market is made. The value of the stochastic solution is then computed to assess the effect of the stochastic programming. Furthermore, several interesting cases are observed and illustrated, such as simultaneous charging and discharging; this is generally considered sub-optimal, but it occurs under specific conditions. Second, when storage is used for peak load shifting, it improves the resource adequacy of the power system through the power that energy storage contributes. In this chapter, a non-performance penalty is imposed to ensure that energy storage operators reserve energy for such shortages. A stochastic dynamic programming model is used to obtain the optimal decision policy for the storage device. Using this model, case studies are conducted for two different systems whose loads peak in the summer and winter, respectively; these are analyzed and compared. In the third part, the energy storage capacity value and expected profits are estimated when storage provides energy, capacity, and frequency regulation services. To estimate capacity value, a three-step approach is adopted. First, discretized stochastic dynamic programming is used to obtain decision policies for the discretized states. These decision policies are used to obtain actual decisions by solving a mixed-integer optimization in a rolling-horizon fashion. Then, the capacity value of energy storage is estimated using simulation. A case (open full item for complete abstract)

    Committee: Chen Chen (Advisor); Ramteen Sioshansi (Committee Member); Antonio Conejo (Committee Member); Matthew Pratola (Committee Member) Subjects: Energy; Industrial Engineering; Operations Research
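
The deterministic core of entry 20's arbitrage model, scheduling charge and discharge against known prices subject to state-of-charge dynamics, is a small linear program. The sketch below is a single price scenario with assumed storage parameters; the dissertation's model is two-stage stochastic (day-ahead plus real-time recourse), of which this is one scenario.

```python
# Single-scenario storage arbitrage LP: schedule charge/discharge
# against known prices with SoC dynamics and efficiency losses.
# Illustrative prices and storage parameters; not the two-stage model.
import numpy as np
from scipy.optimize import linprog

price = np.array([20.0, 15.0, 10.0, 40.0, 55.0, 30.0])  # $/MWh, illustrative
T = len(price)
P_MAX, E_MAX, ETA, S0 = 1.0, 4.0, 0.9, 2.0   # MW, MWh, one-way efficiency, initial SoC

# Decision vector x = [charge(T), discharge(T), soc(T)]; minimize -revenue
c = np.concatenate([price, -price, np.zeros(T)])
# SoC dynamics: soc_t - soc_{t-1} - ETA*charge_t + discharge_t/ETA = 0
A_eq = np.zeros((T, 3 * T))
b_eq = np.zeros(T)
for t in range(T):
    A_eq[t, t] = -ETA                # charge_t
    A_eq[t, T + t] = 1.0 / ETA       # discharge_t
    A_eq[t, 2 * T + t] = 1.0         # soc_t
    if t > 0:
        A_eq[t, 2 * T + t - 1] = -1.0
    else:
        b_eq[t] = S0                 # soc_0 links to the initial state
bounds = [(0, P_MAX)] * (2 * T) + [(0, E_MAX)] * T

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
ch, dis = res.x[:T], res.x[T:2 * T]
print("charge   :", np.round(ch, 2))
print("discharge:", np.round(dis, 2))
print(f"arbitrage profit = ${-res.fun:.2f}")
```
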