Search Results


(Total results 14)


  • 1. DeBruin, Luke Modeling and Control for Advanced Automotive Thermal Management System

    Master of Science, The Ohio State University, 2016, Mechanical Engineering

    This research investigates the design and implementation of a light-duty truck's thermal management system control strategy developed from model-based techniques. To ensure robust durability and improve fuel economy, the control strategy must stabilize the dynamics of the engine operating temperatures while also minimizing the energy consumed by the system. First, a detailed plant model is obtained by applying first-principles physics (conservation laws) to model the thermal management system from its key components. The component models are combined to accurately predict the flow rates and temperatures of the system. The thermal system model is fed by a vehicle drivetrain mechanical model that calculates the heat rejection to the thermal model through a backward-looking approach. The nonlinear model is calibrated on supplier data and validated using experimental data recorded by a vehicle data acquisition system. Information from the engine control unit, flow rates, and temperatures were previously recorded for various driving profiles while the vehicle operated on a chassis dynamometer according to standard test procedure. The model accurately predicts the temperature dynamics of the system during transient operations of fully-warm drive cycles. Specifically, the Environmental Protection Agency's Federal Test Procedure for a highway drive cycle was used to test the model validity. The validated model provides a benchmark for comparing new controllers to the baseline thermal management control. Next, a model-based control strategy is developed to operate the thermal management system to track the desired fluid temperatures while limiting the usage of the radiator fan, hence saving energy. In order to do so, the full system architecture was simplified using heat transfer analysis before utilizing an order-reduced, physical model that is linearized analytically.
The reduced, linear plant models are then used to design a feedback controller by applying the Se (open full item for complete abstract)

    Committee: Marcello Canova PhD (Advisor); Lisa Fiorentini PhD (Committee Member) Subjects: Automotive Engineering; Mechanical Engineering
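The conservation-law, lumped-parameter modeling approach this abstract describes can be sketched in a few lines. The snippet below integrates a single thermal node (coolant temperature driven by engine heat rejection and convective loss to ambient); all parameter values and the one-node simplification are illustrative assumptions, not the calibrated truck model from the thesis.

```python
def simulate_coolant_temp(heat_in_w, t_ambient=25.0, t0=25.0,
                          mc_p=50_000.0, ua=400.0, dt=1.0):
    """Euler integration of a single lumped thermal node:
    m*c_p * dT/dt = Q_in - U*A*(T - T_ambient).
    heat_in_w: sequence of engine heat-rejection values [W], one per step.
    Returns the coolant temperature trace [degC]."""
    temps = [t0]
    t = t0
    for q in heat_in_w:
        dTdt = (q - ua * (t - t_ambient)) / mc_p
        t += dTdt * dt
        temps.append(t)
    return temps
```

With a constant 20 kW heat input, the node settles toward the steady state T_ambient + Q/(U*A) = 75 degC, which is a quick sanity check on any such model.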
  • 2. White, Corey Sequential sampling models of the flanker task: Model comparison and parameter validation

    Doctor of Philosophy, The Ohio State University, 2010, Psychology

    The present study tests sequential sampling models of processing in the flanker task. In a standard flanker task, participants must identify a central target that is flanked by items that indicate the same response (congruent) or the opposite response (incongruent). The standard finding is that incongruent flankers produce interference that primarily affects the early component of the decision process. The goal was to contrast different mechanisms of visual attention and to identify a simple processing model that can be used to augment analyses of this task. Several models were contrasted in their ability to account for flanker data from experiments that manipulated response bias, speed/accuracy tradeoffs, attentional focus, and stimulus configuration. Models that assume dynamic focusing of attention provided the best overall account of the behavioral data. Among the dynamic models, a spotlight model that assumes gradual narrowing of attention provided the best balance of fit and parsimony, though dual process models that assume discrete selection of the target also captured the main trends in the data when only standard congruent and incongruent conditions were used. Importantly, the experimental manipulations were reflected by the appropriate model parameters, supporting future use of these models to decompose behavioral data from the flanker task into meaningful psychological constructs. The results of this study also indicate that standard flanker experiments do not provide strong evidence for contrasting gradual versus discrete target selection, and consequently they do not provide strong evidence to support or refute theories of the underlying mechanisms of dynamic attention models.

    Committee: Roger Ratcliff PhD (Advisor); Simon Dennis PhD (Committee Member); Alex Petrov PhD (Committee Member); Gail McKoon PhD (Committee Member) Subjects: Psychology
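The gradually narrowing spotlight described above can be illustrated with a small simulation: the drift rate is a weighted mixture of target and flanker evidence, with the flanker weight decaying as the spotlight shrinks over time. This is a hedged sketch with made-up parameters and a made-up narrowing schedule, not the fitted models from the dissertation.

```python
import random

def simulate_flanker_trial(congruent, a=1.0, s=1.0, dt=0.001,
                           p_target=1.5, p_flanker=1.5,
                           sd0=2.0, rate=8.0, rng=random):
    """One trial of a shrinking-spotlight diffusion process.
    Attention width sd(t) = sd0 / (1 + rate*t); the flankers' share of
    the drift decays as the spotlight narrows onto the target.
    Returns (correct, rt_seconds)."""
    x, t = 0.0, 0.0
    flanker_sign = 1.0 if congruent else -1.0
    while abs(x) < a:
        sd = sd0 / (1.0 + rate * t)
        w_flank = sd / (sd + 1.0)  # share of attention on the flankers
        drift = (1 - w_flank) * p_target + w_flank * flanker_sign * p_flanker
        x += drift * dt + s * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (x >= a), t
```

Simulated incongruent trials start with a negative drift that turns positive as attention narrows, reproducing the early interference effect the abstract describes.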
  • 3. Walling, Caryl Bridging the Gap for Contingent Faculty: An Analysis of the Professional Development and Growth Resources Used in Public Universities Across Michigan

    Doctor of Philosophy, University of Toledo, 2023, Higher Education

    The purpose of this study was to explore the extent to which contingent faculty from Michigan's 15 public universities engage with on- and off-campus professional development (PD) to improve their teaching practice. Addressing a spectrum of research questions, this study utilized an explanatory sequential mixed-methods approach, combining quantitative surveys and qualitative interviews, to provide a nuanced understanding of the experiences and motivations of contingent faculty members. The initial quantitative phase surveyed 4,745 contingent faculty members through a web-based survey, exploring the availability of on- and off-campus PD offerings and the factors influencing their participation. The subsequent qualitative phase was conducted through ten Zoom interviews with contingent faculty from nine universities. This phase delved into the various PD resources utilized by contingent faculty and the underlying motivations driving their engagement. The on-campus exploration revealed the prevalence of in-person seminars and computer-based training from Centers for Teaching and Learning (CTLs), which aligned with broader institutional trends. However, faculty interviews exposed discontent rooted in unfulfilled CTL promises, insufficient communication, and a perceived emphasis on theory over practical application. Contingent faculty expressed a strong desire for peer interactions, mentorship, and discipline-specific development, emphasizing the importance of immediately applicable knowledge. The study further explored on-campus factors influencing contingent faculty. Transitioning to off-campus PD, the study uncovered a significant commitment to continuous learning among contingent faculty. Engagement in live in-person seminars, conferences, social media, and internet resources emerged as critical elements in their professional growth.
Notably, the unexpected involvement with artificial intelligence (AI) in discussions around lesson planning and academic integrity reflec (open full item for complete abstract)

    Committee: Edward Janak Ph.D. (Committee Chair); Michael Prior Ph.D. (Committee Member); Judy Lambert Ph.D. (Committee Member); Debra Brace Ph.D. (Committee Member) Subjects: Education
  • 4. Abdel Halim, Jalal Towards Building a Versatile Tool for Social Media Spam Detection

    Master of Science, University of Toledo, 2023, Cyber Security

    With the rapid increase of social network spam, it is essential to empower users with tools to detect harmful spam effectively. However, existing tools cannot meet this requirement. In this paper, we propose and develop a live detection tool that can detect ham and spam text and images from social networks. The tool is trained on user-collected data (image and text) using different classifiers: text and images are pre-processed and then passed to the classifier the user chooses. The user can then save the model and load it whenever they use a social network, where the tool shows a notification alerting them whether the post they are viewing is spam or ham before they even read the text or look at the image, thus protecting them from clicking on malicious links that might harm their computer and steal their data. Evaluation results have demonstrated the effectiveness of our tool.

    Committee: Weiqing Sun (Committee Chair); Hong Wang (Committee Member); Ahmad Javaid (Committee Member) Subjects: Computer Science
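As a rough illustration of the kind of user-trainable text classifier such a tool could offer, here is a minimal multinomial naive Bayes spam/ham classifier in pure Python. The thesis tool supports multiple classifiers and images as well; this sketch covers only the bag-of-words text case and is not the thesis code.

```python
import math
from collections import Counter

class NaiveBayesSpam:
    """Minimal multinomial naive Bayes text classifier (illustrative)."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for text, lab in zip(texts, labels):
            self.word_counts[lab].update(text.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        n_docs = sum(self.class_counts.values())
        for c in self.classes:
            lp = math.log(self.class_counts[c] / n_docs)
            total = sum(self.word_counts[c].values())
            for w in text.lower().split():
                # Laplace smoothing; the extra +1 is a bucket for unseen words
                lp += math.log((self.word_counts[c][w] + 1) /
                               (total + len(self.vocab) + 1))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

A real deployment would add the pre-processing, image pipeline, and model save/load steps the abstract mentions.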
  • 5. Kang, Inhan Psychometric Process Modeling: A Modeling Framework to Study Intra-individual Processes Underlying Responses and Response Times in Psychological Measurement

    Doctor of Philosophy, The Ohio State University, 2022, Psychology

    Despite their successful accounts of the latent structure of observations and of individual differences as a function of variations in latent variables, psychometric models are oblivious to the intra-individual processes of a respondent during the measurement procedure. Without theoretical accounts of what cognitive components drive the response processes and how this causality arises, some important questions on validity and measurement remain unanswered. To address this issue, we propose psychometric process modeling, which integrates psychometric models with decision-making theories in perceptual and cognitive psychology to study the internal processes of an individual in psychological and educational measurement. This approach redefines latent variables as cognitive components in psychological processes of measurement and establishes their causal relationships with manifest variables. We provide examples of psychometric process models and discuss their theoretical implications and practical applicability. The first three studies in the dissertation show that psychometric process models provide a theoretical explanation of the existence and cognitive sources of conditional dependence in and between responses and response times (RTs) and a method to derive practically useful information and diagnosis for respondents and items. Another study demonstrates how we can develop psychometric process models for ordinal responses and RTs from personality and attitude measurement, based on various confidence judgment theories and psychometric models. The resulting models have different representations of intra-individual processes of measurement, and we can find a better theoretical account by comparing these models.
We also present several ongoing projects on psychometric process modeling, including 1) a proposal for a racing accumulator framework to reinterpret traditional psychometric models as different information processing rules and extend them to develop process models for (open full item for complete abstract)

    Committee: Roger Ratcliff (Advisor); Paul De Boeck (Committee Member); Brandon Turner (Committee Member) Subjects: Psychology
  • 6. Kasarabada, Yasaswy Efficient Logic Encryption Techniques for Sequential Circuits

    PhD, University of Cincinnati, 2021, Engineering and Applied Science: Computer Science and Engineering

    Logic encryption, a prominent solution to the hardware IP security problem, protects a circuit by adding 'encryption' logic to lock the design functionality using a set of newly introduced key inputs. Logic encryption for combinational circuits focuses on adding combinational gates as encryption logic. When encrypting sequential circuits, most techniques advocate the modification of the FSM to either prevent entering normal operation or force the design into corrupted state(s) on the application of a wrong key. Another avenue taken by sequential logic encryption is to lock the scan chains by inserting key gates on the scan connections between the flip-flops in the circuit to reduce the ability to set and observe the internal state of the circuit. Boolean satisfiability (SAT) based attack methods are successful in decrypting combinational logic encrypted circuits. Subsequently proposed SAT-resilient techniques are susceptible to other types of attacks, such as removal, bypass, or functional analysis attacks. Although SAT methods can be used to attack sequential circuits using scan chains, this approach is rendered ineffective if the scan chains are absent or are locked using key gates, as described above. Due to these limitations, sequential logic encryption techniques claim SAT-resiliency. One of the goals of this dissertation is to test the validity of this claim by developing SAT-based attack methods that can attack logic encrypted sequential circuits without scan access. Circuit unrolling is a promising technique that is used to develop such an attack method. The decryption efficiency of this attack is evaluated against modern sequential logic encryption techniques. Furthermore, more robust and highly effective encryption techniques to counter the sequential SAT attack method are proposed in this work by analyzing the attributes of the attack that contribute towards its success against other sequential logic encryption schemes.
Emphasis is placed on extracting information re (open full item for complete abstract)

    Committee: Ranganadha Vemuri Ph.D. (Committee Chair); Mike Borowczak Ph.D. (Committee Member); John Emmert Ph.D. (Committee Member); Wen-Ben Jone Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Computer Engineering
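The circuit-unrolling idea can be illustrated on a toy locked FSM: unroll the sequential circuit over an input sequence and search for key values whose unrolled output trace matches a black-box oracle. Brute-force key enumeration stands in for the SAT solver here, and the two-bit circuit is a hypothetical example, not a benchmark design from the dissertation.

```python
from itertools import product

def locked_step(state, inp, key):
    """Toy 2-bit locked FSM: XOR key gates inserted on the
    next-state logic (hypothetical example)."""
    s0, s1 = state
    k0, k1 = key
    n0 = (s1 ^ inp) ^ k0
    n1 = (s0 & inp) ^ k1
    out = n0 ^ n1
    return (n0, n1), out

def unroll_attack(oracle_key, input_seq, init=(0, 0)):
    """Unrolling attack: simulate the locked circuit for every
    candidate key over the unrolled input sequence and keep the keys
    whose output trace matches the black-box oracle.
    (Enumeration stands in for the SAT solver in this sketch.)"""
    def run(key):
        st, outs = init, []
        for i in input_seq:
            st, o = locked_step(st, i, key)
            outs.append(o)
        return outs
    target = run(oracle_key)  # observed oracle responses
    return [k for k in product((0, 1), repeat=2) if run(k) == target]
```

Longer unrollings prune more wrong keys, which mirrors why the attack's effectiveness grows with the unrolling bound.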
  • 7. Feng, Jianshe Methodology of Adaptive Prognostics and Health Management in Dynamic Work Environment

    PhD, University of Cincinnati, 2020, Engineering and Applied Science: Mechanical Engineering

    Prognostics and health management (PHM) has gradually become an essential technique to improve the availability and efficiency of a complex system. With the rapid advancement of sensor and communication technology, a huge amount of real-time data is generated from various industrial applications, which brings new challenges to PHM in the context of big data streams. On one hand, high-volume stream data places a heavy demand on data storage, communication, and PHM modeling. On the other hand, continuous change and drift are essential properties of stream data in an evolving environment, which requires the PHM model to be capable of capturing the new information in stream data adaptively, efficiently, and continuously. This research proposes a systematic methodology to develop an effective online learning PHM with adaptive sampling techniques to fuse information from continuous stream data. An adaptive sample selection strategy is developed so that representative samples can be effectively selected in both off-line and online environments. In addition, various data-driven models, including probabilistic models, Bayesian algorithms, incremental methods, and ensemble algorithms, are employed and integrated into the proposed methodology for model establishment and updating with important samples selected from the streaming sequence. Finally, the effectiveness of the proposed systematic methodology is validated with four typical industrial applications: power forecasting of a combined cycle power plant, fault detection of hard disk drives, virtual metrology in semiconductor manufacturing processes, and prognosis of battery state of capacity.
The comparison of results between the proposed methodology and state-of-the-art benchmark methods indicates that the proposed methodology is capable of building an adaptive PHM with sustainable performance to deal with dynamic issues in processes, which provides a promising way to prolong the PHM model lifetime after implementation.

    Committee: Jay Lee Ph.D. (Committee Chair); Hossein Davari Ardakani Ph.D. (Committee Member); Thomas Richard Huston Ph.D. (Committee Member); Manish Kumar Ph.D. (Committee Member); Zonghchang Liu Ph.D. (Committee Member); Jing Shi Ph.D. (Committee Member) Subjects: Engineering
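One simple, classical instance of sample selection over an unbounded stream is reservoir sampling, which maintains a fixed-size uniform sample in a single pass. The thesis develops richer, model-aware selection criteria, so treat this only as a baseline sketch of the "select representative samples online" idea.

```python
import random

def reservoir_select(stream, capacity, rng=random):
    """Algorithm R reservoir sampling: after seeing n items, every item
    remains in the sample with equal probability capacity/n."""
    sample = []
    for n, item in enumerate(stream, start=1):
        if len(sample) < capacity:
            sample.append(item)
        else:
            j = rng.randrange(n)   # uniform over the first n positions
            if j < capacity:
                sample[j] = item   # evict a current member at random
    return sample
```

An adaptive variant would bias retention toward samples the current model finds informative rather than keeping the sample uniform.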
  • 8. Park, Joonsuk Using Sequential Sampling Models to Detect Selective Influences: Pitfalls and Recommendations.

    Doctor of Philosophy, The Ohio State University, 2019, Psychology

    Sequential sampling models such as the Diffusion Decision Model (DDM) and the Linear Ballistic Accumulator (LBA) are often used as measurement tools in psychology. However, two practical issues regarding the use of them have received limited attention: identifiabilities of the models and the appropriateness of the follow-up testing procedures in terms of statistical power. In the present research, I address these problems to fill the gap in the literature. Specifically, I do the following two things. First, I formally conduct identifiability analyses of DDM and LBA. As a result, I argue that some version of DDM, namely the "full DDM," is unidentifiable, even when multiple experimental conditions are employed. I show that this problem arises due to the excess flexibility of the model, and it can only be solved by reducing the number of free parameters to be estimated. Second, I demonstrate that the use of t-tests while comparing parameter estimates cannot be justified because such a practice assumes an over-simplified, single-level hierarchical model. As such, the statistical power is shown to be suboptimal. Instead, it is recommended that one employ an alternative procedure that explicitly models uncertainties about the parameter estimates, such as meta-regression or Hierarchical Bayes (HB). It is shown that such solutions are better theoretically grounded, exhibit larger statistical power, or yield more precise parameter estimates. Recommendations for substantive researchers are provided based on these considerations.

    Committee: Trish Van Zandt (Advisor); Brandon Turner (Advisor); Jolynn Pek (Committee Member) Subjects: Quantitative Psychology
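For readers unfamiliar with the models under discussion, a diffusion-decision trial can be simulated with a few lines of Euler integration. This is a textbook-style sketch with illustrative parameters, not the identifiability analysis or fitting code from the dissertation.

```python
import random

def ddm_trial(v=1.0, a=1.0, z=0.5, s=1.0, ter=0.3, dt=0.001, rng=random):
    """One diffusion-decision trial via Euler simulation.
    v: drift rate, a: boundary separation, z: relative start point,
    s: diffusion coefficient, ter: non-decision time.
    Returns (hit_upper_boundary, rt_seconds)."""
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + s * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return x >= a, ter + t
```

The "full DDM" adds across-trial variability parameters on v, z, and ter; the abstract's point is that estimating all of them jointly is what renders the model unidentifiable.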
  • 9. Dharmadhikari, Pranav Hemant Hardware Trojan Detection in Sequential Logic Designs

    MS, University of Cincinnati, 2018, Engineering and Applied Science: Computer Engineering

    The modern digital era empowers the resurgence of the Internet of Things (IoT) concept proposed back in the 1980s. IoT applications involve a network of embedded systems that can share information amongst them. Present-day integrated circuits (ICs), driven by Moore's law, get smaller and smaller, allowing designers to embed enhanced functionalities on these tiny devices. Due to the outsourcing of the design at multiple stages in the IC design flow, state-of-the-art ICs are vulnerable to intervention by third-party vendors. Hence, there are concerns over the security and reliability of the hardware. One particularly stealthy yet intrusive way of undermining the security of an IC is through the insertion of hardware Trojans in the design. A hardware Trojan is a malicious circuit inserted into the genuine circuit without the designer's knowledge. The predominantly adopted Design for Testability (DFT) approach involves the inclusion of scan chains in sequential logic designs to improve testability but adds area and pin overhead on the overall chip. Due to severe area and cost constraints, sequential circuits often used in IoT devices do not contain scan chains. Malicious Trojan circuits, which themselves may be state machines, inserted into such systems are hard to detect. We present an effective methodology to detect sequential Trojan circuits inserted into sequential hardware designs without scan chains, as typically used in IoT applications. The methodology consists of three steps: I. Sequential testability metrics are used to identify nets susceptible to Trojan insertion (suspect nets). II. A model checking technique is utilized to determine the length of the activation sequences of suspect nets obtained from the previous step. Nets with longer activation sequences are classified as Trojan nets. III.
Finally, a signal tracing and classification algorithm is proposed to trace back from payload circuit of Trojan to progressively separate the Trojan net (open full item for complete abstract)

    Committee: Ranganadha Vemuri Ph.D. (Committee Chair); Wen-Ben Jone Ph.D. (Committee Member); Carla Purdy Ph.D. (Committee Member) Subjects: Computer Engineering
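Step I of the methodology relies on testability metrics to flag suspect nets. The sketch below computes classic SCOAP-style combinational controllability values (CC0, CC1) over a tiny gate-level netlist; nets that are very hard to drive to a particular value are natural Trojan-trigger candidates. The thesis uses sequential metrics, so this combinational version is a simplified stand-in.

```python
def scoap_controllability(netlist, inputs):
    """SCOAP-style controllability (CC0, CC1) for a combinational
    netlist given as {net: (gate, [fanin_nets])}. Primary inputs cost
    1 to set to either value; each gate adds 1 plus the cost of
    controlling its fanins. Supports AND/OR/NOT gates only."""
    cc = {i: (1, 1) for i in inputs}

    def get(net):
        if net not in cc:
            gate, fanins = netlist[net]
            vals = [get(f) for f in fanins]
            if gate == "AND":          # 0: any fanin at 0; 1: all at 1
                cc[net] = (min(v[0] for v in vals) + 1,
                           sum(v[1] for v in vals) + 1)
            elif gate == "OR":         # 0: all fanins at 0; 1: any at 1
                cc[net] = (sum(v[0] for v in vals) + 1,
                           min(v[1] for v in vals) + 1)
            elif gate == "NOT":        # swap the fanin's costs
                cc[net] = (vals[0][1] + 1, vals[0][0] + 1)
        return cc[net]

    for net in netlist:
        get(net)
    return cc
```

Ranking nets by their largest CC value gives a crude suspect list of rarely activated nets.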
  • 10. Frazier, Marian Adaptive Design for Global Fit of Non-stationary Surfaces

    Doctor of Philosophy, The Ohio State University, 2013, Statistics

    Computer experiments are used in many modern engineering contexts in which performing a physical experiment is deemed too costly, time-consuming, or dangerous. In these situations, scientists develop a computer code (or simulator) that they feel approximates reality. The computer code that simulates these physical experiments is often very complex, and results in a long run time. Hence, the sampling points must be chosen carefully and intelligently, since it may take weeks or months to fully sample the input space. An efficient design method that can investigate the response surface in a small number of samples is a must. In a sequential (adaptive) design, a small initial space-filling sample is taken from the input space. The simulation is run at these design points, and an estimated model for the response surface is built based on those outputs. Based on this model and the experimental goals, the statistician decides where the next point should be sampled. In this thesis, we present a new family of adaptive design methods created to be used when one wants to achieve an accurate fit of the entire response surface. These criteria, which were inspired by an expected improvement criterion, are developed specifically for use when it is believed that the response surface may exhibit non-stationary behavior. They focus on the search for areas with large changes in slope, with the idea that sudden changes in slope are an indication of non-stationary "breaks" in the response. While seeking out these boundary points, the methods still achieve an effective fit of the entire response surface using a small number of design points. We compare the new design criteria to existing adaptive methods in a systematic empirical study. Our methods are shown to perform well in a variety of situations, including on stationary and non-stationary data sets, and when the simulator is assumed to be deterministic or stochastic.
Finally, the proposed design method is applie (open full item for complete abstract)

    Committee: William Notz (Advisor); Christopher Hans (Committee Member); Jason Seligman (Committee Member) Subjects: Statistics
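A one-dimensional caricature of the slope-seeking idea: after an initial space-filling grid, repeatedly sample the midpoint of the interval where the finite-difference slope changes most. This crude rule is only a stand-in for the expected-improvement-style criteria and surrogate models developed in the thesis.

```python
def adaptive_design(f, lo, hi, n_init=5, n_total=15):
    """Sequentially choose design points for simulator f on [lo, hi],
    concentrating samples where the estimated slope changes sharply
    (a heuristic signal of a non-stationary 'break')."""
    xs = [lo + i * (hi - lo) / (n_init - 1) for i in range(n_init)]
    ys = [f(x) for x in xs]
    while len(xs) < n_total:
        pts = sorted(zip(xs, ys))
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
                  for i in range(len(xs) - 1)]
        # score each interior knot by the change in slope across it
        best_i = max(range(1, len(slopes)),
                     key=lambda i: abs(slopes[i] - slopes[i - 1]))
        left_w = xs[best_i] - xs[best_i - 1]
        right_w = xs[best_i + 1] - xs[best_i]
        # bisect the wider interval adjacent to the slope break
        i = best_i if right_w > left_w else best_i - 1
        xs.append((xs[i] + xs[i + 1]) / 2)
        ys.append(f(xs[-1]))
    return sorted(xs)
```

On a piecewise-linear test function with a kink, the later samples cluster around the break while the flat region keeps only its initial grid points.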
  • 11. Intermaggio, Victor Modeling Confidence and Response Time in Brightness Discrimination: Testing Models of the Decision Process with Controlled Variability in Stimulus Strength

    Master of Arts, The Ohio State University, 2012, Psychology

    We applied a sequential sampling model (RTCON2; Ratcliff & Starns, submitted) to confidence judgments from a brightness discrimination task. Subjects in the experiments were asked whether test items were bright or dark, and responded with one of six possible choices that ranged from “very sure dark” to “very sure bright”. The confidence judgment and response time data are qualitatively similar to those typically found in recognition memory tasks. We fitted RTCON2 to the data and used the model to explain the mechanisms underlying the confidence judgment decision process. Because this was a perceptual task, we were able to control the stimulus strength values (brightness) and the variability of those values within an experimental condition. This allowed us to test the model's assumptions and quality of measurement of an important model component: between-trial variability in stimulus strength. We demonstrate the utility of RTCON2 when used in conjunction with explicit knowledge of external stimulus strength values. We also implement and discuss other applications of the combination of control over perceptual stimuli with the computational process model of RTCON2.

    Committee: Roger Ratcliff PhD (Advisor); Gail McKoon PhD (Committee Member); Trisha Van Zandt PhD (Committee Member) Subjects: Behavioral Sciences; Cognitive Psychology; Experimental Psychology; Psychology; Quantitative Psychology
  • 12. Potter, Kevin When You are Confident that You are Wrong: Response Reversals and the Expanded Poisson Race Model

    Master of Arts, The Ohio State University, 2011, Psychology

    There exist several mathematical models that can explain choice, confidence, and reaction time in the context of simple decision making. The assumptions these models make about the accumulation of evidence used in making a decision also have implications for response reversals. This project used the expanded judgment task to test the expanded Poisson race model's predictions of the frequency of response reversals. For the first experiment, subjects had to decide between two decks of red and green cards, one of which was predominantly red while the other was predominantly green. Subjects viewed sequences of 1, 3, or 5 cards randomly drawn from one of the decks, made a decision, and then viewed a further 2, 4, or 6 cards and made a confidence judgment in which they had the opportunity to reverse their decision. The results showed that, qualitatively, the model fit the trends of the empirical data but tended to underestimate the average number of reversals. To examine whether the deviations from the model were due to order effects, a second experiment was run in which subjects only viewed 3 cards initially and 2 cards at the end, but every possible order of red and green cards was presented. However, no significant order effects were observed, providing support for the model's assumption that the serial position of a card had no impact on a subject's decision. Recommendations are given for the modification of the expanded Poisson race model to better handle the variability expressed in the data, future experiments are suggested to help further determine the extent of order effects in the tasks, and finally, potential applications of response reversals in competitive model selection are discussed.

    Committee: Trisha Van Zandt PhD (Advisor); Roger Ratcliff PhD (Committee Member); Thomas Nygren PhD (Committee Member) Subjects: Quantitative Psychology
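The Poisson race mechanism can be sketched as two independent Poisson evidence streams racing to a first threshold (the initial decision) and then continuing to a larger count (the confidence judgment, where the final choice may reverse the initial one). The rates and thresholds below are illustrative assumptions, not fitted values from the thesis.

```python
import random

def poisson_race_trial(rate_a=6.0, rate_b=4.0, k1=5, k2=10, rng=random):
    """Sketch of an expanded Poisson race between responses 'a' and 'b'.
    Evidence events arrive as two independent Poisson streams; an
    initial decision is made when either counter reaches k1, and
    accumulation continues until a counter reaches k2, at which point
    the final judgment (possibly a reversal) is made.
    Returns (first_choice, final_choice, reversed)."""
    counts = {"a": 0, "b": 0}
    first = None
    while max(counts.values()) < k2:
        # the next event comes from whichever exponential clock fires first
        ta = rng.expovariate(rate_a)
        tb = rng.expovariate(rate_b)
        counts["a" if ta < tb else "b"] += 1
        if first is None and max(counts.values()) >= k1:
            first = max(counts, key=counts.get)
    final = max(counts, key=counts.get)
    return first, final, first != final
```

Reversals arise exactly when the stream that lost the race to k1 catches up before k2, which is the quantity the experiments above measured.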
  • 13. Lam, Chen Quin Sequential Adaptive Designs in Computer Experiments for Response Surface Model Fit

    Doctor of Philosophy, The Ohio State University, 2008, Statistics

    Computer simulations have become increasingly popular as a method for studying physical processes that are difficult to study directly. These simulations are based on complex mathematical models that are believed to accurately describe the physical process. We consider the situation where these simulations take a long time to run (several hours or days) and hence can only be conducted a limited number of times. As a result, the inputs (design) at which to run the simulations must be chosen carefully. For the purpose of fitting a response surface to the output from these simulations, a variety of designs based on a fixed number of runs have been proposed. In this thesis, we consider sequential adaptive designs as an "efficient" alternative to fixed-point designs. We propose new adaptive design criteria based on a cross-validation approach and on an expected improvement criterion, the latter inspired by a criterion originally proposed for global optimization. We compare these new designs with others in the literature in an empirical study, and they are shown to perform well. The issue of robustness for the proposed sequential adaptive designs is also addressed in this thesis. While we find that sequential adaptive designs are potentially more effective and efficient than fixed-point designs, issues such as numerical instability do arise. We address these concerns and also propose a diagnostic tool based on cross-validation prediction error to improve the performance of sequential designs. We are also interested in the design of computer experiments where there are control variables and environmental (noise) variables. We extend the implementation of the proposed sequential designs to achieve a good fit of the unknown integrated response surface (i.e., the averaged response surface taken over the distributions of the environmental variables) using output from the simulations.
The goal is to find an optimal choice of the control variables while taking into account the distr (open full item for complete abstract)

    Committee: William Notz PhD (Advisor); Thomas Santner PhD (Committee Member); Angela Dean PhD (Committee Member) Subjects: Statistics
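The cross-validation prediction-error diagnostic mentioned in the abstract can be sketched generically: refit the surrogate with each design point held out and record the prediction error at that point; large errors flag regions where the sequential design is fitting poorly. The `fit`/`predict` interface below is hypothetical, standing in for whatever surrogate model is in use.

```python
def loo_cv_errors(xs, ys, fit, predict):
    """Leave-one-out cross-validation prediction errors.
    fit(xs, ys) builds a model from the remaining design points;
    predict(model, x) evaluates it at the held-out input.
    Returns one absolute error per design point."""
    errors = []
    for i in range(len(xs)):
        xs_i = xs[:i] + xs[i + 1:]
        ys_i = ys[:i] + ys[i + 1:]
        model = fit(xs_i, ys_i)
        errors.append(abs(predict(model, xs[i]) - ys[i]))
    return errors
```

With a nearest-neighbor surrogate, an isolated point in a fast-changing region produces the largest leave-one-out error, which is exactly the signal such a diagnostic looks for.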
  • 14. Qiang, Qiang FORMAL: A Sequential ATPG-Based Bounded Model Checking System for VLSI Circuits

    Doctor of Philosophy, Case Western Reserve University, 2006, Computer Engineering

    Bounded Model Checking (BMC) is a formal method of verifying Very Large Scale Integrated (VLSI) circuits. It shows violation of a given circuit property by finding a counter-example to the property along bounded state paths of the circuit. The BMC problem is inherently NP-complete and is traditionally formulated as a Boolean SATisfiability (SAT) problem, which is subsequently solved by a SAT solver. Automatic Test Pattern Generation (ATPG), as an alternative to SAT, has already been shown to be an effective solution to NP-complete problems in many computer-aided design areas. In the field of BMC, ATPG has already achieved promising results for simple properties; its effectiveness for more complicated nested properties, however, remains unknown. This thesis presents the first systematic framework of ATPG-based BMC capable of checking properties in all nested forms at the gate level. The negation of a property is mapped into a structural monitor, which is tailored to a flattened model of the input circuit. A target fault is then injected at the monitor output, and a modified ATPG-based state justification algorithm is used to search for a test for this fault. Finding such a test corresponds to formally establishing the property. The framework can easily incorporate any existing ATPG tool with little modification. The proposed framework has been implemented in a computer program called FORMAL, and has been used to check a comprehensive set of properties of GL85 microprocessor and USB 2.0 circuits. Experimental results show that the ATPG-based approach performs better in both capacity and efficiency than SAT-based techniques, especially for large bounds and for properties that require large search space. Therefore, ATPG-based BMC has been demonstrated to be an effective supplement to SAT-based BMC in VLSI circuit verification.

    Committee: Daniel Saab (Advisor) Subjects:
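The essence of bounded model checking can be shown with an explicit-state sketch: enumerate input sequences up to a bound and look for a path that falsifies the property (a counter-example). Brute-force enumeration stands in for the SAT and ATPG search engines compared in the dissertation, and the transition system is a toy.

```python
from itertools import product

def bounded_model_check(step, init_state, inputs, prop, bound):
    """Explicit-state BMC sketch. step(state, inp) is the transition
    function, prop(state) is the safety property to hold in every
    reachable state. Tries all input sequences of length 1..bound and
    returns a falsifying input sequence (counter-example), or None if
    the property holds within the bound."""
    for k in range(1, bound + 1):
        for seq in product(inputs, repeat=k):
            state = init_state
            for inp in seq:
                state = step(state, inp)
                if not prop(state):
                    return seq  # counter-example found
    return None
```

For example, a modulo-4 counter violates the property "state != 3" only at depth 3, so the check fails at bound 3 but passes at bound 2, illustrating why the choice of bound matters.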