Search Results


(Total results 111)


  • 1. Mazumder, Souvik Reliability Based Inspection of Sign, Signal and Luminary Supports in Ohio

    Master of Science, University of Toledo, 2016, Civil Engineering

    This thesis is an initial investigation into developing guidelines and criteria for reliability-based inspection of overhead sign supports, bridge mounted sign supports, high light mast supports and signal supports. The desired result of reliability-based inspection is to use the support condition and age to determine the inspection interval and depth of inspection to keep all the supports in the inventory at a desired level of safety while economizing on the resources required to perform inspections. Ohio's inventory of supports is aging, and it is important to make decisions about inspection, maintenance and replacement safely and economically. Currently, the Ohio Department of Transportation (ODOT) routinely performs qualitative ground-based inspection of its supports. Per ODOT's inspection guidelines, overhead sign supports are inspected once every five years; bridge mounted sign supports and signal supports are inspected annually; and there is no formal requirement for high light mast support structural inspections. The literature and standards on support inspection were reviewed. No previous work on reliability-based inspection of supports was found. In the absence of reliability-based inspection models for supports, the concepts of reliability-based bridge inspection programs were adapted to support inspection. In this study, the reliability of ODOT's ground-based inspection and detailed inspection results for a sample of Ohio's supports were assessed to estimate the probability of failure of the supports and evaluate the probability of detection of the support inspection. Based on this assessment, a cost-effective reliability-based support inspection model was suggested and a model inspection form was provided. Reliability-based inspection depends in large part on the accuracy of the inventory and the depth of previous inspections. 
A limitation of the results of this study is that the available database does not have the temporal data necessary t (open full item for complete abstract)

    Committee: Douglas Nims (Advisor); Mark Pickett (Committee Member); Liangbo Hu (Committee Member) Subjects: Civil Engineering
  • 2. Randall, David The vertical distribution of phytoplankton in a stratified lake

    Master of Science, The Ohio State University, 1971, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 3. Pound, Cynthia On the testing of the equality of two binomial proportions

    Master of Science, The Ohio State University, 1970, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 4. Garnham, John An investigation in the theory of distributions

    Master of Science, The Ohio State University, 1961, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 5. Phillips, Gail The use of the normal distribution to approximate the distribution of test means based on items that are scored 1, 2, 3,...K.

    Master of Science, The Ohio State University, 1970, Graduate School

    Committee: Not Provided (Other) Subjects:
  • 6. Geis, Nicholas Sign Changes of Partial Sums of Random Multiplicative Functions

    Doctor of Philosophy, The Ohio State University, 2024, Mathematics

    Let $f$ be a Rademacher random multiplicative function and $$M_f(u):=\sum_{n \leq u} f(n)$$ be the partial sum of $f$. Let $V_f(x)$ denote the number of sign changes of $M_f(u)$ up to $x$. The primary goal of this work is to prove the first explicit almost-sure growth rate for $V_f(x)$ proven by the author in [14]. Namely, we show that for any $\alpha > 2$ \[ V_f(x) = \Omega \Big( (\log \log \log x)^{1/\alpha} \Big) \] as $x\to \infty$, almost surely. We accomplish this by constructing a family of intervals $[y_k, X_k]$, with both $y_k, X_k \to \infty$ as $k \to \infty$, that contain at least 1 sign change of $M_f(u)$ eventually, almost surely. This is the content of Chapter 5 and requires the main result of Chapter 3. A secondary goal of this work is to provide the necessary background for the first non-effective proof that $M_f(u)$ changes signs infinitely often almost surely by Aymone, Heap and Zhao [6]. This and related examples are the content of Chapter 4. Along the way, we also discuss the work of Wintner [32], who first introduced random multiplicative functions. We do this to correct a minor mistake in the original proofs. This is done in Chapter 2. Chapter 6 is concerned with a generalized problem on the sign changes of $M_f(u)$ when the sum is restricted to $y$-smooth numbers, i.e. positive integers $n$ such that if a prime $p \mid n$ then $p \leq y$. Although we do not answer the posed problem, we state results that are analogous to ones utilized in the proof of our main sign-counting result.

    Committee: Ghaith Hiary (Advisor); Wenzhi Luo (Committee Member); James Cogdell (Committee Member) Subjects: Mathematics
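    The object studied in this abstract is easy to experiment with numerically. The sketch below is illustrative only (not the author's code; the function names, sieve construction, and parameters are this editor's assumptions): it samples a Rademacher random multiplicative function up to N via a smallest-prime-factor sieve, with f(p) = ±1 i.i.d. on primes, f(n) the product of f(p) over prime divisors for squarefree n, and f(n) = 0 otherwise, then counts the sign changes of the partial sums M_f(u).

    ```python
    import random
    from math import isqrt

    def rademacher_rmf(N, seed=0):
        """Sample f(1..N): f(p) = ±1 i.i.d., f multiplicative and
        supported on squarefree integers (f(n) = 0 otherwise)."""
        rng = random.Random(seed)
        f = [0] * (N + 1)
        f[1] = 1
        spf = list(range(N + 1))  # smallest-prime-factor sieve
        for p in range(2, isqrt(N) + 1):
            if spf[p] == p:
                for m in range(p * p, N + 1, p):
                    if spf[m] == m:
                        spf[m] = p
        eps = {}  # the i.i.d. signs on primes, drawn on demand
        for n in range(2, N + 1):
            p = spf[n]
            m = n // p
            if m % p == 0:
                f[n] = 0  # p^2 divides n, so n is not squarefree
            else:
                if p not in eps:
                    eps[p] = rng.choice((-1, 1))
                f[n] = eps[p] * f[m]
        return f

    def count_sign_changes(f):
        """Number of sign changes of M_f(u) = sum_{n <= u} f(n)."""
        changes, last_sign, M = 0, 0, 0
        for n in range(1, len(f)):
            M += f[n]
            s = (M > 0) - (M < 0)
            if s != 0:
                if last_sign != 0 and s != last_sign:
                    changes += 1
                last_sign = s
        return changes
    ```

    Running `count_sign_changes(rademacher_rmf(10**6))` over many seeds gives an empirical feel for how slowly V_f(x) grows, consistent with the triple-logarithm rate in the theorem.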
  • 7. Henderson, Nikolas First-Passage Percolation on the Complete Graph with Weight-Zero Edges

    Doctor of Philosophy, The Ohio State University, 2024, Mathematics

    Given any graph, one can generate a random metric on the vertices by assigning random weights to the edges, then letting the distance d(x, y) between any two vertices x and y be the total weight of the lightest path from x to y. This model started off on the integer lattice Z^n under the name first-passage percolation, calling to mind a fluid percolating through a porous medium. More recently, however, the model has transitioned to the complete graph Kn, the Erdős–Rényi graph G(n, p), and other less geometric graph models, starting with [14] in 1999. While the theory has developed over the last 25 years, it has maintained a significant blind spot, namely that of weight-zero edges. Indeed, weightless edges have been what connected FPP on Z^n with classical bond percolation, and the equivalent connections have not been explored even on Kn, where the theory of G(n, p) serves as a natural bond percolation analogue. We seek to begin building this connection by investigating the effects of weight-zero edges on the first-passage model on Kn, examining the typical distance and radius of these random environments and seeing how the theory of G(n, p) can shed light on the behavior of these metrics.

    Committee: David Sivakoff (Advisor); Matthew Kahle (Committee Member); Cesar Cuenca (Committee Member); Arthur Burghes (Other) Subjects: Mathematics; Statistics
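    The model described above can be simulated directly. This sketch is an illustration, not code from the dissertation (the edge-weight mix and parameter names are assumptions): each edge of Kn gets weight 0 with probability p_zero and an Exp(1) weight otherwise, and first-passage distances from a source vertex are computed with Dijkstra's algorithm.

    ```python
    import heapq
    import random

    def fpp_distances(n, p_zero, seed=0):
        """First-passage percolation on the complete graph K_n:
        each edge has weight 0 w.p. p_zero, else Exp(1).
        Returns the vector of passage times from vertex 0."""
        rng = random.Random(seed)
        w = {}
        for i in range(n):
            for j in range(i + 1, n):
                w[(i, j)] = 0.0 if rng.random() < p_zero else rng.expovariate(1.0)
        dist = [float('inf')] * n
        dist[0] = 0.0
        pq = [(0.0, 0)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale queue entry
            for v in range(n):
                if v == u:
                    continue
                e = w[(min(u, v), max(u, v))]
                if d + e < dist[v]:
                    dist[v] = d + e
                    heapq.heappush(pq, (d + e, v))
        return dist
    ```

    Comparing `max(fpp_distances(n, p))` (the radius) across p and n shows how the zero-weight clusters, which behave like a G(n, p) bond-percolation environment, collapse typical distances.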
  • 8. Patrick, Megan RF Steganography to Send High Security Messages through SDRs

    Master of Science in Electrical Engineering (MSEE), Wright State University, 2024, Electrical Engineering

    This research illustrates a high-security wireless communication method using a joint radar/communication waveform, addressing the vulnerability of traditional low probability of detection (LPD) waveforms to hostile receiver detection via cyclostationary processing (CSP). To mitigate this risk, RF steganography is used, concealing communication signals within linear frequency modulation (LFM) radar signals. The method integrates reduced phase-shift keying (RPSK) modulation and variable symbol duration, ensuring secure transmission while evading detection. Implementation is validated through software-defined radios (SDRs), demonstrating effectiveness in covert communication scenarios. Results include analysis of message reception and cyclostationary features, highlighting the method's ability to conceal messages from hostile receivers. Challenges encountered are discussed, with suggestions for future enhancements to improve real-world applicability.

    Committee: Zhiqiang Wu Ph.D. (Advisor); Xiaodong Zhang Ph.D. (Committee Member); Bin Wang Ph.D. (Committee Member) Subjects: Electrical Engineering
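    The carrier named in this abstract, a linear frequency modulation (LFM) chirp, can be sketched in a few lines. The function below illustrates only the radar waveform itself (the parameters are hypothetical), not the RPSK embedding, variable symbol duration, or SDR implementation described in the thesis.

    ```python
    import math

    def lfm_chirp(f0, f1, T, fs):
        """Real-valued samples of an LFM chirp sweeping linearly from
        f0 to f1 Hz over T seconds, sampled at fs Hz. The instantaneous
        frequency of cos(2*pi*(f0*t + k*t^2/2)) is f0 + k*t."""
        n = int(T * fs)
        k = (f1 - f0) / T  # sweep rate in Hz per second
        return [math.cos(2 * math.pi * (f0 * t + 0.5 * k * t * t))
                for t in (i / fs for i in range(n))]
    ```

    In the scheme described above, the covert communication symbols would be hidden inside such a radar sweep so that cyclostationary processing at a hostile receiver sees only the LFM signature.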
  • 9. Omolo, Leakey Using an Ensemble of Machine Learning Algorithms to Predict Economic Recession

    Master of Science in Mathematics, Youngstown State University, 2024, Department of Mathematics and Statistics

    The COVID-19 pandemic and the current wars in some countries have put incredible pressure on the global economy. Challenges for the U.S. include not only economic factors and major disruptions and reorganizations of the supply chains, but also national security and global geopolitics. This unprecedented situation makes predicting an economic crisis for the coming years crucial yet challenging. In this thesis, we propose a method based on various machine-learning models to predict the probability of a recession for the U.S. economy in the next year. We collect the U.S. monthly macroeconomic indicators and recession data from January 1983 to December 2023 to predict the probability of an economic recession in 2024. Each economic indicator for the coming year was predicted separately, and then all of the predicted indicators were used to forecast a possible economic recession. Our results showed that the U.S. will face a high probability of being in a recession in the last quarter of 2024.

    Committee: Nguyet Nguyen PhD (Advisor); G. Jay Kerns PhD (Committee Member); Lucy Kerns PhD (Committee Member) Subjects: Economics; Mathematics; Statistics
  • 10. Kasapoglu, Baris DYNAMIC SEISMIC PROBABILISTIC RISK ASSESSMENT OF NUCLEAR POWER PLANTS

    Doctor of Philosophy, The Ohio State University, 2024, Civil Engineering

    The history of probabilistic risk assessment (PRA) in the nuclear industry goes back to the 1960s. Traditional PRA procedures use the fixed event tree/fault tree approach and only qualitatively account for the time dependence of events. Dynamic PRA is regarded as an advance over traditional PRA because of its ability to incorporate time dependency. For assessing the seismic risk of nuclear power plants (NPPs), seismic PRA (SPRA) has been used since the 1980s. In recent years, concerns have been raised over the potential conservatism in current practice, since it contains significant uncertainties quantified by engineering judgment. To overcome this problem, research has been conducted to modernize tools and methods to provide more realistic seismic risk assessment approaches, so-called computational risk assessment. However, some issues have not yet been addressed, such as the integration of state-of-the-art structural modeling and analysis techniques into the SPRA procedure for failure estimation of safety-critical equipment, and the incorporation of time dependency into accident progression during and after an earthquake. This study aims to investigate the potential conservatism in the current approaches used for seismic fragility evaluation of NPP structures and components (e.g., separation of variables and hybrid methods) and to integrate state-of-the-art techniques (e.g., dynamic PRA, detailed finite element models, explicit time history analysis, etc.) into the traditional SPRA. To achieve this on a real-world problem, a hypothetical scenario is created. A representative NPP is developed for three case studies in which the research objectives are addressed. The first case study estimates the core damage probability (CDP) of the NPP for a beyond design basis earthquake using state-of-the-art structural models within a dynamic probabilistic risk assessment framework. 
Randomness of seismic excitation and uncertainty of structural parameters are considered using a simula (open full item for complete abstract)

    Committee: Halil Sezen (Advisor); Richard Denning (Committee Member); Ali Nassiri (Committee Member); Tunc Aldemir (Committee Member) Subjects: Civil Engineering
  • 11. Wang, Ziyun Bivariate Mixed Effects Model with Non-stationary Stochastic Processes for Prediction of Rapid Disease Progression: Empirical Performance and Construction

    PhD, University of Cincinnati, 2023, Medicine: Biostatistics (Environmental Health)

    In recent years, advancements in real-time prediction have been achieved by introducing a more flexible term representing a non-stationary stochastic process to replace the classic random slope in the linear mixed effects models. The resulting model has been used to form predictive probabilities for clinically relevant target functions involving rates of change in the mean response function for people with cystic fibrosis (CF). However, data patterns are changing over time, especially with the introduction of ivacaftor treatment in 2012 followed by other highly effective modulator therapies. In this dissertation, I focused on evaluating the epidemiologic impact of secular trends in CF care and treatments on acute lung decline prediction and characterized the changing patterns. Specifically, I evaluated the performance of predicting rapid lung decline events through a novel data-driven definition. There are often multiple, related, noisily-measured outcomes that are critical to monitoring and predicting disease progression of individuals over time. However, the current approach has been limited to a single outcome. Considering the case of two outcomes, I propose a bivariate mixed effects model utilizing integrated Brownian motion for each mean response function. Estimation of the proposed model was implemented through a combined approach of the Newton-Raphson and Fisher Scoring algorithm and profile likelihood. I also propose a bivariate target function that simultaneously predicts under the two-outcome scenario based on clinically meaningful thresholds of rates of change. This novel approach is applied to achieve real-time prediction of key changes in nutrition and lung function for children with CF who are followed in a national patient registry.

    Committee: Rhonda Szczesniak Ph.D. (Committee Chair); Marepalli Rao Ph.D. (Committee Member); Roman Jandarov Ph.D. (Committee Member); Richard Brokamp Ph.D. (Committee Member) Subjects: Biostatistics
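    The stochastic-process ingredient named in this abstract, integrated Brownian motion, is straightforward to simulate. The sketch below is illustrative only (not the dissertation's estimation code; the Euler discretization and parameter names are assumptions): it generates a Brownian motion path and its running integral, the smooth process used here in place of the classic random slope.

    ```python
    import math
    import random

    def integrated_brownian_motion(T=1.0, n_steps=1000, sigma=1.0, seed=0):
        """Euler-discretized Brownian motion W on [0, T] and its
        running integral I(t) = ∫_0^t W(s) ds. Integrated BM yields
        continuously varying, nonlinear subject-level trajectories."""
        rng = random.Random(seed)
        dt = T / n_steps
        W, I = 0.0, 0.0
        W_path, I_path = [0.0], [0.0]
        for _ in range(n_steps):
            W += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            I += W * dt  # left-endpoint Riemann approximation
            W_path.append(W)
            I_path.append(I)
        return W_path, I_path
    ```

    Plotting `I_path` for several seeds shows the smooth, wandering curves that make the process a flexible model for individual lung-function or nutrition trajectories around a population mean.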
  • 12. Ignacio, John Carlos From Data to Performance: Leveraging Sparse Testing with Genomic Selection for Wheat (Triticum aestivum L.) Breeding

    Doctor of Philosophy, The Ohio State University, 2023, Horticulture and Crop Science

    Breeders have increasingly adopted genomic selection (GS) and sparse testing as methods to improve genetic gains and reduce field testing costs in early-stage trials. However, the impact of the number of sparse testing environments (TE) on breeding success and cost-effectiveness remains uncertain. In this study, we utilized both empirical and simulated yield data from three environments and three years of field testing to evaluate the influence of TE on prediction accuracy, genetic gain (∆G), and the probability of identifying a new cultivar (Pc). Additionally, we investigated the effects of incorporating late-stage data into early-stage predictions, modeling marker-by-environment interaction (MEI), and increasing the number of sparse-tested full-sibs on prediction accuracy. Our findings consistently demonstrated that increasing the number of TE enhanced prediction accuracy, with the greatest improvement observed when transitioning from zero TE to one TE, where zero TE involved training with late-stage lines only. GS with two TE was nearly as accurate as with three TE. Furthermore, ∆G and Pc were highest at one TE when testing an equal number of plots. Interestingly, when the total cost of evaluation was held equal, ∆G and Pc were similar for one to three TE. Modeling MEI increased prediction accuracy in trials with high MEI variance relative to genetic variance. The most significant improvement in accuracy occurred when 3-6 full-sibs per family were sparse-tested. This research contributes to the improvement of GS in sparse testing designs and provides valuable insights to enhance breeding success while optimizing the use of resources in early-stage trials.

    Committee: Clay Sneller (Advisor); William Notz (Committee Member); Leah McHale (Committee Member); David Francis (Committee Member) Subjects: Agriculture; Genetics; Plant Sciences; Statistics
  • 13. Spino, Amy Moral Fallibilism

    Master of Arts (MA), Ohio University, 2023, Philosophy (Arts and Sciences)

    In the meta-ethical debates about moral knowledge, there are many theoretical positions to consider. If one is to have an account of moral knowledge, that will inevitably be affected by how one thinks about knowledge in general. I will be transferring a general theory of knowledge and epistemic justification to the more specific domain of ethics, through the lens of epistemic fallibilism. My goal, in applying this epistemic framework to moral discourse, is to outline how moral fallibilism (my theory) can provide a unique and attractive account of moral knowledge. I will accomplish the application of epistemology to ethical theory by implementing Stewart Cohen's account of fallibilism (with its central notion of “relevance”), and by highlighting the aim and position of moral fallibilism by contrasting it with Mackie's error theory. Finally, I will illustrate moral fallibilism by applying it to contemporary moral concerns; the debate about abortion, in particular. Ultimately, I propose a fresh theory of moral knowledge that emphasizes the varying degrees of justification for our ethical beliefs while defending, at the same time, a moderate account of moral objectivity.

    Committee: Christoph Hanisch (Advisor) Subjects: Epistemology; Ethics; Philosophy
  • 14. Abdelrazek, Mohamed Beam Discovery and Tracking for Mobile MIMO

    Doctor of Philosophy, The Ohio State University, 2022, Electrical and Computer Engineering

    Real-time applications and the IoT promote the need for new idle spectrum to support the required high traffic. This pushes toward the emergence of the millimeter-wave (mmWave) and sub-Terahertz (sub-THz) bands in wireless communication. Although these higher frequency bands offer wide spectrum that helps improve spectral efficiency, they come with the challenge of alleviating severe attenuation. MmWave transceivers use large antenna arrays to form highly directional beams and overcome the severe attenuation. A large array size leads to a costly beam alignment process if no prior information about beam directions is available. Beam alignment has two phases: beam discovery and beam tracking. Beam discovery finds the beam direction by consuming several pilot symbols to locate the optimum direction. Beam tracking is a common approach to keep the discovered beams tightly coupled without frequent beam discovery, eliminating the overhead associated with realignment. Both phases become more difficult as the beams get narrower, since slight mismatches lead to significant degradation in SNR and the beam coherence times are short. As a result, beams may lose alignment before they can be readjusted periodically with the aid of pilot signals. In this thesis, we introduce two complementary proposals: the first addresses beam tracking, and the second addresses beam discovery. In the first part of the thesis, we propose a model where the receiver adjusts beam direction continuously over each physical-layer sample according to a carefully calculated estimate of the continuous variation of the beams. Our approach contrasts with the classical methods, which fix the beams in the same direction between pilots. In our approach, the change of direction is configured using the estimate of the variation rate via two different methods: a Continuous-Discrete Kalman filter and an MMSE of a first-order approximation of the variation. 
Our method (open full item for complete abstract)

    Committee: C. Emre Koksal (Advisor); Eylem Ekici (Committee Member); Abhishek Gupta (Committee Member) Subjects: Computer Science; Electrical Engineering; Information Science
  • 15. Brune, Ryan Computer Simulations of Novel Annihilating Random Walk Problems

    Master of Science, The Ohio State University, 2022, Statistics

    This paper provides a brief overview of some of the existing literature concerning annihilating random walks and provides a toy example for the reader. Two novel annihilating random walk models are additionally considered. The first examines the case where a + b → ∅. The second examines a case with both annihilation and coalescence between a and b particles, where a + a → a and a + b → ∅. For both problems, computational results from simulations are provided, with particular interest in the decay of a particles. Possibilities for future research are also examined.

    Committee: David Sivakoff (Advisor); Radu Herbei (Committee Member); Mario Peruggia (Committee Member) Subjects: Statistics
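    The first model described above (a + b → ∅) is easy to reproduce. This sketch is an illustration, not the thesis's simulation code (the cycle geometry, synchronous updates, and parameters are assumptions): a- and b-particles hop to a random neighbor on a cycle each step, and any a/b pair landing on the same site annihilates.

    ```python
    import random

    def annihilating_walk(n_sites=100, n_each=20, steps=200, seed=0):
        """Annihilating random walk with reaction a + b -> ∅ on a cycle
        of n_sites sites. Returns the number of surviving a-particles
        after `steps` synchronous sweeps."""
        rng = random.Random(seed)
        sites = rng.sample(range(n_sites), 2 * n_each)  # distinct start sites
        particles = ([(s, 'a') for s in sites[:n_each]] +
                     [(s, 'b') for s in sites[n_each:]])
        for _ in range(steps):
            # every particle hops one step left or right
            particles = [((s + rng.choice((-1, 1))) % n_sites, t)
                         for s, t in particles]
            # annihilate a/b pairs that share a site
            by_site = {}
            for s, t in particles:
                by_site.setdefault(s, []).append(t)
            survivors = []
            for s, ts in by_site.items():
                na, nb = ts.count('a'), ts.count('b')
                k = min(na, nb)  # each matched a/b pair vanishes
                survivors += [(s, 'a')] * (na - k) + [(s, 'b')] * (nb - k)
            particles = survivors
        return sum(1 for _, t in particles if t == 'a')
    ```

    Tracking the surviving a-count over time for many seeds gives the empirical decay curves that are the paper's main computational interest.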
  • 16. Smucker, Karoline Preservice Secondary Mathematics Teachers' Approaches to Probabilistic and Stochastic Problem Solving Using Computer Simulations

    Doctor of Philosophy, The Ohio State University, 2022, EDU Teaching and Learning

    Probabilistic simulations have long served as instructional tools in statistics and probability education. With advances in technology, computer simulation environments where large quantities of data can be collected and analyzed have been suggested as venues for problem solving in contexts involving both known and unknown probability distributions. This research used task based interviews to examine how four secondary preservice mathematics teachers approached seven probabilistic and stochastic contexts which included designed computer simulation environments. The interviews included tasks involving both known and unknown probability distributions. Participants' problem solving and stochastic modeling practices were considered, along with the role the simulations may have played in their approaches. This research proposes an empirically grounded model for problem solving with simulations in stochastic contexts based on participants' tendencies. Results suggest that participants' problem solving varied based on whether the task involved a known or unknown distribution. When tasks involved known distributions, the simulations were used primarily to confirm or test mathematical ideas from outside the simulation. In contexts involving unknown distributions, participants used the simulation environment to investigate empirically and provide approximations and inferences. Prior experience in statistics and probability played a key role in problem solving outside the simulation, and included mathematizing the contexts using distributions, lists, charts, and formulas. In the simulation environments, participants worked mathematically with the data using calculations, proportions, counts, and graphs. Data collection in the simulation environments tended to be iterative, with participants testing, evaluating, and refining ideas over several rounds, though the amount of data which they considered to be necessary to draw conclusions varied significantly based on context.

    Committee: Azita Manouchehri (Advisor); Jan Nespor (Committee Member); Arnulfo Perez (Committee Member) Subjects: Educational Technology; Mathematics Education; Statistics; Teacher Education; Teaching
  • 17. Dozier, Robbie Navigating the Metric Zoo: Towards a More Coherent Model For Quantitative Evaluation of Generative ML Models

    Master of Sciences, Case Western Reserve University, 2022, EECS - Computer and Information Sciences

    This thesis studies a family of high-dimensional generative procedures modeled by Deep Generative Models (DGMs). These models can sample from complex manifolds to create realistic images, video, audio, and more. In prior work, generative models were evaluated using likelihood criteria. However, likelihood has been shown to suffer from the Curse of Dimensionality, and some generative architectures such as Generative Adversarial Networks (GANs) do not admit a likelihood measure. While some other metrics for GANs have been proposed in the literature, there has not been a systematic study and comparison between them. In this thesis I conduct the first comprehensive empirical analysis of these generative metrics, comparing them across several axes including sample quality, diversity, and computational efficiency. Second, I propose a new metric which employs the concept of typicality from information theory and compare it to existing metrics. My work can be used to answer questions about when to use which kind of metric when training DGMs.

    Committee: Soumya Ray (Advisor); Michael Lewicki (Committee Member); Harold Connamacher (Committee Member) Subjects: Artificial Intelligence; Computer Science
  • 18. Truman, Anna Educational Attainment: An Agent-Based Model

    Master of Science in Mathematics, Youngstown State University, 2022, Department of Mathematics and Statistics

    Educational attainment is a subject of great importance in today's world. The challenge to maximize educational outcomes is now seen on a national scale. There have been many studies on the interventions and factors that can be used to improve educational attainment. Using interventions that have proven effective in these studies, an original survey exploring their impact on educational attainment was conducted and combined with census data. The data were then integrated into an agent-based model that predicts educational outcomes. By controlling for different variables, this agent-based model gives insights into the influence of different intervention combinations.

    Committee: Alicia Prieto-Langarica PhD (Advisor); Alexis Byers PhD (Committee Member); Lucy Kerns PhD (Committee Member); Alejandra Donaji Herrera Reyes PhD (Committee Member) Subjects: Education; Mathematics
  • 19. Hossain, Imran Graph Matrices under the Multivariate Setting

    Master of Sciences, Case Western Reserve University, 2022, EECS - Computer and Information Sciences

    We expand on the framework of graph matrices first introduced by Ahn et al. [1], which are a class of random matrices whose entries' dependence can be described by a small graph. While Ahn et al. assume that a univariate distribution underlies this dependence, we relax this assumption and introduce graph matrices whose input structure is derived from a multivariate probability distribution. We then use the trace power method to show spectral norm bounds on these graph matrices that are consistent with those in the univariate setting. Our result expands Ahn et al.'s work by allowing for random matrices with more complicated dependencies between elements. We present potential applications with such dependencies in the multivariate setting in fields such as graph theory.

    Committee: Harold Connamacher (Advisor); Vincenzo Liberatore (Committee Member); Mark Meckes (Committee Member) Subjects: Computer Science; Mathematics
  • 20. Song, Bingxin How does instructional manipulations drive response biases in recognition memory? A diffusion model analysis

    Master of Science, The Ohio State University, 2022, Psychology

    Response bias from stimulus probability/response proportion manipulations has been studied in perception and, to a smaller degree, in recognition memory. The present study conducted six experiments with different combinations of accuracy, speed, and stimulus probability instructions to investigate how response bias is produced in a word recognition memory task. Each experiment used three stimulus probabilities, with two biased conditions ("3 old to 1 new words" and "1 old to 3 new words") and one unbiased condition ("1 old to 1 new words"). Memory performance was also manipulated by using high- and low-frequency words as stimuli. We applied the diffusion decision model (DDM), a cognitive model that decomposes the underlying decision process into separate components of processing, to the experimental data. We found that the stimulus probability manipulation mainly shifted the starting point of the decision process closer to the biased response boundary to produce the response bias in the data. We also examined whether stimulus difficulty only altered drift rate, and whether accuracy/speed instructions only changed boundary separation in the decision process. The bias manipulation allows receiver operating characteristic (ROC) functions to be generated. In signal detection theory, if the strength distributions are normal, z-transformed ROC (z-ROC) functions are linear. The z-ROC functions from the experiments were irregular because the bias manipulation did not produce large enough effects for reliable results. Detailed analysis of the results and implications of the application of the DDM are discussed.

    Committee: Roger Ratcliff (Advisor); Scott Hayes (Committee Member); Andrew Leber (Committee Member) Subjects: Cognitive Psychology
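    The starting-point account of bias described in this abstract can be illustrated with a minimal diffusion-model simulation (a sketch with assumed parameter values, not the study's fitting code): shifting the starting point toward the "old" boundary raises the proportion of "old" responses without changing drift rate or boundary separation.

    ```python
    import random

    def ddm_trial(drift, boundary=1.0, start_frac=0.5, dt=0.005,
                  sigma=1.0, rng=random):
        """One DDM trial: evidence starts at start_frac * boundary and
        accumulates drift plus Gaussian noise until it reaches the
        upper boundary ('old') or zero ('new')."""
        x, t = start_frac * boundary, 0.0
        while 0.0 < x < boundary:
            x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            t += dt
        return ('old' if x >= boundary else 'new', t)

    def p_old(drift, start_frac, n=500, seed=0):
        """Proportion of 'old' responses over n simulated trials."""
        rng = random.Random(seed)
        return sum(ddm_trial(drift, start_frac=start_frac, rng=rng)[0] == 'old'
                   for _ in range(n)) / n
    ```

    With drift and boundary separation held fixed, `p_old(0.5, 0.8)` comes out well above `p_old(0.5, 0.2)`, mirroring the starting-point mechanism the study identifies for stimulus-probability bias.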