Search Results

(Total results 27)

  • 1. Percy, Edward Corrected LM goodness-of-fit tests with application to stock returns

    Doctor of Philosophy, The Ohio State University, 2006, Economics

    Standard goodness-of-fit tests are biased towards acceptance of any hypothesized distribution if the test statistics do not contain explicit corrections for the fact that estimates of model parameters are used rather than unknown true values. Goodness-of-fit tests that use only the most extreme distributional deviation are not as efficient as those that use the entire distribution. Whether or not the true distribution has infinite variance, the bias can be avoided by the Lagrange Multiplier goodness-of-fit tests proposed herein. If a sample is independent and identically distributed according to a distribution F (with time series data, a transformation can be applied to estimate an IID series), then the distribution transform of the data produces a histogram that is approximately uniform over the unit interval. Large deviations from uniformity provide evidence against F. The construction of an alternative hypothesis space surrounding the null hypothesis ensures that deviations in any direction can be detected. Such tests can be constructed so that they have more power against alternative hypotheses and less size distortion than standard tests. They achieve these improvements by correcting for the presence of unknown model parameters. The test statistic is asymptotically chi-squared. Exact finite sample sizes are calculated employing Monte Carlo simulations; however, for samples with as few as 30 observations, size distortion is quite low. Unknown model parameters can be estimated by the maximum likelihood principle without asymptotically biasing the test. Furthermore, the test meets the optimality conditions of the Neyman-Pearson lemma against any simple alternative hypothesis in its parameter space. It is an omnibus test with the null hypothesis nested in the space of alternatives. Tests against many non-standard distributions are conducted including symmetric stable distributions, generalized Student-t distributions, generalized error distributions (GED), and mixtur (open full item for complete abstract) (A minimal sketch of the uniformity idea follows this entry.)

    Committee: J. McCulloch (Advisor) Subjects:
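
The uniformity idea in entry 1 (under a correctly specified F, the probability integral transform of IID data is approximately uniform on (0, 1), and large deviations are evidence against F) can be illustrated with a minimal sketch. This is not the corrected LM test the dissertation proposes: it is a naive chi-squared uniformity check that deliberately ignores the parameter-estimation effect the dissertation corrects for. The normal model, the t-distributed data, and the bin count are arbitrary choices made only for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_t(df=4, size=500)          # heavy-tailed data, so the normal model is wrong

# Fit the hypothesized model F: here a normal with ML estimates of mean and sd
mu_hat, sd_hat = x.mean(), x.std(ddof=0)

# Probability integral transform: under a correctly specified F, u is roughly Uniform(0, 1)
u = stats.norm.cdf(x, loc=mu_hat, scale=sd_hat)

# Naive chi-squared uniformity check over k equal-width bins.  Note that it ignores
# the fact that mu_hat and sd_hat were estimated from the same data, which is exactly
# the source of bias the corrected tests in the dissertation are designed to remove.
k = 20
observed, _ = np.histogram(u, bins=k, range=(0.0, 1.0))
expected = np.full(k, len(u) / k)
chi2_stat = ((observed - expected) ** 2 / expected).sum()
print(f"chi-squared = {chi2_stat:.2f}, naive p = {stats.chi2.sf(chi2_stat, df=k - 1):.4f}")
```
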
  • 2. Gao, Suyang Power Calculations in Meta Analysis

    PhD, University of Cincinnati, 2024, Medicine: Biostatistics (Environmental Health)

    Meta-analysis is a powerful statistical technique applied in research and evidence synthesis. It involves combining results from multiple independent studies to draw overall conclusions. The effect size measurement is the primary statistic that researchers are interested in. Another approach is to combine the results of several studies addressing the same hypothesis testing problem, putting emphasis on the p-values emanating from the individual studies. Combining p-values requires thoughtful consideration of statistical properties, alternative hypotheses, and the specific context of the analysis. Suppose we have the p-values obtained from m independent hypothesis tests. Under the null hypothesis, we assume that the underlying test statistics have continuous probability distributions, so the corresponding p-value always follows the uniform distribution. Under the alternative hypothesis, the pdf of a p-value is influenced by both the sample size and the effect size of the underlying study. Determining the distribution of the combination test statistic of p-values is very complex. The main thrust of my research is to provide guidelines on the number m of studies to be included in the meta-analysis work. The determination of m depends on the individual sample sizes of the studies, the alternative hypothesis (i.e., the effect size), the significance level, and the power to be provided. We primarily discuss the two classic tests by Tippett and Fisher, comparing their performance using p-values from different underlying hypothesis tests. Additionally, we assess the performance of three tests based on natural combining statistics: geometric mean, arithmetic mean, and harmonic mean. (A minimal sketch of the Tippett and Fisher combination rules follows this entry.)

    Committee: Marepalli Rao Ph.D. (Committee Chair); Roman Jandarov Ph.D. (Committee Member); Jeffrey Welge Ph.D. (Committee Member) Subjects: Biostatistics
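
As a rough illustration of the two classic combination rules named in entry 2, here is a minimal sketch of Fisher's and Tippett's combined tests for m independent p-values. The example p-values are arbitrary, and the dissertation's power and sample-size guidance is not reproduced here.

```python
import numpy as np
from scipy import stats

def fisher_combined(pvals):
    """Fisher: -2 * sum(log p_i) is chi-squared with 2m degrees of freedom under H0."""
    pvals = np.asarray(pvals, dtype=float)
    stat = -2.0 * np.log(pvals).sum()
    return stat, stats.chi2.sf(stat, df=2 * len(pvals))

def tippett_combined(pvals):
    """Tippett: based on min(p_i); P(min <= t) = 1 - (1 - t)^m under H0."""
    pvals = np.asarray(pvals, dtype=float)
    t = float(pvals.min())
    return t, 1.0 - (1.0 - t) ** len(pvals)

# m = 5 studies: four roughly consistent with H0, one strongly against it
p = [0.40, 0.72, 0.15, 0.63, 0.001]
print("Fisher :", fisher_combined(p))
print("Tippett:", tippett_combined(p))
```
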
  • 3. Pi, Jianzong Authentication, System Identification and Optimization of High Dimensional Dynamical Systems

    Doctor of Philosophy, The Ohio State University, 2024, Electrical and Computer Engineering

    This dissertation addresses challenges in optimal control, parameter identification, hypothesis testing, and optimization for high-dimensional dynamical systems encountered in real-world applications. The first project focuses on optimally controlling a discrete-time singularly perturbed system with two timescales, providing theoretical guarantees on performance by framing the problem as a dynamical optimization task. In the second project, we focus on authentication of Physical Unclonable Functions (PUFs) using kernel methods to handle hypothesis tests in high-dimensional spaces and to improve performance compared to conventional methods. Additionally, the dissertation explores identifying minimally contributing states in high-dimensional systems through conditional independence testing using kernel methods, demonstrating feasibility with observational data. Lastly, we focus on parameter identification for battery simulation systems to improve their output behaviour.

    Committee: Abhishek Gupta (Advisor); Emre Koksal (Advisor); Andrea Serrani (Committee Member); Zobeida Cruz-Monserrate (Committee Member) Subjects: Electrical Engineering
  • 4. Yazbeck, Maha Novel Forward-Inverse Estimation and Hypothesis Testing Methods to Support Pipeline and Brain Image Analyses.

    Doctor of Philosophy, The Ohio State University, 2024, Industrial and Systems Engineering

    This dissertation addresses two applied problems relating to images. The first relates to images of pipeline corrosion and the second relates to images of the human brain and individuals with Attention-Deficit/Hyperactivity Disorder (ADHD). The corrosion of oil and gas pipelines is important because there are thousands of leaks every year costing billions of dollars for cleanups. ADHD is important because a substantial fraction of the world population has the disorder, causing significant suffering and hundreds of billions of dollars of losses to the world economy. To address both image analysis problems, novel statistical and operations research techniques are proposed which have potentially wide applicability. Relating to pipeline corrosion, an established simulation method is called the “voxel” method, which permits predictions about how images and pipelines or other media will change as corrosion evolves. In most realistic cases, we find that the parameter values or “inputs” (Xs) needed to run the simulation are unknown. We only have the images, which are essentially outputs (Ys) that can be generated by real world experiments or simulations. The phenomenon of having incomplete inputs for simulation is common in many engineering and science situations and a critical challenge for both people and artificial intelligence. We and others have called this important subject “empirical forward-inverse estimation,” since we can gather data (empirically) in the forward manner, progressing from assumed inputs (Xs) to measured outputs (Ys), and then generate inverse predictions from Ys to Xs. With (hopefully) accurately estimated X values, the experimental setup or simulation can then predict the future corrosion evolution and whether repair is critically needed. Relating to forward-inverse analyses, 24 variants of an established two-stage method or framework are studied in relation to enhanced inverse prediction accuracy for two test cases including pipeline corrosion (open full item for complete abstract) (A toy forward-inverse sketch follows this entry.)

    Committee: Theodore T. Allen (Advisor); William (Bill) Notz (Committee Member); Samantha Krening (Committee Member); Marat Khafizov (Committee Member) Subjects: Engineering; Industrial Engineering; Materials Science; Statistics
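
A toy sketch of the forward-inverse idea described in entry 4: run a forward model from inputs X to outputs Y on a design of simulated inputs, fit an inverse metamodel from Y back to X, and use it to estimate unknown inputs from a newly observed output. The forward model, feature map, and least-squares inverse below are illustrative assumptions and are not the dissertation's two-stage framework.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_model(x):
    """Stand-in forward simulator: inputs X -> noisy outputs Y (purely illustrative)."""
    y1 = x[:, 0] + 0.5 * x[:, 1] ** 2
    y2 = np.exp(0.3 * x[:, 0]) * x[:, 1]
    return np.column_stack([y1, y2]) + 0.05 * rng.normal(size=(len(x), 2))

def features(Y):
    """Simple polynomial features of the outputs for the inverse regression."""
    return np.column_stack([np.ones(len(Y)), Y, Y ** 2, Y[:, [0]] * Y[:, [1]]])

# Forward step: run the simulator on a design of assumed inputs
X_design = rng.uniform(0.0, 2.0, size=(500, 2))
Y_design = forward_model(X_design)

# Inverse step: fit a metamodel mapping outputs back to inputs by least squares
coef, *_ = np.linalg.lstsq(features(Y_design), X_design, rcond=None)

# Given a newly observed output (e.g. a summary extracted from an image),
# predict the unknown inputs that produced it
x_true = np.array([[1.2, 0.7]])
y_obs = forward_model(x_true)
print("estimated inputs:", features(y_obs) @ coef, "   true inputs:", x_true)
```
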
  • 5. Krishnaveti Suresh, Vikram Essays on Bayesian Testing and Methods for Longitudinal Data

    PhD, University of Cincinnati, 2023, Business: Business Administration

    The three chapters presented in this work focus on improving statistical inference for time series models and clinical trial data. In the first chapter, a Bayesian approach is proposed for testing serial error independence in time series models, addressing the Lindley paradox and outperforming frequentist tests in AR1 models. The second chapter provides simulation evidence comparing conventional Mixed Models for Repeated Measures (MMRM) with Bayesian hierarchical models that explicitly specify dynamic correlation, showing improved predictive accuracy and reliable treatment effect estimation in small samples. The third chapter applies the best-performing model specifications from the previous chapter to clinical trial data, highlighting the importance of socioeconomic factors in treatment outcomes for major depressive disorder. The first chapter addresses the assumption of serial independence of model approximation errors in time series modeling, which is crucial for reliable inference. While frequentist methods like the Ljung-Box and Breusch-Godfrey tests have been widely used for hypothesis testing, the increasing popularity of Bayesian Inference demands comparable Bayesian testing procedures. However, Bayesian hypothesis testing has faced challenges in resolving the Lindley paradox, resulting in contradictory inference when compared to frequentist tests. To address this issue, we propose a Bayesian procedure for testing serial error independence in time series models, which enables paradox-free inference. The results demonstrate that the proposed Bayesian test outperforms frequentist tests in Autoregressive Order 1 (AR1) models in small to moderate-sized samples and performs comparably well in higher order autoregressive models. Clinical trials are often costly and have limited observations, making it crucial to achieve precise inference about treatment effects in small samples. The Mixed Models for Repeated Measures (MMRM) have been the standard to (open full item for complete abstract)

    Committee: Jeffrey Mills Ph.D. (Committee Chair); Hans Breiter M.D. (Committee Member); Jeffrey Strawn (Committee Member); Olivier Parent Ph.D. (Committee Member); Lenisa Chang Ph.D. (Committee Member) Subjects: Statistics
  • 6. Ghoshal, Asmita Efron's Method on Large Scale Correlated Data and Its Refinements

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2023, Statistics

    This dissertation focuses on methodological innovation for multiple testing of hypotheses on large-scale and correlated data, where error rate control is intrinsically critical. Research toward this goal necessitates rigorous discussion of a thorny concept, the strong control of the familywise error rate (FWER). Published papers in this area often avoid this intricate issue by adopting weaker criteria such as the weak control of FWER or the false discovery rate. Different from conventional approaches, we directly tackle the problem with the strong control of FWER. Starting with Efron's data on an inference problem related to 7128 genes of 72 patients, consisting of 47 acute lymphoblastic leukemia patients and 25 acute myeloid leukemia patients, the dissertation lays out fundamental terminology facilitating the research on multiple inference, after discussing a method controlling the false discovery rate that follows the empirical approach of estimating the correlation parameter. Following a review of the current literature, one distinct feature of the dissertation concerns multiple testing procedures on odds ratios when several populations are of interest. When the joint distribution of a cluster of subsequent populations is approximately available, such as through the Cochran-Mantel-Haenszel statistic, a sequential testing method with strong control of FWER is proposed. The new method outperforms the traditional Holm's procedure (which also strongly controls FWER) in terms of substantiating any significant discovery that is detected by the latter. Another feature of the dissertation explores the sequential testing procedure for the comparison of odds ratios. It effectuates a general stepwise exact inference procedure that strongly controls the FWER. The new procedure is robust and versatile for both parametric and nonparametric settings. When the new procedure was employed with the Jonckheere-Terpstra test, it dist (open full item for complete abstract) (A sketch of Holm's baseline procedure follows this entry.)

    Committee: John Chen Ph.D. (Committee Chair); Alexis Ostrowski Ph.D. (Other); Riddhi Ghosh Ph.D. (Committee Member); Umar Islambekov Ph.D. (Committee Member) Subjects: Statistics
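
Entry 6 benchmarks against Holm's procedure, the classical step-down method with strong FWER control; a minimal sketch of that baseline follows. The dissertation's sequential CMH-based and stepwise exact procedures are not reproduced, and the p-values are arbitrary.

```python
import numpy as np

def holm_rejections(pvals, alpha=0.05):
    """Holm's step-down procedure: strong control of the familywise error rate.

    Sort the p-values, compare the i-th smallest (0-indexed) to alpha / (m - i),
    and stop at the first non-rejection; only earlier hypotheses are rejected.
    """
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(m, dtype=bool)
    for i, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - i):
            reject[idx] = True
        else:
            break                      # step-down: stop at the first failure
    return reject

p = np.array([0.001, 0.008, 0.039, 0.041, 0.20])
print(holm_rejections(p))              # [ True  True False False False]
```
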
  • 7. Pathirana, Hasani An efficient framework for hypothesis testing using Topological Data Analysis

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2023, Statistics

    Topological data analysis (TDA) has become a popular approach in recent years to study the shape structure of data. Persistent homology is a widely used TDA tool that describes how topology changes through a nested sequence of topological spaces in the form of simplicial complexes. As a result, topological features (e.g., connected components, loops, and voids) appear and disappear, and the summary of this evolution is reported as a persistence diagram (PD). Considered as topological signatures of the data, the space of PDs can be endowed with the Wasserstein distance, which has a stability property. However, since PDs are not vectors in a Euclidean space and calculating Wasserstein distances can be computationally costly, they are of limited use as direct inputs in machine learning tasks. A common remedy for this issue is to map PDs into a space of functions and to vectorize them by evaluating them over a grid of scale values, which results in vector summaries belonging to a Euclidean space. The Betti function, which incorporates weights, is one of the simplest functional summaries of PDs leading to such vector representations. Even though it is the easiest to construct and fast to implement, no stability result has been proven for it, to the best of our knowledge. In the present work, we introduce a new method to vectorize the Betti function by integrating it between two consecutive scale values of a grid. The resulting vector summary, named a vector of averaged Bettis (VAB), provides a lower-dimensional, informative, and computationally efficient vector representation compared to the standard method of vectorizing the Betti function. We also prove a stability result for a class of weight functions with respect to the Wasserstein distance. Further, through several experimental studies, we show that the permutation-based hypothesis testing procedure, which aims at identifying whether two sets of PDs are drawn from the same distribution or process, can be improved i (open full item for complete abstract) (A rough sketch of the averaged-Betti vectorization follows this entry.)

    Committee: Umar Islambekov Ph.D. (Committee Chair); Paul Morris Ph.D. (Other); Kit Chan Ph.D. (Committee Member); Maria Rizzo Ph.D. (Committee Member) Subjects: Statistics
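
A rough sketch of the vectorization idea in entry 7: compute a Betti curve from a persistence diagram and average it over consecutive grid cells to obtain a fixed-length vector. This is an unweighted toy version on a hand-made diagram, not the dissertation's exact VAB construction or its stability analysis.

```python
import numpy as np

def betti_curve(diagram, grid):
    """Number of (birth, death) intervals alive at each grid value."""
    births, deaths = diagram[:, 0], diagram[:, 1]
    return np.array([np.sum((births <= t) & (t < deaths)) for t in grid])

def averaged_betti_vector(diagram, grid):
    """Average the Betti curve over each pair of consecutive grid values.

    A crude stand-in for integrating the Betti function between consecutive
    scale values: a trapezoidal average on each grid cell.
    """
    curve = betti_curve(diagram, grid)
    return 0.5 * (curve[:-1] + curve[1:])

# Toy persistence diagram: each row is (birth, death)
pd1 = np.array([[0.0, 0.6], [0.1, 0.9], [0.3, 0.4]])
grid = np.linspace(0.0, 1.0, 11)
print(averaged_betti_vector(pd1, grid))
```
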
  • 8. Wolf, Abigail Event Camera Applications for Driver-Assistive Technology

    Master of Science in Computer Engineering, University of Dayton, 2022, Electrical and Computer Engineering

    We propose an Event-Based Snow Removal algorithm called EBSnoR. We developed a technique to measure the dwell time of snowflakes on a pixel using event-based camera data, which is used to carry out a Neyman-Pearson hypothesis test to partition the event stream into snowflake and background events. The effectiveness of the proposed EBSnoR was verified on a new dataset called UDayton22EBSnow, comprising front-facing event-based camera data from a car driving through snow, with manually annotated bounding boxes around surrounding vehicles. Qualitatively, EBSnoR correctly identifies events corresponding to snowflakes; and quantitatively, EBSnoR-preprocessed event data improved the performance of event-based car detection algorithms. (An illustrative Neyman-Pearson dwell-time threshold sketch follows this entry.)

    Committee: Keigo Hirakawa (Committee Chair); Vijayan Asari (Committee Member); Bradley Ratliff (Committee Member) Subjects: Computer Engineering
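
A minimal sketch of a Neyman-Pearson dwell-time test in the spirit of entry 8, under the assumption (made here purely for illustration; the abstract does not specify the dwell-time model) that per-pixel dwell times are exponential with a larger rate for snowflakes than for background. The monotone likelihood ratio then reduces the most powerful test to a simple threshold on dwell time.

```python
import numpy as np

def np_snow_threshold(lambda_background, alpha=0.05):
    """Most-powerful threshold for exponential dwell times (illustrative model only).

    H0 (background): dwell ~ Exp(lambda_background); H1 (snowflake): Exp(lambda_snow)
    with lambda_snow > lambda_background, i.e. snowflakes dwell only briefly on a pixel.
    The likelihood ratio is monotone decreasing in dwell time, so the Neyman-Pearson
    level-alpha test declares "snowflake" whenever dwell < tau with P0(dwell < tau) = alpha.
    """
    return -np.log(1.0 - alpha) / lambda_background

rng = np.random.default_rng(2)
lam_bg, lam_snow = 1.0, 50.0                 # illustrative rates (1 / mean dwell time)
tau = np_snow_threshold(lam_bg, alpha=0.05)
dwell = np.concatenate([rng.exponential(1 / lam_bg, 100),     # background pixels
                        rng.exponential(1 / lam_snow, 100)])  # snowflake pixels
is_snow = dwell < tau                        # True = classified as a snowflake event
print(f"tau = {tau:.3f}, flagged {is_snow.sum()} of {len(dwell)} events as snow")
```
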
  • 9. Bendary, Ahmed Hardware-Aided Approaches for Unconditional Confidentiality and Authentication

    Doctor of Philosophy, The Ohio State University, 2021, Electrical and Computer Engineering

    Emerging technologies and infrastructure developments in information and communication systems have paved the way for the extraordinary exposure of information around the globe. Specifically, the ease and reliability of information exchange have promoted cultural, social, and economic activities. Meanwhile, this exposure is being exploited against user privacy and data confidentiality. In response, there have been major activities in keeping information safe. These activities can be summarized under three main domains: 1) Authentication: granting only legitimate access to data at rest, 2) Confidentiality: protecting information from being leaked to unauthorized parties in transit, and 3) Privacy: concealing user identity and activities. Modern cryptography is a practical and standardized approach that provides a certain level of information security. Cryptosystems obfuscate data in a way that makes it almost impossible to recover the plaintext, even with significant computational resources, but they do not rule out brute force recovery of data. They are robust over the communication medium, i.e., attackers are assumed to have unhindered access to the ciphertext. Another approach, based on the physical characteristics of the hardware and/or the location, has emerged as a powerful technique that can achieve unconditional security, i.e., security without any assumption on the computational resources of the attackers. These two approaches are complementary, and future security approaches will likely utilize both. In this dissertation, we mainly focus on the physical layer approaches, in particular hardware-aided approaches, and discuss how they can be used to enhance encryption-based approaches. First, we study multiple-input multiple-output (MIMO)-aided covert communication (also referred to as communication with a low probability of detection): the session between two legitimate parties remains undetectable by an external eavesdropper. (open full item for complete abstract)

    Committee: C. Emre Koksal (Advisor); Yingbin Liang (Committee Member); Daniel J. Gauthier (Committee Member) Subjects: Electrical Engineering; Information Science; Information Systems
  • 10. Sargun, Deniz Robust Change Detection with Unknown Post-Change Distribution

    Doctor of Philosophy, The Ohio State University, 2021, Electrical and Computer Engineering

    As communication and control systems become more complex and connected and process data at higher velocities, detecting changes in patterns becomes increasingly difficult yet remains crucial to guarantee a level of QoS, security, reliability, etc. Large systems usually have many modes of failure and are also prone to attacks on different surfaces. Still, there are numerous zero-day vulnerabilities that remain unidentified until they cause a fault or are exploited. Change detection with unknown distributions provides a way of detecting the occurrence of faults or the gain of access by malicious parties by comparing time-series system features to their norm. In a wide variety of applications, on the other hand, it is feasible to assume a certain level of knowledge of the system before the change takes effect, and utilizing this knowledge of initial conditions increases detection performance. With ever-increasing data rates and connectivity, any change in the observed process has to be detected on the fly, before it is outdated, without the need for storage, and with a small blast radius for malicious activities. A delay in real-time change detection may result in QoS disruption, cyber-physical threats, and an inability to contain the spread of a disease. So, minimal computational complexity is a key ingredient of modern change detection algorithms. In this dissertation, we consider non-Bayesian change detection problems over a finite alphabet with varying change point and cost models and with unknown post-change distributions. We focus on robust detection algorithms that utilize the knowledge of pre-change system dynamics and are of low complexity. Given that the effect of the change on the system is unknown, the distribution of observations may deviate in many ways without much structure, whereas, before the change point, a false alarm is characterized by Sanov's theorem, following a particular sample path. The proposed methods characterize (open full item for complete abstract) (A minimal sketch of pre-change-based monitoring follows this entry.)

    Committee: C. Emre Koksal (Advisor); Atilla Eryilmaz (Committee Member); Kiryung Lee (Committee Member); Abhishek Gupta (Committee Member) Subjects: Computer Engineering; Electrical Engineering; Statistics
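
One simple way to exploit known pre-change statistics when the post-change distribution is unknown, loosely in the spirit of entry 10, is to monitor the KL divergence between a sliding-window empirical distribution and the known pre-change pmf over a finite alphabet. The window length, threshold, and pmfs below are arbitrary, and this is not the dissertation's algorithm or its Sanov-based analysis.

```python
import numpy as np

def kl_divergence(p_hat, q):
    """KL divergence D(p_hat || q) over a finite alphabet (0 * log 0 = 0)."""
    mask = p_hat > 0
    return float(np.sum(p_hat[mask] * np.log(p_hat[mask] / q[mask])))

def detect_change(stream, pre_change_pmf, alphabet_size, window=50, threshold=0.15):
    """Alarm at the first time the windowed empirical pmf drifts far from the pre-change pmf."""
    for t in range(window, len(stream) + 1):
        counts = np.bincount(stream[t - window:t], minlength=alphabet_size)
        p_hat = counts / window
        if kl_divergence(p_hat, pre_change_pmf) > threshold:
            return t                    # alarm time
    return None

rng = np.random.default_rng(1)
pre = np.array([0.5, 0.3, 0.2])         # known pre-change pmf
post = np.array([0.2, 0.3, 0.5])        # unknown to the detector
stream = np.concatenate([rng.choice(3, size=300, p=pre),
                         rng.choice(3, size=200, p=post)])
print("alarm at:", detect_change(stream, pre, alphabet_size=3))
```
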
  • 11. Stewart, Patrick Statistical Inferences on Inflated Data Based on Modified Empirical Likelihood

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2020, Statistics

    Many studies deal with inflated and nonnegative data, such as in medical studies. Most work on inflated data deals with zero-inflated datasets, but there are many datasets that are zero-one inflated as well. Zero-inflated datasets are characterized by a significant proportion of zero values, leading to a skewed distribution. Zero-one inflated datasets are characterized by a significant proportion of zero and one values, which also leads to a skewed distribution. It is common practice to use the Central Limit Theorem to assume an approximately normal distribution in order to construct confidence intervals and conduct hypothesis tests. However, with inflated and highly skewed distributions, this practice leads to inaccurate results. The empirical likelihood method offers an alternative way of computing confidence intervals, with the benefit of requiring no distributional assumptions. Although the empirical likelihood method provides an improvement, it suffers from several drawbacks. In this dissertation, we propose several modified empirical likelihood methods to combat these drawbacks. We use these modified methods, along with the empirical likelihood and normal approximation methods, to construct confidence intervals based on zero-inflated data and zero-one inflated data. We compare the performance of each method in these two situations on both simulated data and real data. Furthermore, we develop a hypothesis test for comparing two means based on one of the modified empirical likelihood approaches. We then test the modified empirical likelihood approach against the empirical likelihood and normal approximation methods using simulated and real data. (A minimal empirical-likelihood sketch follows this entry.)

    Committee: Wei Ning Ph.D. (Advisor); Hanfeng Chen Ph.D. (Committee Member); Junfeng Shang Ph.D. (Committee Member); Rachel Shafer Ph.D. (Other) Subjects: Statistics
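
A minimal sketch of Owen-style empirical likelihood for a single mean, computed through the concave dual problem and applied to a simulated zero-inflated sample. This is the basic (unmodified) empirical likelihood, not the modified versions the dissertation proposes, and the data-generating choices are arbitrary.

```python
import numpy as np
from scipy import stats, optimize

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean, via the concave dual problem.

    Weights take the form w_i = 1 / (n * (1 + lam * (x_i - mu))); the optimal lam
    maximizes sum(log(1 + lam * (x_i - mu))) over the range keeping all weights positive.
    """
    d = np.asarray(x, dtype=float) - mu
    if d.max() <= 0 or d.min() >= 0:
        return np.inf                                 # mu outside the data's convex hull
    lo = -1.0 / d.max() + 1e-10
    hi = -1.0 / d.min() - 1e-10
    res = optimize.minimize_scalar(lambda lam: -np.sum(np.log1p(lam * d)),
                                   bounds=(lo, hi), method="bounded")
    return 2.0 * np.sum(np.log1p(res.x * d))

# Simulated zero-inflated sample: many exact zeros plus a skewed positive part
rng = np.random.default_rng(3)
x = np.where(rng.random(200) < 0.4, 0.0, rng.lognormal(mean=0.0, sigma=1.0, size=200))

mu0 = 1.0                                             # hypothesized mean
stat = el_log_ratio(x, mu0)
print(f"-2 log R = {stat:.3f}, asymptotic p = {stats.chi2.sf(stat, df=1):.4f}")
```
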
  • 12. Koroglu, Muhammed Multiple Hypothesis Testing Approach to Pedestrian Inertial Navigation with Non-recursive Bayesian Map-matching

    Doctor of Philosophy, The Ohio State University, 2020, Electrical and Computer Engineering

    Inertial sensors became wearable with the advances in sensing and computing technologies of the last two decades. Captured motion data can be used to build a pedestrian inertial navigation system (INS); however, the time-variant bias and noise characteristics of low-cost sensors cause severe errors in positioning. To overcome the quickly growing errors of the so-called dead-reckoning (DR) solution, this research adopts a pedestrian INS based on a Kalman Filter (KF) with zero-velocity update (ZUPT) aid. Despite accurate traveled-distance estimates, the obtained trajectories diverge from the actual paths because of heading estimation errors. In the absence of external corrections (e.g., GPS, UWB), map information is commonly employed to eliminate position drift; therefore, the INS solution is fed into a higher-level map-matching filter for further corrections. Unlike common Particle Filter (PF) map-matching, map constraints are implicitly modeled by generating rasterized maps that function as a constant spatial prior in the designed filter, which makes the Bayesian estimation cycle non-recursive. Consequently, the proposed map-matching algorithm does not require the computationally expensive Monte Carlo simulation and wall-crossing check steps of the PF. The second major use of the rasterized maps is to provide probabilities for a self-initialization method referred to as Multiple Hypothesis Testing (MHT). The extracted scores update the hypothesis probabilities dynamically, and the hypothesis with the maximum probability gives the correct initial position and heading. Realistic pedestrian walks include room visits, where map-matching is de-activated (as the rasterized maps do not model the rooms) and consequently excessive positioning drift occurs. Another MHT approach, exploiting the introduced maps further, is designed to re-activate the map filter at the strides where the pedestrian returns to the hallways after room traversals. Subsequently, trajectories left behind inside the rooms are heuristically adjus (open full item for complete abstract)

    Committee: Alper Yilmaz Prof (Advisor); Keith Redmill Prof (Committee Member); Charles Toth Prof (Committee Member); Janet Best Prof (Other) Subjects: Electrical Engineering; Engineering
  • 13. Namavari, Hamed Essays on Objective Procedures for Bayesian Hypothesis Testing

    PhD, University of Cincinnati, 2019, Business: Business Administration

    Controversy arising from the Jeffreys-Lindley-Bartlett paradox has hindered progress in developing widely acceptable Bayesian hypothesis testing and model selection methods. However, Mills (2019a) provides a testing framework, utilizing objective Bayes factors, that does not suffer from the paradox. The focus of this dissertation is on utilizing this general hypothesis testing framework to develop practically applicable statistical tests for three commonly examined hypotheses. The first essay develops a testing procedure for differences in means across samples, generally known as the ANOVA test. Frequentist ANOVA tests are also subject to the Behrens-Fisher problem, and so typically involve a restrictive assumption concerning the variances across samples. The developed Bayesian test allows differing unknown variances as well as correlation between the samples. Further, joint hypothesis tests for the comparison of several means are developed, involving cross-sectional tests using a seemingly unrelated regression (SUR) setup. The performance of these tests is compared with standard frequentist and Bayesian ANOVA testing procedures through Monte Carlo simulation studies of empirical rejection rates. The second essay develops a testing procedure for cointegration in time series, providing an alternative Bayesian residual-based test to that proposed by Furmston et al. (2013). The test's performance in comparison to that of the main alternatives, including that of Furmston et al. and error-correction-based tests, is examined by a Monte Carlo study of empirical rejection rates. The third essay develops a procedure for testing for stationarity in panel data that could be especially useful when the time dimension, T, is small. The presented procedure is based on the definition of weak stationarity rather than on testing individual series for a unit root. We argue that panel data can be nonstationary even when all characteristic roots of the stochastic process (open full item for complete abstract)

    Committee: Jeffrey Mills Ph.D. (Committee Chair); Dungang Liu Ph.D. (Committee Member); Olivier Parent Ph.D. (Committee Member); Siva Sivaganesan Ph.D. (Committee Member) Subjects: Economics
  • 14. An, Panduan On Analysis of Sufficient Dimension Reduction Models

    Doctor of Philosophy (PhD), Ohio University, 2019, Mathematics (Arts and Sciences)

    Sufficient dimension reduction in regression analysis has been one of the most popular topics in the past two decades. Sufficient dimension reduction (SDR) is concerned with the situation where the conditional distribution of the response variable Y given the covariate vector X depends on X only through a set of linear combinations of X; it aims at reducing the high dimension of the covariates to avoid the so-called curse of dimensionality. One pioneering work in SDR is the sliced inverse regression (SIR) proposed by Li (1991). However, SIR is not exhaustive and, in particular, it works poorly when the regression function is nearly symmetric. In this dissertation, we propose a measurement of the monotonicity of the regression curve. An algorithm based on this measurement is developed to select an SDR method. Another key issue in regression analysis is variable selection. Most current methods for variable selection are penalty-based. We propose a new method based on the SSE ratio to identify significant variables that contribute to the SDR models. To our knowledge, relatively little work has been done on the comparison of two SDR models. We propose three test statistics for such a comparison based on the Mahalanobis distance, Benasseni's coefficient, and nonparametric estimation of the regression function. Simulation studies have been conducted under various settings to demonstrate the performance of the proposed methods. (A bare-bones SIR sketch follows this entry.)

    Committee: Wei Lin (Advisor); Xiaoping Shen (Committee Member); Rida Benhaddou (Committee Member); Chulho Jung (Committee Member) Subjects: Mathematics; Statistics
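
Li's (1991) sliced inverse regression, the pioneering SDR method cited in entry 14, can be sketched in a few lines. This bare-bones SIR is for orientation only; it does not implement the monotonicity measure, SSE-ratio variable selection, or model-comparison tests proposed in the dissertation, and the single-index data-generating model is an arbitrary example.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=1):
    """Bare-bones sliced inverse regression (Li, 1991).

    Standardize X, slice the data by the order of y, average the standardized
    covariates within each slice, and eigen-decompose the weighted covariance of
    the slice means; the leading eigenvectors span the estimated SDR subspace.
    """
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X, rowvar=False))   # Sigma = L @ L.T
    W = np.linalg.inv(L).T                            # whitening matrix
    Z = (X - X.mean(axis=0)) @ W                      # standardized covariates
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)          # weighted cov of slice means
    _, eigvecs = np.linalg.eigh(M)                    # eigenvalues in ascending order
    B = W @ eigvecs[:, ::-1][:, :n_directions]        # map back to original coordinates
    return B / np.linalg.norm(B, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
beta = np.array([1.0, 2.0, 0.0, 0.0, 0.0]) / np.sqrt(5.0)
y = np.exp(X @ beta) + 0.1 * rng.normal(size=1000)
print(sir_directions(X, y).ravel())    # roughly proportional to beta (up to sign)
```
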
  • 15. Savel, Daniel Towards a Human Genomic Coevolution Network

    Doctor of Philosophy, Case Western Reserve University, 2018, EECS - Computer and Information Sciences

    The advent of next-generation sequencing technologies has enabled high-throughput and large-scale DNA sequencing, which has greatly improved the ability to sequence large numbers of whole genomes. These technologies have facilitated the creation of reference genomes for a large and diverse set of organisms. By applying large-scale comparative genomic analyses to this library of genomes, evolutionary patterns can be extracted and used to aid in annotating the human genome. This work presents a method for improving the quality of these reference genomes by utilizing substring frequency to identify and correct sequencing errors in raw sequencing data in an efficient manner. The method has been shown to work on real sequencing data and in conjunction with other contemporary methods. Furthermore, this work presents an analysis of identifying evolutionary patterns that coincide with regulatory interactions. Genomic coevolution, the correlation of the phylogenetic histories of pairs of genomic segments, is characterized for both intrachromosomal and interchromosomal regulatory interactions in the human genome. The findings show that genomic coevolution can be used to aid in the process of detecting these regulatory interactions. These results are among the first steps towards the creation of a human genomic coevolution network, which could potentially be used to predict genome-wide novel interactions in the human genome at a large scale. (A toy substring-frequency sketch follows this entry.)

    Committee: Mehmet Koyuturk (Advisor); Jing Li (Committee Member); Harold Connamacher (Committee Member); Thomas LaFramboise (Committee Member) Subjects: Bioinformatics; Computer Science
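
A toy sketch of the substring-frequency idea in entry 15: count k-mers across reads and flag rare ones as likely sequencing errors. The reads, k, and cutoff are arbitrary, and the dissertation's actual correction method and its integration with other tools are not reproduced.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every length-k substring (k-mer) across all reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def flag_likely_errors(reads, k=5, min_count=2):
    """k-mers seen fewer than min_count times are flagged as likely errors.

    Rationale: with enough coverage, true genomic k-mers recur across reads,
    while k-mers created by a random sequencing error tend to be rare.
    """
    counts = kmer_counts(reads, k)
    return {kmer for kmer, c in counts.items() if c < min_count}

reads = [
    "ACGTACGTGGA",
    "ACGTACGTGGT",
    "CGTACGTGGTC",
    "ACGTACCTGGT",   # single-base error: C instead of G at position 6
]
print(sorted(flag_likely_errors(reads)))
```
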
  • 16. Jeng, Tian-Tzer Some contributions to asymptotic theory on hypothesis testing when the model is misspecified

    Doctor of Philosophy, The Ohio State University, 1987, Graduate School

    Committee: Not Provided (Other) Subjects: Statistics
  • 17. Taneja, Atrayee New approaches to testing a composite null hypothesis for the two sample binomial problem

    Doctor of Philosophy, The Ohio State University, 1986, Graduate School

    Committee: Not Provided (Other) Subjects: Statistics
  • 18. Costello, Patricia A new technique for testing nonparametric composite null hypotheses

    Doctor of Philosophy, The Ohio State University, 1983, Graduate School

    Committee: Not Provided (Other) Subjects: Statistics
  • 19. Kern, Leslie The effect of data error in inducing confirmatory inference strategies in scientific hypothesis testing

    Doctor of Philosophy, The Ohio State University, 1982, Graduate School

    Committee: Not Provided (Other) Subjects: Psychology
  • 20. Teoh, Kok Contributions to the asymptotic theory of estimation and hypothesis testing when the model is incorrect.

    Doctor of Philosophy, The Ohio State University, 1981, Graduate School

    Committee: Not Provided (Other) Subjects: Statistics