
Search Results

(Total results 19)

Search Report

  • 1. Xu, Peixin Model-based Multiple Imputation by Chained-equations for Multilevel Data below the Limit of Detection

    PhD, University of Cincinnati, 2022, Medicine: Biostatistics (Environmental Health)

    Missing data are a problem in many scientific studies and have been addressed by many statisticians over the past two decades. In particular, exposure assessment in epidemiological research can be difficult when concentrations in biological samples are low, resulting in left-censored missingness due to the limit of detection; this applies to both cross-sectional and longitudinal studies. Recently, a growing body of literature has emphasized the importance of including analysis-model information in imputation, which is referred to as model-based imputation. In an epidemiological study, for example, the analysis-model information can include interaction terms between exposures and time or gender. The author proposed a model-based multiple imputation algorithm by chained equations for multilevel data subject to limits of detection. The method accommodates (1) the detection limits, (2) the multilevel data structure, and (3) complicated analysis-model information, and can be used both for multilevel regression models with a level-1 outcome and for multiple informant models with a level-2 outcome. To the best of the author's knowledge, no existing work has simultaneously addressed all three aspects in one algorithm. Simulation studies showed that the proposed algorithm outperformed traditional imputation methods with multilevel regression models and remained competitive with multiple informant models. The author further applied the method to imputation of the HOME Study data. The HOME Study collects longitudinal environmental chemical exposures and investigates their impact on a cohort of pregnant women in Cincinnati, Ohio; specifically, the author studied the association between urinary OPE concentrations and children's reading ability. The results show that OPE concentrations are inversely associated with the Reading Composite score at age 8 without covariate adjustment.

    Committee: Changchun Xie Ph.D. (Committee Member); Aimin Chen Ph.D. (Committee Member); Nanhua Zhang Ph.D. (Committee Member); Marepalli Rao Ph.D. (Committee Member); Todd Jenkins Ph.D. (Committee Member) Subjects: Biostatistics
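The chained-equations imputation for values below a limit of detection that this abstract describes can be sketched in a simplified, single-level form. This is an illustrative sketch only, not the author's multilevel algorithm: variable names are hypothetical, the multilevel structure and analysis-model terms are omitted, and left-censored values are drawn from a normal conditional model truncated at the LOD.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: exposure x with values below the LOD unobserved, outcome y.
n = 500
x = rng.normal(2.0, 1.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.5, n)
lod = 1.5
censored = x < lod
x_obs = np.where(censored, np.nan, x)

def impute_below_lod(x_obs, y, censored, lod, n_iter=20, rng=rng):
    """One chained-equations-style pass: repeatedly regress x on y using the
    current completed data, then redraw the censored x values from the fitted
    conditional normal truncated above at the LOD."""
    x_imp = np.where(censored, lod / 2.0, x_obs)  # crude starting fill
    X = np.column_stack([np.ones_like(y), y])
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X, x_imp, rcond=None)
        sigma = (x_imp - X @ beta).std(ddof=2)
        mu = (X @ beta)[censored]
        b = (lod - mu) / sigma                    # standardized upper bound
        x_imp[censored] = stats.truncnorm.rvs(-np.inf, b, loc=mu, scale=sigma,
                                              random_state=rng)
    return x_imp

x_completed = impute_below_lod(x_obs, y, censored, lod)
```

In a full MICE run this truncated draw would be one step in a cycle over all incomplete variables, repeated to produce several imputed datasets whose analysis results are then pooled.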
  • 2. Austin, Bryan Need for Cognition in Resident Assistants

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2021, Higher Education Administration

    My study addresses how experiences within the role of a resident assistant (RA) may be related to need for cognition, a measure describing an individual's capacity and desire to engage in effortful thinking. To investigate this relationship, I used a pretest-posttest survey methodology, which allowed me to determine whether RAs experienced a significant change in their need for cognition after one year in the RA role and which variables within the RA role were related to need for cognition. Several factors within the RA experience were correlated with need for cognition: ACT score, gender, positive diversity experiences, the changing of a significant belief as a result of working as an RA, and weekly hours worked as an RA. Of these factors, only ACT score was predictive of need for cognition. The results of my study confirm that ACT score is a significant predictor of need for cognition, and suggest that RAs' experiences with diversity may play a significant role in the development of their need for cognition.

    Committee: Ellen Broido D.Ed. (Advisor); Patrick Pauken Ph.D., J.D. (Other); Maureen Wilson Ph.D. (Committee Member); Hyeyoung Bang Ph.D. (Committee Member); Rachel Vannatta Ph.D. (Committee Member) Subjects: Higher Education; Higher Education Administration
  • 3. Kosler, Joseph Multiple comparisons using multiple imputation under a two-way mixed effects interaction model

    Doctor of Philosophy, The Ohio State University, 2006, Statistics

    Missing data are commonplace in both surveys and experiments. For this dissertation, we consider imputation methods founded in survey sampling and assess their performance with experimental data. Under a two-way interaction model, missing data render multiple comparisons procedures invalid; we seek a resolution to this problem through the development of a multiple imputation procedure. By completing an incomplete data set, we obtain a balanced data set for which multiple comparisons of treatment effects may be performed. Our procedure is RMNI: Repeated Measures Normal Imputation. The procedure is readily adapted to any hierarchical linear model.

    Committee: Elizabeth Stasny (Advisor) Subjects: Statistics
  • 4. Almomani, Ayat Automatic variance adjusted Bayesian inference with pseudo likelihood under unequal probability sampling: imputation and data synthetic

    PhD, University of Cincinnati, 2021, Arts and Sciences: Mathematical Sciences

    Pseudo posterior inference using sampling weights results in variance underestimation, so an uncertainty adjustment is needed to obtain unbiased variance estimation. In the survey sampling context, two previous works proposed post-processing adjustment methods; both assume simple structures for the population distributions and do not address missing values. We propose new adjusted weights for the pseudo likelihood function, which lead to a consistent estimator with the correct asymptotic variance given by the inverse of the Godambe information matrix, and we prove the asymptotic properties of the proposed method. Based on this approach, a useful Bayesian imputation model is formulated using a truncated DP mixture Gaussian model to flexibly capture the complex shape of survey samples collected by informative sampling. Unlike other pseudo posterior inferences with survey samples that rely on the original sampling weights, our approach automatically adjusts for the impact of using the pseudo likelihood and leads to unbiased variance estimation. To the best of our knowledge, this is the first work to adjust the uncertainty of pseudo posterior inference under missing data. We extend the work to generate fully synthetic data using the proposed model. Simulation and application studies generate MCMC samples with the correct variance, so no post-processing is needed for uncertainty adjustment.

    Committee: Won Chang Ph.D. (Committee Chair); Xia Wang Ph.D. (Committee Member); Emily Kang Ph.D. (Committee Member); Hang Joon Kim Ph.D. (Committee Chair) Subjects: Statistics
  • 5. Zang, Huaiyu A Bayesian Nonparametric Approach for Causal Inference with Missing Covariates

    PhD, University of Cincinnati, 2020, Arts and Sciences: Mathematical Sciences

    Missing covariates are common in observational studies, and handling them inappropriately can affect causal effect estimation. Causal analysis on complete-case records can be inefficient due to the loss in sample size and can yield biased causal estimates. Beyond missing data problems, the complexity of the data structure makes causal inference more difficult: real data distributions can be very complex, and a standard parametric model lacks the flexibility to capture them. To address these problems, we introduce a Bayesian nonparametric causal model that estimates causal effects with missing covariates, simultaneously imputing missing values and estimating causal effects under a potential outcomes framework. We compare the performance of our method to complete-case analyses and two-step approaches (sequential-chain imputation followed by off-the-shelf causal inference methods) via repeated sampling simulations. Our simulation results show that our method produces accurate average treatment effect estimates as well as good imputation performance that preserves the joint distribution of complicated data. The simulation studies also confirm that a poor imputation model can negatively impact causal estimation, showing that one must choose a good imputation model and check imputation performance for correct causal inference. The proposed method is also applied to Juvenile Idiopathic Arthritis data extracted from electronic medical records, comparing the effectiveness of early aggressive use of biological medication in treating children.

    Committee: Hang Joon Kim Ph.D. (Committee Chair); Bin Huang Ph.D. (Committee Member); Siva Sivaganesan Ph.D. (Committee Member); Xia Wang Ph.D. (Committee Member); Nanhua Zhang Ph.D. (Committee Member) Subjects: Statistics
  • 6. Bailey, Brittney Data analysis and multiple imputation for two-level nested designs

    Doctor of Philosophy, The Ohio State University, 2018, Biostatistics

    This work examined methods to account for the clustering that occurs in nested clinical trial designs, which, if ignored when analyzing or imputing data, can result in invalid inference and misleading conclusions. We explored two distinct areas of research depending on whether the design is partially nested or fully nested. For partially nested designs, where subjects are nested within clusters in at least one study condition while subjects in another condition remain independent, we conducted a simulation study to identify the best method of analysis for binary outcomes. We compared four logistic regression models: standard logistic regression, logistic regression with generalized estimating equations, and mixed effects logistic regression with either random intercepts (LRI) or random slopes (LRC). For the models with random effects, we additionally considered three estimation methods: penalized quasi-likelihood, Laplace approximation, and adaptive Gaussian quadrature (AGQ). We showed that for partially nested designs with at least ten clusters and at least ten subjects per cluster, the LRC model estimated by AGQ produced the least biased estimates of both the intervention effect and the intracluster correlation coefficient and best maintained the type I error rate. For fully nested designs, we explored methods for handling missing continuous outcomes in cluster randomized trials, the most common fully nested trial design. Random effects regression imputation has been the recommended approach to multiple imputation (MI) in cluster randomized trials, but we proposed three new semiparametric multiple imputation procedures that are more robust to misspecification of the imputation model. The new methods combine two predictive mean matching (PMM) models, one that ignores clustering and one that uses fixed effects for clusters. In the parametric setting, ignoring clustering in the imputation model results in underestimation of the MI variance (open full item for complete abstract)

    Committee: Abigail Shoben PhD (Advisor); Rebecca Andridge PhD (Advisor); Erinn Hade PhD (Committee Member); Michael Pennell PhD (Committee Member) Subjects: Biostatistics; Statistics
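Predictive mean matching, which this abstract combines in two variants, is easy to sketch in its generic single-model form. The helper below is a hypothetical illustration, not the dissertation's procedure (which blends a clustering-ignorant and a fixed-effects PMM model):

```python
import numpy as np

rng = np.random.default_rng(1)

def pmm_impute(y, X, k=5, rng=rng):
    """Generic predictive mean matching: fit y ~ X on complete cases, then for
    each missing y pick a donor among the k observed cases whose predicted
    values are closest, and copy that donor's observed y."""
    miss = np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd[~miss], y[~miss], rcond=None)
    pred = Xd @ beta
    y_imp = y.copy()
    for i in np.flatnonzero(miss):
        d = np.abs(pred[~miss] - pred[i])
        donors = np.argsort(d)[:k]                # k nearest observed cases
        y_imp[i] = y[~miss][rng.choice(donors)]   # copy a donor's real value
    return y_imp

# Toy data: continuous outcome missing for roughly 30% of cases.
n = 200
X = rng.normal(size=n)
y = 2.0 + X + rng.normal(0.0, 1.0, n)
y[rng.random(n) < 0.3] = np.nan
y_done = pmm_impute(y, X)
```

Because donors are always observed values, PMM never imputes outside the observed range, which is part of what makes it more robust to misspecification of the imputation model than fully parametric draws.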
  • 7. Dogucu, Mine Properties of Partially Convergent Models and Effect of Re-Imputation on These Properties

    Doctor of Philosophy, The Ohio State University, 2017, Educational Studies

    When researchers fit models to multiply imputed datasets, they must fit the model separately to each imputed dataset, yielding multiple sets of model results. It is possible for some of these sets of results to converge while others do not. This study examined the occurrence of this problem, partial convergence, and inspected four outcomes of partially convergent models: proportion of convergence, percent parameter bias, root mean square error, and confidence interval coverage rate.

    Committee: Richard Lomax (Advisor) Subjects: Educational Tests and Measurements; Quantitative Psychology; Statistics
  • 8. Uzdavines, Alex Stressful Events and Religious Identities: Investigating the Risk of Radical Accommodation

    Master of Arts, Case Western Reserve University, Psychology

    At some point in their lives, everyone experiences a stressful life event. Usually, a person can cope with and make meaning from the event. However, the body of research investigating the impact of severe and/or chronic exposure to stressful life events on the brain shows that the harmful effects of stress exposure accumulate. Considering the extant literature on religious meaning making in light of these findings, along with the robust literature on spiritual transformation following stressful life events, I developed three hypotheses: (1) stressful life events increase the risk of (non)religious ID change, (2) earlier events continue to impact later ID changes, and (3) the risk of ID change is similar across change groups. This study analyzed a nationally representative longitudinal dataset of US children born between 1980 and 1984 (N = 8,984). The final analyses used multiple imputation to account for missing data and did not find evidence supporting the hypotheses.

    Committee: Julie Exline Ph.D. (Committee Chair); Heath Demaree Ph.D. (Committee Member); Arin Connell Ph.D. (Committee Member) Subjects: Health; Mental Health; Psychology; Religion; Spirituality
  • 9. Kline, David Systematically Missing Subject-Level Data in Longitudinal Research Synthesis

    Doctor of Philosophy, The Ohio State University, 2015, Biostatistics

    When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies are longitudinal, missing data can occur at either the observation level (time-varying) or the subject level (non-time-varying). Traditionally, missing data methods for longitudinal data have focused on missing observation-level variables. In this dissertation, we focus on missing subject-level variables, for which few methods have been developed or compared. We compare two multiple imputation approaches that have been proposed for missing subject-level data in single longitudinal studies: a joint modeling approach and a sequential conditional modeling approach. Based on analytical and empirical results for the case when all variables are normally distributed, we find the joint modeling approach preferable to the sequential conditional approach except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Based on this preference, we develop a new joint model for multiple imputation of missing subject-level variables that models subject- and observation-level variables with distributions in the exponential family. Our model is built within the generalized linear models framework and uses normally distributed latent variables to account for dependence at both the subject and observation levels. When compared via simulation, the performance of our model is similar to or better than existing approaches for imputing missing subject-level variables with normal, Bernoulli, Poisson, and multinomial distributions. We illustrate our method by applying it to combine two longitudinal studies on (open full item for complete abstract)

    Committee: Rebecca Andridge (Advisor); Eloise Kaizar (Advisor); Bo Lu (Committee Member) Subjects: Biostatistics; Statistics
  • 10. Deryol, Rustu Lifestyle, Self-Control, and School-Based Violent Victimization in Turkey

    PhD, University of Cincinnati, 2015, Education, Criminal Justice, and Human Services: Criminal Justice

    The present study aims to fill the gap in the Turkish school-based violent victimization literature by exploring the predictors of general, crime-specific, and gender-specific violent victimization. Research hypotheses are derived from the framework of lifestyle-routine activities and self-control theories and are tested using data from the National High School Offending and Victimization Survey in Turkey. This survey was created and conducted on the initiative of Dr. Osman Dolu, who teaches and conducts research at the Police Academy in Ankara, Turkey. For this dissertation, the data on school victimization are based on a sub-sample from this national survey comprising 1,204 students from 15 Mersin high schools. Binary logistic regression models were estimated using datasets with missing data and datasets with missing values imputed. Results indicate that the propositions of lifestyle-routine activities theory were generally supported. In particular, self-mutilation (as a measure of delinquent lifestyle) was a robust predictor of both general and crime-specific violent victimization. Similarly, friends' delinquency, measured by gang membership and the number of gang friends, also affected victimization in several models. School-related opportunity measures were also often significant; in particular, school responsiveness to student misconduct, school control of weapons, and unsupervised areas significantly predicted school-based victimization in a number of models. However, gender-specific analyses supported the notion of "gendered opportunity," that the predictors of opportunity for victimization vary somewhat between males and females. Findings of this study also support the results of much U.S.-based empirical literature on the effect of low self-control on school-based violent victimization, in that it was a significant predictor of victimization in many models, though its effect was mediated and/or moderated in a number of instances. A (open full item for complete abstract)

    Committee: Pamela Wilcox Ph.D. (Committee Chair); Osman Dolu Ph.D. (Committee Member); Francis Cullen Ph.D. (Committee Member); John Wooldredge Ph.D. (Committee Member) Subjects: Criminology
  • 11. Xi, Wenna Comparing the Statistical Power of Analysis of Covariance after Multiple Imputation and the Mixed Model in Testing the Treatment Effect for Pre-post Studies with Loss to Follow-up

    Master of Science, The Ohio State University, 2014, Biostatistics

    Pre-post studies, where outcomes are measured both before and after an intervention, are common in biomedical research. When outcomes at both pre- and post-test are completely observed, previous studies have shown that analysis of covariance (ANCOVA) is more powerful than change score analysis in testing the treatment effect, and ANCOVA is therefore usually recommended for analyzing pre-post studies. However, methods for analyzing pre-post studies with missing outcome values have not been compared. The goal of this study was to compare the power of two analysis methods in testing for a treatment effect when post-test values are missing: ANCOVA after multiple imputation (MI) and the mixed model. To do so, we analyzed data from a real study, the BePHIT study, and performed simulation studies. Four analysis methods were used to analyze the BePHIT and simulated data: ANCOVA after MI, ANCOVA using only complete cases (CC), the mixed model using all available data, and the mixed model using complete cases. Simulation studies were conducted under various sample sizes, missingness rates, and missingness scenarios. In the analysis of the BePHIT data, ANCOVA after MI produced the smallest p-value for the test of a treatment effect. In the simulation studies, however, CC ANCOVA was generally the most powerful method. The simulations also showed that the power of ANCOVA after MI dropped fastest as the percentage of missingness increased and, in most scenarios, was the least powerful method when 50% of the post-test outcomes were missing.

    Committee: Michael Pennell (Advisor); Rebecca Andridge (Committee Member) Subjects: Biostatistics
  • 12. Jiang, Hui Missing Data Treatments in Multilevel Latent Growth Model: A Monte Carlo Simulation Study

    Doctor of Philosophy, The Ohio State University, 2014, EDU Policy and Leadership

    Under the framework of structural equation modeling (SEM), longitudinal data can be analyzed using latent growth models (LGMs). An extension of the simple LGM is the multilevel latent growth model, which can be used to fit clustered data. The purpose of this study is to investigate the performance of five missing data treatments (MDTs) for handling missingness due to longitudinal attrition in a multilevel LGM: (1) listwise deletion (LD), (2) full information maximum likelihood (FIML), (3) EM imputation, (4) multiple imputation based on regression (MI-Reg), and (5) MI based on predictive mean matching (MI-PMM). A Monte Carlo simulation study was conducted to explore the research questions. First, population parameter values for the model were estimated from a nationally representative sample of elementary school students. Datasets were then simulated based on a two-level LGM with different growth trajectories (constant, decelerating, accelerating) and at varying sample sizes (200, 500, 2,000, 10,000). After the datasets were generated, a designated proportion of data points (5%, 10%, 20%) was deleted under different missingness mechanisms (MAR, MNAR), and the five missing data treatments were applied. Finally, the parameter estimates produced by each treatment were compared to the true population parameter values and to each other according to four evaluation criteria: parameter estimate bias, root mean square error, length of the 95% confidence interval (CI), and coverage rate of the 95% CI. Among the five MDTs studied, FIML is the only one that yields a satisfactory bias level and coverage rate for all parameters across all sample sizes, attrition rates, and growth trajectories under MAR. It is also the only MDT that consistently outperforms the conventional treatment, LD, in every respect, especially as the missingness rate increases. Under MNAR, however, estimates of the predictor effects on slopes become biased and coverage for those two parameters (open full item for complete abstract)

    Committee: Richard Lomax (Advisor); Paul Gugiu (Committee Member); Eloise Kaizar (Committee Member) Subjects: Education; Statistics
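The four evaluation criteria this simulation uses (bias, RMSE, CI length, CI coverage), which several other studies in this list share, are mechanical to compute from replicated estimates. A minimal sketch for a single parameter follows; the function name and toy numbers are illustrative, not from the dissertation:

```python
import numpy as np

def evaluate(estimates, ci_low, ci_high, truth):
    """Monte Carlo criteria for one parameter: bias, root mean square error,
    mean 95% CI length, and 95% CI coverage rate across replications."""
    est = np.asarray(estimates)
    lo, hi = np.asarray(ci_low), np.asarray(ci_high)
    bias = est.mean() - truth
    rmse = np.sqrt(np.mean((est - truth) ** 2))
    length = np.mean(hi - lo)
    coverage = np.mean((lo <= truth) & (truth <= hi))
    return bias, rmse, length, coverage

# Three toy replications of an estimator whose true parameter value is 1.0.
b, r, l, c = evaluate([1.1, 0.9, 1.0], [0.6, 0.4, 0.5], [1.6, 1.4, 1.5], truth=1.0)
# bias ~ 0, RMSE ~ 0.082, mean CI length ~ 1.0, coverage 1.0
```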
  • 13. Parsons, Michael Planned Missing Data Designs in Communication Research

    Doctor of Philosophy (PhD), Ohio University, 2013, Communication Studies (Communication)

    Prominent among the many methodological challenges facing communication research are the relative lack of longitudinal research conducted in the discipline and the threats to validity that arise from the complex instrumentation necessary for inquiry into human interaction. This dissertation presents planned missing data designs (PMDs) as solutions to these challenges, because PMDs can make research less burdensome, cheaper, faster, and more valid. Three studies illustrate the use of PMDs in communication research. Study one was a controlled-enrollment PMD investigation of the relationship between students' public speaking anxiety and communication competence in a semester-long longitudinal study. By using the controlled-enrollment design, the study had five measurement waves, but each participant was measured at no more than three of them. Results indicated that the controlled-enrollment design was effective at minimizing participant loss due to attrition and at reducing the risk of testing effects from repeated measurements. Study two was an efficiency-type PMD replication of Infante and Wigley's (1986) verbal aggressiveness scale validation study, in which each participant was presented with only 95 items from the 147-item survey instrument. Through the use of an efficiency design, this study replicated the results of the original study with a dramatically reduced time burden on the participants, indicating that efficiency-type PMDs are an effective tool for scale shortening. Study three was an accelerated longitudinal PMD replication of Rubin, Graham, and Mignerey's (1990) longitudinal communication competence study, which measured change in students' communication competence over the course of a college career. Through the use of an accelerated longitudinal PMD, data collection was completed in just over one calendar year, far shorter than the three years the original study took to collect data. A flaw in participant retention proced (open full item for complete abstract)

    Committee: Amy Chadwick PhD (Advisor) Subjects: Communication
  • 14. McQuiston, James Social capital in the production gap: social networking services and their transformative role in civic engagement

    PHD, Kent State University, 2013, College of Arts and Sciences / Department of Political Science

    Social networking services are used by a large segment of society; Facebook claims that 1 billion users are active on its website. The potential role of social networking in civic engagement is substantial, and this dissertation expands upon previous research in its examination of the relationship between social networking use and civic engagement. Prior research into the effect of social networking services on social capital creation is limited in terms of generalizability and predictive power. The dissertation explores the determinants of social networking service use, the impact that social networking services have on the creation of social capital, and how social networking website use modifies a respondent's level of generalized trust and political efficacy. The sample includes 2,303 respondents from the Social Side of the Internet Survey, conducted in November and December of 2010. The dissertation uses these data to examine social networking intensity as a hypothesized determinant of indirect and direct forms of social capital. Models explore the decision to use the internet, to use social networking services (SNS), and to join traditional groups, evaluating the hypothesis that SNS usage creates social capital through a different pathway than online or physical interactions. Results provide early support for this hypothesis, as the factors influencing the decision to use social networking are separate from those modifying online or group activity. The explanatory power of social networking intensity is compared to demographic and group-centered conceptions of social capital generation. The data support the conception that SNS intensity is a significant determinant of external political efficacy and social capital, but are unable to identify a relationship between social networking intensity and generalized trust. By examining the role that social networking services play alongside factors such as age, education, inter (open full item for complete abstract)

    Committee: Ryan Claassen PHD (Committee Chair); Renée Johnson PHD (Committee Member); Aroon Manoharan PHD (Committee Member); Richard Serpe PHD (Committee Member); Kevin Floyd PHD (Other) Subjects: Political Science
  • 15. Bodine, Andrew The Effect of Item Parameter Uncertainty on Test Reliability

    Master of Arts, The Ohio State University, 2012, Psychology

    Item response theory (IRT) is a popular method for determining item characteristics (item calibration) and estimating the latent trait levels of test-takers (test scoring). In common scoring methods, item parameters are estimated in the calibration step and then treated as fixed, known, errorless values in the scoring step, where latent trait levels are estimated. This treatment of the item parameters fails to incorporate their estimation error, leading to underestimation of the standard errors of the latent trait estimates. This study discusses the ramifications of current scoring methods for test reliability and explores two methods capable of incorporating this missing uncertainty by propagating the standard errors of the item parameter estimates: Markov chain Monte Carlo (MCMC) and multiple imputation (MI).

    Committee: Michael Edwards PhD (Advisor); Thomas Nygren PhD (Committee Member); Duane Wegener PhD (Committee Member) Subjects: Psychology; Statistics
  • 16. Carmack, Tara A Comparison of Last Observation Carried Forward and Multiple Imputation in a Longitudinal Clinical Trial

    Master of Science, The Ohio State University, 2012, Public Health

    In randomized clinical trials, the presence of missing data makes it harder to determine the actual treatment effect of the study. Missingness is particularly problematic in longitudinal studies when patients followed over time withdraw from the study. Although it is important to anticipate and attempt to prevent these drop-outs in the study design, a significant amount of missingness is still likely in the final data, so statistical methods are needed that effectively analyze data containing missing values and produce unbiased results. This study compares several methods for handling missing data in longitudinal trials. The focus is on the single imputation method of last observation carried forward, compared against complete case analysis, multiple imputation, and two additional versions of multiple imputation in which everyone was imputed as if they were in the control group (placebo imputation). We simulated a randomized controlled trial with a treatment group, a placebo group, and two time points, then introduced missingness in the follow-up time point. We considered three mechanisms for the missing data: missing completely at random (MCAR), missing at random (MAR), and not missing at random (NMAR). The results indicated that in all situations, last observation carried forward produced extremely biased estimates of the treatment effect. Both placebo imputations produced similarly biased estimates. Complete case analysis was valid only when the data were MCAR. Traditional multiple imputation using regression performed best of all the methods.

    Committee: Rebecca Andridge PhD (Advisor); Abigail Shoben PhD (Committee Member) Subjects: Biostatistics
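The last-observation-carried-forward scheme whose bias this study documents is mechanically trivial; with only two time points it reduces to copying the baseline forward. A toy sketch with hypothetical numbers:

```python
import numpy as np

# Two-visit trial: LOCF fills a missing follow-up with the last observed
# value, which here is the baseline measurement.
baseline  = np.array([10.0, 12.0, 9.0, 11.0])
follow_up = np.array([ 8.0, np.nan, 7.5, np.nan])
locf = np.where(np.isnan(follow_up), baseline, follow_up)
# locf -> [8.0, 12.0, 7.5, 11.0]
```

Carrying the baseline forward assumes no change after dropout, which is exactly why LOCF distorts the estimated treatment effect when dropout is related to treatment or outcome.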
  • 17. Kwon, Hyukje A Monte Carlo Study of Missing Data Treatments for an Incomplete Level-2 Variable in Hierarchical Linear Models

    Doctor of Philosophy, The Ohio State University, 2011, EDU Policy and Leadership

    This study was designed to evaluate the performance of missing data treatments with restrictive and inclusive strategies in a two-level hierarchical linear model with missing at random (MAR) missingness in terms of bias, Root Mean Square Error (RMSE), and width and coverage rate of confidence interval. The missing data treatments included in this study were listwise deletion, mean substitution, restrictive and inclusive EM, restrictive and inclusive multiple imputation (MI). The number of level-2 predictors, proportion of missingness (PM) and sample size (N) were manipulated as study factors. The number of level-2 predictors and sample size appeared not to have a distinct impact on the performance of missing data treatments for level-2 missing data in terms of bias. However, the proportion of missing data significantly tends to affect the performance of missing data treatments with large effect so that with larger proportion of missingness, the relative bias difference among missing data treatments tends to increase in most fixed effects and some random effects. Inclusive MI and listwise deletion generally outperformed the other missing data treatments producing practically acceptable bias in most fixed effects that are highly related to missingness. Restrictive EM and inclusive EM also performed well with some exceptions with large proportion of missingness (PM=30%). Restrictive MI and mean substitution produced unacceptable bias even with smaller proportions of missingness (PM=5% or 15%). For random effects, every missing data treatment was effective except for the non-significant Tau11. Listwise deletion tends to provide the largest RMSE on both fixed and random effects. The relative difference in the RMSE between listwise deletion and the other missing data treatments was substantially large with large proportion of missingness (PM=30%) and smaller sample sizes (N<80 or 40). 
Furthermore, listwise deletion provided the largest confidence intervals for both (open full item for complete abstract)
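    The treatments compared in the abstract above can be illustrated on a toy dataset. The following is a minimal sketch, not the study's actual simulation: it assumes a single predictor with MAR missingness in the outcome and compares listwise deletion, mean substitution, and a simple stochastic-regression multiple imputation pooled by averaging estimates (Rubin's rule for the point estimate). All variable names and the data-generating model are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: x predicts y; missingness in y depends only on observed x (MAR).
    n = 500
    x = rng.normal(size=n)
    y = 2.0 + 1.5 * x + rng.normal(size=n)
    miss = rng.random(n) < 1 / (1 + np.exp(-(x - 1)))  # higher x -> more missing
    y_obs = np.where(miss, np.nan, y)

    def slope(x, y):
        """OLS slope of y on x, ignoring NaN cases."""
        m = ~np.isnan(y)
        return np.polyfit(x[m], y[m], 1)[0]

    # Listwise deletion: drop incomplete cases.
    # (With MAR depending only on x, the complete-case slope stays unbiased here.)
    b_listwise = slope(x, y_obs)

    # Mean substitution: fill NaNs with the observed mean (attenuates the slope).
    y_mean = np.where(np.isnan(y_obs), np.nanmean(y_obs), y_obs)
    b_mean = slope(x, y_mean)

    # Simple stochastic-regression multiple imputation with m = 20 draws:
    # impute from the fitted regression plus a random residual, then pool.
    obs = ~np.isnan(y_obs)
    b1, b0 = np.polyfit(x[obs], y_obs[obs], 1)
    resid_sd = np.std(y_obs[obs] - (b0 + b1 * x[obs]))
    slopes = []
    for _ in range(20):
        y_imp = y_obs.copy()
        nan = np.isnan(y_imp)
        y_imp[nan] = b0 + b1 * x[nan] + rng.normal(scale=resid_sd, size=nan.sum())
        slopes.append(slope(x, y_imp))
    b_mi = float(np.mean(slopes))  # pooled point estimate across imputations
    ```

    On this toy setup, mean substitution visibly attenuates the slope toward zero, while listwise deletion and MI stay near the true value of 1.5, mirroring the abstract's finding that mean substitution produces unacceptable bias. A full MI analysis would also pool variances across imputations to widen standard errors appropriately.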

    Committee: Richard Lomax PhD (Advisor); Ann O'Connell EdD (Committee Member); Dorinda Gallant PhD (Committee Member) Subjects: Educational Tests and Measurements
  • 18. Merkle, Edgar Bayesian estimation of factor analysis models with incomplete data

    Doctor of Philosophy, The Ohio State University, 2005, Psychology

    Missing data are problematic for many statistical analyses, factor analysis included. Because factor analysis is widely used by applied social scientists, it is of interest to develop accurate, general-purpose methods for handling missing data in factor analysis. While a number of such missing data methods have been proposed, each individual method has its weaknesses. For example, difficulty in obtaining test statistics of overall model fit and reliance on asymptotic results for standard errors of parameter estimates are two weaknesses of previously proposed methods. As an alternative to other general-purpose missing data methods, I develop Bayesian missing data methods specific to factor analysis. Novel to the social sciences, these Bayesian methods resolve many of the other missing data methods' weaknesses and yield accurate results in a variety of contexts. This dissertation details Bayesian factor analysis, the proposed Bayesian missing data methods, and the computation required for these methods. Data examples are also provided.

    Committee: Trisha Van Zandt (Advisor) Subjects:
  • 19. Giovannone, Carrie A Longitudinal Study of School Practices and Students' Characteristics that Influence Students' Mathematics and Reading Performance of Arizona Charter Middle Schools

    PHD, Kent State University, 2010, College of Education, Health and Human Services / School of Foundations, Leadership and Administration

    In 1995, Arizona legislators passed laws specifically to implement charter schools in Arizona. Approving 15-year charters (i.e., contracts), allowing charter schools to cross school district boundaries, and requiring charter schools to administer the state assessment are three of the many charter school statutes written into law. In 2010, those initial charters began expiring. Researchers nationwide have reported great variation in school practices among charter schools, and Arizona is no exception: some Arizona charter schools are labeled as excelling in performance while others are labeled as underperforming. There is limited research on Arizona charter schools and the variance that exists among them. This dissertation had two purposes. The first was to analyze the influence of school practices and student characteristics on charter middle school students' mathematics and reading performance at grade 8. The second was to assess those influences over time (i.e., 2007-2009). The assessment used was Arizona's Instrument to Measure Standards (AIMS), which remained stable during this span of time. Multiple imputation was performed for missing data. Hierarchical linear cross-classified random effects modeling (HLM/CCREM) was used to analyze the data while taking student mobility into consideration. The results showed that teacher experience influenced lower-achieving students, that teachers teaching outside their area of expertise had a negative effect on students' mathematics and reading achievement, and that attending a charter school converted from a traditional public school conferred an advantage in Arizona. An indicator of whether a student attended a charter school in 2006 (prior to the time period of this study) was added to the model and showed that students scored higher in mathematics and reading if they had also attended a charter school in 2006. 
All factors assessed (open full item for complete abstract)

    Committee: Rafa Kasim PhD (Committee Chair); Tricia Niesz PhD (Committee Member); Kathryn Wilson PhD (Committee Member) Subjects: Education