Search Results

(Total results 5)

  • 1. Ron, Tom Bringing Situational Judgement Tests to the 21st Century: Scoring of Situational Judgement Tests Using Item Response Theory

    Doctor of Philosophy (Ph.D.), Bowling Green State University, 2019, Psychology/Industrial-Organizational

    Situational judgement tests (SJTs) have become popular selection instruments over the last three decades due to their predictive validity, small subgroup differences, and high face validity. However, although SJTs have made significant progress over the last century, a construct problem remains: it is unclear whether SJTs represent a construct or a measurement method. In addition, almost in parallel with the advancement of SJTs, a new theory of scoring and testing has been developed: item response theory (IRT). IRT offers researchers and practitioners flexible models that fit various types of data and can be used to score tests and questionnaires and to learn about their psychometric qualities. Some IRT models also offer a unique method for scoring multidimensional tests, which assess more than one construct. This study applies different IRT models to a leadership SJT in order to answer two main questions: first, is the SJT a construct or a measurement method? And second, can IRT-based scoring improve validity and reduce subgroup differences relative to classical scoring approaches? These questions were tested on three samples of Israeli soldiers who completed a leadership SJT as part of the selection process for officers' training school. The results suggest that the picture is more complicated than originally thought: IRT offers value over classical test theory (CTT) in some samples, whereas CTT performs better in others. With regard to the construct vs. measurement method debate, multidimensional IRT models better fit the SJT used in this study, evidence favoring the view of SJTs as a measurement method. Future research directions and limitations are discussed at the end of the manuscript.

    Committee: Dr. Michael Zickar (Advisor); Dr. Hyeyoung Bang (Other); Dr. Joshua Grubbs (Committee Member); Dr. Samuel McAbee (Committee Member) Subjects: Psychological Tests; Psychology
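
    The abstract contrasts classical sum scoring with IRT-based scoring. The sketch below is not from the dissertation (item parameters and responses are made up); it only shows the core difference for a dichotomously scored test: a CTT score is a plain sum, while a 2PL expected a posteriori (EAP) score weights response patterns by item discrimination and difficulty under a standard-normal prior.

    ```python
    # Minimal sketch: CTT sum scoring vs. 2PL IRT EAP scoring.
    # All parameters and data below are hypothetical.
    import numpy as np

    def ctt_score(responses):
        """Classical test theory score: simple sum of 0/1 item responses."""
        return responses.sum(axis=1)

    def eap_score(responses, a, b, n_quad=61):
        """EAP estimate of theta under a 2PL model with a N(0,1) prior,
        computed by numerical quadrature over a grid of theta values."""
        theta = np.linspace(-4, 4, n_quad)              # quadrature grid
        prior = np.exp(-0.5 * theta**2)                 # N(0,1) kernel (constant cancels)
        # P(correct | theta) per item at each grid point: shape (n_quad, n_items)
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
        # Likelihood of each response pattern at each grid point: (n_persons, n_quad)
        like = np.prod(np.where(responses[:, None, :] == 1, p, 1 - p), axis=2)
        post = like * prior                             # unnormalized posterior
        return (post * theta).sum(axis=1) / post.sum(axis=1)

    # Hypothetical data: 3 examinees, 4 dichotomously scored SJT items
    resp = np.array([[1, 1, 0, 1],
                     [0, 0, 1, 0],
                     [1, 0, 1, 1]])
    a = np.array([1.2, 0.8, 1.5, 1.0])   # discriminations (assumed)
    b = np.array([-0.5, 0.0, 0.5, 1.0])  # difficulties (assumed)
    print("CTT sums:", ctt_score(resp))
    print("IRT EAPs:", np.round(eap_score(resp, a, b), 2))
    ```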
  • 2. Anderson, Hannah A Psychometric Investigation of a Mathematics Placement Test at a Science, Technology, Engineering, and Mathematics (STEM) Gifted Residential High School

    Ph.D., Kent State University, 2020, College of Education, Health and Human Services / School of Foundations, Leadership and Administration

    Educational institutions, at all levels, must justify their use of placement testing and confront questions about its impact on students' educational outcomes, to assure all stakeholders that students are enrolled in courses appropriate to their ability in order to maximize their chances of success (Linn, 1994; Mattern & Packman, 2009; McFate & Olmsted III, 1999; Norman, Medhanie, Harwell, Anderson, & Post, 2011; Wiggins, 1989). The aims of this research were to (1) provide evidence of Content Validity, (2) provide evidence of Construct Validity and Internal Consistency Reliability, (3) examine the item characteristics and potential bias of the items between males and females, and (4) provide evidence of Criterion-Related Validity by investigating the ability of the mathematics placement test scores to predict future performance in an initial mathematics course. Students' admissions portfolios and scores from the mathematics placement test were used to address these aims. Content Validity was evidenced through a card-sorting task completed by internal and external subject matter experts. Results from Multidimensional Scaling and Hierarchical Cluster Analysis revealed a congruence of approximately 63 percent between the two group configurations. Next, an Exploratory Factor Analysis was used to investigate the underlying factor structure of the mathematics placement test. Findings indicated a three-factor structure of PreCalculus, Geometry, and Algebra 1, with moderate correlations between factors. Third, an item analysis was conducted to explore the item parameters (i.e., item difficulty and item discrimination) and to test for gender bias. Results from the item analysis suggested that the Algebra 1 and Geometry items were generally easy for the population of interest, while the PreCalculus items presented more of a challenge. Furthermore, the mathematics placement test was optimized by removing eleven items from the Algebra 1 factor (open full item for complete abstract)

    Committee: Aryn Karpinski, Ph.D. (Committee Chair); Tricia Niesz, Ph.D. (Committee Member); Rajeev Rajaram, Ph.D. (Committee Member) Subjects: Gifted Education; Mathematics; Mathematics Education; Psychological Tests
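
    The item analysis described in the abstract's third aim rests on two classical statistics. A minimal sketch with simulated 0/1 data (nothing here comes from the study itself): item difficulty as the proportion of correct answers, and item discrimination as the corrected item-total correlation.

    ```python
    # Classical item analysis on fake dichotomous data: difficulty and
    # discrimination. Data are random, so discriminations will hover near
    # zero; with real test data, low values flag weak items.
    import numpy as np

    rng = np.random.default_rng(0)
    scores = rng.integers(0, 2, size=(200, 10))   # 200 students, 10 items (0/1)

    difficulty = scores.mean(axis=0)              # p-value per item; higher = easier
    total = scores.sum(axis=1)
    discrimination = np.array([
        # correlate each item with the total score excluding that item
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])
    print("difficulty:    ", np.round(difficulty, 2))
    print("discrimination:", np.round(discrimination, 2))
    ```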
  • 3. Wang, Shuo Joint Analysis of Social and Item Response Networks with Latent Space Models

    Master of Science, The Ohio State University, 2019, Statistics

    The adjustment of students to a school environment is fundamentally linked to the friendship networks they form with their peers. Consequently, a complete picture of a student's adjustment can only be obtained by taking into account both their friendship network and their reported perceptions of the school environment. However, there is a lack of flexible statistical models and methods that can jointly analyze a social network with an item-response data matrix. In this paper, we propose an extended latent space model for heterogeneous (multimodal) networks (LSMH) and its extension LSMH-I, which combine the framework of latent space modeling in network analysis with item response theory in psychometrics. Using LSMH, we summarize the information from the social network and the item responses in a person-item joint latent space. We use a Variational Bayesian Expectation-Maximization algorithm to estimate the item and person locations in the joint latent space. This methodology allows effective integration, informative visualization, and prediction of social networks and item responses. We apply the proposed methodology to data from 451 students in 16 third-grade classrooms: self-reported friendships and school liking collected as part of the Early Learning Ohio project. Through the person-item joint latent space, we are able to identify students with potential adjustment difficulties, and we found a consistent connection between students' friendship networks and their well-being. We believe that using LSMH, researchers will be able to easily identify students in need of intervention and revolutionize the understanding of social behaviors.

    Committee: Subhadeep Paul (Advisor); Paul De Boeck (Committee Member); Jessica Logan (Committee Member); Peter Craigmile (Committee Member) Subjects: Statistics
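
    The core mechanism in latent space network models of this kind (following Hoff, Raftery, & Handcock, 2002, the framework LSMH builds on) is that the probability of a tie falls off with distance in the latent space; placing persons and items in the same space is what lets friendships and item responses be modeled jointly. The toy sketch below is illustrative only: the positions, intercept, and link are hypothetical, not taken from the thesis.

    ```python
    # Latent space model sketch: tie probability decays with Euclidean
    # distance between latent positions. Persons and items share one space,
    # so the same form covers person-person (friendship) and person-item
    # (response) edges. All numbers are hypothetical.
    import numpy as np

    def tie_probability(z_i, z_j, alpha=1.0):
        """P(edge between i and j) = logistic(alpha - ||z_i - z_j||)."""
        dist = np.linalg.norm(z_i - z_j)
        return 1.0 / (1.0 + np.exp(-(alpha - dist)))

    student = np.array([0.2, -0.1])   # person position in a 2-D latent space
    item = np.array([0.5, 0.3])       # item position in the same space
    friend = np.array([1.8, -1.5])    # another person's position
    print(tie_probability(student, item))    # P(endorses the item)
    print(tie_probability(student, friend))  # P(friendship tie)
    ```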
  • 4. Stanley, Leanne Flexible Multidimensional Item Response Theory Models Incorporating Response Styles

    Doctor of Philosophy, The Ohio State University, 2017, Psychology

    Psychologists routinely use item responses to estimate individuals' standing on constructs of theoretical and practical interest. However, in addition to the focal constructs, item responses may be meaningfully influenced by response styles, which are characteristic ways in which participants use rating scales. Response styles may or may not be related to the content of an item and may interfere with the accurate measurement of target constructs. This project focuses on comparing multidimensional item response theory (MIRT) models from three families: difference, divide-by-total, and IRTree models. In addition to ordinal-only models from the three families, I consider two very general models from the divide-by-total (Falk & Cai, 2016) and IRTree (Jeon & De Boeck, 2016) families that allow researchers to relax the assumption that Likert-type item responses are purely ordinal. Response style models from the difference model family do not currently exist. Several research questions are addressed using a large set of responses (N = 6,714) to a Big Five personality inventory with 10 items per dimension and a 5-point Likert-type response scale: (1) Can response style models from the divide-by-total and IRTree families be fit to a single set of responses? (2) Is there evidence that response styles should be modeled, or can they simply be ignored in terms of statistical model fit (AIC and BIC)? (3) How can IRTree models be extended in such a way that they become more conceptually similar to divide-by-total models for trait and response style dimensions? (4) If there is statistical evidence of response styles in the data, is there also evidence that modeling them makes a practical difference in scores on the target (Big Five) dimensions? The results of this work suggest that failing to model individual differences in extreme and (possibly) midpoint response styles may meaningfully bias expected a posteriori (EAP) scores for at least some personality dimensions. On the other (open full item for complete abstract)

    Committee: Paul De Boeck (Advisor); Michael Edwards (Committee Co-Chair); Michael De Kay (Committee Member) Subjects: Personality Psychology; Quantitative Psychology
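
    The IRTree approach referenced in the abstract decomposes each Likert response into a sequence of binary pseudo-decisions, each of which can then load on a content trait or a response-style dimension. The sketch below shows one common three-node tree for a 5-point scale (midpoint? direction? extreme?); it illustrates the general technique, not necessarily the specific trees compared in the dissertation.

    ```python
    # IRTree pseudo-item coding for a 5-point Likert response. Each binary
    # node can be modeled with its own IRT dimension, e.g. midpoint and
    # extreme response styles vs. the content trait (direction).
    def irtree_decompose(y):
        """Map y in {1,...,5} to (midpoint, direction, extreme).
        None marks branches not reached for that response."""
        midpoint = 1 if y == 3 else 0                       # chose the middle category?
        direction = None if y == 3 else (1 if y > 3 else 0) # agree vs. disagree
        extreme = None if y == 3 else (1 if y in (1, 5) else 0)  # endpoint category?
        return midpoint, direction, extreme

    for y in range(1, 6):
        print(y, irtree_decompose(y))
    ```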
  • 5. Periard, David A Bifactor Model of Burnout? An Item Response Theory Analysis of the Maslach Burnout Inventory – Human Services Survey.

    Doctor of Philosophy (PhD), Wright State University, 2016, Human Factors and Industrial/Organizational Psychology PhD

    Burnout is a syndrome resulting from chronic stress, composed of emotional exhaustion, depersonalization, and personal accomplishment. The Maslach Burnout Inventory – Human Services Survey (MBI-HSS; Maslach, Jackson, & Leiter, 1996) is the most popular measure of burnout. Unfortunately, the MBI-HSS has flaws, including highly correlated traits and low subscale reliabilities. I tested a bifactor model for the MBI-HSS, based on the work of Meszaros, Adam, Szabo, Szigeti, and Urban (2014), using item response theory. Bifactor models specify a general factor that underlies all the items within a scale and specific factors that underlie the subscale items, with all factors orthogonal. I found that the bifactor model had superior fit to the traditional correlated-traits model. A method for decomposing item and test information in multidimensional item response theory is also introduced, along with a new method of displaying the test information. Finally, I provide the scoring recommendation that only the general burnout dimension of the MBI-HSS should be reported, as the subscales are unreliable.

    Committee: Gary Burns, Ph.D. (Advisor); David LaHuis, Ph.D. (Committee Member); Joseph Houpt, Ph.D. (Committee Member); Nathan Bowling, Ph.D. (Committee Member) Subjects: Psychology; Statistics
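
    In the bifactor structure the abstract describes, every item loads on the general burnout factor plus exactly one orthogonal specific factor. A minimal sketch with hypothetical parameters (the MBI-HSS actually uses ordered response categories; a dichotomous 2PL form is used here only to keep the illustration short):

    ```python
    # Bifactor IRT sketch: an item's response probability depends on both a
    # general factor (burnout) and one specific factor (its subscale, e.g.
    # emotional exhaustion), with the factors orthogonal. Parameters are
    # hypothetical, not estimates from the dissertation.
    import numpy as np

    def bifactor_prob(theta_g, theta_s, a_g, a_s, d):
        """P(endorse) = logistic(a_g*theta_g + a_s*theta_s + d)."""
        return 1.0 / (1.0 + np.exp(-(a_g * theta_g + a_s * theta_s + d)))

    # One respondent: high general burnout, average emotional exhaustion
    print(bifactor_prob(theta_g=1.5, theta_s=0.0, a_g=1.8, a_s=0.7, d=-0.4))
    ```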