Search Results

(Total results 3)

  • 1. Hille, Kathryn. Student Placement: A Multifaceted Methodological Toolkit

    Doctor of Philosophy (PhD), Ohio University, 2019, Educational Research and Evaluation (Education)

    Placement testing in intensive English programs (IEPs) involves methodological considerations that merit additional research. Neither quantitative nor qualitative methods alone have consistently proven sufficient to address common research problems in IEP student placement, including the selection of placement tests and the establishment of cut scores for those tests via standard-setting methods. These two key questions, together with the methodology underlying them, constitute three supporting elements necessary for appropriate student placement. This dissertation uses a three-article format to examine each of these elements in turn, along with its connections to the other two. The first manuscript, “The Application of Mixed Methods for Developing Student Placement Protocols in Intensive English Programs,” serves as the methodological focus of the dissertation, exploring the potential of Mixed Methods Research to address the common challenges of inconsistent placement criteria and small sample sizes in IEP student placement and to yield recommendations on placement test selection and cut-score setting. The second manuscript, “Placement Testing: One Test, Two Tests, Three Tests? How Many Tests are Sufficient?,” provides a quantitative analysis of placement test selection aimed at maximizing placement accuracy, with attention to logistical issues as well. The final manuscript, “Setting Cut Scores for Student Success,” addresses the establishment of appropriate cut scores and presents an approach to synthesizing the potentially divergent cut-score results that different standard-setting methods can yield (one possible synthesis is sketched after this entry). Taken together, the three manuscripts closely examine the supporting elements necessary for appropriate student placement; they also challenge practitioners in the field… (open full item for complete abstract)

    Committee: Krisanna Machtmes PhD (Advisor); Yuchun Zhou PhD (Committee Member); Lijing Yang PhD (Committee Member); Sara Helfrich PhD (Committee Member) Subjects: Education; Education Policy; Educational Evaluation; Educational Tests and Measurements; English As A Second Language; Higher Education; Higher Education Administration; Language; Statistics
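
The abstract above does not detail how divergent cut scores from different standard-setting methods would be synthesized. A minimal, purely illustrative sketch, assuming each method reports a cut score with a standard error, is to pool them by inverse-variance weighting; the helper name and all numbers below are invented.

```python
# A hedged illustration, not the dissertation's method: pooling divergent
# cut scores from several standard-setting methods by inverse-variance
# weighting. Method names, scores, and standard errors are invented.

def pool_cut_scores(cuts):
    """cuts: list of (cut_score, standard_error) pairs, one per method."""
    weights = [1.0 / (se ** 2) for _, se in cuts]
    total = sum(weights)
    pooled = sum(w * c for (c, _), w in zip(cuts, weights)) / total
    pooled_se = (1.0 / total) ** 0.5  # standard error of the pooled cut
    return pooled, pooled_se

# e.g., cuts from hypothetical Angoff, Bookmark, and Borderline Group panels
methods = [(62.0, 2.5), (58.0, 3.0), (65.0, 4.0)]
cut, se = pool_cut_scores(methods)
print(f"Pooled cut score: {cut:.1f} (SE {se:.1f})")
```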
  • 2. Rosales Flores de Véliz, Leslie. Evaluation of a Method to Perform Growth Standards in Guatemala

    Doctor of Philosophy (PhD), Ohio University, 2018, Educational Research and Evaluation (Education)

    The purpose of this dissertation research was to evaluate an adaptation of the Bookmark Method for establishing new reading growth standards in Guatemala for the early elementary grades. For this adaptation, longitudinal test data collected by the USAID Lifelong Learning Project in the Western Highlands of Guatemala were used to vertically link items from three existing NAC (Pruebas Nacionales de Lectura) reading tests of the Ministry of Education of Guatemala for first, second, and third grades. After test scores from two adjacent grade levels were vertically linked (first and second; second and third), an Ordered Item Booklet (OIB) was created containing the items of the two adjacent grade-level tests. A standard Bookmark workshop was then carried out with selected elementary teachers, who used this OIB to establish three new cut scores, or growth standards, defining four new levels of growth in reading (a minimal sketch of the Bookmark cut-score computation follows this entry). To establish the new growth standards, the cut-off points set by participating teachers during these workshops were applied to students' 2016 scores in their enrolled grade level; those scores were the ones that had been linked to the previous grade level. Results of the adaptation were evaluated using several sources of validity evidence. The first was a correlation of the adaptation's results with a different growth model: the adapted Bookmark Method yielded classifications similar to those of the normative Student Growth Percentile (SGP) model. The second was teachers' overall experience of the adapted Bookmark workshop, gathered through interviews, which showed that teachers found the workshop comfortable and engaging. The third was agreement between teacher panelists, which varied widely on the it… (open full item for complete abstract)

    Committee: Gordon Brooks (Advisor) Subjects: Education Policy; Educational Evaluation
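
As a companion to the entry above, here is a minimal sketch of the core Bookmark computation, assuming a Rasch-scaled Ordered Item Booklet and the common RP67 response-probability criterion; the dissertation's actual adaptation may differ, and all difficulties and bookmark placements below are invented.

```python
# A hedged sketch, not the dissertation's actual procedure: translating a
# panelist's bookmark in an Ordered Item Booklet (OIB) into a cut score
# under the Rasch model, using the common RP67 criterion (a borderline
# student answers the bookmarked content correctly with probability 2/3).
import math
import statistics

RP = 2 / 3  # response-probability criterion (RP67)

def bookmark_theta(oib_difficulties, bookmark_page):
    """Theta cut implied by a bookmark placed on a 1-based OIB page.

    oib_difficulties: Rasch item difficulties sorted ascending (the OIB).
    bookmark_page: page of the first item the borderline student has NOT
    yet mastered; the last mastered item sits on the previous page.
    """
    b = oib_difficulties[bookmark_page - 2]  # last mastered item
    return b + math.log(RP / (1 - RP))      # theta where P(correct) = RP

oib = sorted([-1.8, -1.1, -0.4, 0.2, 0.9, 1.5, 2.3])  # vertically linked items
bookmarks = [4, 5, 4, 6]                               # one page per panelist
cuts = [round(bookmark_theta(oib, page), 2) for page in bookmarks]
print(f"Panelist cuts: {cuts}; group cut score = {statistics.median(cuts):.2f}")
```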
  • 3. Salem, Joseph. The Development and Validation of All Four TRAILS (Tool for Real-Time Assessment of Information Literacy Skills) Tests for K-12 Students

    PhD, Kent State University, 2014, College of Education, Health and Human Services / School of Foundations, Leadership and Administration

    This study sought to determine whether the items in use for the TRAILS K-12 information literacy classroom assessments at the 3rd, 6th, 9th, and 12th grade levels could be used to develop a valid and reliable test for each grade level. To determine whether the assessments could be reengineered into valid tests of information literacy, the entire item bank for each grade level was administered to students recruited across the United States. The resulting data were used to answer three research questions. First, the study sought to determine whether the current item banks could be used to create an efficient and reliable test at each grade level. The Rasch model of Item Response Theory was used to develop each test based on overall scale reliability, item fit to the scale, distractor function, freedom from bias as assessed by differential item functioning analysis, item difficulty spread, and content coverage. Reliable tests could be created at each grade level (3rd N = 20; 6th N = 25; 9th N = 30; 12th N = 35), with generally strong psychometric properties at the item level across all four tests. The study then gathered evidence of construct validity through two methods. First, content experts at each grade level were presented with each item in the draft test and asked to rate the degree to which it measured its associated TRAILS objective. This identified items on each test for further examination but found general endorsement, at both the item and scale level, of objective measurement. Additionally, the relationship between reading load and item difficulty was examined through a correlation study (sketched briefly after this entry); no relationship was found on any of the tests, reducing the likelihood that reading load affects item difficulty. Finally, the study sought to determine the most generally agreed-upon proficiency score for each test using a modified bookmarking standard-setting procedure. This process utilized expertise at each grade level th… (open full item for complete abstract)

    Committee: Jason Schenker PhD (Committee Co-Chair); Shawn Fitzgerald PhD (Committee Co-Chair); Meghan Harper PhD (Committee Member) Subjects: Educational Tests and Measurements; Library Science
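
A brief sketch of the kind of reading-load correlation check described in the entry above, assuming per-item word counts and Rasch item difficulties; all values below are invented, and the study itself reports finding no relationship on any of the four tests.

```python
# Hypothetical illustration of correlating item reading load with Rasch
# item difficulty. Word counts and difficulty values are invented.
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

word_counts = [18, 35, 22, 41, 29, 15, 33, 26]               # reading load per item
difficulties = [-0.9, 0.4, -0.2, 0.1, 0.7, -1.3, 0.3, -0.5]  # Rasch b values
print(f"r = {pearson_r(word_counts, difficulties):.2f}")
```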