Monamodi Kesamang

The comparison of item and person item response theory (IRT) parameter estimates for the anchor-items and common-persons designs


An assessment system should be able to identify the potential of each learner, and the quality of education depends on providing valid scores and grades to every examinee. This empirical study investigated the use of Item Response Theory (IRT) test-linking techniques (the anchor-items and common-persons designs) as methods of maintaining equivalent standards across years. IRT parameter estimates are assumed to be invariant. The accuracy with which each method estimated the item parameters was also investigated. The study addressed the following questions: How do the item parameter estimates for the two designs compare? How do the Item Response Functions (IRFs) and the Test Response Functions (TRFs) compare for the two designs? How do the Item Information Functions (IIFs) and the Test Information Functions (TIFs) compare for the two designs? How do the reliability and the Standard Error of Measurement (SEM) compare for the two designs? The study made the following findings: the Pearson correlation coefficients for the item parameters differ significantly; the IRFs and TRFs differ significantly between the two linking designs, with the anchor-item test IRFs/TRFs more closely approximating the theoretical item characteristic curve (ICC); the IIFs and TIFs for the anchor-item test provide more information; and the anchor-item test is more reliable.

Keywords: equating, linking, anchor items, common persons, IRT