Inter-Rater Reliability Discussion Corner
by Kilem L. Gwet, Ph.D.

Back to the Inter-Rater Reliability Discussion Corner's Home Page


Estimating the number of subjects and the number of raters when designing an inter-rater reliability study
Posted: May 6, 2013


On June 28, 2012, I posted a note outlining an approach for calculating the number of subjects necessary in an inter-rater reliability study to ensure a predetermined error margin for a chance-corrected agreement coefficient. That approach requires knowledge of some parameters that are generally unknown at the design stage. In this post, I am recommending a simpler and more practical approach for estimating the number of subjects, as well as the number of raters, in a multiple-rater study.

Many chance-corrected agreement coefficients are based on two components: the percent agreement and the percent chance agreement. While the percent chance agreement often differs from one coefficient to another, these coefficients generally share the same percent agreement. Because researchers often compute and report two or more agreement coefficients, I recommend that the sample of subjects, as well as the sample of raters in multiple-rater studies, be optimized on the percent agreement alone. The optimal numbers of subjects and raters will minimize a measure of the imprecision of the percent agreement, and will apply to all coefficients that share the same percent agreement.
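The idea of sizing the study on the percent agreement alone can be sketched in code. The sketch below treats the percent agreement as a simple proportion and uses the standard normal-approximation variance p(1-p)/n to find the number of subjects needed for a desired error margin; this is an illustrative assumption only, and the function name and planning values are hypothetical, not taken from the original note.

```python
import math

def required_subjects(pa_guess: float, margin: float, z: float = 1.96) -> int:
    """Approximate number of subjects so that the percent agreement is
    estimated within +/- `margin` at roughly 95% confidence (z = 1.96).

    Treats the percent agreement as a simple proportion with variance
    p(1 - p)/n (an illustrative assumption; `pa_guess` is a planning
    value for the anticipated percent agreement).
    """
    n = (z ** 2) * pa_guess * (1.0 - pa_guess) / (margin ** 2)
    return math.ceil(n)

# Planning example: anticipated agreement of 0.80, desired margin of 0.10
print(required_subjects(0.80, 0.10))  # 62
```

A useful property of this kind of planning formula is that the required sample size is largest when the anticipated agreement is 0.5, so using 0.5 as the planning value gives a conservative (worst-case) number of subjects.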


