AgreeStat Analytics
Research & Software for Analyzing Inter-Rater Reliability Data

2-Rater Agreement: Unweighted Analysis of Raw Scores

Input Data

The dataset shown below is included in one of the example worksheets shipped with AgreeStat/360. It summarizes the distribution of 107 human subjects by rater and by psychiatric disorder. The two "raters" are actually two diagnostic methods: the clinical method and the research method.

[Figure: ratings from 2 raters]

Analysis with AgreeStat/360

To see how AgreeStat/360 processes this dataset to produce various agreement coefficients, please play the video below.


The output that AgreeStat/360 produces is shown below.  It consists of 4 parts:

  • Input data:
    The first part of this output displays the input data used in the analysis, along with marginal totals and percentages.

  • Unweighted analysis:
    This part reports a total of 6 agreement coefficients, including Cohen's kappa, Gwet's AC1, and others. Each agreement coefficient is reported with its standard error, 95% confidence interval, and p-value.

  • Weights:
    In this specific analysis, I used the "Linear Weights." These weights are displayed in this third part.

  • Weighted analysis:
    The same 6 agreement coefficients from the unweighted analysis are recalculated using the linear weights.  The weighted analysis is shown here for illustration purposes only.  It is recommended when the categories are ordinal (i.e., can be ranked), which may not be the case here.
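To make the unweighted/weighted distinction concrete, here is a minimal sketch of two of the coefficients mentioned above (Cohen's kappa and Gwet's AC1), computed from a 2-rater contingency table, along with the linear weights. The function names are my own, not AgreeStat/360's; with linear weights, the AC1 chance-agreement formula generalizes to what Gwet calls AC2.

```python
import numpy as np

def agreement_coefficients(table, weights=None):
    """Compute percent agreement, Cohen's kappa, and Gwet's AC1 from a
    q x q contingency table of two raters' (or methods') classifications.
    If a q x q `weights` matrix is supplied, the weighted versions are
    returned; with linear weights, the AC1 formula becomes Gwet's AC2."""
    p = np.asarray(table, dtype=float)
    n = p.sum()
    p /= n                                    # joint classification proportions
    q = p.shape[0]
    row, col = p.sum(axis=1), p.sum(axis=0)   # each rater's marginal distribution
    w = np.eye(q) if weights is None else np.asarray(weights, dtype=float)

    pa = (w * p).sum()                        # (weighted) percent agreement
    pe_kappa = (w * np.outer(row, col)).sum() # chance agreement under kappa
    pi = (row + col) / 2                      # average category prevalence
    pe_ac1 = w.sum() / (q * (q - 1)) * (pi * (1 - pi)).sum()  # chance, AC1/AC2

    kappa = (pa - pe_kappa) / (1 - pe_kappa)
    ac1 = (pa - pe_ac1) / (1 - pe_ac1)
    return pa, kappa, ac1

def linear_weights(q):
    """Linear weights w_ij = 1 - |i - j| / (q - 1): full credit on the
    diagonal, partial credit decaying with the distance between categories."""
    i = np.arange(q)
    return 1 - np.abs(i[:, None] - i[None, :]) / (q - 1)
```

For example, for the 2x2 table `[[20, 5], [10, 15]]` this gives a percent agreement of 0.70, kappa = 0.40, and AC1 of about 0.41. Note that linear weights only change the results when there are 3 or more categories (for q = 2 they reduce to the identity matrix), which is why the weighted analysis is meaningful mainly for ordinal scales.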

[Figure: unweighted agreement coefficients for 2 raters]