
Možnosti, problémy a odporúčania pri meraní zhody medzi posudzovateľmi – deskriptívny prístup
Options, Problems and Guidelines for Measuring Interrater Agreement – a Descriptive Approach

Author(s): Lucia Kočišová
Subject(s): Methodology and research technology, Applied Sociology, Social Theory, Social Informatics
Published by: AV ČR - Akademie věd České republiky - Sociologický ústav
Keywords: interrater agreement; agreement index; percent agreement; kappa coefficient; AC1 coefficient

Summary/Abstract: Interrater agreement is one way to establish reliability (and also validity) in social science research. The traditionally preferred method of measuring interrater agreement is the descriptive approach, owing to its simplicity. This approach is, however, associated with a large number of agreement indices, which makes it difficult to select the right one. This article summarises the theoretical foundations of the prevailing approaches to measuring interrater agreement (in both quantitative and qualitative research). From a practical point of view, the article focuses on measuring agreement with percent agreement, the kappa coefficient, and the AC1 coefficient. A more detailed description of the indices explains how to define, calculate, and interpret them and the problems associated with their use. The indices are then compared. Although often underestimated and criticised, percent agreement can be a good indicator of interrater agreement. The kappa coefficient is accompanied by several paradoxes, and its use is appropriate only under certain conditions; an appropriate alternative is the AC1 coefficient. The article concludes with a summary of recommendations for improving the quantification of interrater agreement.
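
The article itself contains no code, but as an illustration of the three indices named in the abstract, the following minimal Python sketch computes percent agreement, Cohen's kappa, and Gwet's AC1 for two raters coding the same items into nominal categories. The function name and the toy data are hypothetical; the formulas are the standard two-rater definitions, not taken from the article.

```python
from collections import Counter

def agreement_indices(rater1, rater2):
    """Percent agreement, Cohen's kappa, and Gwet's AC1 for two raters
    on nominal categories. Illustrative sketch only."""
    n = len(rater1)
    categories = sorted(set(rater1) | set(rater2))
    K = len(categories)

    # Observed (percent) agreement: share of items rated identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n

    # Marginal category proportions per rater.
    c1, c2 = Counter(rater1), Counter(rater2)
    p1 = {k: c1[k] / n for k in categories}
    p2 = {k: c2[k] / n for k in categories}

    # Cohen's kappa: chance agreement from the product of the marginals.
    p_e_kappa = sum(p1[k] * p2[k] for k in categories)
    kappa = (p_o - p_e_kappa) / (1 - p_e_kappa)

    # Gwet's AC1: chance agreement from the average category prevalence pi_k.
    pi = {k: (p1[k] + p2[k]) / 2 for k in categories}
    p_e_ac1 = sum(pi[k] * (1 - pi[k]) for k in categories) / (K - 1)
    ac1 = (p_o - p_e_ac1) / (1 - p_e_ac1)

    return p_o, kappa, ac1

# Example: two coders classify ten items into categories "A" and "B".
r1 = ["A", "A", "A", "A", "A", "A", "A", "A", "B", "B"]
r2 = ["A", "A", "A", "A", "A", "A", "A", "B", "B", "A"]
print(agreement_indices(r1, r2))  # (0.8, 0.375, ~0.706)
```

The skewed toy data illustrate the kind of paradox the abstract alludes to: observed agreement is high (0.8), yet kappa is only 0.375 because the marginals are heavily concentrated in one category, while AC1 remains close to the observed agreement.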

  • Issue Year: 61/2025
  • Issue No: 3
  • Page Range: 277-300
  • Page Count: 24
  • Language: Slovak