ENBIS-18 in Nancy
2 – 6 September 2018; École des Mines, Nancy (France)
Abstract submission: 20 December 2017 – 4 June 2018
Robustness of Agreement in Ordinal Classifications
4 September 2018, 14:50 – 15:10
- Submitted by: Amalia Vanacore
- Authors: Amalia Vanacore (University of Naples Federico II), Maria Sole Pellegrino (University of Naples Federico II)
- The quality of subjective evaluations provided by field experts (e.g. physicians or risk assessors) and by trained operators (e.g. visual inspectors) is often assessed in terms of inter-/intra-rater agreement via kappa-type coefficients. Despite their popularity, these indices have long been criticized for being affected in complex ways by the presence of bias between raters and by the distribution of data across the rating categories ("prevalence").
This paper presents the results of a Monte Carlo simulation study investigating the robustness of four kappa-type indices (viz. Gwet's AC2 and the weighted variants of Scott's Pi, Cohen's Kappa and the Brennan-Prediger coefficient), covering both the case of two series of ratings provided by the same rater (intra-rater agreement) and by two different raters (inter-rater agreement). The robustness of the reviewed indices to changes in the frequency distribution of ratings across categories and in the agreement distribution between the two series of ratings has been analyzed across several simulation scenarios, built by varying the sample size (i.e. the number of rated items), the dimension of the rating scale, and the frequency and agreement distributions between the series of ratings.
Simulation results suggest that the level of agreement is sensitive to the distribution of items across the rating categories and to the dimension of the rating scale, but not to the sample size. Among the reviewed indices, the Brennan-Prediger coefficient and Gwet's AC2 are the least sensitive to variation in the distribution of items across the categories for a fixed agreement distribution.
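To make the comparison concrete, two of the reviewed coefficients can be sketched for a q-point ordinal scale with linear weights. This is a minimal illustration of the standard textbook formulas, not the authors' simulation code; the function names and the example rating series are invented for this sketch, and Gwet's AC2 and Scott's Pi are omitted for brevity. The key difference shown is the chance-agreement term: Cohen's Kappa estimates it from the observed marginals (hence its prevalence sensitivity), while Brennan-Prediger assumes a uniform distribution over categories.

```python
from itertools import product

def linear_weights(q):
    # w[i][j] = 1 - |i - j| / (q - 1): full credit on the diagonal,
    # partial credit for near-miss ratings on a q-point ordinal scale
    return [[1 - abs(i - j) / (q - 1) for j in range(q)] for i in range(q)]

def weighted_agreement(r1, r2, q):
    # observed weighted proportion of agreement between two rating series
    w = linear_weights(q)
    return sum(w[a][b] for a, b in zip(r1, r2)) / len(r1)

def cohen_kappa_w(r1, r2, q):
    # chance agreement estimated from the two raters' marginal distributions,
    # which is why the index reacts to prevalence across categories
    w = linear_weights(q)
    n = len(r1)
    p1 = [r1.count(c) / n for c in range(q)]
    p2 = [r2.count(c) / n for c in range(q)]
    pe = sum(w[i][j] * p1[i] * p2[j] for i, j in product(range(q), repeat=2))
    po = weighted_agreement(r1, r2, q)
    return (po - pe) / (1 - pe)

def brennan_prediger_w(r1, r2, q):
    # chance agreement assumes a uniform distribution over the q categories,
    # making the index insensitive to the observed prevalence
    w = linear_weights(q)
    pe = sum(w[i][j] for i, j in product(range(q), repeat=2)) / q**2
    po = weighted_agreement(r1, r2, q)
    return (po - pe) / (1 - pe)

# invented example: two raters score five items on a 3-point scale,
# disagreeing by one category on a single item
r1, r2 = [0, 1, 2, 2, 1], [0, 1, 2, 1, 1]
print(cohen_kappa_w(r1, r2, 3))       # -> 0.7368...
print(brennan_prediger_w(r1, r2, 3))  # -> 0.775
```

Running both coefficients on the same rating series under different category prevalences is exactly the kind of comparison the simulation study performs at scale.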