Why Cohen’s Kappa should be avoided as performance measure in classification

Rosario Delgado, Xavier Andoni Tibau

Research output: Contribution to journal › Article › Research › peer-review

179 Citations (Scopus)

Abstract

We show that Cohen’s Kappa and the Matthews Correlation Coefficient (MCC), two widespread and well-established measures of performance in multi-class classification, are correlated in most situations, although they can differ in others. Indeed, while the two coincide in the symmetric case, we consider different unbalanced situations in which Kappa exhibits an undesirable behaviour, i.e. a worse classifier gets a higher Kappa score, differing qualitatively from MCC. The debate about the incoherent behaviour of Kappa revolves around the convenience, or not, of using a relative metric, which makes the interpretation of its values difficult. We extend these concerns by showing that its pitfalls can go even further. Through experimentation, we present a novel approach to this topic. We carry out a comprehensive study that identifies a scenario in which the contradictory behaviour between MCC and Kappa emerges. Specifically, we find that as the entropy of the off-diagonal elements of a classifier’s confusion matrix decreases to zero, the discrepancy between Kappa and MCC rises, pointing to an anomalous performance of the former. We believe that this finding rules out the use of Kappa, in general, as a performance measure for comparing classifiers.
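To make the comparison concrete, the following sketch computes both measures directly from a confusion matrix using their standard definitions (Cohen's Kappa as chance-corrected agreement, and the multiclass MCC in Gorodkin's form). The example matrices are illustrative assumptions, not the paper's experimental data: a symmetric matrix on which the two measures coincide, and an unbalanced one with errors concentrated in a single off-diagonal cell (off-diagonal entropy near zero), the kind of scenario in which the abstract reports a discrepancy.

```python
import numpy as np

def cohens_kappa(C):
    """Cohen's Kappa from a confusion matrix C (rows: true class, cols: predicted)."""
    C = np.asarray(C, dtype=float)
    n = C.sum()
    po = np.trace(C) / n                          # observed agreement
    pe = (C.sum(axis=1) @ C.sum(axis=0)) / n**2   # agreement expected by chance
    return (po - pe) / (1 - pe)

def mcc(C):
    """Multiclass Matthews Correlation Coefficient (Gorodkin's formulation)."""
    C = np.asarray(C, dtype=float)
    n = C.sum()
    t = C.sum(axis=1)   # per-class totals of true labels
    p = C.sum(axis=0)   # per-class totals of predicted labels
    num = n * np.trace(C) - t @ p
    den = np.sqrt((n**2 - p @ p) * (n**2 - t @ t))
    return num / den

# Symmetric case: Kappa and MCC coincide (both equal 0.8 here).
C_sym = [[45, 5], [5, 45]]

# Unbalanced case, errors concentrated in one off-diagonal cell
# (off-diagonal entropy near zero): the two measures diverge.
C_unb = [[85, 10], [0, 5]]

print(cohens_kappa(C_sym), mcc(C_sym))
print(cohens_kappa(C_unb), mcc(C_unb))
```

Equivalent results can be obtained with `sklearn.metrics.cohen_kappa_score` and `matthews_corrcoef` on label vectors; computing from the confusion matrix is used here only to keep the formulas visible.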

Original language: English
Article number: e0222916
Journal: PLoS ONE
Volume: 14
Issue: 9
DOI
Status: Published - 1 Sept 2019

