A set of tools for analyzing inconsistencies observed in a Cat-ToBI labeling experiment is presented. We formalize the metrics commonly used in inconsistency tests and apply them systematically to assess the robustness of every symbol and every pair of transcribers. The agreement rates obtained in this study are comparable to those reported in previous ToBI inter-reliability tests. The inter-transcriber confusion rates are transformed into distance matrices so that multidimensional scaling can be used to visualize the confusion between the different ToBI symbols and the disagreement between the raters. Potentially divergent labeling criteria are identified, and subsets of symbols that are candidates for merging are proposed. © 2011 Elsevier B.V. All rights reserved.
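The confusion-to-distance transformation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual procedure or data: the confusion matrix below is invented, and the conversion rule (distance = 1 minus the symmetrized confusion rate) is one plausible choice among several.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical confusion-rate matrix for 4 ToBI symbols (illustrative
# numbers only): entry [i, j] = rate at which symbol i was labeled as j.
conf = np.array([
    [0.80, 0.10, 0.05, 0.05],
    [0.12, 0.75, 0.08, 0.05],
    [0.06, 0.09, 0.70, 0.15],
    [0.04, 0.06, 0.20, 0.70],
])

# Symmetrize, then turn confusion into dissimilarity: symbols that are
# often confused should end up close together, so distance falls as the
# mean mutual confusion rises.
sym = (conf + conf.T) / 2.0
dist = 1.0 - sym
np.fill_diagonal(dist, 0.0)

# 2-D multidimensional scaling on the precomputed distance matrix,
# giving coordinates suitable for a scatter plot of the symbols.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)
print(coords.shape)
```

The same recipe applies to rater-vs-rater disagreement rates: build a square disagreement matrix over transcribers instead of symbols and feed it to the same `MDS` call.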
Publication status: Published - 1 Jan 2012
- Inter-transcriber consistency
- Prosodic labeling