The following tasks have been featured in the Interspeech Computational Paralinguistics Challenge (ComParE) up to 2019, including the editions prior to 2013, which ran under individual names.
The tasks are listed in order of appearance in the series, from 2009 (bottom) to the latest editions (top).
# Classes: Either the number of classes is given for classification tasks (“x” indicates that there have been multiple classification tasks), or the value range is given for continuous (regression) tasks.
The right-most column gives the final result of each year’s Challenge, obtained by fusing the optimal number of participants’ contributions. Note that higher results have often been reported in the literature since. However, as the test labels are often available to participants after the end of each Challenge, we do not follow up on these.
Note that these performance measures by no means establish a reference for the problem addressed. Rather, they document the best result obtained at the end of each Challenge by fusion of the n best participants’ engines, for the respective constellation, which is characterised by the number of speakers, realism (spontaneous vs. acted speech), speaker idiosyncrasies, native language, acoustics, and many other conditions.
%UA: Percentage of Unweighted Accuracy (also named Unweighted Average Recall). As most real-world paralinguistic problems are marked by high class imbalance, this competition measure sums the per-class recalls (class-wise accuracies) and divides by the number of classes, i.e., it averages recall over classes. Hence, chance level corresponds to 50% UA for a two-class problem, 33% UA for a three-class problem, etc.
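The definition above can be sketched in a few lines of Python (the function name and the toy labels are illustrative, not from the Challenge): on an imbalanced set, always guessing the majority class yields a high overall accuracy but only chance-level UA.

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recalls: the sum of class-wise accuracies
    divided by the number of classes (chance level = 1 / #classes)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Imbalanced two-class toy data: majority-class guessing scores
# 90% overall accuracy, but only 50% UA (chance level).
y_true = ["neg"] * 9 + ["pos"]
y_pred = ["neg"] * 10            # always predict the majority class
print(unweighted_average_recall(y_true, y_pred))  # → 0.5
```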
AUC: (Unweighted Average) Area Under the (Receiver Operating Characteristic) Curve. This measure is chosen for detection tasks.
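For a binary detection task, the AUC can be computed without drawing the ROC curve at all, via its pairwise (Mann-Whitney) formulation; the following sketch assumes labels coded as 0/1 and higher scores meaning "more positive":

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a randomly drawn positive instance scores
    higher than a randomly drawn negative one (ties count one half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A chance-level detector gives AUC = 0.5; a perfect one gives 1.0.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```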
CC: Correlation Coefficient (Pearson’s or Spearman’s, depending on the task). This measure is used in the case of continuous modelling (regression).
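Spearman's CC is simply Pearson's correlation computed on the ranks of the values, so it rewards any monotone relation between prediction and gold standard; a minimal self-contained sketch (function names are illustrative):

```python
def _ranks(values):
    """1-based ranks, with tied values receiving their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_cc(x, y):
    """Pearson's correlation of the rank-transformed sequences."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A monotone but non-linear relation still yields a perfect rank correlation.
print(spearman_cc([1, 2, 3, 4], [1, 4, 9, 16]))  # → 1.0
```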
| Task | # Classes | Best result at Challenge end |
|------|-----------|------------------------------|
| Intelligibility of H&N Cancer Patients | 2 | 76.8 |