Supervised machine learning typically assumes that objects can be unambiguously assigned to categories. In practice, many classification tasks are ambiguous, and disagreement among annotators can arise for a variety of reasons without ever reaching a final resolution. In this seminar, we describe a deliberation framework that enables small groups of workers to deliberate on ambiguous cases in order to resolve disagreement or declare the cases irresolvable.
We present results from a recent study in which we investigated the underlying sources of disagreement, the circumstances under which disagreement can be resolved through group deliberation, and whether deliberation improves crowdsourcing outcomes. One objective and one subjective text classification task served as case studies, involving 237 workers and up to 16 independent groups per case. Results show that deliberation substantially improves classification accuracy, and that whether a case can be resolved depends on a variety of factors, including the level of initial consensus, the amount and quality of deliberation activity, and the sources of disagreement.