This seminar presents two major studies.
Part I: Crowdsourced Counting for Phenology

Phenology is a key aspect of plant success. Recent research has demonstrated that herbarium specimens can provide important information on plant phenology. Massive digitization efforts have the potential to greatly expand herbarium-based phenological research, but they also pose a serious challenge for efficient data collection. Here, we introduce CrowdCurio, a crowdsourcing tool for collecting phenological data from herbarium specimens. We test its utility by having workers collect phenological data (the number of flower buds, open flowers, and fruits) from specimens of two common New England (USA) plant species. Our findings reinforce the reliability of nonexpert workers (recruited via Amazon Mechanical Turk) for performing expert counting tasks in scientific contexts.
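As a minimal sketch of how such a reliability check could work (this is illustrative only, not the CrowdCurio implementation or the study's actual analysis), one common approach is to aggregate several independent workers' counts per specimen and compare the aggregate against an expert's count:

```python
# Hypothetical sketch: aggregate nonexpert counts per specimen via the
# median, then measure disagreement with expert counts. All data below
# is invented toy data, not results from the study.

from statistics import median

def aggregate_counts(worker_counts):
    """Median of per-specimen counts from independent workers."""
    return median(worker_counts)

def mean_absolute_error(crowd, expert):
    """Average absolute disagreement between crowd and expert counts."""
    return sum(abs(c - e) for c, e in zip(crowd, expert)) / len(expert)

# Toy data: three workers each count open flowers on four specimens.
counts_by_specimen = [[12, 14, 13], [5, 5, 6], [0, 1, 0], [8, 7, 8]]
expert_counts = [13, 5, 0, 8]

crowd_estimates = [aggregate_counts(c) for c in counts_by_specimen]
error = mean_absolute_error(crowd_estimates, expert_counts)
# Here the median estimates match the expert exactly, so error == 0.0.
```

The median is a natural choice for count aggregation because it is robust to a single careless worker; the study's own aggregation and validation procedures may differ.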
Part II: Observing Consistency in Crowdwork

Consistency is a practical metric that evaluates an instrument's reliability by its ability to yield the same output when repeatedly given a particular input. Despite its broad usage, little is understood about the feasibility of using consistency as a measure of worker reliability in crowdwork. In this paper, we explore the viability of measuring a worker's reliability by their ability to agree with their own earlier responses. We introduce and describe Deja Vu, a mechanism for dynamically generating task queues seeded with consistency probes, which measure the consistency of workers who complete the same task twice. We present a study that uses Deja Vu to examine how generic characteristics of the duplicate task (such as its placement, difficulty, and transformation) affect a worker's task consistency in the context of two distinct object detection tasks. Our findings provide insight into the design and use of consistency-based reliability metrics.
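The core probe mechanism can be sketched as follows (a hypothetical illustration, not the published Deja Vu code): duplicate one task from a worker's queue, reinsert the copy some number of slots later (optionally transformed), and score consistency by comparing the worker's two responses.

```python
# Hypothetical sketch of a consistency probe: the function names,
# queue representation, and scoring rule are assumptions for
# illustration, not the paper's actual implementation.

def build_queue(tasks, probe_index, placement, transform=None):
    """Duplicate tasks[probe_index] and reinsert it `placement` slots later,
    optionally transforming the copy (e.g., mirroring an image)."""
    queue = list(tasks)
    probe = transform(tasks[probe_index]) if transform else tasks[probe_index]
    queue.insert(min(probe_index + placement, len(queue)), probe)
    return queue

def consistency(first_answer, second_answer):
    """1.0 if the worker answered both copies identically, else 0.0."""
    return 1.0 if first_answer == second_answer else 0.0

# Toy example: probe task "img_a" reappears two slots after its original.
queue = build_queue(["img_a", "img_b", "img_c"], probe_index=0, placement=2)
# Simulated worker responses, keyed by queue position.
answers = {0: 4, 1: 2, 2: 4, 3: 7}
score = consistency(answers[0], answers[2])
```

Varying `placement` and `transform` here corresponds to the duplicate-task characteristics the study manipulates; a graded (rather than binary) consistency score would be more appropriate for tasks like object detection, where responses are bounding boxes rather than single values.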