FURI | Spring 2020
Input-elicitation Methods for Crowdsourced Human Computation
Collecting human judgments through crowdsourcing is challenging due to cognitive biases, varying worker expertise, and differences in subjective rating scales. In this work, we investigate the effectiveness of input-elicitation methods for a crowdsourced top-k computation task. We design and run a crowdsourced experiment on Amazon MTurk that prompts workers to rank images by the number of dots they contain. Our initial results show that prompting workers with larger problem sizes significantly increases the accuracy and efficiency of data collection. We suggest that input-elicitation methods be more widely considered in future crowdsourcing work.
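As an illustrative aside, one common way to turn per-worker rankings into a top-k result is a Borda-count aggregation. The sketch below is not the method used in the study; the image IDs, worker rankings, and value of k are hypothetical placeholders, shown only to make the shape of the task concrete.

```python
# A minimal sketch (assumed, not from the study) of aggregating ranked worker
# responses into a top-k list with a Borda-count scheme.
from collections import defaultdict

def top_k_borda(rankings, k):
    """Aggregate per-worker rankings (best first) into a top-k list.

    rankings: list of lists of image IDs, each ordered from most to fewest dots
    k: number of items to return
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, image_id in enumerate(ranking):
            # An item ranked first earns n-1 points; the last-ranked earns 0.
            scores[image_id] += n - 1 - position
    # Sort by total score, highest first, and keep the top k items.
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical example: three workers each rank four images by perceived dot count.
worker_rankings = [
    ["img_3", "img_1", "img_4", "img_2"],
    ["img_3", "img_4", "img_1", "img_2"],
    ["img_1", "img_3", "img_4", "img_2"],
]
print(top_k_borda(worker_rankings, k=2))  # -> ['img_3', 'img_1']
```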