FURI | Spring 2020

Input-elicitation Methods for Crowdsourced Human Computation


“Collecting human data through crowdsourcing is challenging due to cognitive biases, varying worker expertise, and varying interpretations of subjective scales. In this work, we investigate the effectiveness of input-elicitation systems for a crowdsourced top-k computation task. We develop and run a crowdsourced experiment on Amazon MTurk that prompts users to rank images by the number of dots they contain. Our initial results show that prompting users with larger problem sizes significantly increases the accuracy and efficiency of data collection. We suggest that input-elicitation be more widely considered in future crowdsourcing work.”
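The abstract does not specify how individual worker rankings are combined into a top-k result; as one illustrative possibility only, the sketch below aggregates ranked crowd inputs with a simple Borda count. The function name top_k_from_rankings and the image identifiers are hypothetical, not part of the study.

    # Illustrative sketch only: aggregating worker rankings into a top-k list
    # with a Borda count. This is an assumption, not the authors' method.
    from collections import defaultdict

    def top_k_from_rankings(rankings, k):
        """Combine best-first rankings of image ids into a single top-k list."""
        scores = defaultdict(int)
        for ranking in rankings:
            n = len(ranking)
            for position, image_id in enumerate(ranking):
                # Higher-ranked images receive more points.
                scores[image_id] += n - position
        return sorted(scores, key=scores.get, reverse=True)[:k]

    # Example: three workers each rank four images by perceived dot count.
    worker_rankings = [
        ["img_A", "img_B", "img_C", "img_D"],
        ["img_B", "img_A", "img_D", "img_C"],
        ["img_A", "img_C", "img_B", "img_D"],
    ]
    print(top_k_from_rankings(worker_rankings, k=2))  # e.g. ['img_A', 'img_B']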

Student researcher

Ryan Kemmer


Computer science

Hometown: Tucson, Arizona, United States

Graduation date: Spring 2020