Quality assurance in crowdsourcing

Type Research Project

Funding Bodies
  • WU Vienna (earmarked funds)

Duration Dec. 22, 2014 - Nov. 30, 2015

  • Institute for Information Systems and Society


  • Bauer, Christine (Former researcher) Project Head

Abstract (English)

Recently, outsourcing tasks to an undefined crowd (i.e., crowdsourcing) has gained popularity in industry (e.g., Threadless, Lego, Microsoft, Amazon Mechanical Turk) and is also a compelling topic in many scientific disciplines (e.g., marketing research, user behaviour research, psychology, and the elicitation of users’ preferences and requirements). Having experienced poor-quality answers from the crowd in prior work (e.g., Cunin & Elsen, 2014; Häggman, Tsai, Elsen, Honda, & Yang, 2014, in press), we want to delve into detail and find ways to raise the quality of the results of crowdsourced tasks.
In this study, we will conduct an experiment comparing data quality across several crowdsourcing settings. We will analyse the results of real-world tasks in the field of eliciting user preferences and behaviour, a field where crowds are considered a robust channel for eliciting consumers’ preferences, perceptions, and similar kinds of feedback (Kittur, Chi, & Suh, 2008). Using the crowd to elicit consumers’ preferences appears particularly powerful; at the same time, it is challenging in terms of quality and soundness assessment, since there is no unique way to distinguish between “good” and “bad” answers concerning subjective preferences. When seeking high-quality answers, crowdsourcers have to develop quality control strategies that go beyond simple redundancy techniques or majority voting.
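To illustrate the baseline technique the abstract argues is insufficient, a minimal majority-voting aggregator over redundant crowd answers might look like the following sketch (the data and function names are hypothetical, not part of the project):

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate redundant crowd answers for one task by majority vote.

    Returns the most frequent answer and the agreement ratio. For
    subjective preference tasks there is no single 'correct' answer,
    which is why such redundancy-based aggregation alone falls short.
    """
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(answers)

# Hypothetical example: five workers answer the same preference question
labels = ["blue", "blue", "green", "blue", "red"]
winner, agreement = majority_vote(labels)  # → ("blue", 0.6)
```

A low agreement ratio here is ambiguous: it may indicate careless workers, or simply genuinely diverse preferences, which is exactly the quality-assessment problem the study addresses.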


  • University of Liege - Belgium




  • Crowdsourcing