Use of internal testing data to help determine compensation for crowdsourcing tasks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Crowdsourcing is a popular means of developing datasets for building machine learning models, including text classification and entity recognition as well as tasks involving images, tables, charts, and graphics. As the use of crowdsourcing grows, concerns about appropriate and fair compensation of contributors are also increasing. However, estimating correct compensation levels a priori can be a challenge. In this paper, we describe the input and data that inform how much to pay workers for various tasks. We give an overview of three separate crowdsourcing tasks and describe how internal testing processes and qualification tasks contribute to the end user (Worker) experience, and how we attempt to gauge the effort required to complete a task.
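As a rough illustration of the idea only (this sketch is not taken from the paper; the function name, buffer multiplier, and wage figures are hypothetical assumptions), internal testing data might feed into compensation along these lines: take testers' completion times, pad them to allow for Workers being slower than in-house testers, and price the task at a target hourly rate.

    # Minimal sketch, assuming the median internal-tester completion time
    # approximates Worker effort; all names and numbers are illustrative.
    import statistics

    def estimate_task_pay(test_times_sec, target_hourly_rate_usd, buffer=1.2):
        """Suggest a per-task payment (USD) from internal testing times."""
        median_sec = statistics.median(test_times_sec)   # typical tester time
        estimated_hours = (median_sec * buffer) / 3600   # pad for slower Workers
        return round(estimated_hours * target_hourly_rate_usd, 2)

    # Example: testers took 42-70 s per item; target wage of $12/hour.
    print(estimate_task_pay([42, 55, 48, 70, 60], 12.0))  # -> 0.22

A buffer multiplier of this kind is one simple way to hedge against internal testers being more practiced than first-time Workers; the paper's own qualification tasks serve a related calibration role.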
Original language: American English
Title of host publication: CEUR Workshop Proceedings
State: Published - Oct 9, 2018
Event: Augmenting Intelligence with Humans-in-the-Loop: 2nd International Workshop on Augmenting Intelligence with Humans-in-the-Loop, co-located with the 17th International Semantic Web Conference (ISWC 2018) - Monterey, United States
Duration: Oct 9 2018 → …
http://ceur-ws.org/Vol-2169/

Conference

Conference: Augmenting Intelligence with Humans-in-the-Loop
Abbreviated title: HumL@ISWC 2018
Country/Territory: United States
City: Monterey
Period: 10/9/18 → …
Internet address: http://ceur-ws.org/Vol-2169/
