Abstract
Crowdsourcing is a popular means of developing datasets that can be used to build machine learning models, including classification and entity recognition from text as well as tasks related to images, tables, charts, and graphics. As the use of crowdsourcing grows, so do concerns about appropriate and fair compensation of contributors. However, estimating correct compensation levels a priori can be a challenge. In this paper, we describe the input and data that inform considerations of how much to pay workers for various tasks. We give an overview of three separate crowdsourcing tasks and describe how internal testing processes and qualification tasks contribute to the end-user (Worker) experience, and how we attempt to gauge the effort required to complete a task.
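As a rough illustration of the kind of effort-based pay estimate the abstract alludes to, the sketch below derives a per-task reward from a target hourly wage and pilot completion times. The rates, timing source, and function names are assumptions for illustration only and are not taken from the paper.

```python
from statistics import median

def per_task_reward(pilot_times_sec, target_hourly_usd=15.0, min_reward_usd=0.05):
    """Estimate a per-task reward from pilot completion times.

    pilot_times_sec: completion times (seconds) observed during internal
    testing or a qualification round (hypothetical input; the paper's
    actual timing source may differ).
    """
    # Use the median to dampen the effect of outliers, e.g. workers who left the task idle.
    typical_sec = median(pilot_times_sec)
    reward = target_hourly_usd * typical_sec / 3600.0
    # Never pay below a floor, even for very quick tasks.
    return round(max(reward, min_reward_usd), 2)

# Example: a pilot run suggests roughly 90 seconds per item at a $15/hour target.
print(per_task_reward([75, 92, 88, 110, 95]))  # -> 0.38
```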
Original language | American English |
---|---|
Title of host publication | CEUR Workshop Proceedings |
State | Published - Oct 9 2018 |
Event | Augmenting Intelligence with Humans-in-the-Loop: 2nd International Workshop on Augmenting Intelligence with Humans-in-the-Loop, co-located with the 17th International Semantic Web Conference (ISWC 2018), Monterey, United States. Duration: Oct 9 2018 → … http://ceur-ws.org/Vol-2169/ |
Conference
Conference | Augmenting Intelligence with Humans-in-the-Loop |
---|---|
Abbreviated title | HumL@ISWC 2018 |
Country/Territory | United States |
City | Monterey |
Period | 10/9/18 → … |
Internet address | http://ceur-ws.org/Vol-2169/ |