On Benchmarking for Crowdsourcing and Future of Work Platforms

dc.contributor.author Borromeo, Ria Mae
dc.contributor.author Chen, Lei
dc.contributor.author Dubey, Abhishek
dc.contributor.author Roy, Sudeepa
dc.contributor.author Thirumuruganathan, Saravanan
dc.date.accessioned 2022-07-04T06:52:17Z
dc.date.available 2022-07-04T06:52:17Z
dc.date.issued 2019
dc.description.abstract Online crowdsourcing platforms have proliferated over the last few years and cover a number of important domains, ranging from worker-task platforms such as Amazon Mechanical Turk and worker-for-hire platforms such as TaskRabbit to specialized platforms for specific tasks, such as ridesharing services like Uber, Lyft, and Ola. An increasing proportion of the human workforce will be employed by these platforms in the near future. The crowdsourcing community has done yeoman's work in designing effective algorithms for various key components, such as incentive design, task assignment, and quality control. Given the increasing importance of these crowdsourcing platforms, it is now time to design mechanisms that make it easier to evaluate their effectiveness. Specifically, we advocate developing benchmarks for crowdsourcing research. Benchmarks often identify important issues for the community to focus on and improve upon; this has played a key role in the development of research domains as diverse as databases and deep learning. We believe that developing appropriate benchmarks for crowdsourcing will ignite further innovation. However, crowdsourcing, and the future of work in general, is a very diverse field, which makes developing benchmarks much more challenging. Substantial effort is needed, spanning benchmarks for datasets, metrics, algorithms, platforms, and so on. In this article, we initiate a discussion of this important problem and issue a call to arms for the community to work on this initiative.
dc.identifier.doi 10.5281/zenodo.6793148
dc.identifier.uri https://repository.upou.edu.ph/handle/20.500.13073/276
dc.language.iso en
dc.publisher Institute of Electrical and Electronics Engineers
dc.title On Benchmarking for Crowdsourcing and Future of Work Platforms
dc.type Article
Files
Original bundle:
2019 RBorromeo- On Benchmarking for Crowdsourcing.pdf (Adobe Portable Document Format, 84.12 KB)
License bundle:
license.txt (item-specific license agreed to upon submission, 1.68 KB)