Crowdsourcing for information extraction and for peer review

Posted by ginger on Jan 22, 2016 in citizen science, collective intelligence, collective wisdom, community intelligence, community-annotation, crowdsourcing, mark2cure, open science
If it hasn’t been evident enough from the Gene Wiki activity, or BioGPS’s community-expandable resources (plugins), or our citizen science initiative, Mark2Cure, we’re very enthusiastic about crowdsourcing and community-based resources here in the Su Lab.
If you’ve been following Mark2Cure, you may know that our beta experiment focused on named entity recognition (NER) for disease mentions and that our current campaign looks for three concept types. What you might not know is how paid microtask platforms like Amazon Mechanical Turk and CrowdFlower have played an integral role in the development of Mark2Cure. Before committing Max’s programming prowess to building Mark2Cure for citizen scientists, we first do a lot of testing on paid microtask platforms. Ben spearheaded the NER efforts in Amazon Mechanical Turk for over a year before the beta site of Mark2Cure was even launched, and one of our graduate students, Toby, has pioneered the relationship extraction work through CrowdFlower.
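To make the NER microtask concrete, here is a minimal, purely illustrative sketch (not Mark2Cure’s actual code) of one common way to combine crowdsourced annotations: several workers highlight the character spans they believe are disease mentions, and spans that reach a vote threshold are kept. The sentence, the worker annotations, and the `aggregate_spans` helper are all hypothetical.

```python
from collections import Counter

def aggregate_spans(worker_annotations, min_votes=2):
    """Keep each (start, end) span marked by at least `min_votes` workers."""
    votes = Counter(span for spans in worker_annotations for span in spans)
    return sorted(span for span, count in votes.items() if count >= min_votes)

sentence = "NGLY1 deficiency is a rare genetic disorder."

# Three hypothetical workers highlight spans they think are disease mentions.
workers = [
    [(0, 16)],            # worker A: "NGLY1 deficiency"
    [(0, 16), (22, 43)],  # worker B: also marks "rare genetic disorder"
    [(0, 16)],            # worker C
]

print(aggregate_spans(workers))  # [(0, 16)] — only the span 2+ workers agree on
```

Majority voting like this is the simplest aggregation scheme; real platforms typically also weight workers by an estimate of their reliability.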
We don’t talk about Toby much in Mark2Cure, but he is the trailblazer who explores the scientific terrain before Mark2Cure even gets close. As a talented member of the Su Lab, he’s a great speaker, does incredible research, and is a strong proponent of open science and crowdsourcing. In fact, Toby is so committed to open science and crowdsourcing that he’s crowdsourcing peer reviews to improve his research proposal on an open science platform: Thinklabs. His proposed research will serve as a demonstration of what crowdsourcing and machine learning can accomplish in the field of information extraction.
If you are also committed to open science or to opening up the peer review process, please help demonstrate that open science works by reviewing his proposal on Thinklabs. You might earn yourself a reward in doing so, but more importantly, you’ll demonstrate your support for open platforms and show that open science does not have to be detrimental to scientists-in-the-making.