Computers are really good at answering questions that have single, verifiable answers. Humans, however, are better at addressing subjective questions that require a deeper, multidimensional understanding of context, something computers aren't trained to do well. Questions can take many forms: some include multi-sentence elaborations, while others express simple curiosity or a fully developed problem.
Unfortunately, it’s hard to build better subjective question-answering algorithms because of a lack of data and predictive models. That’s why the CrowdSource team at Google Research, a group dedicated to advancing NLP and other types of ML science via crowdsourcing, has collected data on a number of these quality scoring aspects.
Competitors use this new dataset to build predictive algorithms for different subjective aspects of question answering. The question-answer pairs were gathered from nearly 70 different websites in a "common-sense" fashion.
Submissions to this Challenge must be received by 11:59 PM UTC on February 3, 2020.