From: Automatic question generation and answer assessment: a survey
Source | Dataset | Purpose |
---|---|---|
Rus et al., 2012 | QGSTEC | Automatic Question Generation |
Jauhar et al., 2015 | TabMCQ | Question Answering, Information Extraction, Question Parsing, Answer-type Identification, and Lexical Semantic Modeling |
Rajpurkar et al., 2016 | SQuAD | Reading Comprehension: Answer a question posed by humans from a corresponding passage |
Serban et al., 2016 | 30MQA | Question Answering: Generate Question-Answer Pairs from Knowledge Bases |
Nguyen et al., 2016 | MS MARCO | Machine Reading Comprehension and Question Answering |
Lai et al., 2017 | RACE | Machine Comprehension and Question Answering: Evaluating the reading comprehension ability of students |
Trischler et al., 2017 | NewsQA | Machine Comprehension |
Joshi et al., 2017 | TriviaQA | Reading Comprehension, Question Answering over structured Knowledge Bases, and joint modeling of Knowledge Bases and Text |
Welbl et al., 2017 | SciQ | Question Generation and Question Answering |
Liang et al., 2018 | MCQL | Automatic Distractor Generation |
Kočiský et al., 2018 | NarrativeQA | Reading Comprehension |
Chen et al., 2018 | LearningQ | Automatic Educational Question Generation |