Question Generation Challenge
SLTC Newsletter, April 2009
Overview of Question Generation
Question Generation (QG) is a field of research that has only recently begun to receive sustained attention. QG is the process of automatically generating questions from text and is in many ways the inverse of Question Answering (QA). Given free text, the function of a QG system is to generate a question that relates to that text in some way.
Some generated questions might be answerable by the input text. Some deeper questions might not be explicitly answerable and may even lead to further questions. Work has already been done on classifying question types and describing taxonomies.[1,2]
QG can be beneficial for applications that interface with humans using dialogue, such as intelligent tutoring systems and other dialogue-capable systems. Question Generation may be used to create questions for testing students' understanding of a text, or to trigger a deeper understanding of a topic.
Question Generation and Question Answering
In contrast with Question Answering (QA), until recently QG has had a rather low profile in the natural language processing community. On the positive side, QG is making quick leaps forward, partly because of the shared foundations with QA that it can draw on.
A typical approach used by QA systems since the early years of the Text Retrieval Conference's QA track is to automatically construct a search query from a question and submit it to a search engine such as Google or Yahoo. The QA system then automatically analyses the snippets and documents returned by the search engine and finds the specific sentences that are most likely to answer the given question. This approach relies on identifying an answer-sentence template that matches sentences answering the question.
Let's look at how the two fields are similar. To answer a question such as "When did Amtrak begin operations?", the QuALiM QA system searched for sentences of the form "Amtrak began operations in ANSWER" or "In ANSWER, Amtrak began operations" [3]. Some of the current QG systems use the same principle, albeit in reverse: for the sentence "Amtrak began operations in YEAR", a QG system may generate the WHEN question "When did Amtrak begin operations?"
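As a rough illustration of this template-reversal idea (a hypothetical sketch, not any actual challenge system; real systems work over syntactic parses rather than surface strings), a declarative pattern can be inverted into a WHEN question with a simple regular expression:

```python
import re

# Hypothetical surface pattern: "<subject> began operations in <year>".
PATTERN = re.compile(r"^(?P<subject>.+?) began operations in (?P<year>\d{4})\.?$")

def generate_when_question(sentence: str):
    """Invert a matching declarative sentence into a WHEN question.

    Returns None when the sentence does not fit the pattern.
    """
    match = PATTERN.match(sentence.strip())
    if match is None:
        return None
    return f"When did {match.group('subject')} begin operations?"

print(generate_when_question("Amtrak began operations in 1971."))
# -> When did Amtrak begin operations?
```

The same template, read in the other direction, is exactly what a QA system would use to locate "Amtrak began operations in ANSWER" in retrieved text.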
Both QA and QG manipulate the sentences using syntactic and semantic information.
QG Workshop and Challenge
To encourage research in QG, two workshops on the subject have already been held. The Intelligent Tutoring Systems (ITS) community has recognised the benefits of improving technology capable of generating questions, and its members have been enthusiastic participants. This year the Question Generation Shared Task Evaluation Challenge 2010 kicked off, and participants are already engaged in two processing tasks with different scopes: single sentences and entire paragraphs.
For both tasks, three data sources are used: the well-known online encyclopedia Wikipedia, the social question answering site Yahoo! Answers, and the Open University's online educational resource, OpenLearn.
In the single-sentence QG task, participants are given a single sentence and a target question type (e.g. WHO? WHEN? WHERE?). Participating systems generate a question of the target type from the input sentence. Typically we would expect the generated question to be answerable from the input sentence, but this is not an explicit requirement of the task. Here is one example from the single-sentence QG task:
Abraham Lincoln (February 12, 1809 – April 15, 1865), the 16th President of the United States, successfully led his country through its greatest internal crisis, the American Civil War (1861 – 1865).
TARGET QUESTION TYPE: WHEN?
(1) In what year was Abraham Lincoln born?
(2) In what year did the American Civil War commence?
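To give a flavour of what even a naive baseline for this task involves (a hypothetical sketch; actual participating systems rely on full syntactic and semantic analysis), a system might first check whether the input sentence contains a cue for the target question type at all, before attempting generation:

```python
import re

# Crude surface cues for each target question type; a hypothetical
# baseline only, not the challenge's methodology.
TYPE_CUES = {
    "WHEN": re.compile(r"\b\d{4}\b"),                      # four-digit years
    "WHO": re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"),     # capitalised names
    "WHERE": re.compile(r"\b(?:in|at|near) [A-Z][a-z]+"),  # place-like phrases
}

def has_cue(sentence: str, target_type: str) -> bool:
    """Return True if the sentence contains a surface cue suggesting that
    a question of the target type could be generated (and answered) from it."""
    return bool(TYPE_CUES[target_type].search(sentence))

lincoln = ("Abraham Lincoln (February 12, 1809 - April 15, 1865), the 16th "
           "President of the United States, successfully led his country "
           "through its greatest internal crisis, the American Civil War "
           "(1861 - 1865).")
print(has_cue(lincoln, "WHEN"))  # -> True
```

The Lincoln sentence above contains several year mentions, which is why multiple distinct WHEN questions, such as examples (1) and (2), can be generated from it.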
The paragraph QG task provides a complete paragraph of free text. Participants' systems are asked to generate six questions from each paragraph at three different levels of scope: generated questions must relate to the entire paragraph, to multiple sentences, or to a single sentence (or less). The following instance shows the type of input and output expected from this task:
Abraham Lincoln (February 12, 1809 – April 15, 1865), the 16th President of the United States, successfully led his country through its greatest internal crisis, the American Civil War, preserving the Union and ending slavery. As an outspoken opponent of the expansion of slavery in the United States, Lincoln won the Republican Party nomination in 1860 and was elected president later that year. His tenure in office was occupied primarily with the defeat of the secessionist Confederate States of America in the American Civil War. He introduced measures that resulted in the abolition of slavery, issuing his Emancipation Proclamation in 1863 and promoting the passage of the Thirteenth Amendment to the Constitution. As the civil war was drawing to a close, Lincoln became the first American president to be assassinated.
(1) Who is Abraham Lincoln?
(2) What major measures did President Lincoln introduce?
(3) How did President Lincoln die?
(4) When was Abraham Lincoln elected president?
(5) When was President Lincoln assassinated?
(6) What party did Abraham Lincoln belong to?
The results of the challenge will be reported at the 3rd Workshop on Question Generation in June 2010.
The workshop invites the QG challenge participants as well as other researchers to present work pertaining to QG, including novel approaches, methods of evaluation, question taxonomies, data collection, and annotation schemes. The QG community is still growing and all interested parties are welcome to become involved and contribute in some way. It is an exciting and challenging field which brings together researchers from different areas such as question answering, natural language understanding and intelligent tutoring systems.
[1] A. Graesser, V. Rus, and Z. Cai. Question classification schemes. In Proceedings of the Workshop on the Question Generation Shared Task and Evaluation Challenge, 2008.
[2] R. D. Nielsen, J. Buckingham, G. Knoll, B. Marsh, and L. Palen. A taxonomy of questions for question generation. In Proceedings of the Workshop on the Question Generation Shared Task and Evaluation Challenge, 2008.
[3] M. Kaisser and T. Becker. Question Answering by Searching Large Corpora with Linguistic Methods. In Proceedings of the 13th Text REtrieval Conference (TREC), NIST, 2004.
[4] V. Rus and A. C. Graesser (Eds.). The Question Generation Shared Task and Evaluation Challenge, 2009. ISBN 978-0-615-27428-7.