
Question answering

Question answering (QA) is a computer science discipline within the fields of information retrieval and natural language processing (NLP) concerned with building systems that automatically answer questions posed by humans in a natural language.[1]

Not to be confused with Query evaluation.

A question-answering implementation, usually a computer program, may construct its answers by querying a structured database of knowledge or information, usually a knowledge base. More commonly, question-answering systems can pull answers from an unstructured collection of natural language documents.

Some examples of natural language document collections used for question answering systems include:

a local collection of reference texts
internal organization documents and web pages
compiled reports
newswire reports
a set of Wikipedia pages[2]
a subset of World Wide Web pages
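As a concrete illustration of the structured, knowledge-base route described above, the following is a minimal sketch in Python that answers an award-related question by querying the public Wikidata SPARQL endpoint. The identifiers used (Q937 for Albert Einstein, P166 for "award received") are Wikidata's; the script assumes the requests package is installed and is illustrative rather than a description of any particular production system.

import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

# Q937 is Wikidata's identifier for Albert Einstein; P166 is the
# "award received" property. The label service resolves English names.
QUERY = """
SELECT ?awardLabel WHERE {
  wd:Q937 wdt:P166 ?award.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "qa-example/0.1"},
)
response.raise_for_status()
for row in response.json()["results"]["bindings"]:
    print(row["awardLabel"]["value"])  # e.g. "Nobel Prize in Physics"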

Answering questions related to an article in order to evaluate reading comprehension is one of the simpler forms of question answering, since a given article is relatively short compared to the domains of other types of question-answering problems. An example of such a question is "What did Albert Einstein win the Nobel Prize for?" after an article about this subject is given to the system.
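A minimal sketch of this reading-comprehension setting using the Hugging Face transformers library (the model named below is one publicly available SQuAD-tuned checkpoint, chosen here purely for illustration):

from transformers import pipeline

# Extractive question answering: the answer is a span copied out of
# the supplied article text.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

article = ("Albert Einstein was awarded the 1921 Nobel Prize in Physics "
           "for his discovery of the law of the photoelectric effect.")

result = qa(question="What did Albert Einstein win the Nobel Prize for?",
            context=article)
print(result["answer"])  # e.g. "his discovery of the law of the photoelectric effect"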

Closed-book question answering is when a system has memorized some facts during training and can answer questions without explicitly being given a context. This is similar to humans taking closed-book exams.
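A sketch of the closed-book setting with the same library, assuming an instruction-tuned sequence-to-sequence model such as google/flan-t5-base (an illustrative choice). Note that no context passage is supplied, so any correct answer must come from facts absorbed during training:

from transformers import pipeline

# No context is provided: the model must rely on parametric knowledge.
generator = pipeline("text2text-generation", model="google/flan-t5-base")

output = generator("Answer the question: What did Albert Einstein "
                   "win the Nobel Prize for?")
print(output[0]["generated_text"])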

Closed-domain question answering deals with questions under a specific domain (for example, medicine or automotive maintenance) and can exploit domain-specific knowledge frequently formalized in ontologies. Alternatively, "closed-domain" might refer to a situation where only limited types of questions are accepted, such as questions asking for descriptive rather than procedural information. Question answering systems in the context of machine reading applications have also been constructed in the medical domain, for instance related to Alzheimer's disease.[3]

Open-domain question answering deals with questions about nearly anything and can only rely on general ontologies and world knowledge. Systems designed for open-domain question answering usually have much more data available from which to extract the answer. An example of an open-domain question is "What did Albert Einstein win the Nobel Prize for?" asked without giving the system an article about the subject.

Question-answering research attempts to develop ways of answering a wide range of question types, including fact, list, definition, how, why, hypothetical, semantically constrained, and cross-lingual questions.


Another way to categorize question-answering systems is by the technical approach used. There are a number of different types of QA systems, including rule-based, statistical, and hybrid systems.

Rule-based systems use a set of rules to determine the correct answer to a question. Statistical systems use statistical methods to find the most likely answer to a question. Hybrid systems use a combination of rule-based and statistical methods.
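As a deliberately toy sketch of the rule-based approach (the question patterns and fact table below are invented for illustration), a system of this kind reduces to matching the question against hand-written rules:

import re

# Hand-written fact table and rules; a real rule-based system would
# have far larger, expert-curated versions of both.
FACTS = {
    "einstein_nobel": "the photoelectric effect",
    "apollo_rocks": "the Apollo Moon missions",
}

RULES = [
    (re.compile(r"einstein.*nobel prize", re.I), "einstein_nobel"),
    (re.compile(r"(moon|lunar) rocks?", re.I), "apollo_rocks"),
]

def answer(question: str) -> str:
    for pattern, fact_key in RULES:
        if pattern.search(question):
            return FACTS[fact_key]
    return "No rule matched."

print(answer("What did Einstein win the Nobel Prize for?"))
# -> the photoelectric effect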

History

Two early question answering systems were BASEBALL[4] and LUNAR.[5] BASEBALL answered questions about Major League Baseball over a period of one year. LUNAR answered questions about the geological analysis of rocks returned by the Apollo Moon missions. Both question answering systems were very effective in their chosen domains. LUNAR was demonstrated at a lunar science convention in 1971, where it answered 90% of the in-domain questions posed by people untrained on the system. Further restricted-domain question answering systems were developed in the following years. The common feature of all these systems is that they had a core database or knowledge system that was hand-written by experts of the chosen domain. The language abilities of BASEBALL and LUNAR relied on techniques similar to those of ELIZA and DOCTOR, the first chatterbot programs.


SHRDLU was a successful question-answering program developed by Terry Winograd in the late 1960s and early 1970s. It simulated the operation of a robot in a toy world (the "blocks world"), and it offered the possibility of asking the robot questions about the state of the world. The strength of this system was the choice of a very specific domain and a very simple world with rules of physics that were easy to encode in a computer program.


In the 1970s, knowledge bases were developed that targeted narrower domains of knowledge. The question answering systems developed to interface with these expert systems produced more repeatable and valid responses to questions within an area of knowledge. These expert systems closely resembled modern question answering systems except in their internal architecture. Expert systems rely heavily on expert-constructed and organized knowledge bases, whereas many modern question answering systems rely on statistical processing of a large, unstructured, natural language text corpus.


The 1970s and 1980s saw the development of comprehensive theories in computational linguistics, which led to ambitious projects in text comprehension and question answering. One example was the Unix Consultant (UC), developed by Robert Wilensky at U.C. Berkeley in the late 1980s. The system answered questions pertaining to the Unix operating system. It had a comprehensive, hand-crafted knowledge base of its domain, and it aimed at phrasing the answer to accommodate various types of users. Another project was LILOG, a text-understanding system that operated on the domain of tourism information in a German city. The systems developed in the UC and LILOG projects never went past the stage of simple demonstrations, but they helped the development of theories in computational linguistics and reasoning.


Specialized natural-language question answering systems have been developed, such as EAGLi for health and life scientists.[6]

QA systems are used in a variety of applications, including fact-checking (verifying a fact by posing a question like "Is fact X true or false?"), customer service, technical support, market research, generating reports, and conducting research.

Architecture

As of 2001, question-answering systems typically included a question classifier module that determined the type of question and the type of answer.[7]
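A toy sketch of such a question classifier, mapping surface cues to a coarse expected answer type (the type inventory here is invented for illustration; systems of that era used much richer taxonomies):

def classify_question(question: str) -> str:
    """Return a coarse expected answer type for a question."""
    q = question.lower().strip()
    if q.startswith("who"):
        return "PERSON"
    if q.startswith("where"):
        return "LOCATION"
    if q.startswith("when"):
        return "DATE"
    if q.startswith(("how many", "how much")):
        return "QUANTITY"
    if q.startswith(("what", "which")):
        return "ENTITY"
    return "DESCRIPTION"

print(classify_question("When did the Apollo 11 mission land?"))  # -> DATE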


Different types of question-answering systems employ different architectures. For example, modern open-domain question answering systems may use a retriever-reader architecture: the retriever finds documents relevant to a given question, while the reader infers the answer from the retrieved documents. Systems such as GPT-3, T5,[8] and BART[9] instead use an end-to-end architecture in which a transformer-based model stores large-scale textual knowledge in its parameters. Such models can answer questions without accessing any external knowledge sources.
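A minimal retriever-reader sketch, assuming scikit-learn for a TF-IDF retriever and a publicly available SQuAD-tuned reader model from the Hugging Face transformers library (both choices are illustrative, not the architecture of any particular production system):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

# A tiny document collection standing in for a real corpus.
documents = [
    "Albert Einstein received the 1921 Nobel Prize in Physics for his "
    "discovery of the law of the photoelectric effect.",
    "Marie Curie won Nobel Prizes in both physics and chemistry.",
    "The Apollo missions returned lunar rock samples to Earth.",
]

question = "What did Albert Einstein win the Nobel Prize for?"

# Retriever: rank documents by TF-IDF cosine similarity to the question.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
best_doc = documents[scores.argmax()]

# Reader: extract the answer span from the top-ranked document.
reader = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")
print(reader(question=question, context=best_doc)["answer"])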

Question answering systems have been extended in recent years to encompass additional domains of knowledge.[21] For example, systems have been developed to automatically answer temporal and geospatial questions, questions of definition and terminology, biographical questions, multilingual questions, and questions about the content of audio, images,[22] and video.[23] Current question answering research topics include:

interactivity—clarification of questions or answers[24]
answer reuse or caching[25]
semantic parsing[26]
answer presentation[27]
knowledge representation and semantic entailment[28]
social media analysis with question answering systems[29]
sentiment analysis
utilization of thematic roles[30]
image captioning for visual question answering[22]
embodied question answering[31]


In 2011, Watson, a question answering computer system developed by IBM, competed in two exhibition matches of Jeopardy! against Brad Rutter and Ken Jennings, winning by a significant margin.[32] Facebook Research made their DrQA system[33] available under an open source license; the system uses Wikipedia as its knowledge source.[2] The open source framework Haystack by deepset combines open-domain question answering with generative question answering and supports domain adaptation of the underlying language models for industry use cases.[34][35]

Dragomir R. Radev, John Prager, and Valerie Samn. Ranking suspected answers to natural language questions using predictive annotation. In Proceedings of the 6th Conference on Applied Natural Language Processing, Seattle, WA, May 2000. Archived 2011-08-26 at the Wayback Machine.

John Prager, Eric Brown, Anni Coden, and Dragomir Radev. Question-answering by predictive annotation. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Athens, Greece, July 2000. Archived 2011-08-23 at the Wayback Machine.

Hutchins, W. John; Somers, Harold L. (1992). An Introduction to Machine Translation. London: Academic Press. ISBN 978-0-12-362830-5.

Fortnow, L.; Homer, Steve (2002/2003). A Short History of Computational Complexity. In D. van Dalen, J. Dawson, and A. Kanamori, editors, The History of Mathematical Logic. North-Holland, Amsterdam.

Tunstall, Lewis (5 July 2022). Natural Language Processing with Transformers: Building Language Applications with Hugging Face (2nd ed.). O'Reilly UK Ltd. Chapter 7. ISBN 978-1098136796.

Question Answering Evaluation at TREC

Question Answering Evaluation at CLEF