Task Description and Approaches

TY 0506/2018

MM modified 0609/2018


This work focuses on multiple-choice QA (MCQA). We are interested in understanding whether a QA model can perform better on one MCQA dataset with knowledge transferred from another MCQA dataset.

Multiple-Choice QA

In MCQA, the inputs to the model are a story, a question, and several answer choices. The story, denoted by S, is a list of sentences, where each sentence is a sequence of words from a vocabulary set V. The question and each of the answer choices, denoted by Q and C respectively, are single sentences also composed of words from V. The QA model aims to choose the one correct answer from the multiple answer choices based on the information provided in S and Q.
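As a concrete illustration, the sketch below shows one possible way to represent a single MCQA example in Python; the class, field names, and the toy instance are our own assumptions for exposition, not part of any specific dataset or of the model described here.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MCQAExample:
    """One MCQA instance: a story S, a question Q, answer choices C,
    and (when available) the index of the correct choice."""
    story: List[str]              # S: a list of sentences
    question: str                 # Q: a single sentence
    choices: List[str]            # C: candidate answers, each a single sentence
    label: Optional[int] = None   # index of the correct choice, if known

# Toy instance for illustration only.
example = MCQAExample(
    story=["Tom went to the market.", "He bought three apples."],
    question="What did Tom buy?",
    choices=["oranges", "apples", "bread"],
    label=1,
)
```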

Transfer Learning

The procedure of transfer learning includes two steps.

The first step is to pre-train the model on one MCQA dataset, referred to as the source task, which usually contains abundant training data. The second step is to fine-tune the same model on another MCQA dataset, which is referred to as the target task.

The target task usually contains much less training data.

The effectiveness of transfer learning is evaluated by the model's performance on the target task.
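A minimal sketch of this two-step procedure is given below, assuming a generic PyTorch model that returns one score per answer choice and standard data loaders; all names here (train, source_loader, target_loader, the epoch counts) are placeholders for exposition, not the authors' implementation.

```python
import torch

def train(model, loader, optimizer, epochs):
    """Supervised training loop: the model scores every answer choice and is
    optimized with cross-entropy against the index of the correct choice."""
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for story, question, choices, label in loader:
            optimizer.zero_grad()
            scores = model(story, question, choices)   # shape: (batch, num_choices)
            loss = loss_fn(scores, label)
            loss.backward()
            optimizer.step()

# Step 1: pre-train on the (large) source MCQA dataset.
# train(model, source_loader, optimizer, epochs=pretrain_epochs)
# Step 2: fine-tune the same model on the (small) target MCQA dataset.
# train(model, target_loader, optimizer, epochs=finetune_epochs)
```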

Supervised Transfer Learning

In supervised transfer learning, both the source and target datasets provide the correct answer to each question during pre-training and fine-tuning, and the QA model is guided by the correct answer to optimize its objective function in a supervised manner in both stages.
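For illustration, a standard supervised objective for MCQA (our notation; the exact loss used may differ) is the cross-entropy over answer choices:

$$
\mathcal{L} = -\sum_{(S,\,Q,\,\{C_k\},\,y)} \log \frac{\exp f(S, Q, C_y)}{\sum_{k} \exp f(S, Q, C_k)}
$$

where f(S, Q, C_k) is the model's score for choice C_k and y is the index of the correct answer.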

Unsupervised Transfer Learning

In the unsupervised transfer learning scenario, the correct answer to each question in the target dataset is not available.

To be explicit, the entire process is supervised during pre-training, but unsupervised during fine-tuning.

A self-labeling technique inspired by Lee et al. (2013), Chen et al. (2011), and Wallace et al. (2009) is used during fine-tuning on the target dataset.

The self-labeling technique for unsupervised transfer learning is presented in Algorithm 1 below:

Algorithm 1: Unsupervised QA Transfer Learning

Input: source dataset with the correct answer to each question; target dataset without any answers; number of training epochs.

Output: optimal QA model M*.

1 pre-train the QA model M on the source dataset.

2 repeat

3 for each question in the target dataset, use M to predict its answer.

4 for each question, assign the predicted answer to the question as the correct one.

5 fine-tune M on the target dataset as usual.

6 until the number of training epochs is reached.
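Under the same assumptions as the earlier sketches (a scoring model, the hypothetical MCQAExample structure, and the train helper above), Algorithm 1 could look roughly as follows; this is a sketch of the self-labeling loop, not the authors' code, and it assumes the model has already been pre-trained on the source dataset (step 1).

```python
import torch

def self_label_finetune(model, target_examples, optimizer, build_loader, num_epochs):
    """Unsupervised fine-tuning via self-labeling: the pre-trained model
    repeatedly labels the target questions with its own predictions and is
    then fine-tuned on those pseudo-labels as if they were correct."""
    for _ in range(num_epochs):
        # Steps 3-4: predict an answer for every target question and
        # treat the prediction as the correct label.
        model.eval()
        with torch.no_grad():
            for ex in target_examples:
                scores = model(ex.story, ex.question, ex.choices)
                ex.label = int(scores.argmax())
        # Step 5: fine-tune on the self-labeled target data as usual.
        train(model, build_loader(target_examples), optimizer, epochs=1)
    return model  # M*: the model after the final self-labeling round
```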
