A Benchmark for Few-Shot Task Transfer in Open Domain Dialogue
Intra-dataset task transfer allows task transfer to be studied without confounding effects from domain shift
132 source-target task pairs from 17 total tasks across 2 underlying sets of data
The multitask setting supports numerous approaches, including transfer learning, instruction/prompt fine-tuning, multitask learning, meta-learning, and more.
FETA can be used to study transfer, multi-task, and continual learning, as well as the generalizability of model architectures and pre-training datasets.
FETA is a new benchmark for few-sample task transfer in open-domain dialogue. FETA contains two underlying sets of conversations annotated with 10 and 7 tasks, respectively, enabling the study of intra-dataset task transfer: task transfer without domain adaptation. FETA's two datasets cover a variety of properties (dyadic vs. multi-party, anonymized vs. recurring speakers, varying dialogue lengths) and task types (utterance-level classification, dialogue-level classification, span extraction, multiple choice), and span a wide range of data quantities.
@inproceedings{albalak-etal-2022-feta,
title = "{FETA}: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue",
author = "Albalak, Alon and
Tuan, Yi-Lin and
Jandaghi, Pegah and
Pryor, Connor and
Yoffe, Luke and
Ramachandran, Deepak and
Getoor, Lise and
Pujara, Jay and
Wang, William Yang",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.751",
pages = "10936--10953",
    abstract = "Task transfer, transferring knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models. Dialogue understanding encompasses many diverse tasks, yet task transfer has not been thoroughly studied in conversational AI. This work explores conversational task transfer by introducing FETA: a benchmark for FEw-sample TAsk transfer in open-domain dialogue. FETA contains two underlying sets of conversations upon which there are 10 and 7 tasks annotated, enabling the study of intra-dataset task transfer; task transfer without domain adaptation. We utilize three popular language models and three learning algorithms to analyze the transferability between 132 source-target task pairs and create a baseline for future work. We run experiments in the single- and multi-source settings and report valuable findings, e.g., most performance trends are model-specific, and span extraction and multiple-choice tasks benefit the most from task transfer. In addition to task transfer, FETA can be a valuable resource for future research into the efficiency and generalizability of pre-training datasets and model architectures, as well as for learning settings such as continual and multitask learning.",
}