Meet FETA!

A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue

Why FETA?



UNIQUE SETTING

Intra-dataset task transfer enables the study of task transfer without the confound of domain shift.



LARGE-SCALE

132 source-target task pairs drawn from 17 total tasks on 2 underlying sets of data.



VERSATILE APPROACHES

The multitask setting allows for numerous approaches, including transfer learning, instruction/prompt fine-tuning, multitask learning, meta-learning, and more.



VERSATILE DATA

FETA can be used to study transfer, multitask, and continual learning, as well as the generalizability of model architectures and pre-training datasets.



About FETA


FETA is a new benchmark for few-sample task transfer in open-domain dialogue. FETA contains two underlying sets of conversations upon which there are 10 and 7 tasks annotated, enabling the study of intra-dataset task transfer, i.e., task transfer without domain adaptation. FETA's two datasets cover a variety of properties (dyadic vs. multi-party, anonymized vs. recurring speakers, varying dialogue lengths) and task types (utterance-level classification, dialogue-level classification, span extraction, multiple choice), and span a wide range of data quantities.
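To make this data layout concrete, the sketch below (in Python) shows one way a FETA-style instance could be represented; the class and field names here are illustrative assumptions, not FETA's actual schema or loader API.

    from dataclasses import dataclass

    # Illustrative sketch only: these names are assumptions,
    # not FETA's actual data schema or loader API.

    @dataclass
    class Utterance:
        speaker: str    # recurring name or anonymized ID, depending on the dataset
        text: str

    @dataclass
    class TaskExample:
        task: str       # e.g., utterance-level classification, span extraction, ...
        target: object  # a class label, an answer span, or a multiple-choice answer

    @dataclass
    class Dialogue:
        utterances: list   # dyadic or multi-party, with varying lengths
        annotations: dict  # task name -> list of TaskExample for this conversation

In this sketch, a source-target transfer experiment would train on one task's annotations and then fine-tune on a few samples of another task's annotations over the same underlying conversations, which is what removes domain shift from the comparison.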

About the FETA Challenge


The FETA challenge is being hosted at the 5th Workshop on NLP for Conversational AI (co-located with ACL 2023).
The mission of the FETA challenge is to encourage the development and evaluation of new approaches to task transfer with limited in-domain data. Specifically, FETA focuses on the dialogue domain, motivated by the goal of empowering human-machine communication through natural language.
Important Dates
  • Competition Start: February 15th, 2023
  • Competition End: July 1st, 2023, AoE (Codalab submission deadline)
  • Paper Submission Deadline: July 8th, 2023, AoE
  • Awards and Prizes Announced: July 14th, 2023
Awards
The FETA challenge will award prizes for the highest scores, as well as for innovative approaches. Exact awards are to be determined.
Challenge Details
For more details on the FETA challenge, see here.

Paper


Please cite our paper as follows if you use the FETA dataset.

@inproceedings{albalak-etal-2022-feta,
    title = "{FETA}: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue",
    author = "Albalak, Alon  and
        Tuan, Yi-Lin  and
        Jandaghi, Pegah  and
        Pryor, Connor  and
        Yoffe, Luke  and
        Ramachandran, Deepak  and
        Getoor, Lise  and
        Pujara, Jay  and
        Wang, William Yang",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.751",
    pages = "10936--10953",
    abstract = "Task transfer, transferring knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models. Dialogue understanding encompasses many diverse tasks, yet task transfer has not been thoroughly studied in conversational AI. This work explores conversational task transfer by introducing FETA: a benchmark for FEw-sample TAsk transfer in open-domain dialogue. FETA contains two underlying sets of conversations upon which there are 10 and 7 tasks annotated, enabling the study of intra-dataset task transfer; task transfer without domain adaptation. We utilize three popular language models and three learning algorithms to analyze the transferability between 132 source-target task pairs and create a baseline for future work. We run experiments in the single- and multi-source settings and report valuable findings, e.g., most performance trends are model-specific, and span extraction and multiple-choice tasks benefit the most from task transfer. In addition to task transfer, FETA can be a valuable resource for future research into the efficiency and generalizability of pre-training datasets and model architectures, as well as for learning settings such as continual and multitask learning.",
}

Contact



Have any questions or suggestions? Feel free to contact us via the team email feta.benchmark@gmail.com!