
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets

The paper presents a comprehensive evaluation of ChatGPT on 140 benchmark tasks across diverse fields, highlighting its strengths and weaknesses and evaluating its ability to follow multi-query instructions, ultimately paving the way for practical applications of ChatGPT-like models.

Pathways to semi-(un)supervised* NLP Brain

This talk traces the evolving field of transfer learning, from LSTMs to large language models, and points to new directions in transferability for large language models.

xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval

We introduce xCodeEval, the largest executable multilingual multitask benchmark to date, consisting of 25M document-level coding examples from about 7.5K unique problems, covering up to 17 programming languages with execution-level parallelism.
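
A minimal sketch of how one might stream a split of the benchmark with the Hugging Face `datasets` library; the repository id and configuration name below are assumptions for illustration, not confirmed identifiers from the paper.

```python
# Sketch: streaming one xCodeEval task with Hugging Face `datasets`.
# The repo id "NTU-NLP-sg/xCodeEval" and config "program_synthesis" are
# assumptions; check the official release for the exact identifiers.
from itertools import islice
from datasets import load_dataset

ds = load_dataset("NTU-NLP-sg/xCodeEval", "program_synthesis",
                  split="train", streaming=True)

# Peek at the fields of a few document-level coding examples.
for example in islice(ds, 3):
    print(sorted(example.keys()))
```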

Crosslingual Generalization through Multitask Finetuning

Shows that multitask finetuning enables crosslingual generalization in multilingual language models.
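
To make the behavior concrete: a multitask-finetuned multilingual model can follow an English instruction over non-English input zero-shot. The sketch below assumes the publicly released bigscience/bloomz-560m checkpoint from this line of work; any other multitask-finetuned multilingual checkpoint would do.

```python
# Sketch: zero-shot crosslingual prompting with a multitask-finetuned
# multilingual LM. The checkpoint choice is an assumption for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# English instruction, French input: the task/language pair was not
# explicitly part of the finetuning mixture, yet the model generalizes.
prompt = "Translate to English: Je t'aime."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```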

What Language Model to Train if You Have One Million GPU Hours?

The crystallization of modeling methods around the Transformer architecture has been a boon for practitioners. Simple, well-motivated architectural variations can transfer across tasks and scale, increasing the impact of modeling research. However, …

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts

Over 2,000 prompts for roughly 170 datasets are available through the PromptSource framework.
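
Prompts are exposed programmatically as templates keyed by dataset. The sketch below follows the library's documented usage pattern; "classify_question_first" is just one of the templates available for ag_news.

```python
# Sketch of the documented PromptSource usage pattern.
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

# Grab one example from the ag_news dataset.
example = load_dataset("ag_news", split="train")[1]

# Load all prompt templates written for ag_news and pick one by name.
ag_news_prompts = DatasetTemplates("ag_news")
prompt = ag_news_prompts["classify_question_first"]

# Applying a template yields the filled-in input and target strings.
input_text, target = prompt.apply(example)
print("INPUT:", input_text)
print("TARGET:", target)
```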

Multitask Prompted Training Enables Zero-Shot Task Generalization

T0 shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller!
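
A minimal inference sketch following the public model card usage; bigscience/T0_3B is the smaller released checkpoint (T0pp is the larger one), and the prompt is free-form English.

```python
# Sketch: zero-shot inference with a T0 checkpoint via transformers.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "bigscience/T0_3B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# An unseen task phrased as a natural language prompt.
prompt = ("Is this review positive or negative? "
          "Review: this is the best cast iron skillet you will ever buy")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```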

Nearest Neighbour Few-Shot Learning for Cross-lingual Classification

We propose a transductive approach for few-shot cross-lingual classification.
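
To make the idea concrete, here is a generic nearest-neighbour few-shot classifier over multilingual sentence embeddings; this illustrates the general technique, not the paper's exact transductive procedure, and the encoder checkpoint name is an assumption.

```python
# Sketch: nearest-neighbour few-shot classification in a shared
# multilingual embedding space. Generic illustration only; the
# checkpoint name is an assumption.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# A few labeled English examples (the support set).
support_texts = ["great movie, loved it", "terrible plot, boring",
                 "wonderful acting", "waste of time"]
support_labels = ["pos", "neg", "pos", "neg"]

# Unlabeled target-language queries to classify.
queries = ["Una película maravillosa", "Ce film était ennuyeux"]

S = encoder.encode(support_texts, normalize_embeddings=True)
Q = encoder.encode(queries, normalize_embeddings=True)

# Cosine similarity (dot product of normalized vectors); each query
# takes the label of its nearest support example.
nearest = (Q @ S.T).argmax(axis=1)
for query, idx in zip(queries, nearest):
    print(query, "->", support_labels[idx])
```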

AugVic: Exploiting BiText Vicinity for Low-Resource NMT

We propose AugVic, a data augmentation framework for sequence-to-sequence models (i.e., NMT) that exploits bitext vicinity using a language model.
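
As a rough illustration of LM-based augmentation, not AugVic's actual algorithm, one can generate "vicinal" variants of a sentence by masking tokens one at a time and resampling them with a masked LM; the checkpoint name is an assumption.

```python
# Sketch: generating nearby synthetic variants of a source sentence via
# masked-LM substitution. Generic illustration of LM-based augmentation,
# not AugVic's actual method; the checkpoint name is an assumption.
from transformers import pipeline

fill = pipeline("fill-mask", model="xlm-roberta-base")
sentence = "The quick brown fox jumps over the lazy dog"
tokens = sentence.split()

for i in range(len(tokens)):
    # Mask one position and let the LM propose a replacement.
    masked = " ".join(tokens[:i] + [fill.tokenizer.mask_token] + tokens[i + 1:])
    best = fill(masked, top_k=1)[0]
    print(best["sequence"])  # a vicinal variant of the source sentence
```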

UXLA: A Robust Unsupervised Data Augmentation Framework for Zero-Resource Cross-Lingual NLP

We propose UXLA, a novel data augmentation framework for self-supervised learning in zero-resource transfer learning scenarios.