Multi-task Pretraining

Finetuned Language Models Are Zero-Shot Learners

Location: MICL Lab, Singapore
M Saiful Bari
Senior Research Scientist

At NTU, Singapore. Intern at Amazon Web Services (@awscloud) in 2020, 2021, and 2022. Worked on T0, BLOOMZ, UXLA, and xCodeEval. I train LLMs at SDAIA. Scaling maximalist; training lead and core maintainer of ALLaM.