Invitation to the Second Talk of the Vienna Circuits Series: "Improving the robustness of fine-tuned LLMs"

12.12.2025 13:00

The Vienna Circuits Series continues with a talk by Joe Stacey (University of Sheffield) on December 12, 2025. The NLP Group of the Data Mining and Machine Learning (DM) subunit warmly invites all interested participants.

When? Friday, December 12, 2025, 1:00-2:00 PM (CET)
Where? Online, with a live stream viewing at Kolingasse 14-16, Room 2.38
Zoom Link: https://univienna.zoom.us/j/64543996256?pwd=dmAhRgkbfw4xDDD0JMFphKfwl2f64a.1


Topic: Improving the robustness of fine-tuned LLMs

Abstract
In this talk, Joe will discuss how to improve the robustness of fine-tuned models, focusing on the task of Natural Language Inference. He will present several strategies for improving robustness and explain why debiasing methods are often not the most effective approach. He will then examine the trade-off between in-distribution and out-of-distribution performance when fine-tuning LLMs, showing how strategically selecting the training data can improve out-of-distribution performance. Finally, he will discuss outstanding questions and possible future directions for understanding and improving model robustness.

Speaker
Joe Stacey is a postdoctoral researcher at the University of Sheffield, working on uncertainty quantification under the supervision of Nafise Moosavi, Benjamin Heinzerling, and Kentaro Inui. He is a former Apple AI/ML Scholar and completed his PhD at Imperial College London under the supervision of Marek Rei, focusing on improving the robustness and interpretability of NLI models. Before his PhD, Joe worked as a strategy consultant and as a mathematics teacher in a challenging school in Birmingham.

 

If you would like to be updated about future talks in the Vienna Circuits series, you can subscribe to the series' mailing list.

Portrait of Joe Stacey (© Joe Stacey)