Roots, Repercussions and Rectification of Bias in NLP Transfer Learning by David Hopes & Benjamin Ajayi-Obe

The popularization of large pre-trained language models has resulted in their increased adoption in commercial settings. However, these models are usually pre-trained on raw, uncurated corpora that are known to contain a plethora of biases. In real-world situations, this often results in undesirable model behaviour that can cause societal or individual harm. In this talk, we explore the sources of this bias, as well as recent methods of measuring and mitigating it.

