Gender bias in AI is a prevalent issue that impacts many aspects of our lives, from the design of products to the services offered to each gender. AI systems often reflect and amplify existing gender biases and stereotypes in society.
Even well-designed ML algorithms can exhibit gender bias and produce sexist outputs. State-of-the-art machine translation shows this clearly: translate a sentence about a nurse and the output defaults to "she"; translate one about a doctor and it defaults to "he".

Many of these challenging problems are ones we implicitly face but tend to ignore. Small biases in individual minds add up to large disparities across the world.

One significant contributor to gender bias in ML is biased training data, which can reinforce and perpetuate stereotypes. Large language models (LLMs), currently a major focus of attention, illustrate this well: when resolving an ambiguous pronoun, they often pick the referent that matches a gender stereotype. Because LLMs are trained on vast amounts of text from the internet, which reflects historical biases and societal norms, these models may inadvertently learn and reproduce the gender biases present in their training data.
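The mechanism can be illustrated with a deliberately tiny sketch. The "corpus" below is hypothetical data invented for illustration; it mimics, in miniature, the profession-pronoun skew found in web-scale text, and shows how a purely statistical model trained on such data ends up defaulting nurses to "she" and doctors to "he".

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed "training corpus" (illustration only),
# mimicking the gender imbalance found in real web-scale text.
corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("doctor", "he"), ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
]

# Count how often each pronoun co-occurs with each profession.
counts = defaultdict(Counter)
for profession, pronoun in corpus:
    counts[profession][pronoun] += 1

def likely_pronoun(profession):
    """Pick the most frequent pronoun for a profession -- effectively what a
    statistical model trained on this corpus learns to do."""
    return counts[profession].most_common(1)[0][0]

print(likely_pronoun("nurse"))   # the skewed data yields "she"
print(likely_pronoun("doctor"))  # the skewed data yields "he"
```

Nothing in the code mentions gender explicitly; the bias comes entirely from the frequencies in the data, which is exactly why biased corpora produce biased models.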

Addressing gender bias in ML requires a multi-faceted approach: improving dataset diversity, making algorithms more transparent and interpretable, and applying fairness-aware techniques during model development. There is also a growing need for ethical guidelines and regulations governing the deployment of ML systems, ensuring accountability and transparency in their decision-making. In this session, we will examine this problem and explore ways to address it.
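As one concrete example of a fairness-aware technique, a common first diagnostic is to measure the gap in positive-outcome rates between groups (demographic parity difference). The numbers below are hypothetical, chosen only to show the computation:

```python
# Hypothetical model decisions for two groups (illustrative numbers only):
# 1 = positive outcome (e.g. loan approved), 0 = negative.
preds_group_a = [1, 1, 1, 0, 1]  # e.g. applicants of one gender
preds_group_b = [1, 0, 0, 0, 1]  # e.g. applicants of another gender

def positive_rate(preds):
    """Fraction of predictions that are positive for a group."""
    return sum(preds) / len(preds)

# Demographic parity difference: the gap in positive-outcome rates.
# 0 means parity; larger values indicate disparate treatment.
dp_diff = abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))
print(dp_diff)  # 0.8 - 0.4 = 0.4 here
```

Metrics like this are only a starting point, but they make disparities measurable, which is a prerequisite for the mitigation and accountability steps discussed above.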

Technical Level: High-level / overview

Session Length: 15 minutes