Bias in AI systems is a growing concern, with real-world consequences that affect critical areas such as workplace evaluations and healthcare outcomes. In this joint talk, Danishta and Dimitra from Intel will go beyond theoretical discussions and focus on the practical implications of bias, showing how AI tools like ChatGPT and Whisper can unintentionally perpetuate gender, racial, cultural, and language biases in everyday contexts. Through interactive demonstrations, we will explore how AI can influence areas such as employee reviews, cross-cultural communication, and even speech transcription, highlighting the need for continuous monitoring and adjustment to ensure fairness.
For instance, when using Whisper to transcribe speech from various accents, we observed that English words like “hello,” when spoken with an Indian or Chinese accent, were transcribed in Hindi or Mandarin script, respectively. This type of bias arises when the model’s language detection conflates accented phonetics with the language most commonly associated with that accent, so English speech is decoded as if it were Hindi or Mandarin, leading to transcription errors and misrepresentation. Such cases illustrate how accent and language biases can affect the usability and accuracy of AI tools.
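To make the failure mode concrete, here is a minimal sketch of the kind of demonstration described above, assuming the open-source openai-whisper package and a hypothetical recording accented_hello.wav (not a file from the talk). Pinning the expected language is one simple guard against this script confusion.

```python
# pip install openai-whisper
import whisper

# Load a small pretrained model; larger models tend to mishandle
# accented speech less often, but the failure mode is the same.
model = whisper.load_model("base")

# "accented_hello.wav" is a hypothetical clip of English spoken
# with an Indian accent.
auto = model.transcribe("accented_hello.wav")
# Auto-detection may report "hi" and emit Devanagari script here.
print(auto["language"], auto["text"])

# Mitigation: pin the expected language so language detection
# cannot reroute English speech into another script.
pinned = model.transcribe("accented_hello.wav", language="en")
print(pinned["text"])
```

Pinning the language only helps when the expected language is known in advance; for open-domain transcription, monitoring the detected-language distribution across speaker groups is a complementary check.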
The session will also provide actionable strategies to mitigate bias, such as using diverse and representative datasets, incorporating human-in-the-loop review, and tracking task-specific metadata to surface hidden biases. Attendees will learn practical methods for evaluating and monitoring models, from verifying that datasets reflect the right distribution of groups to gatekeeping models so biased outputs are caught before deployment.
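As one illustration of the dataset-distribution check mentioned above, the following sketch flags underrepresented groups in a training-data manifest using pandas; the accent column, sample values, and 15% threshold are illustrative assumptions, not material from the talk.

```python
import pandas as pd

# Hypothetical training-data manifest with per-sample metadata.
# Tracking metadata like speaker accent is what makes this audit possible.
df = pd.DataFrame({
    "accent": ["indian", "chinese", "us", "us", "us", "uk", "us", "us"],
})

# Compare each group's share of the data against a
# minimum-representation threshold (assumed here to be 15%).
MIN_SHARE = 0.15
shares = df["accent"].value_counts(normalize=True)
for group, share in shares.items():
    flag = "UNDERREPRESENTED" if share < MIN_SHARE else "ok"
    print(f"{group:8s} {share:.0%}  {flag}")
```

A check like this can run as a gate in a data pipeline, blocking training runs whose input distribution drifts below the agreed representation targets.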
Participants will leave with a deeper understanding of how bias can manifest in AI systems and practical tools to prevent, detect, and mitigate it, ensuring that AI is used responsibly and ethically in diverse real-world applications.
Technical level: High level / overview
Session Length: 40 minutes