Understanding and ensuring fairness in our machine learning process is essential to Bumble’s mission: to create a world where all relationships are healthy and equitable. To build a platform where people can find and form meaningful connections, we need to balance user preferences with recommendation fairness. When using machine learning to support our products, we must also anticipate the potential biases that may arise and address them accordingly. We approach this by creating guidelines and a framework for evaluating bias and fairness, which lay the foundation for how we deal with potential biases.
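As a minimal sketch of what one such fairness evaluation can look like (the metric, data, and function names here are illustrative assumptions, not Bumble's actual framework), a common starting point is to compare how much exposure a recommender gives to different groups:

```python
# Hypothetical illustration of an exposure-parity check: compare the
# fraction of recommendation slots given to each demographic group.
# All group labels and data below are made up for this sketch.
from collections import Counter

def exposure_rates(recommendations, group_of):
    """Fraction of recommendation slots occupied by each group."""
    counts = Counter(group_of[item] for item in recommendations)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Toy data: which group each recommended profile belongs to.
group_of = {"a": "g1", "b": "g1", "c": "g2", "d": "g2"}
recs = ["a", "b", "a", "c"]  # slots shown to one user

rates = exposure_rates(recs, group_of)
# A large gap between groups flags a potential bias worth investigating.
gap = max(rates.values()) - min(rates.values())
```

In practice such a metric would be aggregated over many users and sessions, and a large gap would trigger deeper investigation rather than an automatic conclusion of unfairness.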