Location: CSIRO’s Data61
Closing Date: 31 August 2023
Duration: 3 years
Machine Learning (ML) is increasingly used to inform high-stakes, human-centric decisions, including credit scoring and sentencing in the judicial system. But ML algorithms are well known to exhibit biases that disadvantage people of certain ethnicities and genders. Fairness research has contributed more equitable algorithms and has increased our understanding of different bias sources. However, an understanding of how these bias sources combine to produce the overall bias is still lacking. This project aims to disentangle ML bias into algorithmic bias and data bias, focussing on intersectional subgroups (for example, the combination of ethnicity and gender, rather than either attribute alone). The techniques developed can then be used to analyse biases in threatened species management and human–machine collaboration.
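To illustrate what measuring bias at the level of intersectional subgroups can look like, here is a minimal sketch that computes a demographic-parity gap per (ethnicity, gender) combination. All data and column names are synthetic and hypothetical; this is one common fairness measure, not the project's actual method.

```python
# Sketch: surface intersectional bias by comparing a model's
# positive-prediction rate in each (ethnicity, gender) subgroup
# against the overall rate. All data below is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "ethnicity": rng.choice(["A", "B", "C"], size=n),
    "gender": rng.choice(["F", "M"], size=n),
    # Stand-in for a classifier's binary decisions (e.g. "approve credit").
    "prediction": rng.integers(0, 2, size=n),
})

overall_rate = df["prediction"].mean()

# Rates are computed per intersectional subgroup, not per single attribute:
# a model can look fair on ethnicity and gender separately while still
# disadvantaging one specific combination of the two.
subgroup_rates = df.groupby(["ethnicity", "gender"])["prediction"].mean()

# Demographic-parity gap: how far each subgroup sits from the overall rate.
gaps = (subgroup_rates - overall_rate).rename("parity_gap")
print(gaps.sort_values())
```

On real data, large negative gaps for a particular combination would flag a subgroup the model disadvantages; disentangling how much of that gap stems from the data versus the algorithm is the harder question this project targets.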
For more information and to apply, click here.