One Antiracist Action You Can Take Today: Follow Women Working To Make AI Ethical, Equitable, & Accountable

Dr. Joy Buolamwini noticed that the facial recognition software she was working with could not detect her face, and went on to uncover that the software had never been trained on the full range of human skin tones.

Dr. Timnit Gebru coauthored papers showing that facial recognition is less accurate at identifying women and people of color, and that language models trained on large amounts of text will learn the biases baked into that text unless interventions are made.
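The facial recognition finding rests on a simple auditing technique: report accuracy separately for each demographic subgroup rather than as a single aggregate number, which can hide large gaps. Below is a minimal sketch of that kind of disaggregated evaluation; the record format, group labels, and numbers are hypothetical illustrations, not data or code from any published audit.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute accuracy separately for each demographic subgroup.

    `records` is an iterable of (subgroup, predicted, actual) tuples --
    a hypothetical format chosen for illustration only.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, predicted, actual in records:
        total[subgroup] += 1
        if predicted == actual:
            correct[subgroup] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy data: the overall accuracy looks fine, but one group fares far worse.
results = [
    ("lighter-skinned men", "match", "match"),
    ("lighter-skinned men", "match", "match"),
    ("darker-skinned women", "no match", "match"),
    ("darker-skinned women", "match", "match"),
]
print(disaggregated_accuracy(results))
# {'lighter-skinned men': 1.0, 'darker-skinned women': 0.5}
```

In audits like this, a single aggregate score can look respectable while particular groups fare much worse, which is the pattern the research documented.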

As outlined in the paper on language models:

Large language models are also trained on exponentially increasing amounts of text. This means researchers have sought to collect all the data they can from the internet, so there’s a risk that racist, sexist, and otherwise abusive language ends up in the training data.

An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms.

It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

Moreover, because the training data sets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,” the researchers conclude. “While documentation allows for potential accountability, […] undocumented training data perpetuates harm without recourse.”
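The accountability the researchers call for starts with concrete record-keeping: noting where training text came from, when it was collected, what was filtered out, and who is missing from it. A minimal sketch of what such a documentation record might contain is shown below; the class name and fields are hypothetical illustrations, not a standard taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDocumentation:
    """A hypothetical, minimal record of how a text corpus was assembled.

    The fields are illustrative only; they are not drawn from the paper
    or from any particular documentation standard.
    """
    name: str
    sources: list = field(default_factory=list)    # where the text came from
    collection_dates: str = ""                     # when it was gathered
    languages: list = field(default_factory=list)  # whose speech is represented
    filtering: str = ""                            # what was removed, and why
    known_gaps: str = ""                           # who is under-represented

corpus_doc = DatasetDocumentation(
    name="example-web-corpus",
    sources=["web crawl", "online forums"],
    collection_dates="2019-2020",
    languages=["English"],
    filtering="Keyword blocklist only; no review of slurs in context.",
    known_gaps="Little text from communities with limited internet access.",
)
print(corpus_doc)
```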

Learn more about these incredible women at the links below and support their work.

https://blackinai.github.io/#/
https://www.ajl.org/
https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
https://youtu.be/UG_X_7g63rY
