At least not from the appearance of things. Google summarily fired Timnit Gebru, one of its lead AI Ethics researchers and one of the few Black women in a leadership position at the company. Her sin? Producing academic research critical of biases in AI:
“The email and the firing were the culmination of about a week of wrangling over the company’s request that Gebru retract an AI ethics paper she had co-written with six others, including four Google employees, that was submitted for consideration for an industry conference next year, Gebru said in an interview Thursday. If she wouldn’t retract the paper, Google at least wanted the names of the Google employees removed. Gebru asked Google Research vice president Megan Kacholia for an explanation and told her that without more discussion on the paper and the way it was handled she would plan to resign after a transition period. She also wanted to make sure she was clear on what would happen with future, similar research projects her team might undertake. …. The paper called out the dangers of using large language models to train algorithms that could, for example, write tweets, answer trivia and translate poetry, according to a copy of the document. The models are essentially trained by analyzing language from the internet, which doesn’t reflect large swaths of the global population not yet online, according to the paper. Gebru highlights the risk that the models will only reflect the worldview of people who have been privileged enough to be a part of the training data”
The company responded by “accepting” her “resignation,” effective immediately (while she was on vacation!).
Let’s be clear: this sort of research is what Gebru does, and she’s very good at it. Before she got to Google, she co-authored a paper showing that commercial facial recognition software misidentified people of color, especially women, at much higher rates than white men. The paper was influential, and it supported efforts like the one that eventually got Amazon to stop selling its facial recognition software to police. Why is Google surprised that she would pursue this topic now? Also, the link between biased training data and crappy outcomes is a thing. A thing that a company like Google ought to care about.
Gebru’s unceremonious firing suggests that Google isn’t seriously committed either to actual critical thinking about AI and ethics, or to dealing with Silicon Valley’s glaring diversity problems. File this one under disappointing, but not surprising.