In recent years, researchers and journalists have highlighted how artificial intelligence sometimes stumbles when it comes to minorities and women. Facial recognition technology, for example, misidentifies dark-skinned women far more often than light-skinned men.
Last week, AI Now, a research group at New York University, released a study about A.I.’s diversity crisis. The report said that a lack of diversity among the people who create artificial intelligence and in the data they use to train it has created huge shortcomings in the technology.
For example, 80% of university professors who specialize in A.I. are men, the report said. Meanwhile, at leading A.I. companies like Facebook, women comprise only 15% of the A.I. research staff while at Google, women account for only 10%.
Furthermore, Timnit Gebru, an A.I. researcher at Google, is cited in the report as saying she was one of only six black people out of 8,500 attendees at a leading A.I. conference in 2016.
The report’s authors believe that A.I.’s poor performance with certain groups could be fixed if a more diverse group of people were involved in the technology’s development. And while tech companies say they are aware of the problem, they haven’t done much to fix it, the report said.
One possible solution is for companies to examine and repair any workplace cultures that are off-putting to women and people of color. Most women, for instance, wouldn’t want to work at a company if they knew it tolerated bigotry and unequal wages between the genders.
Another solution for improving workplace diversity is for companies to be more transparent, which signals to prospective employees their seriousness about the issue. This could include publishing employee compensation figures broken down by race and gender, releasing harassment and discrimination reports that reveal the number of such incidents, and ensuring that executive salaries “are tied to increases in hiring and retention of under-represented groups.”
It’s these types of public steps that could lead to more people of diverse backgrounds working on A.I., the report said, ensuring that the next big A.I. breakthrough benefits everyone.
On a related note, Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the MIT Media Lab who did not work on the report discussed here, has done remarkable work chronicling A.I. bias problems in facial recognition systems. That work earned her a spot on Fortune’s World’s Greatest Leaders list, published last week alongside a number of other techies.
By Jonathan Vanian