AI for Good: Battling Bias Before It Becomes Irreversible
This article was originally published on RBC Royal Bank's innovation & perspective section. Kevin Vuong, Executive Lead, International; Lecturer, University of Toronto, Canada; and Atlantic Dialogues Emerging Leader Alum 2018, contributed to this piece.
From the route we take home to where we go for lunch, each of us has inherent preferences that predispose us to particular decisions.
And while we assume that our choices are rational, we know that isn’t always the case. Sometimes we can’t explain why we prefer one option over another.
Though that might be acceptable when choosing between a Coke and a Pepsi, the stakes are far higher when we ask an artificial intelligence (AI) system to make judgements and predictions about matters of significant human consequence.
Bias persists in every corner of our society, so it should be no surprise that we see it in AI.
More than 50 years ago, computer scientist Melvin Conway observed that how organizations were structured would have a strong impact on any systems they created. This has become known as Conway’s Law and it holds true for AI. The values of the people developing the systems are not just strongly entrenched, but also concentrated.
In Canada, AI talent and expertise are concentrated in three cities: Edmonton, Toronto, and Montreal. Each city has a leading AI pioneer around whom people and funding have coalesced.
Economically, this concentration has led to greater competitiveness and served as a magnet to attract and retain the best AI talent.
And it’s working. At the Canadian Institute for Advanced Research (CIFAR), more than half of its 46 AI research chairs were recruited to Canada. But with this concentration come challenges that Canada and other global AI hubs are struggling with.
These AI tribes are overwhelmingly homogeneous, futurist and author Amy Webb has found. Their members attend the same universities, are affluent and highly educated, and are mostly male.
This finding was reinforced by the World Economic Forum’s most recent Global Gender Gap Report, which found that only 22% of AI professionals globally are female. In Canada, the figure is only marginally better, at 24%.
The gender imbalance is worse when you look at who is studying AI. An Element AI study found that for every woman researching AI, there were on average nine men.
But the nature of how we develop AI is that a system learns what its data shows it and inherits its creators’ unconscious biases.
We see AI’s gender imbalance reflected in the systems being developed. In one study, a visual recognition system was trained on a gender-biased data set that associated the activity of cooking with women 33% more often than with men; the trained model amplified this disparity to 68%.
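The amplification described above can be quantified very simply: compare how strongly an activity is associated with one gender in the training labels versus in the model's predictions. Below is a minimal, self-contained sketch; the function name and the label counts are hypothetical, chosen only to mirror the 33% and 68% disparities the article cites, and are not taken from the study's actual code or data.

```python
# Illustrative sketch of measuring "bias amplification".
# All counts here are hypothetical, chosen to mirror the disparities
# described in the article (33% in training data, 68% in predictions).

def gender_disparity(pairs, activity):
    """How much more often `activity` is labeled 'woman' than 'man',
    as a difference in fractions (ranges from -1 to +1)."""
    genders = [g for a, g in pairs if a == activity]
    woman = sum(1 for g in genders if g == "woman") / len(genders)
    man = sum(1 for g in genders if g == "man") / len(genders)
    return woman - man

# Hypothetical training labels: cooking appears with women 33% more often.
train = [("cooking", "woman")] * 665 + [("cooking", "man")] * 335
# Hypothetical model predictions on the same images: the gap grows to 68%.
preds = [("cooking", "woman")] * 840 + [("cooking", "man")] * 160

print(f"training disparity:  {gender_disparity(train, 'cooking'):+.2f}")  # +0.33
print(f"predicted disparity: {gender_disparity(preds, 'cooking'):+.2f}")  # +0.68
```

The point of the sketch is that amplification is measured against the training data itself: a model that merely reproduced the data's 33% skew would already be biased, but here the skew nearly doubles in the model's output.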
These tribes are also overwhelmingly white.
The AI Now Institute’s report released last month dubbed the situation a “diversity disaster”: only 2.5% of Google’s workforce is black, Facebook and Microsoft are each at 4%, and data on LGBTQ participation in AI is non-existent.
As a multicultural society, Canada should be troubled by the lack of diversity in AI.
From setting insurance rates to policing, health care, student admissions, and more, AI is being integrated ever deeper into society, and this issue grows more urgent every day.
Gartner predicts that in the next few years, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them.
We cannot afford to tiptoe around the uncomfortable conversations about sexism, racism, and unconscious bias, and how they are making their way into AI.