
Artificial intelligence is the future—but it’s not immune to human bias

Opinion: As AI becomes more commonplace in our lives, what can be done to remove the bias that can sneak into those data-driven algorithms?

Kathryn Hume is the vice-president of product and strategy for integrate.ai, an AI enterprise SaaS company that enables companies to transform into customer-centric businesses, driving growth and increased customer satisfaction.

Artificial intelligence is starting to impact nearly every aspect of our daily lives. Machine-learning algorithms, the technology behind contemporary AI, determine what content appears on our Facebook feed and what results are returned when we conduct a Google search. They power product recommendations on Amazon and Netflix, determine airline or event ticket pricing, and influence who receives marketing communications based on likelihood to buy a new product or cancel a service. As traditional businesses adopt AI over the next couple of years, nearly every engagement we have with companies will be mediated by machine-learning algorithms.

One of the most exciting things about AI becoming more commonplace in our lives is the promise of objectivity; after all, at the core of any machine-learning product is data, which carries an aura of absolute rigour and truth. While people can fall prey to multiple cognitive biases—our judgment skewed by emotions like fear of loss or by the simplifying heuristics we rely upon to navigate the world’s endless complexity—data-driven algorithms suggest a future that’s immune to subjectivity, heuristics, and bias. This potential for rational, fair, and dispassionate outcomes is at the core of the belief that AI can deliver us a world of equal opportunity: a future where algorithms replace judges, corporate leadership, loan officers, mortgage brokers, and recruiters, eliminating human bias to enable fair outcomes driven purely by statistics.

If only it were that simple.

As algorithms play an increasingly widespread role in society, automating—or at least influencing—decisions that impact whether someone gets a job or how someone perceives her identity, some researchers and product developers are raising alarms that data-powered products are not nearly as neutral as scientific rhetoric leads us to believe. This is a problem of perception, too: when the myth of scientific objectivity combines with the much-hyped advances that AI portends, public opinion threatens to swing the other way, leading some to believe that it’s the AI technology itself that is acting maliciously.

But the reality is that statistical methods and the algorithms they power face thorny technical challenges. How, then, does bias manifest itself technically in the machine-learning algorithms we see being deployed by Google, Facebook, or even traditional businesses like banks or insurance companies? What should we actually be worried about?

First, let’s define our terms. The umbrella term AI encompasses multiple subfields of machine learning, a field of computer science that, to quote Carnegie Mellon professor Tom Mitchell, studies computer systems that “automatically improve with experience.” The most common machine-learning systems in production today are classification systems that use a technique called supervised learning. In a nutshell, supervised-learning systems infer mathematical functions—like y = mx + b—that represent a relationship in the data (see this Harvard Business Review article for a more thorough explanation). In supervised learning, data scientists start with a set of examples that exhibit an accurate relationship between data points, train a function that does a decent job representing that relationship, and then use this function to make educated guesses about new, unseen data. These systems perform better when the data they classify have distinguishing characteristics: it’s hard to bucket things into categories when there’s too much similarity (or too much difference).
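
To make this concrete, here is a minimal sketch of a supervised-learning workflow in Python using scikit-learn. The loan-repayment framing, the feature names, and every number in it are invented for illustration; they are not drawn from any real system described in this piece.

```python
# A minimal, illustrative supervised-learning workflow (scikit-learn).
# The "loan repayment" framing and all values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labeled examples: each row is (income in $1,000s, years of credit history);
# each label records whether a past applicant repaid a loan (1) or not (0).
X = np.array([[35, 2], [80, 10], [50, 4], [120, 15], [28, 1],
              [95, 12], [40, 3], [70, 8], [33, 2], [110, 20]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])

# Hold out a few examples to check how well the learned function generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Training fits the parameters of a simple function (here, a logistic model)
# so that it reproduces the relationship in the labeled examples.
model = LogisticRegression().fit(X_train, y_train)

# The fitted function is then used to make educated guesses about unseen data.
print(model.predict([[60, 5]]))       # predicted class for a new applicant
print(model.score(X_test, y_test))    # accuracy on the held-out examples
```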

Bias can creep into these supervised machine learning systems in three major ways.

The first happens when a minority population is poorly represented in a data set used to train an algorithm. Take, for instance, the example given by University of California, Berkeley professor Moritz Hardt in his essay, “Approaching Fairness,” in which a luxury hotel offers a promotion to a subset of wealthy white people and a subset of less affluent Black people—the latter of whom are less likely to visit the hotel. As Hardt explains, the hotel may decide that fair advertising would mean showing the same fraction of wealthy whites and less affluent Black people the promotion—extending visibility and opportunity beyond the likely targets. But because supervised-learning algorithms need training data to accurately predict a relationship between an input and an output, the advertiser “might have a much better understanding of who to target in the majority group, while essentially random guessing within the minority.” Personalization breaks down for the Black subpopulation: the hotel gets stuck in the patterns of the past, which are repeated and affirmed in its future.
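
The effect Hardt describes can be reproduced with a few lines of synthetic data. Everything in the sketch below is invented: the group sizes, the features, and the notion of a response label. The point is only that a single model trained on pooled data learns the majority group's pattern well while doing little better than a coin flip on the sparsely represented minority.

```python
# Illustrative only: synthetic data showing how a sparsely represented group
# ends up with near-random predictions. All values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Majority group: 1,000 examples with a clear, learnable signal.
X_major = rng.normal(size=(1000, 3))
y_major = (X_major[:, 0] + 0.5 * X_major[:, 1] > 0).astype(int)

# Minority group: only 20 examples, and the signal follows a different pattern.
X_minor = rng.normal(size=(20, 3))
y_minor = (X_minor[:, 2] > 0).astype(int)

# The advertiser trains one model on the pooled data.
X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])
model = LogisticRegression().fit(X, y)

# Accuracy is high for the majority group and close to chance for the minority.
print("majority accuracy:", model.score(X_major, y_major))
print("minority accuracy:", model.score(X_minor, y_minor))
```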

The second happens when features in data are closely correlated to one another, making it impossible to overcome bias by simply removing information like gender or race from the equation. Imagine a bank evaluating the risk of offering loans. To protect against racial or gender bias, the bank’s data scientists decide to exclude columns in their database with information about race or gender. Such naive attempts at blind fairness often fail because, to again cite Hardt, “even if a particular attribute is not present in the data, combinations of other attributes can act as a proxy.” Postal codes, for example, may be a good proxy for race. These “redundant encodings” must also be considered in issues around privacy: organizations naively assume that by removing personally identifiable information, they are protecting individuals from identification. But given correlations between different features in data, it’s not hard to reverse-engineer an individual from a couple of joined data sets.
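
Here is a small, made-up illustration of the proxy problem: the protected attribute is dropped from the features, but a correlated postal-code column remains, and a simple model can largely reconstruct the removed attribute from it. The group labels, postal codes, and correlation strength below are all synthetic assumptions.

```python
# Illustrative only: a synthetic postal code acting as a proxy for a
# removed protected attribute. All values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# A synthetic protected attribute (0 or 1) and a postal code that correlates
# with it: group 1 mostly lives in codes 100-104, group 0 in codes 200-204.
group = rng.integers(0, 2, size=n)
typical_code = np.where(group == 1,
                        rng.integers(100, 105, size=n),
                        rng.integers(200, 205, size=n))
random_code = rng.integers(100, 205, size=n)
postal_code = np.where(rng.random(n) < 0.9, typical_code, random_code)

# The bank drops the protected column but keeps postal code as a feature.
X_blind = postal_code.reshape(-1, 1).astype(float)

# A simple model recovers the removed attribute from the proxy alone.
proxy_model = LogisticRegression().fit(X_blind, group)
print("accuracy recovering the removed attribute:",
      proxy_model.score(X_blind, group))
```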

The third happens when human judgment and bias are encoded into the training data itself. Remember that supervised-learning algorithms learn by iterating variations of a mathematical function until it does a good enough job representing the relationship between data. To do this, the algorithms start with what’s called a “ground truth”—the labels human users provide to indicate the output that should correspond to an input. While the term “ground truth” suggests objectivity and fact, it’s more accurate to think about it as a human-provided interpretation of consensus—an agreement amongst fallible people that some piece of data should be classified under a given category as an objective truth.
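
As a small sketch of what “ground truth” often looks like in practice, consider labels assembled by majority vote over several human annotators. The image names and votes below are invented; the point is simply that the “truth” a model trains against is a human consensus, disagreements and all.

```python
# Illustrative only: "ground truth" built as a majority vote over human
# annotators. The images and labels are invented.
from collections import Counter

# Three annotators label the same images; they do not always agree.
annotations = {
    "img_001": ["cat", "cat", "cat"],
    "img_002": ["dog", "dog", "cat"],
    "img_003": ["cat", "dog", "dog"],
}

# The label the model treats as objective truth is simply the most common vote.
ground_truth = {
    image: Counter(votes).most_common(1)[0][0]
    for image, votes in annotations.items()
}
print(ground_truth)  # {'img_001': 'cat', 'img_002': 'dog', 'img_003': 'dog'}
```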

Some ground-truth labels do not pose ethical questions, as when workers on Amazon Mechanical Turk, a crowdsourcing marketplace commonly used for data labeling, tag an image as a cat or a dog. But others undoubtedly do, as when images of faces are labeled for criminality or sexual orientation. The language used to describe these socially sensitive systems presents them either as neutral, objective judges of physiognomic fact (as in this study by Xiaolin Wu and Xi Zhang) or as super-intelligent oracles able to detect patterns too subtle for human perception (as in this study by Yilun Wang and Michal Kosinski). Neither depiction helps non-experts appreciate how these systems bake in and amplify human bias and judgment at a scale only possible with automation by large, powerful computational systems.

The machine-learning community has taken notice of these three major ways bias can creep into AI algorithms, and efforts are springing up to address the problem. The Canadian Institute for Advanced Research’s pan-Canadian artificial intelligence strategy includes a goal to address the impact AI will have on society, including efforts to address and set standards around bias. Researchers working under professors Richard Zemel and Toni Pitassi at the University of Toronto and the Vector Institute are actively exploring technical approaches to promote fairness, modifying the math in algorithms to account for underrepresented minorities and hidden correlations. Organizations like the Fairness, Accountability, and Transparency (FAT*) Conference, Data & Society, and the AI Now Institute are bringing together academics, technologists, and policymakers to support pragmatic change. It’s important to recognize that prioritizing ethics takes determination and hard work amid the pressure to deliver against short-term metrics and returns.

Existing research efforts to overcome statistical bias will hopefully result in sound, open-source solutions in 2018. But identifying and extracting bias baked into the way these algorithms are trained will be a delicate and diffuse task. We may ultimately need to reframe how we think about the role many AI tools play in society—viewing them not as neutral tools, but as convex mirrors that quantify our own biases, inequalities, and prejudices. We have the choice to modify the math behind AI algorithms to promote the future we want, but only if we first have the courage to face down the hard truths of our present.
