The Biased AI Problem (and How to Improve AI)
AI has the potential to deliver enormous business value to organizations, and its adoption has been accelerated by the data challenges of the pandemic. Forrester estimates that nearly 100% of organizations will use AI by 2025 and that the AI software market will reach $37 billion that same year.
But there is growing concern about AI biases – situations where AI makes decisions that are consistently unfair to particular groups of people. Researchers have found that AI bias has the potential to cause real damage.
I recently had the chance to speak with Ted Kwartler, VP of Trusted AI at DataRobot, to get his thoughts on how AI biases happen and what companies can do to ensure that their models are fair.
Why AI Bias Happens
AI bias occurs because humans choose the data used by algorithms and also decide how the results of those algorithms will be applied. Without extensive testing and diverse teams, it’s easy for unconscious bias to enter machine learning models. Then, AI systems automate and perpetuate these biased patterns.
For example, a US Department of Commerce study found that facial recognition AI often misidentifies people of color. If law enforcement uses facial recognition tools, this bias could lead to wrongful arrests of people of color.
Several mortgage algorithms at financial services firms also consistently charged Latino and Black borrowers higher interest rates, according to a UC Berkeley study.
Kwartler says the business impact of biased AI can be significant, especially in regulated industries, where missteps can result in fines or damage a company’s reputation. Companies need to find ways to thoughtfully put AI models into production, and to test those models to identify potential biases.
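Such testing can begin with very simple checks. The sketch below is an illustrative example, not DataRobot’s methodology: it compares a model’s approval rates across demographic groups and flags large gaps using the common “four-fifths” rule of thumb. The data, column names, and 0.8 cutoff are all assumptions made for the example.

```python
# Minimal bias check: compare a model's positive-outcome rates across groups.
# Illustrative only -- data, column names, and the 0.8 cutoff are assumptions,
# not DataRobot's methodology. The cutoff follows the "four-fifths" rule of thumb.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive model decisions (1 = approved) within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest group's approval rate to the highest group's."""
    return rates.min() / rates.max()

# Hypothetical model decisions for loan applicants.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = selection_rates(decisions, "group", "approved")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Warning: approval rates differ substantially across groups -- review the model.")
```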
What Better AI Looks Like
Kwartler says that “good AI” is a multidimensional effort across four distinct personas:
● AI innovators: Leaders or executives who understand the business and realize that machine learning can help solve problems in their organization
● AI creators: The machine learning engineers and data scientists who build the models
● AI implementers: Team members integrating AI into existing technology stacks and bringing it into production
● AI consumers: People who use and monitor AI, including legal and compliance teams who handle risk management
“When we work with clients,” says Kwartler, “we try to identify those people in the business and articulate the risks for each of those people a little differently, so they can earn trust.”
Kwartler also explains why “humble AI” is essential. AI models need to be humble when making predictions, so they don’t drift into biased territory.
Kwartler told VentureBeat, “If I rank a banner ad at 50% probability or 99% probability, it’s sort of that midrange. You have a single cutoff: above that line, you have one result; below that line, you have another result. In reality, we are saying that there is a space between the two where you can apply certain caveats, so a human has to go and examine it. We call it humble AI in the sense that the algorithm shows humility when making this prediction.”
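The idea is easy to sketch in code. The snippet below is a minimal illustration, not DataRobot’s implementation: a binary classifier’s predicted probability passes through two thresholds instead of one, and anything in the gray zone between them is escalated to a human reviewer. The threshold values and function names are arbitrary assumptions.

```python
# "Humble AI" sketch: route low-confidence predictions to a human instead of
# forcing an automated yes/no. Thresholds and names are illustrative
# assumptions, not DataRobot's actual implementation.

def humble_decision(probability: float,
                    lower: float = 0.35,
                    upper: float = 0.65) -> str:
    """Map a model's predicted probability to an action.

    Above `upper`  -> automate the positive outcome.
    Below `lower`  -> automate the negative outcome.
    In between     -> the model abstains and a human reviews the case.
    """
    if probability >= upper:
        return "auto_approve"
    if probability <= lower:
        return "auto_reject"
    return "human_review"  # the gray zone where the algorithm shows "humility"

for p in (0.99, 0.50, 0.10):
    print(f"p={p:.2f} -> {humble_decision(p)}")
```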
Why It Is Essential to Regulate AI
According to DataRobot’s State of AI Bias report, 81% of business leaders want government regulation to define and prevent AI bias.
Kwartler believes that thoughtful regulation could dispel many ambiguities and allow companies to move forward and exploit the enormous potential of AI. Regulations are especially important for high-risk use cases such as education referrals, credit, employment, and surveillance.
Regulation is key to protecting consumers as more and more companies integrate AI into their products, services, and decision-making processes.
How to Create Unbiased AI
When I asked Kwartler for his top tips for organizations looking to build unbiased AI, he made several suggestions.
The first recommendation is to educate your data scientists on what responsible AI looks like and how your organizational values should be incorporated into the model itself or the model’s guardrails.
Additionally, he recommends transparency with consumers, to help people understand how algorithms create predictions and make decisions. One of the ongoing challenges with AI is that it is often viewed as a “black box”: consumers can see the inputs and outputs but have no insight into the model’s inner workings. Companies should strive for explainability, so people can understand how an AI system works and how it can affect them.
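One concrete way to open up that black box is to report which inputs most influence a model’s predictions. As a hedged illustration, the sketch below uses scikit-learn’s permutation importance on a toy model; the feature names and data are invented for the example and don’t reflect any particular production system.

```python
# Explainability sketch: measure how much each input feature drives predictions
# by shuffling it and watching accuracy drop (permutation importance).
# The toy data and feature names are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure", "age", "region_code"]  # hypothetical labels

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them degrades the model's score.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>12}: {score:.3f}")
```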
Finally, he recommends companies establish a grievance process for individuals, to give people a way to speak with companies if they feel they have been treated unfairly.
How AI Can Help Save the Planet
I asked Kwartler about his hopes and predictions for the future of AI, and he said he thinks AI can help us solve some of the biggest problems facing human beings right now, especially climate change.
He shared the story of one of DataRobot’s customers, a cement manufacturer, that used a complex AI model to make one of its factories 1% more efficient, helping the factory save around 70,000 tonnes of carbon emissions each year.
But to reach the full potential of AI, we need to make sure that we work to reduce the possible biases and risks that AI can cause.
To stay up to date with the latest trends in data, business and technology, check out my book Data Strategy: How to Profit from a World of Big Data, Analytics and Artificial Intelligence, and be sure to sign up for my newsletter and follow me on Twitter, LinkedIn, and YouTube.