Increasingly, companies are relying on artificial intelligence to perform various business functions, some that only computers can perform and others that humans arguably handle even better. And while it might seem that a computer could perform these tasks without any sort of bias or agenda, leaders in the AI space are increasingly warning that this is not the case.
The concern is widespread enough that the federal government has launched new responsible AI measures, requiring companies to look for these biases and to keep humans involved in overseeing their systems in order to avoid them.
The four pillars of responsible AI
Ray Eitel-Porter, Managing Director and Global Head of Responsible AI at Accenture, explained during a virtual event hosted by Fortune Thursday that the technology consulting firm operates around four “pillars” for implementing AI: principles and governance, policies and controls, technology and platforms, and culture and training.
“The four pillars are basically from our engagement with a number of clients in this space and real recognition of where people are in their journey,” he said. “Most of the time now it’s really about how you take your principles and put them into practice.”
Many companies these days already have a set of AI principles. Policies and controls are the next layer, governing how those principles are put into practice. Technology and platforms are the tools with which you implement them, and culture and training ensures that everyone at every level of the company understands their role, can execute it, and buys into it.
“It’s definitely not just something for a data science team or a technology team,” Eitel-Porter said. “It’s really something that’s relevant to everyone in the business, so culture and training is really important.”
Naba Banerjee, product manager at Airbnb, suggested that a fifth pillar be included: the financial investments needed to make these things happen.
Interestingly, Eitel-Porter said the interest and intent are there, citing a recent Accenture survey of 850 senior executives globally, which found that only 6% had successfully integrated responsible AI into their operational plans, while 77% said it was a top priority for the future.
And going back to Banerjee’s point on investment, the same survey showed that 80% of respondents said they would allocate 10% of their AI and analytics budgets to responsible AI over the next few years, while 45% said they would allocate 20% of their budget to the effort.
“It’s really encouraging because, frankly, without the money it’s very difficult to do these things, and it shows that there’s a very strong commitment from organizations to take the next step … to operationalize the principles through the governance mechanism,” he said.
How companies try to be responsible
Airbnb is using AI to prevent parties at homes listed on its platform, which have become a bigger issue amid the pandemic. One of the ways the company tries to detect this risk is by flagging guests under the age of 25 who book large homes for a single night, on the assumption that those customers may be looking for party venues.
“That seems quite logical, so why use AI?” Banerjee asked. “But when you have a platform with over 100 million guests, over 4 million hosts, and over 6 million listings, and the scale continues to grow, you can’t do it with a set of rules. And as soon as you build a set of rules, someone finds a way around the rules.”
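To make that contrast concrete, here is a minimal, hypothetical sketch of the kind of hand-written rule Banerjee describes; the function name, fields, and thresholds are assumptions for illustration, not Airbnb’s actual logic.

```python
# Hypothetical fixed party-risk rule (illustrative only, not Airbnb's real logic).
def looks_like_party_risk(guest_age: int, is_large_home: bool, nights: int) -> bool:
    """Flag the pattern described in the article: a young guest booking a large home for one night."""
    return guest_age < 25 and is_large_home and nights == 1

print(looks_like_party_risk(guest_age=22, is_large_home=True, nights=1))  # True: booking flagged
print(looks_like_party_risk(guest_age=22, is_large_home=True, nights=2))  # False: rule evaded by booking two nights
```

A fixed rule like this is trivial to sidestep once the thresholds are known, which is the scaling problem that pushed the company toward models that are continually retrained.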
Banerjee said employees were constantly working to train the model to enforce those rules, but it wasn’t perfect.
“When you try to stop bad actors, you unfortunately also catch dolphins in the net,” she said.
That’s when humans in customer service have to step in to resolve issues with individual users of the platform who had no intention of throwing a party but were prevented from booking anyway. Those cases are also used to improve the models.
But robots can’t do everything. Airbnb’s Project Lighthouse, which focuses on preventing discrimination by partnering with civil rights organizations, helps the online homestay marketplace keep humans in the loop. Banerjee said the company’s mission is to create a world where everyone can belong anywhere, and to that end the platform has removed 2.5 million users since 2016 who didn’t comply with community standards.
“Unless you can measure and understand the impact of any kind of system you build to keep the community safe… there’s really nothing you can do about it,” she said.
Project Lighthouse aims to measure and eliminate this discrimination, but it does so without facial recognition or algorithms. Instead, it uses humans to help understand a person’s perceived race while keeping that person’s identity anonymous.
“Where we see a gap between white guests, black guests, white hosts, black hosts, we take action,” she said.
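As an illustration only, and not Project Lighthouse’s actual methodology or data, a gap of the kind Banerjee describes could be measured roughly as follows, with perceived-race labels supplied by human reviewers and identities kept anonymous; the field names and records below are hypothetical.

```python
# Hypothetical sketch: measure a booking-acceptance gap across perceived-race groups.
from collections import defaultdict

# Each record pairs an anonymized booking outcome with a perceived-race label
# assigned by a human reviewer (toy example data, not real measurements).
bookings = [
    {"perceived_race": "white", "accepted": True},
    {"perceived_race": "white", "accepted": True},
    {"perceived_race": "white", "accepted": False},
    {"perceived_race": "black", "accepted": True},
    {"perceived_race": "black", "accepted": False},
    {"perceived_race": "black", "accepted": False},
]

totals, accepted = defaultdict(int), defaultdict(int)
for b in bookings:
    totals[b["perceived_race"]] += 1
    accepted[b["perceived_race"]] += int(b["accepted"])

rates = {group: accepted[group] / totals[group] for group in totals}
gap = rates["white"] - rates["black"]
print(rates)               # acceptance rate per group
print(f"gap = {gap:.2f}")  # a persistent gap is the signal to take action
```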
At Mastercard, artificial intelligence has long been used to prevent fraud on the millions of daily transactions made across the country.
“It’s interesting because at Mastercard, we’re in the business of data and technology. This is the space we’ve been in for many, many years,” said Raj Seshadri, president of data and services at Mastercard.
And the concept of trust is inherent in this work, she added: “What is the intention of what you are doing? What did you hope to accomplish, and what are the unintended consequences?”
But the more data you have, the more discrimination you can avoid when using AI, Seshadri said. For example, small businesses run by women are generally not approved for as much credit, but with more data points it might be possible to reduce gender discrimination.
“It levels the playing field,” Seshadri said.
Biased robots are human creations
Krishna Gade, founder and CEO of Fiddler AI, said biased robots are not sentient creatures with an agenda of their own, but rather the result of flawed human data feeding what we hope will be an improved version of a process.
One difficulty is that machine-learning-based software learns inside a kind of black box, Gade said. It doesn’t work like traditional software, where you can read the code line by line and make fixes, which makes it hard to explain how the AI arrives at its decisions.
“They’re basically trying to infer what’s going on in the model,” Gade said. The data an AI uses to calculate a Mastercard customer’s loan approval, for example, may look causal to the model without being causal in the real world. “There are so many other factors that could influence the current rate.”
At Fiddler AI, users can “play with” a model’s inputs to understand why it behaves the way it does. You could adjust someone’s past debts to see how their credit score would change, for example.
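A rough sketch of that what-if idea follows; it uses a toy scikit-learn model rather than Fiddler’s actual tooling, and the features and numbers are assumptions for illustration.

```python
# Hypothetical what-if probe: perturb one input of a trained model and watch the output move.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [income, past_debt] -> loan approved (1) or not (0).
X = np.array([[80, 5], [60, 40], [90, 10], [40, 35], [70, 20], [55, 45]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

applicant = np.array([[65.0, 30.0]])             # income 65, past debt 30
baseline = model.predict_proba(applicant)[0, 1]  # baseline approval probability

# "Play with" the past-debt input: what would the model predict if the debt were different?
for debt in (10.0, 30.0, 50.0):
    what_if = applicant.copy()
    what_if[0, 1] = debt
    prob = model.predict_proba(what_if)[0, 1]
    print(f"past_debt={debt:4.0f} -> approval probability {prob:.2f} (baseline {baseline:.2f})")
```

Watching a prediction move as a single input changes is a simple way to see which features the model actually leans on, which is the kind of trust-building interaction Gade describes.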
“These types of interactions can build trust with a model,” he said, noting that many industries, such as banking, are asking risk management teams to review their AI processes, but that not all sectors implement these controls.
New government regulations are likely to change that, as many in this industry have called for an AI bill of rights.
“A lot of these conversations are going on, and I think that’s a good thing,” Seshadri said.