The long-term answer to correcting biases in AI systems

A new AI system or tool appears every day.

AI systems are more popular and smarter than ever.

From large language models like GPT-3 to text-to-image models like Dall-E and, more recently, text-to-video systems like Imagen Video – a system Google introduced on October 5 that generates video from a textual description – AI systems have also become more sophisticated.

However, sophistication comes at a cost, according to Chirag Shah, an associate professor at the University of Washington’s Information School.

While the creators of these systems have tried hard to make them smart, they haven't put the same effort into making them fair and just, Shah said. Problems with how the systems learn, and with the data they learn from, often lead to biases.

In this Q&A, Shah discusses approaches to correcting biases in AI technology.

What are some of the ways to address bias issues in AI systems?

Chirag Shah: Are you looking for a quick fix? That can be done pretty quickly. But it doesn't really solve the underlying problem.

For example, if your search results are biased, you might actually detect it and … instead of providing the original results, you re-mix them in a way that provides more diversity, more fairness. That's one kind of fix. But it doesn't change the fact that the underlying system is still unfair and biased. It means you now depend on that extra layer of checking and undoing certain things. If someone wanted to game the system, they could easily do it, and we've seen it happen. It's not a long-term solution.
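As a rough sketch of the kind of quick, post-hoc fix Shah describes, the example below re-mixes an already-ranked result list so that no single group or source dominates the top positions. The group labels, sample data, and round-robin strategy are illustrative assumptions, not any vendor's actual method; note that the underlying ranker is left untouched, which is exactly why this kind of patch is not a long-term solution.

```python
# Hypothetical sketch of a post-hoc "diversity re-ranking" layer: it does not
# retrain the underlying (possibly biased) ranker, it only re-mixes its output.
from collections import defaultdict, deque

def diversity_rerank(results, group_of):
    """Interleave ranked results round-robin across groups.

    results:  list of items, already ordered by the underlying ranker.
    group_of: function mapping an item to a (hypothetical) group label.
    """
    buckets = defaultdict(deque)
    order = []                        # remember groups in first-seen order
    for item in results:
        g = group_of(item)
        if g not in buckets:
            order.append(g)
        buckets[g].append(item)

    reranked = []
    while any(buckets[g] for g in order):
        for g in order:               # take one item per group per round
            if buckets[g]:
                reranked.append(buckets[g].popleft())
    return reranked

# Toy usage: the original ranking heavily favors source "A".
ranked = [("doc1", "A"), ("doc2", "A"), ("doc3", "A"), ("doc4", "B"), ("doc5", "C")]
print(diversity_rerank(ranked, group_of=lambda item: item[1]))
# [('doc1', 'A'), ('doc4', 'B'), ('doc5', 'C'), ('doc2', 'A'), ('doc3', 'A')]
```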

Some of the [long-term fix] recommendations are harsh. For example, one of the ways these systems become biased is that they are, obviously, run by for-profit organizations. The usual players are Google, Facebook and Amazon. They rely on their algorithms to optimize user engagement, which at first glance seems like a good idea. The problem is that people don't engage with things just because they're good or relevant. More often, they interact with content because it carries certain types of emotions, like fear or hate, or certain kinds of conspiracy.

Unfortunately, this focus on engagement is problematic, mainly because the average user engages with things that are often unverified but entertaining. The algorithms basically end up learning that, OK, this is a good thing to show. That creates a vicious cycle.

A longer-term solution is to start breaking that cycle, and it has to happen on both sides. It has to happen on the side of those services, the technology companies that are aiming for higher engagement. They need to start changing the formula for how they view engagement, or optimize their algorithms for something other than engagement.

We also have to do things on the user side, because these tech companies are going to say, "Hey, we only give people what they want. It's not our fault that people want to click on conspiracy theories a lot more. We just surface these things." We need to start doing more on the user side, meaning user education.

These are not quick fixes. It’s basically about changing user behavior – human behavior. It won’t happen overnight.

How willing are vendors to take the long road to solving bias issues in AI systems?

Shah: They don't have a clear incentive to change their engagement formula, or to have their algorithms optimize not for engagement but for authority or authenticity, or for the quality of information and sources. The only way — or the primary way — they will be forced to do this is through regulation.
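As a rough illustration of what optimizing for something other than raw engagement could look like, here is a minimal sketch of a ranking score that blends an engagement prediction with authority and quality signals. The feature names, weights, and formula are hypothetical assumptions for illustration only, not any company's actual algorithm.

```python
# Hypothetical sketch of re-weighting a ranking objective away from pure
# engagement, along the lines Shah describes. All values are illustrative.
from dataclasses import dataclass

@dataclass
class Candidate:
    predicted_engagement: float  # e.g. click/watch probability from a model
    source_authority: float      # e.g. reputation score for the publisher
    content_quality: float       # e.g. fact-check or editorial quality signal

def ranking_score(c: Candidate,
                  w_engagement: float = 0.2,
                  w_authority: float = 0.4,
                  w_quality: float = 0.4) -> float:
    """Blend engagement with authority and quality instead of engagement alone."""
    return (w_engagement * c.predicted_engagement
            + w_authority * c.source_authority
            + w_quality * c.content_quality)

# A sensational but low-quality item no longer automatically outranks
# a less clicky but authoritative one.
clickbait = Candidate(predicted_engagement=0.9, source_authority=0.2, content_quality=0.1)
reliable  = Candidate(predicted_engagement=0.4, source_authority=0.9, content_quality=0.8)
print(ranking_score(clickbait), ranking_score(reliable))  # 0.30 vs 0.76
```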

By regulation, I mean action from different government bodies that have the power to levy fines if companies don't comply. There have to be policies and, you know, regulations.

The European Union actually proposed AI-related regulations last year. The FTC [Federal Trade Commission] here followed, but our policy side is not as strong.

I think we need regulation that recognizes that every time an algorithm mediates the information presented to a user, it is equivalent to that mediator actually producing [the information], because it dictates who sees what and in what order, and that has a significant impact. But we are far from that.

Without the right incentives, will biases in AI systems get worse as more are created?

Shah: It depends. The question is, [are the systems] what we want? This is where some of my colleagues and I would say that, at least in some of these cases, we've gone too far — we've already crossed the line. We got too excited about what the technology could do. We don't ask enough about what technology should do.

There are a lot of cases where you wonder, who's asking for that? There are bigger problems in the world to solve. Why don't we devote our resources to those? So yeah, I think that's the bigger question here.

Editor’s note: This interview has been edited for clarity and conciseness.
