Tech companies say laws to protect us from bad AI will limit “innovation”. Well, good | John Naughton

In May 2014, the European Court of Justice issued a landmark ruling that European citizens had the right to ask search engines to remove links to material lawfully published on third-party websites. This has been popularly but misleadingly described as the “right to be forgotten”; it was in fact a right to have certain published material about the complainant delisted by search engines, of which Google was by far the most dominant. Or, to put it bluntly, a right not to be found by Google.

On the morning the decision was announced, I received a phone call from a relatively senior Google employee I knew. It was clear from his call that the company had been ambushed by the ruling – its expensive legal team clearly had not expected it. It was also clear that his American bosses were incensed by the effrontery of a mere European institution in delivering such a verdict. And when I mildly indicated that I thought it was a reasonable judgment, I was treated to a forceful tirade, the gist of which was that the trouble with Europeans is that they are “hostile to innovation”. At that point the conversation ended and I never heard from him again.

I was reminded of this by the tech companies’ reaction to an EU bill published last month which, when it comes into force in about two years’ time, will enable people who have been harmed by software to sue the companies that produce and deploy it. The new bill, called the AI Liability Directive, will complement the EU’s AI Act, which is set to become law around the same time. The aim of these laws is to stop tech companies releasing dangerous systems: for example, algorithms that boost misinformation and target children with harmful content; facial recognition systems that are often discriminatory; predictive AI systems used to approve or reject loans, or to guide local policing strategies, that are less accurate for minorities; and so on. In other words, technologies that are currently almost entirely unregulated.

The AI Act imposes additional controls on “high-risk” uses of AI that have the most potential to harm people, particularly in areas such as policing, recruitment and healthcare. The new liability bill, according to MIT Technology Review, “would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained.” Tech companies that fail to follow the rules risk EU-wide class actions.

Right on cue, enter the Computer & Communications Industry Association (CCIA), the lobbying outfit that represents tech companies in Brussels. Its letter to the two EU commissioners responsible for the two acts duly raises the alarm that imposing strict liability on tech firms “would be disproportionate and ill-suited to the characteristics of software”. And, of course, it could have a “chilling effect” on “innovation”.

Ah yes. That would be the same innovation that led to the Cambridge Analytica scandal and Russian online interference in the 2016 US presidential election and the UK’s Brexit referendum, and that enabled the live-streaming of mass shootings. The same innovation behind the recommendation engines that radicalised extremists and directed “10 depression pins you might like” to a troubled teenager who subsequently took her own life.

It’s hard to decide which of the CCIA’s two claims – that strict liability is “ill-suited” to software, or that “innovation” is the defining characteristic of the industry – is the more preposterous. For more than 50 years, the tech industry has been granted a latitude extended to no other industry: exemption from legal liability for the innumerable shortcomings and vulnerabilities of its main product, and for the harm those flaws cause.

What is even more remarkable, though, is that tech companies’ claim to be the sole masters of “innovation” has been taken at face value for so long. Now two eminent competition lawyers, Ariel Ezrachi and Maurice Stucke, have called the companies’ bluff. In a remarkable new book, How Big-Tech Barons Smash Innovation – and How to Strike Back, they explain that the only kinds of innovation tech companies tolerate are those that align with their own interests. They reveal how ruthless the companies are in stifling innovations that are disruptive or threatening, whether through pre-emptive acquisition or naked copying, and how their dominance of search engines and social media platforms restricts the visibility of promising innovations that might be competitively or societally useful. As an antidote to tech boosterism, the book will be hard to beat. It should be required reading for everyone at Ofcom, the Competition and Markets Authority and the DCMS. And from now on, “innovation for whom?” should be the first question put to any tech booster who talks to you about innovation.

What I’ve been reading

The web of time
The Thorny Problem of Keeping the Internet’s Time is a fascinating New Yorker essay by Nate Hopper on the genius who, many years ago, created the arcane software system that synchronises the network’s clocks.

Trussed up
Project Fear 3.0 is an excellent blog post by Adam Tooze on criticism of the current Conservative administration.

Advances in technology
Ascension is a thoughtful essay by Drew Austin on how our relationship with digital technology changed between 2019 and 2022.

