What if YouTube stopped making recommendations?
What if a state representative in Texas or Florida asked Instagram not to remove misinformation about vaccines that violated the app’s rules?
Or what if TikTok redesigned its “For You” tab so that content moderators had to approve videos before they appeared?
The Supreme Court this week opened the door to radically different ways of thinking about social media and the internet. The court is set to hear as many as three cases this term about the legal protections social media companies have used to become industry giants, and about how much leeway the companies now have when it comes to speech, entertainment and information online.
Its decisions could be the start of a new reality on the internet, one in which platforms are far more careful about what content they deliver to billions of people every day. Alternatively, the court could create a situation in which tech companies have little power to moderate what users post, undoing years of efforts to limit the reach of misinformation, abuse and hate speech.
The result could make parts of the internet unrecognizable as certain voices grow louder or quieter and information spreads in different ways.
“The key to the future of the internet is being able to find that balance between preserving that participatory nature and improving access to good information,” said Robyn Caplan, principal researcher at Data & Society, a nonprofit organization that studies the internet.
One case the court agreed to hear is about “targeted recommendations,” the suggestions services make to keep people scrolling, clicking, swiping and watching. Tech companies generally can’t be sued simply for allowing people to post problematic content, but in the coming months the court will consider whether that immunity extends to posts the companies themselves recommend.
A second case, involving Twitter, asks how aggressive tech companies must be in stopping terrorists from using their services, and a third case that has yet to be accepted for argument may center on state laws in Texas and Florida that prevent tech companies from removing large swaths of content.
The Supreme Court’s decision to hear the targeted-recommendations case landed like a bombshell in the tech industry on Monday, because the court has never fully considered the question of when companies can be sued over material others post to online services. Lower courts have repeatedly declared companies immune in almost all such cases under a 1996 federal law, Section 230 of the Communications Decency Act.
The recommendations case involves YouTube videos about the Islamic State terror group, but the outcome could affect a wide range of tech companies depending on the court’s decision later this year or next.
“They’re going to see this case as potentially an existential threat,” said University of Washington law professor Ryan Calo.
If tech companies lose their immunity for recommended posts, companies that rely on unvetted user-generated content, such as Instagram and TikTok, may need to rethink how they connect people with content.
“At a minimum, they’ll have to be much, much more careful about what they leave on their platform, or much more careful about what they let their recommendation engines serve people,” Calo said. (A colleague of Calo’s filed the relevant lawsuit, although Calo is not involved in the case.)
The two cases the Supreme Court has agreed to hear, and the third likely pending, present a test of the legal and political might of the tech industry, which has come under increased scrutiny in Washington from lawmakers and regulators but has largely fought off major threats to its massive profits and influence.
In other words, the court could rein in Big Tech in a way Congress has chosen not to.
“What this could do is put more pressure on platforms to give users more transparency about how the recommendation system works, and then control over it,” said Brandie Nonnecke, who studies technology companies as founding director of the CITRIS Policy Lab at the University of California, Berkeley.
“These are largely uncontrolled media systems that deliver content to people in ways you and I don’t understand,” she said.
The Supreme Court’s ruling on targeted recommendations won’t necessarily affect online services that make recommendations but don’t allow user-generated content, like Netflix or Spotify.
Immunity granted by lower courts under Section 230 has helped enable a whole generation of internet businesses, from review sites such as Yelp and Glassdoor, to news websites that allow comments from users, to social media companies that let people post more or less freely. Companies can leave up or delete individual posts largely without fear of lawsuits for defamation or invasion of privacy.
Jeff Kosseff, author of a book on Section 230, “The Twenty-Six Words That Created the Internet,” said the outcome of the Supreme Court case was impossible to predict, but small businesses with fewer resources had the most to lose.
“If the scope of Section 230 were significantly reduced, I think you would see particularly small platforms wondering if they want to take the risk of allowing user content,” he said.
“If you’re a hyper-local news site that allows comments on your stories, and you might not even have defamation insurance, you’re going to think twice about allowing comments,” he said.
The idea of stripping tech companies of immunity for “algorithmic amplification” has been bouncing around for years. Roger McNamee, a venture capitalist and early Facebook investor, proposed it in 2020. Two members of Congress turned the idea into proposed legislation the same year.
When the court hears arguments in the case, it will do so against the backdrop of an internet vastly different from the one that existed in 1996. Back then, the relatively few people who used the internet often did so over dial-up modems, and there were few, if any, recommendation engines on websites.
Tech companies were also in their infancy. Today, American technology companies are among the most valuable companies on the planet.
“In today’s world, the internet is going to do just fine, and it no longer needs that protection,” said Mary Graw Leary, a law professor at Catholic University of America.
Leary said the Supreme Court should consider the broader context of the Communications Decency Act, which also included anti-obscenity provisions designed to protect children from pornography.
“As industries grow and become more and more powerful, and as we become more aware of the extent of harm industries can create, there is more need for regulation,” she said.