The US Supreme Court this week examines a quarter-century-old law that has shielded tech companies from lawsuits and prosecution over content posted by their users, a review that could upend the rules governing the internet.
Enacted when Facebook founder Mark Zuckerberg was just 11 years old and Google’s creation still two years off, Section 230 is seen as a fundamental law of the internet and considered inviolable by its staunch defenders.
Section 230 was part of the Communications Decency Act, an anti-pornography law signed in 1996 that helped set the rules of the road for the internet, then still in its infancy as an online playground for all.
The idea was to protect the then-embryonic internet sector from cascading lawsuits and allow it to flourish, while encouraging tech companies to moderate their content.
At the time, most of the attention went to the limits placed on sexual content, a part of the bill backed by then-president Bill Clinton that was later struck down by the Supreme Court in a landmark case.
But inserted into the legislation was Section 230, which holds that “no provider or user of an interactive computer service shall be treated as the publisher or speaker” of content that comes from an outside party, shielding them from responsibility for it.
This immunity is widely seen as the regulatory tweak that cleared the way for Google search and sowed the seeds of the social media revolution.
Under the protection of Section 230, Facebook, Instagram, Twitter and YouTube became conduits for a global conversation without ever being at risk of a lawsuit from someone offended by a tweet or a controversial video.
The law also protects Wikipedia and classified-ad sites such as Craigslist, whose success would also upend traditional media.
But opponents of the law would like to see platforms held liable for the drug deals, cyberstalking and violent threats that take place on their sites.
To be sure, Section 230 is not the free-speech absolutism endorsed by Elon Musk, the multibillionaire owner of Twitter.
Stung by scandals, big tech companies have hired thousands of workers to moderate their platforms in order to preserve their huge audiences and big advertisers, as well as to avert closer government scrutiny.
But the work is never perfect, and companies still face challenges in policing posts from billions of users.
US courts have regularly upheld Section 230 over its quarter-century of existence, but its powerful backers are worried about the two emotionally charged cases now before the Supreme Court, both involving terrorism.
‘Material support’ for terrorism
In two hearings on Tuesday and Wednesday, the justices will hear arguments brought by families of victims of jihadist attacks who accuse Google and Twitter of having “helped” the perpetrators, the Islamic State group, by publishing its propaganda.
By recommending “ISIS videos to users, Google assists ISIS in spreading its message and thus provides material support to ISIS…”, lawyers for the family of Nohemi Gonzalez said in their legal brief to the court.
Gonzalez, a 23-year-old US citizen, was killed when ISIS gunmen fired into an outdoor crowd at the La Belle Equipe bistro during the November 2015 Paris attacks.
Twenty-eight state governments are also calling for a rethink of Section 230.
“What was enacted as a narrow protection from defamation liability has become an all-purpose license to exploit and profit from harmful third-party conduct,” said a brief from several of those states, including Alabama and California.
The novelty of the case is that the complainants are this time singling out algorithms as the cause of the harm, arguing that the highly sophisticated recommendation systems perfected by the big platforms fall outside the scope of Section 230.
The big tech industry, along with a wide range of other actors, is firmly opposed to revisiting Section 230 and to the latest arguments being put forward.
“If this case alters federal law, companies are likely to respond in one of two ways to protect themselves legally,” warned Matt Schruers, head of the Computer and Communications Industry Association.
“Companies who could muster the resources would over-moderate everything, while others would throw up their hands and not moderate anything,” he said.
In the other case, to be heard on Wednesday, family members of Nawras Alassaf, a Jordanian killed in an Islamic State group attack in Turkey, argue that Twitter did too little to weed out extremist content.
The court is expected to issue its rulings by June 30.