The path from anti-porn laws to Big Tech

How modern digital platforms were shaped by porn.

Photo by Gilles Lambert on Unsplash

Silicon Valley churns out Big Tech like the aristocracy does debutantes: polished, presented to society, ready to juice up the old order. Yet as governments and policymakers awaken from the dreamy high-tech utopia, many don't realise that those same government structures handed communication platforms the self-governing principles they operate by today.

Scientists use the term 'unintended consequences' to describe the unwitting effects of technologies. In this case, however, digital platforms themselves were the unintended consequence of the Communications Decency Act of 1996, a law that primarily aimed to control the flow of pornography but inadvertently opened the Pandora's box of the 21st century.

The 1990s marked the grand debut of the internet for commercial use. With that liberation came the rise of online porn, sparking social-conservative efforts to stem the flow of explicit material. But those efforts ran into an impasse: should platforms be held liable for content posted by third parties, and if so, would the Constitution permit such centralised moderation? Writing it into law, God forbid, could be read as government censorship.

Following such concerns, Representatives Chris Cox and Ron Wyden added Section 230, which states that tech platforms should not be treated as publishers or held to the same standard as newspapers. Instead, they should be regarded as forums for free speech.

Cementing weak regulation into law also shaped how we understand digital communication today. We are quick to denounce a lying politician, but the story is different when the Instagram algorithm exposes us to misinformation or extremist content. We might join Palestine protests and even post an infographic to show support, yet hesitate to ask why Instagram removed pro-Palestinian content. We have learned to hold governments accountable when they fail to uphold liberal and democratic values, but we have been socialised into treating technology as inherently liberal. Section 230 may not have started that, but it certainly gave tech companies the legal protection to cultivate the image.

The EU's Electronic Commerce Directive, much like Section 230, extends this regulatory vacuum beyond the US. Instagram's fiasco in the aftermath of the Euro 2020 final proves just that. i News reported that most of the racist comments posted under English football players' Instagram accounts were still active. When I reported some of those comments, I was told that my request had failed to reach the review team, while the artificial intelligence concluded that the comment I reported "probably doesn't go against community guidelines".

This directly contradicts Instagram's own community guidelines, which define hate speech as "a direct attack against people on the basis of [...] protected characteristics: race, ethnicity, national origin" and more. Social media networks fail to follow their own rules, and they can do so without much accountability because, legally, they are simply not publishers.

When Section 230 was unveiled in 1996, Facebook, Twitter, and Google were still years away from emerging. These platforms built their business models around advertising, and the commercialisation of third-party content creates tricky ground for regulation, since profit underpins their every effort. Platforms can also make up their own moral rules, be that banning Donald Trump, shadow-banning the lab-leak theory, or drawing the line between political interference and political discourse. Should Big Tech have such epistemic capacity?

"We are a young information civilisation that hasn't found its footing in democracy," Shoshana Zuboff said in a recent interview. Yet even tech leaders like Mark Zuckerberg recognise the need for change. At the 2020 Munich Security Conference, the Facebook CEO suggested a regulatory framework sitting somewhere between those for newspapers and telecommunications companies. He further stressed that the state's role is crucial in drawing the line between free speech and harmful content, and that such critical decisions should not be left to tech companies alone.

The European Union is slowly updating its legal framework for Big Tech: Ursula von der Leyen has proposed a new Digital Services Act to modernise the EU's Electronic Commerce Directive.

Considering the U.S. and its long, complicated history with the First Amendment, however, it is hard to say whether the U.S. government in particular could draw that line. On July 22, 2021, Senators Amy Klobuchar and Ben Ray Luján proposed further regulation with the Health Misinformation Act, aimed at tackling coronavirus misinformation online. In theory, platforms would be held liable for the algorithmic amplification of health misinformation, with the term 'health misinformation' defined by federal authorities.

Besides being seriously problematic, the bill doesn't hold much ground. Section 230 was introduced precisely because it was understood that the government cannot, and should not, infringe on the First Amendment and free speech. Constitutional values are deeply ingrained in American society and policymaking, and as long as the document of 1787 dictates policy outcomes, the centralised moderation that even Zuckerberg has hinted at will be difficult to construct.

When Section 230 saw the light of day in 1996, almost no one imagined that Facebook, Google, or anything like them would ever exist. Ironically, the bill that set out to control the flow of pornography also paved the way for how we view the social media networks that now define our reality and our democratic processes.