Texas’s New Social Media Law Will Create a Haven for Global Extremists

Daveed Gartenstein-Ross, Madison Urban, Matt Chauvin

Last Saturday, Payton Gendron murdered 10 Black people at a supermarket in Buffalo, New York, and livestreamed the shooting on Twitch. Twitch pulled the horrific stream within two minutes—but by then, the video had already been shared on Facebook, Twitter, and elsewhere. For Gendron, social media exposure was central to the white supremacist race war he saw himself fighting: In his manifesto, he noted that livestreaming the attack was meant to “increase coverage and spread my beliefs.” To the rest of us, his ability to broadcast his hate crime should show how content moderation on social media platforms, though often maligned and never perfect, serves an important social good.

But a new law in another U.S. state is about to put significant constraints on social media companies’ nascent efforts to rein in extremist content, violent images, and systematic disinformation—with potentially global effects. (Full disclosure: At Valens Global, we have received funding from Meta, the owner of Facebook, for our work on how national security intersects with technology, but we receive funds for this research from many sources.) A new Texas law limiting content moderation will have ripple effects far beyond the state, as hate speech, disinformation, and pro-terrorist content may now find protection in Texas. From there, such content could quickly spread via the borderless internet into other states and countries.

At the core of the Texas bill against “wrongful censorship on social media platforms” is a provision titled “Censorship Prohibited,” which says a social media company cannot “censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on the viewpoint of the user or another person.” The bill also mandates that social media platforms with more than 50 million monthly active users fully publicize their content moderation policies, and it allows users to sue over moderation decisions. The law took effect on May 11, when an appellate court lifted an injunction against it.

The law’s immediate impact is uncertain because of unresolved questions about how it intersects with U.S. federal law. Nonetheless, Texas state legislators have shifted the content moderation landscape, and if their bill takes full effect, it will be a boon to bad actors, including white power groups, jihadis, and foreign governments pushing disinformation. What’s more, legislators in more than a dozen other U.S. states have introduced bills to combat what they decry as social media censorship. Texas may thus be the harbinger of a bigger wave.

The push for anti-censorship laws is rooted in legitimate concerns. Conservatives believe tech companies are hostile to their worldview and disproportionately censor conservative viewpoints. They can point to such examples as companies’ early suppression of the Wuhan, China, lab leak hypothesis about the origins of COVID-19, widely regarded today as one of the two leading possible explanations for the pandemic’s origins; the suppression of the Hunter Biden laptop story before the 2020 election, a story that now appears to have a factual basis; and the deplatforming of former U.S. President Donald Trump by Twitter and Facebook even as other highly controversial world leaders remain free to post. Indeed, tech companies have not always drawn the lines right in suppressing content. But the companies have also defanged numerous risks by implementing new and often customized policies in response to malign uses of their platforms.

In 2014, for example, Twitter, Facebook, and YouTube were critical parts of the Islamic State’s strategy for spreading propaganda and radicalizing followers. The group and its supporters operated more than 46,000 Twitter accounts, posted millions of pieces of content on Facebook, and ran YouTube channels, all of which helped recruit around 42,000 foreign fighters from 120 countries to fight in Syria and Iraq. Although the companies were initially slow to act, Twitter eventually suspended over 1.2 million accounts for terrorist content between August 2015 and December 2017. Facebook followed suit, removing 14.3 million “pieces of content related to [the Islamic State], al-Qaeda, and their affiliates” in the first three quarters of 2018 alone.

This content moderation significantly reduced the number of followers and the amount of content associated with the Islamic State. Even when suspended accounts returned under a similar name, they attracted fewer followers. Given the primacy of social media in the group’s propaganda and recruitment, the crackdown was a major blow to the terrorist network.

Tech companies have applied these policies to other problems, such as the rise in white supremacist terrorism. In 2019, Brenton Tarrant—whom Gendron cited as his main inspiration—massacred 51 people at two mosques in Christchurch, New Zealand, and streamed the shooting on Facebook Live. Within 24 hours, Facebook removed 1.5 million videos of the shootings.

In the months after the Christchurch shootings, Facebook announced new policies to deal with white supremacist terrorism. These included what the company described as a “ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram,” as well as changes to its livestreaming policy. The company said it was innovating to meet the threat of extremists using streaming video on the platform to disseminate violent or hateful content.

Meta has been criticized for responding late to threats, for making policy ad hoc, and for trying to do content moderation on the cheap by letting users report one another. But the company argues that the ability to respond flexibly has been more effective than a one-size-fits-all policy. In response to disinformation about COVID-19, for example, tech companies quickly set up information centers, curated news feeds, and redirected users away from questionable sources. Meta rolled out pop-ups and information centers across Facebook, Instagram, and WhatsApp and removed misinformation that could cause physical harm. The companies’ policies have almost certainly saved lives and safeguarded users’ well-being by suppressing information about dangerous supposed cures, such as drinking bleach and ingesting colloidal silver.

By putting strict limits on the platforms’ flexibility to moderate content, Texas will constrain social media companies’ responses to all kinds of challenges—old and new—far beyond the state’s borders. The looming threat of litigation will have a chilling effect on companies’ content moderation efforts and stymie further innovation. And by requiring companies to open their content moderation playbooks to everyone, the law will in effect give bad actors at home and abroad an instruction manual for circumventing the platforms’ defenses with their propaganda and other schemes.

What’s more, even though the Texas law ostensibly focuses on political viewpoints, its language raises ambiguities that could severely constrain a platform’s ability to police toxic content. That’s because almost all forms of extremism, hatred, and disinformation can be said to constitute a viewpoint. Does the takedown of pro-Islamic State content constitute censorship on the basis of viewpoint? The Islamic State is a proscribed terrorist organization, and thus the new law would allow removing the group’s own posts. But posts merely expressing support for the Islamic State or its ideas are, in most cases, legal. Recruiting or fundraising for the group might be illegal, but proclaiming your love for the Islamic State in a tweet is not.

Indeed, given the United States’ comparatively weak laws against hate speech and incitement, a clever lawyer might construe just about any violent extremist cause as a viewpoint so long as no illegal activity has taken place. If so, a great deal of violent extremist material, hate speech, and discriminatory statements could be protected from moderation under the new law.

The Texas law includes provisions that seem designed to address this issue—but do not actually address it. One section of the law allows the removal of material that “directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of their race, color, disability, religion,” or other factors. But the provision is too narrow to enable moderation of most kinds of hate speech, which generally doesn’t contain direct, explicit, or specific threats of violence.

Ultimately, Texas’s law will impede social media companies from responding adroitly to the evolving tactics of malign actors. The loopholes are so wide that they will be readily exploitable. Content moderation playbooks are far from perfect, but they are continually being refined. Excessive restrictions on moderating social media content tilt the advantage toward those bent on harming people not only with their words but also with their actions.

What is posted in Texas will not stay in Texas, even if other states and countries have less permissive content moderation regimes. For example, someone based in Texas recruiting online for the white supremacist group Atomwaffen Division—which is not a federally proscribed organization and would thus be protected from moderation in Texas—could inspire a lethal attack in California, even if California allows stricter content moderation.

Frustration over companies’ seemingly inconsistent moderation policies is understandable and worthy of much discussion and debate. But Texas is making a ham-fisted attempt at addressing the issue, with consequences that will be felt far and wide. The result will be a boon to various extremists, permitting malign actors to burst through the floodgates that have barely kept them contained.


Daveed Gartenstein-Ross is the CEO of Valens Global and a senior advisor on asymmetric warfare at the Foundation for Defense of Democracies. Twitter: @DaveedGR
