Twitter’s enforcement of its Hateful Conduct Policy starting Dec. 18 spread fear and outrage among users as content and accounts were deleted. Users dubbed the act the #TwitterPurge.

A drive to increase user growth led Twitter to address numerous complaints about harassment, which had driven a significant number of users away from the platform. Twitter sought to resolve the issue by removing the perpetrators, according to USA Today.

The policy supports users by subjecting harassment to consequences ranging from removal of an offending tweet to suspension of the account, depending on the context of the situation and the severity and repetition of the violation. The policy states:

“You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”

Twitter suspended several accounts on this basis Monday, including those of “white nationalist Jared Taylor and his American Renaissance group, and British far-right activists Jayda Fransen and Paul Golding, along with the American Nazi Party,” USA Today said.

The platform also suspended an account from the New Black Panther Party described by the Southern Poverty Law Center as a “virulently racist and anti-Semitic organization” that promotes violent behavior.

Purge Inconsistency

Other groups feel threatened by Twitter’s action.

“Conservatives and right-wingers fear they will be targeted during the purge and are already calling on supporters to swap to platforms like Gab, which is uncensored and dedicated to freedom of speech,” Metro UK said in relation to the shutdown of accounts.

Gab, for the unfamiliar, is an alternative social networking service based in Austin, Texas. Users may write messages of up to 300 characters, called “gabs.”

There’s a lack of trust, or a broken trust, between Twitter and its users, even though the purge was meant as a response to user complaints. This mistrust is evident in the fact that users who do not violate the new policy fear the next purge will target them.

“As a private company, Twitter has no obligation to provide a forum for white nationalist views and is subject only to non-discrimination laws,” George Freeman, executive director of the Media Law Resource Center, said. “The First Amendment is part of the Bill of Rights that protects the individual from government. It doesn’t protect the individual from private companies.”

Without the trust of the user base, no amount of transparency and communication will alleviate the tension because Twitter is legally capable of suspending accounts subjectively with political bias.

“There really are no speech rights accorded to the public that uses the private company,” Freeman said.

Even so, complete transparency and communication from Twitter would allow users to ascertain for themselves the bias (or lack thereof) in enforcement now and in the future.

“Twitter has been criticized for inconsistency in enforcing or providing transparency into its policies, most recently when it explained why it allowed Pres. Trump’s retweets of unverified anti-Muslim videos” that Britain First, an ultranationalist British organization, posted.

Repercussion

This inconsistency contributes to user fear, and Twitter does not gain from alienating users. Matthew Heimbach, founder of the Traditionalist Worker Party (a political party of white separatists), does not contest the purge.

“The more the system tries to make the ideas of nationalism taboo, the more people are going to be interested and seek them out,” Heimbach said. “It’s helping us propagate our message every time they try very clumsily to shut us down.”

Removing content acknowledges that it has the power to influence; deleting tweets or suspending accounts draws attention to them. Then again, the content this policy targets actively harasses Twitter users and thus actively causes harm.

What remains undetermined is whether the company can enforce the Hateful Conduct Policy objectively while users simultaneously perceive it as fair.