Meta’s newly relaxed content moderation policies in the US may lead to a splintered digital community.
The company has rewritten its policies on “hateful conduct”, allowing users on platforms such as Facebook, Instagram and Threads to post statements that would previously have been considered hate speech.
Deleted clauses banning specific derogatory comparisons include those that prohibited likening women to “household objects or property”, Black people to “farm equipment” and transgender or non-binary people to “it”.
The revised guidelines read: “We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird’.”
In a five-minute video on Instagram, sporting a $900,000 Greubel Forsey watch, Meta CEO Mark Zuckerberg recently said: “The recent elections also feel like a cultural tipping point towards, once again, prioritising speech.”
Zuckerberg has replaced the company’s extensive third-party fact-checking programme, which has relied on organisational partners, with a community-driven system modelled on X’s Community Notes, under which users can submit suggested “notes” on other people’s content and certain users then vote on whether the notes are publicly displayed.
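Meta’s announcement does not spell out how the new system will decide which notes are shown. The sketch below is a deliberately simplified, hypothetical visibility rule written for illustration only: the Note class, the thresholds and the function name are all invented, and it is not Meta’s or X’s actual algorithm. X’s open-sourced Community Notes ranking, for instance, rewards agreement among raters who usually disagree rather than applying a simple majority vote.

```python
# Hypothetical, simplified illustration of a community-notes-style
# visibility rule. All names and thresholds are invented placeholders,
# not values used by any real platform.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Note:
    note_id: str
    text: str
    # rater_id -> "helpful" or "not_helpful"
    ratings: Dict[str, str] = field(default_factory=dict)


def is_publicly_displayed(note: Note,
                          min_ratings: int = 5,
                          helpful_threshold: float = 0.7) -> bool:
    """Show a note only once enough raters have weighed in and a
    large-enough share of them found it helpful (arbitrary thresholds)."""
    if len(note.ratings) < min_ratings:
        return False
    helpful = sum(1 for r in note.ratings.values() if r == "helpful")
    return helpful / len(note.ratings) >= helpful_threshold


# Example: a note rated "helpful" by 5 of 6 raters clears both thresholds.
note = Note("n1", "This claim is disputed; see the linked source.")
for i, vote in enumerate(["helpful"] * 5 + ["not_helpful"]):
    note.ratings[f"rater{i}"] = vote
print(is_publicly_displayed(note))  # True
```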
Also removed is the prohibition of “the usage of slurs that are used to attack people on the basis of their protected characteristics” and “content targeting a person or group of people on the basis of their protected characteristic(s) with claims that they have or spread the novel coronavirus, are responsible for the existence of the novel coronavirus, are deliberately spreading the novel coronavirus”.
With that provision gone, it may now be within the rules to state, for example, that Chinese people were responsible for the Covid-19 pandemic.
In August, Zuckerberg said in a letter to Republican Congressman Jim Jordan that the Biden administration in 2021 “repeatedly pressured” his company to take down certain posts related to Covid and the “government pressure was wrong”. This week the CEO said: “We’re going to dramatically reduce the amount of censorship on our platforms.”
The policy change comes soon after Zuckerberg replaced Nick Clegg, the company’s president of global affairs, with Joel Kaplan, who served as US President George W. Bush’s deputy chief of staff from 2006 to 2009 and joined Meta as vice-president of US policy in 2011, when the company was still called Facebook.
Arturo Bejar, a former engineering director at Meta with expertise on curbing online harassment, told AP that he was worried about the changes to Meta’s harmful-content policies: “I shudder to think what these changes will mean for our youth. Meta is abdicating their responsibility to safety, and we won’t know the impact of these changes because Meta refuses to be transparent about the harms teenagers experience, and they go to extraordinary lengths to dilute or stop legislation that could help.”
Google Trends data for the US shows that searches for ways to delete Facebook and Instagram accounts have been on the upswing since Zuckerberg announced the loosening of content moderation policies.
Though European Commission spokesperson Thomas Regnier declined to comment on Meta’s policy announcement for the US, he said during a media conference that Europe wasn’t asking any platform to remove lawful content. “We just need to make the difference between illegal content and then content that is potentially harmful…. There, we ask just platforms to take appropriate risk mitigation measures.”