Creator platforms emerge as a front in misinformation battles

By Max Willens

The deplatforming of Donald Trump that began on Twitter has touched off profound conversations about the role the largest tech platforms play in public discourse.

It was also the first of several dominoes to fall last week, as other platforms cracked down on content and goods associated with the people who stormed the U.S. Capitol on Jan. 6 and suddenly found themselves assessing hate speech and disinformation in a new way.

This past week, Patreon began expanding the list of keywords its automated content monitoring tools and review teams use to look for content that violates its policies, the second time in the past three months it has changed its approach to handling unacceptable content. Back in October, Patreon banned a number of accounts focused on the QAnon conspiracy theory and suspended or warned several others.

“We do not want [Patreon] to be a home for creators who are inciting violence,” said Laurent Crenshaw, Patreon’s head of policy. “We’re going to put the person power and the thought energy into making sure we’re eliminating and mitigating these in the future.”

Patreon also suspended several user accounts last week and is considering banning them from its platform, and it is examining several others to determine whether they, too, should be suspended. Patreon has analyzed hundreds of accounts since it instituted its QAnon policy, a “small number” of the more than 100,000 creators that use Patreon’s platform, a spokesperson said. And most of them are small earners, generating a few hundred dollars per month from patrons.

But they are part of a much larger reckoning the internet is having with dangerous, hateful and harmful speech.

“More and more companies are going to have to invest resources into developing not just community policies but answer questions like, ‘What is misinformation on our platform,’ and ‘How are we going to regulate it?’” said Natascha Chtena, editor in chief of the Harvard Kennedy School’s Misinformation Review. “That takes a lot of resources.”

A platform like Patreon is, almost by design, hoping to welcome and support a broad array of viewpoints and speech, Crenshaw said.

That support of speech disappears when it “reaches the point of real-world harm,” a threshold the country crossed quite suddenly on Jan. 6. Misinformation about the legitimacy of the recent presidential election, for example, which might have been regarded as merely objectionable two weeks ago, now looks dangerous.

“Some of these people we’re talking about, the Ali Alexanders of the world, were seen as merely controversial until last week,” Crenshaw said, referring to one person whose Patreon account is under review (Crenshaw spoke to Digiday Jan. 13).

All of these platforms, broadly speaking, have similar views about what kinds of speech are tolerable, though the level of specificity in their policies varies. Medium, for example, specifically identifies conspiracy theories, as well as scientific and historical misinformation, as categories of speech it does not allow; Substack’s content policy bans content that promotes “harmful or illegal activities” or “violence, …

Source: Digiday