Before entering a mosque in Christchurch, New Zealand, the site of one of the deadliest mass murders in the country’s history, a gunman paused to endorse a YouTube star in a video that appeared to capture the shooting.
“Remember, lads, subscribe to PewDiePie,” he said.
To an untrained eye, this would have seemed like a bizarre detour.
But the people watching the video stream recognised it as something entirely different: a meme.
Like many of the things done before the attack on Friday — including the posting of a 74-page manifesto that named a specific internet figure — the PewDiePie endorsement served two purposes. For followers of the killer’s video stream, it was a kind of satirical Easter egg. (“Subscribe to PewDiePie,” which began as a grassroots online attempt to keep the popular YouTube entertainer from being dethroned as the site’s most-followed account, has morphed into a kind of all-purpose cultural bat signal for the young and internet-absorbed.)
For everyone else, it was a booby trap, a joke designed to lure unsuspecting people and members of the media into taking it too literally. The goal, if there was one, may have been to pull a popular internet figure into a fractious blame game and inflame political tensions everywhere.
In a tweet early on Friday morning, PewDiePie, whose real name is Felix Kjellberg, said, “I feel absolutely sickened having my name uttered by this person.”
New Zealand authorities have identified the suspect as Brenton Harrison Tarrant, 28, but it remains unclear whether he acted alone.
The details that have emerged about the Christchurch shooting — at least 49 people were killed in attacks on two mosques — are horrifying. But one surprising thing about it is how unmistakably online the violence was, and how aware the gunman appears to have been of how his act would be viewed and interpreted by distinct internet subcultures.
In some ways, it felt like a first — an internet-native mass shooting, conceived and produced entirely within the irony-soaked discourse of modern extremism.
The attack was teased on Twitter, announced on the online message board 8chan and broadcast live on Facebook. The footage was then replayed endlessly on YouTube, Twitter and Reddit, as the platforms scrambled to take down the clips nearly as fast as new copies popped up to replace them. In a statement on Twitter, Facebook said it had “quickly removed both the shooter’s Facebook and Instagram accounts and the video,” and was taking down instances of praise or support for the shooting. YouTube said it was “working vigilantly to remove any violent footage” of the attack. Reddit said in a statement that it was taking down “content containing links to the video stream or manifesto.”
Even the language used to describe the attack before the fact framed it as an act of internet activism. In a post on 8chan, the shooting was referred to as a “real life effort post.” An image was titled “screw your optics,” a reference to a line posted by the man accused in the Pittsburgh synagogue shooting that later became a kind of catchphrase among neo-Nazis. And the manifesto — a wordy mixture of white nationalist boilerplate, fascist declarations and references to obscure internet jokes — seems to have been written from the bottom of an algorithmic rabbit hole.
It would be unfair to blame the internet for this. Motives are complex, lives are complicated, and we don’t yet know all the details about the shooting. Anti-Muslim violence is not an online phenomenon, and white nationalist hatred long predates 4chan and Reddit.
But we do know that the design of internet platforms can create and reinforce extremist beliefs. Their recommendation algorithms often steer users toward edgier content, a loop that results in more time spent on the platform and more advertising revenue for the company. Their hate speech policies are weakly enforced. And their practices for removing graphic videos — like the ones that circulated on social media for hours after the Christchurch shooting, despite the companies’ attempts to remove them — are inconsistent at best.
We also know that many recent acts of offline violence bear the internet’s imprint. Robert Bowers, the man charged with killing 11 people and wounding six others at the Tree of Life synagogue in Pittsburgh, was a frequent user of Gab, a social media platform beloved by extremists. Cesar Sayoc, the man charged with sending explosives to prominent critics of President Donald Trump last year, was immersed in a cesspool of right-wing Facebook and Twitter memes.
People used to conceive of “online extremism” as distinct from the extremism that took form in the physical world. If anything, the racism and bigotry on internet message boards felt a little less dangerous than the prospect of Ku Klux Klan marches or skinhead rallies.
Now, online extremism is just regular extremism on steroids. There is no offline equivalent of the experience of being algorithmically nudged toward a more strident version of your existing beliefs, or of having an invisible hand steer you from gaming videos to neo-Nazism. The internet is now the place where the seeds of extremism are planted and watered, where platform incentives guide creators toward the ideological poles, and where people with hateful and violent beliefs can find and feed off one another.
So the pattern continues. People become fluent in the culture of online extremism, they make and consume edgy memes, they cluster and harden. And once in a while, one of them erupts.
In the coming days, we should attempt to find meaning in the lives of the victims of the Christchurch attack, and not glorify the attention-grabbing tactics of the gunman. We should also address the specific horror of anti-Muslim violence.
At the same time, we need to understand and address the poisonous pipeline of extremism that has emerged over the past several years, whose ultimate effects are impossible to quantify but clearly far too big to ignore. It’s not going away, and it isn’t getting any better. We will feel it for years to come.
c.2019 New York Times News Service