
Here’s how Elon Musk can prevent racist raids on Twitter

by admin

In this photo illustration, an image of Elon Musk is displayed on a computer screen and the Twitter logo on a mobile phone on October 6, 2022, in Ankara, Turkey.

Muhammad Selim Korkutata | Anadolu Agency | Getty Images

When Tesla and SpaceX CEO Elon Musk showed up at Twitter headquarters on Oct. 27, 2022, to take over the company, online trolls and bigots stormed the social network, polluting it with a deluge of racist slurs and other hate speech.

But new research from the nonprofit Network Contagion Research Institute (NCRI) and Rutgers found that Twitter’s safety team responded to that “raid” better than it did to a similar event in April 2022.

According to NCRI CEO Adam Sohn, a raid is a coordinated effort by bad actors online to disrupt social media platforms, usually with the intent of harming marginalized groups or a specific target.

Gamergate is perhaps the most infamous raid. It occurred around 2014, when trolls from 4chan and parts of the video game community launched a misogynistic campaign against women in the industry, specifically targeting women who criticized sexist representation in games. The campaign played out across a myriad of social platforms, including Twitter and Reddit, and spilled into the real world with rape threats, death threats and bomb threats aimed at the critics.

Online conspiracy-driven communities have also been known to use raid tactics.

Some people simply engage in this kind of so-called “inauthentic” activity on social networks to see what they can get away with (“for the lulz”).

NCRI analyst Alex Goldenberg said Twitter’s response to the hate speech last week was effective, but the company could have anticipated the raid and prevented it.

Hours before the influx of hate speech, he said, there were already visible signs that a raid was being planned, including explicit mentions of Twitter.

NCRI uses sophisticated machine learning software and systems to monitor vast amounts of social network content, tracking the rise of hate and threats against marginalized groups such as Black people, Jews, Hindus and Muslims.

The group makes its research tools available and publishes reports, safety recommendations and warnings about where threats may rise and spill over into the physical world, sometimes delivering them directly to the social networks. According to Sohn, NCRI’s hope is that this information will be used to prevent real-world harm from these online efforts.

NCRI previously predicted an increase in violence against Asian Americans during the Covid pandemic and identified an imminent threat to law enforcement personnel from an anti-government group, the Boogaloo Boys. It also warned about the rise of a community on Twitter that encourages self-harm, primarily cutting.

What NCRI found this time

NCRI found that within 12 hours of Musk’s arrival at Twitter headquarters, use of the anti-Black slur known as the n-word on the social network increased by almost 500% from the previous average. NCRI published that brief study the morning after the Musk deal officially closed.
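
As a rough illustration of the arithmetic behind a figure like that (the hourly counts below are hypothetical, not NCRI’s data), the spike is measured against a baseline average:

```python
# Minimal sketch of the percent-increase arithmetic behind a figure such as
# "almost 500% above the previous average". All counts are hypothetical.

def percent_increase(current: float, baseline_avg: float) -> float:
    """Return how far `current` sits above `baseline_avg`, as a percentage."""
    return (current - baseline_avg) / baseline_avg * 100.0

# Hypothetical hourly counts of a tracked slur before the event...
baseline_hours = [110, 95, 120, 105, 100, 90, 115, 105]
baseline_avg = sum(baseline_hours) / len(baseline_hours)  # 105 per hour

# ...and a hypothetical hourly count during the spike window.
spike_count = 630

print(f"{percent_increase(spike_count, baseline_avg):.0f}% above the previous average")
# prints: 500% above the previous average
```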

For the new research, NCRI dug into historical data and found a similar raid in April 2022, when Musk first revealed he had agreed to buy Twitter for $54.20 per share.

NCRI compared the two events and found that Twitter did more to stop the attack this time.

“Almost half of the accounts that recently spread the n-word slur have been suspended, compared to less than 10% of accounts suspended in the previous raid. That suggests the problem was treated seriously this time.”

Despite Twitter’s strong response to hate speech, some damage had already been done.

Several advertisers paused spending to see whether Musk delivers on his promise of keeping the platform “welcoming” rather than letting it become a “hellscape.”

Some of those who have left Twitter for now include Shonda Rhimes, creator of hit TV shows like “Grey’s Anatomy” and “Bridgerton”; Grammy-winning singer-songwriter Sara Bareilles; and actor and “This Is Us” producer Ken Olin.

Others are waiting to see how Musk and his team change the product, threatening to leave depending on the outcome.

Basketball icon LeBron James expressed concern over the increase in racist tweets, and Musk responded to him on Twitter by pointing to a thread from the social network’s current head of safety, Yoel Roth. The longtime Twitter executive said his team took steps to disable the accounts implicated in most of the attacks.

NCRI’s analysis confirms that the actions taken by Mr. Roth and his safety team were effective.

Goldenberg, NCRI’s chief intelligence analyst, said that in the future NCRI wants to expand its use of “automatic anomaly detection,” a technology commonly used in cybersecurity to monitor network performance and detect when someone may be trying to hack into a company’s systems.

Had anomaly detection been applied to social media, it could have flagged the planned raid early, and Twitter could have taken preventive action.
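
That kind of detection could look something like the sketch below: a rolling baseline of hourly keyword counts with a z-score threshold. This is an illustrative assumption, not NCRI’s or Twitter’s actual system, and the window size, threshold and data are made up.

```python
# Toy "automatic anomaly detection" over hourly counts of a tracked keyword.
# A rolling-baseline / z-score sketch; parameters and data are illustrative,
# not NCRI's or Twitter's actual tooling.
from collections import deque
from statistics import mean, stdev


def detect_spikes(hourly_counts, window=24, threshold=3.0):
    """Yield (hour_index, count) for hours far above the rolling baseline."""
    history = deque(maxlen=window)
    for hour, count in enumerate(hourly_counts):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            # Flag the hour if it exceeds the baseline by `threshold` standard deviations.
            if sigma > 0 and (count - mu) / sigma > threshold:
                yield hour, count
        history.append(count)


# Hypothetical data: a quiet baseline followed by a sudden surge.
counts = [100] * 24 + [105, 98, 620, 810]
for hour, count in detect_spikes(counts):
    print(f"hour {hour}: {count} mentions -- possible coordinated raid, alert the safety team")
```

An alert like that would be one early signal, feeding the kind of warnings NCRI says it already delivers directly to the platforms.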

Goldenberg and Sohn compare the technology to smoke and carbon monoxide detectors for societal problems brewing online.

Musk calls himself a free speech absolutist, but his track record of defending the rights of others has been mixed. More recently, he has acknowledged the need to balance the ideals of free speech with trust and safety on Twitter.

One thing he hasn’t publicly promised is to pay more attention to his own tweets.

Musk has a history of posting baseless conspiracy theories, comments, and jokes that have been widely interpreted as sexist, anti-LGBTQ, racist, or anti-Semitic. Memorably, he posted a Hitler meme on his widely followed Twitter account.

Shortly after he took over Twitter, Musk tweeted a baseless anti-LGBTQ conspiracy theory about the burglary and assault of House Speaker Nancy Pelosi’s husband, Paul Pelosi. Musk has since deleted the tweet without explanation.

He currently boasts 113.7 million followers on the platform and that number is growing rapidly.
