X promises 'highest level' response on posts about Israel-Hamas war. Misinformation still flourishes


FILE - Workers install lighting on an "X" sign atop the company headquarters, formerly known as Twitter, in San Francisco, on July 28, 2023. The social media platform X says it is trying to take action on a flood of posts sharing graphic media, violent speech and hateful conduct about the latest war between Israel and Hamas. (AP Photo/Noah Berger, File)

The social media platform X, formerly known as Twitter, says it is struggling with a flood of posts sharing graphic media, violent speech and hateful conduct about the Israel-Hamas war. But it has received a broadside of criticism, including from a top European Union official, questioning the adequacy of the response.

Outside watchdog groups said misinformation about the war abounds on the platform, whose workforce, including its content moderation team, was gutted by billionaire Elon Musk after he bought it last year.

Fake and manipulated imagery circulating on X includes "repurposed old images of unrelated armed conflicts or military footage that actually originated from video games," said a Tuesday letter to Musk from European Commissioner Thierry Breton. "This appears to be manifestly false or misleading information."

Breton, the EU's digital rights chief, also warned Musk that authorities have been flagging "potentially illegal content" that could violate EU laws and "you must be timely, diligent and objective" in removing it when warranted.

San Francisco-based X didn't immediately respond to a request for comment about Breton's letter.

A post late Monday from X's safety team claimed it is treating the crisis with utmost effort: "In the past couple of days, we've seen an increase in daily active users on @X in the conflict area, plus there have been more than 50 million posts globally focusing on the weekend's terrorist attack on Israel by Hamas. As the events continue to unfold rapidly, a cross-company leadership group has assessed this moment as a crisis requiring the highest level of response."

That includes continuing a policy frequently championed by Musk of letting users help rate what might be misinformation; flagged posts get a note of context but remain on the platform.

The struggle to identify reliable sources for news about the war was exacerbated over the weekend by Musk, who on Sunday posted the names of two accounts he said were "good" for "following the war in real-time." Analyst Emerson Brooking of the Atlantic Council called one of those accounts "absolutely poisonous." Journalists and X users also pointed out that both accounts had previously shared a fake AI-generated image of an explosion at the Pentagon, and that one of them had posted numerous antisemitic comments in recent months. Musk later deleted his post.

Brooking posted on X that Musk had enabled fake war reporting by abandoning the blue check verification system for trusted accounts and allowing anyone to buy a blue check.

Brooking said Tuesday that it is "significantly harder to determine ground truth in this conflict as compared to Russia's invasion of Ukraine" last year and "Elon Musk bears personal responsibility for this."

He said Musk's changes to the X platform have made it impossible to quickly assess the credibility of accounts while his "introduction of view monetization has created perverse incentives for war-focused accounts to post as many times as possible, even unverified rumors, and to make the most salacious claims possible."

"War is always a cauldron of tragedy and disinformation; Musk has made it worse," he added. Further, Brooking said via email: "Musk has repeatedly and purposefully denigrated the idea of an objective media, and he made platform design decisions that undermine such reporting. We now see the result."

Part of Musk's drastic changes over the past year included removing many of the people responsible for moderating toxic content and harmful misinformation.

One former member of Twitter's public policy team said the company is having a harder time taking action on posts that violate its policies because there aren't enough people to do that work.

"The layoffs are undermining the capacity of Twitter's trust and safety team, and associated teams like public policy, to provide needed support during a critical time of crisis," said Theodora Skeadas, one of thousands of employees who lost their jobs in the months after Musk bought the company.

X says it changed one policy over the weekend to enable people to more easily choose whether or not to see sensitive media without the company actually taking down those posts.

"X believes that, while difficult, it's in the public's interest to understand what's happening in real time," its statement said.

The company said it is also removing newly created Hamas-affiliated accounts and working with other tech companies to try to prevent "terrorist content" from being distributed online. It said it is "also continuing to proactively monitor for antisemitic speech as part of all our efforts. Plus we've taken action to remove several hundred accounts attempting to manipulate trending topics."

Linda Yaccarino, whom Musk named in May as the top executive at X, withdrew from an upcoming three-day tech conference where she was scheduled to speak, citing the need to focus on how the platform was handling the war.

โ€œWith the global crisis unfolding, Linda and her team must remain fully focused on X platform safety,โ€ X told the organizers of the WSJ Tech Live conference being held next week in Laguna Beach, California.

___

Associated Press writer Ali Swenson contributed to this report.
