Meta identifies networks pushing deceptive content likely generated by AI
Meta reported on Wednesday that it had identified “likely AI-generated” content used deceptively on its Facebook and Instagram platforms. This content included comments praising Israel’s handling of the war in Gaza, posted beneath entries from global news organizations and U.S. lawmakers.

In its quarterly security report, the social media company stated that these accounts pretended to be Jewish students, African Americans, and other concerned citizens, targeting audiences in the United States and Canada. Meta attributed this campaign to STOIC, a political marketing firm based in Tel Aviv.

STOIC has not yet responded to requests for comment on these allegations.

Why This Matters

While Meta has found basic, AI-generated profile photos in influence operations since 2019, this report is its first to disclose the use of more advanced generative AI technologies since they emerged in late 2022. Researchers worry that generative AI, which can rapidly and inexpensively produce human-like text, images, and audio, could supercharge disinformation campaigns and sway elections.

During a press call, Meta’s security executives stated that novel AI technologies had not hindered their ability to disrupt these influence networks, which are coordinated efforts to disseminate specific messages. They also mentioned that they had not seen AI-generated images of politicians that were realistic enough to be mistaken for authentic photos.

Key Quote

“There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn’t really impacted our ability to detect them,” said Mike Dvilyanski, Meta’s head of threat investigations.

By the Numbers

The report highlighted six covert influence operations that Meta disrupted in the first quarter. Besides the STOIC network, Meta also shut down an Iran-based network focused on the Israel-Hamas conflict, although no generative AI use was identified in that campaign.


Context

Meta and other tech giants are grappling with the potential misuse of new AI technologies, particularly in election contexts. Researchers have found examples of image generators from companies like OpenAI and Microsoft producing photos with voting-related disinformation, despite these companies having policies against such content.

These companies have emphasized digital labeling systems that mark AI-generated content at the time of creation, but such tools do not work on text, and researchers question their effectiveness.

What’s Next

Meta will face significant tests of its defenses with upcoming elections in the European Union in early June and in the United States in November.