
EFCSN Contributes Comment to Meta’s Oversight Board on AI-Generated Video of Hungarian Politician

08/05/2026

EFCSN has contributed a public comment to Meta’s Oversight Board on a case involving an AI-generated video of Hungarian politician Péter Magyar, posted during the 2026 election campaign. The case raises broader questions about platform responsibility, AI labelling standards, and the role of politically affiliated creator networks: questions that go well beyond Hungary.

In our comment, we argue that:

  • Meta’s current labelling threshold is too high: an informative label should apply whenever there’s a reasonable risk of misleading a significant part of the public, not only in cases of “particularly high risk”
  • Satirical intent and distance from elections should not exempt AI-generated content from labelling. The risk to public discourse is continuous
  • The Oversight Board should examine whether the video and similar Megafon content fall under Meta’s Branded Content policy, given indications that politically affiliated creators may have been paid for their posts

Below is our full submission:


Public comment by the European Fact-Checking Standards Network on an AI-Generated Video of Hungarian Politician 

Lakmusz, EFCSN’s Hungarian member, was able to identify the video referred to in the Oversight Board’s case announcement. The video later became unavailable on Facebook. It was a photorealistic AI-generated video depicting Péter Magyar, albeit with exaggerated gestures. The voice in the video does not resemble the real voice of the politician. The video in question was published by István Szakács, a public figure associated with Megafon, a network of influencers known for their pro-Fidesz political messages.

We wish to add an important nuance to the Oversight Board’s description: the video does not allude to the practice of robocalling in general but refers specifically to an incident in November 2025, when the personal data of 200,000 party activists and supporters was leaked from Tisza’s mobile application. The AI-generated video shows Magyar angry and agitated over this incident, so it concerns a matter of public importance, one of the main news items around the date of its publication.

The growing prevalence of AI-generated misinformation and AI slop

The volume of AI-generated or manipulated disinformation has reached unprecedented levels. In December 2025, survey results from the European Digital Media Observatory (EDMO) showed that 20% of all fact-checked articles in Europe focused on AI-related content, a new record. Similarly, the world’s biggest fact-checking team, AFP’s Digital Investigations Unit, reported in its annual report that AI-related investigations were its single most covered topic, making up 11% of its total output. Additionally, platforms are increasingly inundated with “AI slop”: low-quality, mass-produced content designed solely to exploit the incentive structures of the attention economy. This content, while usually not relevant for fact-checking, often exploits human empathy through sensationalized, AI-generated stories to monetize engagement. Examples of this “dangerous distortion” are Facebook groups flooded with AI-generated “historical” photos, including fabricated images of Holocaust victims. While AI slop might serve entertainment purposes at times, its sheer volume further threatens the viability of public discourse and debate online.

One of the most insidious effects of GenAI is not the success of the fakes themselves, but the shadow of doubt they cast on authentic evidence. Elections are one of generative AI’s most natural accelerators: the stakes are huge, the emotions are raw, and the will to win overrides hesitation, making campaigns the place where AI adoption has always moved fastest and will keep pushing furthest.

Drawing lines: obvious, borderline, and deceptive use of GenAI

The Hungarian elections were no exception: the use of AI-generated videos and images on Facebook was omnipresent throughout the campaign. AI-generated content was published not only by influencers and political proxy organizations but also by politicians themselves, by media organizations and, in many cases, by opaque Facebook pages created for the sole purpose of promoting political AI content before the election.

The vast majority of such content Lakmusz encountered in the months before the election was easily recognizable as AI, either because its visual style was not realistic or because the scenes it depicted were clearly fictional, like imagined war scenes or a phone call in Hungarian between Péter Magyar and Ursula von der Leyen. Fact-checking this kind of content would merely state the obvious and waste resources, so Lakmusz generally refrained from it and instead wrote analytical articles on the AI phenomenon. It did, however, fact-check more realistic-looking content, such as an AI-generated image of the so-called Ukrainian money transfer incident, while AFP, which is also active in Hungary, fact-checked an AI-manipulated video of Péter Magyar.

The video referred to in the Oversight Board’s announcement is more of a borderline case, with realistic but exaggerated imagery and a mismatched voice. Had a fact-checking outlet considered it at the time of publication, it would probably have had to assess users’ reactions before deciding whether to publish a debunk. But regardless of this hypothetical decision, it is important to stress that the bar for fact-checking should not be the same as the bar for AI labelling. The ever-improving verisimilitude of generative AI (content that looks authentic and sounds credible) is where fact-checking efforts need to concentrate.

Using AI in political campaigns is a new and fast-growing phenomenon: its effects are not fully known, AI-literacy levels probably vary greatly, and people encounter AI content in diverse contexts and situations (sitting in front of their laptops or scrolling on their phones on the metro).

Also, while fact-checking is resource-intensive and should focus on the most widely spread and potentially harmful content, AI labelling with technological solutions could be applied at scale with practically zero marginal cost. More widespread AI labelling by social media companies (done transparently and based on reliable tools) could also help fact-checkers focus their efforts where they are most needed and can add the most value. However, to our knowledge, technological solutions that reliably detect and label AI-generated content do not yet exist. Provenance metadata standards such as C2PA, which could mitigate this problem, are worth implementing but so far fail to ensure reliable labelling for various reasons.
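One reason provenance standards fail to ensure reliable labelling can be illustrated concretely. A minimal, editorial sketch (not part of the submission; the function name and the detection heuristic are our illustrative assumptions) shows how trivially the mere presence of C2PA metadata in a JPEG can be checked, and why that is not enough: presence of a marker is not a verified signature, and platforms routinely strip such metadata on re-upload.

```python
# Illustrative heuristic: does a JPEG byte stream contain an APP11 segment
# mentioning the "c2pa" JUMBF label? This only detects the presence of
# provenance metadata; real verification requires parsing the JUMBF boxes
# and validating the cryptographic signatures (e.g. with an official C2PA
# SDK), and the metadata is often stripped entirely when files are
# re-encoded or re-uploaded -- which is why presence checks alone cannot
# guarantee reliable AI labelling.

def has_c2pa_marker(data: bytes) -> bool:
    """Return True if any APP11 (0xFFEB) segment contains b'c2pa'."""
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # not a valid marker; stop scanning
        marker = data[i + 1]
        if marker == 0x01 or 0xD0 <= marker <= 0xD9:
            i += 2  # standalone markers carry no length field
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 segment
            return True
        i += 2 + length
    return False
```

A stripped or freshly generated file simply returns False here, whether or not it is AI-made, so this kind of check can complement but never replace platform-side labelling.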

On Meta’s policy

In our opinion, Meta’s current manipulated media policy is too narrow and inconsistent. While the policy’s introduction claims that it applies to content that could or may mislead, the detailed description later says Meta applies an informative label only if the manipulated content “creates a particularly high risk of materially deceiving the public on a matter of public importance”. We think the informative label should be used more broadly, at least where there is a reasonable risk that the content could materially deceive a significant part of the public. Importantly, the standard ways in which Meta’s platforms are used should be taken into account here: most users spend little time on individual pieces of content and are therefore more easily deceived. This is especially the case when the content is visually realistic and the scenes it depicts could reasonably have happened in reality. The video referred to in the Oversight Board’s case announcement falls into this category.

We also think that the intended satirical or comedic effect of the content should not play into the consideration of whether to apply an informative label when the AI-generated content can reasonably be taken as depicting real events. After all, making fun of political figures is one of the most effective ways of campaigning. Labelling AI-generated satire doesn’t defuse the joke. Transparency and humour are not mutually exclusive. The distance from an election or other critical event should not influence labelling of AI generated content either, as the risk it poses to public discourse is continuous and does not occur only during narrowly defined campaign periods.

Labels applied by users manually

During the Hungarian campaign, fact-checkers encountered much AI-generated content that its creators labelled as such, but only at the very end of the description or caption. We think this is an ineffective mode of labelling, since many users who interact with the content might not click the “see more” button in the description or caption and therefore might never see the AI label. Meta should still apply a more visible informative label to the content in such cases.

The role of online “political influencers”

The case in question raises additional questions about the role of online “political influencers”, as the account that originally posted the video belongs to a network of creators and influencers that is connected to the Fidesz party and exclusively spreads messages in favor of that party. Two of the “supported partners” that Megafon names on its website, Olivér Hortay and Gábor Szűcs, ran as official Fidesz candidates in the 2026 election, while before the 2024 local elections almost 70 Fidesz candidates received social media training from Megafon. In 2025, Megafon’s revenues reached HUF 5.8 billion (roughly 19 million US dollars). While the exact origin of the money remains unknown, a 2022 court ruling confirmed that the conclusion often drawn in the press, that Megafon might have received public money, was reasonable. István Szakács, the influencer who posted the video referred to in the Oversight Board’s announcement, is not named as a “supported partner” on Megafon’s website; however, it is documented that he received training from Megafon, and in a 2023 interview he identified himself as a “team member” of Megafon who is nevertheless not an “official face” of the network. When asked how he made money from social media, he said it was “a secret”.

We would like to encourage the Oversight Board to assess whether the case in question, and videos by Megafon influencers in general, also fall under Meta’s Branded Content policy, as there are indications that Megafon creators may have been remunerated for their content on Facebook. Importantly, the Branded Content policy states that “Current elected and appointed government officials, political candidates, political parties, and political committees may not use branded content.” We understand that the enforcement of this policy comes with inherent difficulties, as business partnerships are often not public knowledge. But given the rising importance of creators as a source of news and political information, thorough enforcement of the Branded Content policy is highly relevant.


The full case is available on the Oversight Board’s website: https://www.oversightboard.com/pc/ai-generated-video-of-hungarian-politician/

This page will be updated with the Oversight Board’s final decision once published.