
The law is clear in the EU: disinformation is a systemic risk on the biggest digital services. Yet platforms have been retreating from the fight against mis- and disinformation over the past year, some very publicly, some less so. So do they really think this is no longer a problem? We now have an answer to that question, as the European Commission and the Board of Digital Services Coordinators have released their first joint risk assessment and mitigation report under the Digital Services Act. The report is basically a summary of the self-assessments submitted by platforms, together with input from CSOs and independent researchers. (The EFCSN also contributed, and our inputs on the harms of health misinformation were incorporated by the regulator into the final document; read our full contribution here.)
Turns out platforms themselves recognise that dis- and misinformation are systemic risks that threaten democracy, civic discourse, public health, and social cohesion.
But don’t take it from us; here are some quotes from the report:
Disinformation is wrecking civic discourse
“Many VLOPs and VLOSE providers have reported actual or foreseeable negative effects on civic discourse as driven by the large-scale dissemination of disinformation and misinformation, including through foreign interference and information manipulation (“FIMI”) campaigns, as well as coordinated inauthentic behaviour, both on and off-platform.”
Elections aren’t safe either
“VLOPs and VLOSEs providers, alongside CSOs, have reported systemic risks to voting and electoral processes stemming from the large-scale dissemination of false or misleading content. These systemic risks may include disinformation and misinformation about election dates, candidate eligibility, voter registration processes, or the delegitimisation of democratic processes, e.g. via unfounded claims of electoral fraud, procedural flaws, interference, or institutional biases in favour or against certain persons, political parties or opinions.”
Public figures and vulnerable groups targeted
“Several VLOPs and VLOSEs providers and CSOs have reported systemic risks related to the representation and treatment of public figures in online environments. These included the large-scale dissemination of disinformation, misinformation and harmful conspiracy theories about candidates to public office, coordinated harassment campaigns (often targeting women and minorities), impersonation, and the circulation of synthetic media designed to mislead.”
Crises fuel disinformation
“Providers and CSOs have reported systemic risks linked to the viral dissemination of disinformation and misinformation during or following crises, whether natural or human made. Such systemic risks often manifest in real-time, when the visibility and impact of content is amplified by recommender systems and heightened user engagement.”
Old tricks, new tensions
“Providers and CSOs have also identified systemic risks in the context of social unrest. One commonly reported risk involved the repurposing of old content, such as videos of bombings, mass shootings or large protests, framed as real-time events, with the apparent aim of triggering public panic or inflaming tensions.”
Public health might need a disinfo doc
“Providers and CSOs noted systemic risks to public health stemming from the large-scale dissemination of disinformation or misinformation on social media and search engines. These concerned for example vaccines, misleading claims about legal but potentially harmful substances and practices, the gamification of viral harmful health trends, or serious medical conditions such as Ebola, HIV/AIDS or diabetes.”
What now?
The report also lists all the mitigation measures the platforms reported. But considering the scale of the problem, many platforms’ backtracking on mitigation measures over the past year, and the fact that misinformation remains highly prevalent on most platforms, the reported measures likely do not cut it. Independent fact-checking, for example, is explicitly referred to in the European Commission’s DSA guidelines as an effective risk mitigation measure, and yet most platforms do not use it.
Interestingly enough, “Some providers made references to Codes of Conduct such as […] the Code of Conduct on Disinformation (formerly Code of Practice on Disinformation) and mitigation measures mentioned therein.” It is important to note, however, that many platform signatories of the Code have unsubscribed from many of its measures or fail to implement them in full. This à la carte approach undermines the Code’s effectiveness and requires scrutiny from regulators: as specified in its preamble, the Code’s value as an effective risk mitigation tool under the DSA depends squarely on signatories implementing all the commitments relevant to their services.
Given that platforms themselves recognise that dis- and misinformation on their services pose a systemic risk, they must start acting accordingly. The time for minimal compliance is over.