Despite the criticism, the moderation measures provided for in the EU’s new Digital Services Act (DSA) may actually be able to curb harmful content on social networks.
Back in 2019, Mark Zuckerberg argued that policing the content that springs up across the network of networks was virtually impossible (or at least that it was on platforms the size of Facebook). And while it is true that eradicating 100% of the poison that roams the internet seems an all but impossible mission, it is also true that the unregulated nature of social networks has made content moderation a very complicated task (probably too complicated).
Not all companies behind the big 2.0 platforms make content moderation a priority, and this inevitably has a major impact on the speed at which social media poison is spread.
However, a new report recently published by two behavioral science experts suggests that there is a remedy to stop harmful content from circulating with impunity on social networks. And that remedy involves tempering, to some extent, the maximalist vision of freedom of expression that prevails today in the United States and taking inspiration from the regulations of the old continent.
The study, published in the Proceedings of the National Academy of Sciences and co-authored by Marian-Andrei Rizoiu of the University of Technology Sydney and Philipp Schneider of the Swiss Federal Institute of Technology in Lausanne (EPFL), puts the spotlight on the potential effects of the DSA, a groundbreaking set of rules aimed at curtailing the harmful effects of social networks, one that members of the U.S. Congress have already called “unfair” to U.S. technology companies. The DSA becomes fully applicable next February and will set the standards that companies such as Meta and ByteDance will have to abide by within the EU.
One of the mandates envisioned by the DSA is the appointment of flesh-and-blood “trusted flaggers” whose ultimate goal will be to identify harmful content on social networks so that it can be removed within 24 hours. That is not an excessively long period of time in the abstract, but it becomes one when set against the speed with which disinformation, hate speech and terrorist propaganda spread on social networks.
“We have seen examples on Twitter where the mere sharing of a fake image of an explosion in the vicinity of the Pentagon caused stock markets in the United States to plummet in a matter of minutes,” warns Rizoiu.
It is probably not a panacea, but the DSA could prove a useful tool for stopping harmful content.
At the same time, the astonishing alacrity with which potentially harmful content spreads on social networks could render the DSA’s policies ineffective. “There are doubts as to whether the new EU regulations will actually have an impact on the containment of potentially harmful content,” says Rizoiu.
To test whether such doubts are founded, Rizoiu and Schneider created a mathematical model to analyze how harmful content is disseminated on social networks. They rely on two metrics to assess the effectiveness of the DSA’s moderation measures: the content’s harmful potential (how likely it is to go viral) and its half-life (the time it takes for a piece of content to generate half of all the reactions it will ever receive).
According to the researchers, X, the social network formerly known as Twitter, is the platform where harmful content has the shortest half-life: under 30 minutes. X is followed by Facebook, where harmful content has a half-life of 100 minutes, Instagram (20 hours), LinkedIn (24 hours) and YouTube (8 days), as reported by Fast Company.
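To get a feel for what those half-lives imply, it helps to run the numbers. The sketch below, a minimal illustration, assumes that engagement with a post decays exponentially, so that the share of its total eventual reactions arriving by time t is 1 − 0.5^(t / half-life); the decay model is a simplifying assumption of ours, not necessarily the paper’s exact machinery, while the half-life figures are those reported above.

```python
# A minimal sketch of the half-life arithmetic. The half-life figures are
# those reported in the study; the exponential-decay model itself is an
# illustrative assumption.

HALF_LIVES_HOURS = {
    "X (Twitter)": 0.5,      # under 30 minutes
    "Facebook": 100 / 60,    # 100 minutes
    "Instagram": 20.0,
    "LinkedIn": 24.0,
    "YouTube": 8 * 24.0,     # 8 days
}

def share_elapsed(t_hours: float, half_life_hours: float) -> float:
    """Share of a post's total eventual engagement that has already
    occurred by time t, assuming exponential decay."""
    return 1 - 0.5 ** (t_hours / half_life_hours)

# How much of the engagement is already over by the DSA's 24-hour mark?
for platform, half_life in HALF_LIVES_HOURS.items():
    print(f"{platform:12}: {share_elapsed(24, half_life):6.1%} within 24 h")
```

On a slow platform like YouTube, a 24-hour takedown catches most of the audience before it arrives; on X, the direct engagement is essentially over by then, and the benefit of moderation comes largely from cutting off the follow-up content described below.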
Rizoiu and Schneider stress that tackling harmful content is possible even on platforms such as X, where information circulates at enormous speed. Their model suggests that moderating harmful content within 24 hours of publication can reduce its reach by up to 50%.
It should also be emphasized that harmful content, malicious enough in itself, can have an even greater negative impact because it ignites a “self-excitation process” whereby each event increases the likelihood of subsequent events. “Harmful content involves more people in the discussion and ultimately translates into more harmful content,” Rizoiu and Schneider emphasize. Without proper moderation policies, therefore, “harmful content in isolation can generate more harmful content and ultimately contribute to its exponential growth,” they note.
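The “self-excitation process” the researchers describe can be pictured as a branching process: each harmful post spawns, on average, n follow-up posts, each of which spawns more in turn. The sketch below illustrates that dynamic under simplifying assumptions of our own (the Poisson-distributed offspring and the hard cap are ours, not the paper’s): keep the branching factor above 1 and cascades explode; pull it below 1, as moderation aims to do, and they fizzle out.

```python
import math
import random

def poisson(mean: float) -> int:
    """Sample a Poisson random variable (Knuth's multiplication method)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def cascade_size(branching_factor: float, cap: int = 10_000) -> int:
    """Total posts in a self-exciting cascade where each post spawns a
    Poisson(branching_factor) number of follow-ups, generation by
    generation. Capped so that supercritical cascades terminate."""
    total = generation = 1  # the original harmful post
    while generation and total < cap:
        offspring = sum(poisson(branching_factor) for _ in range(generation))
        total += offspring
        generation = offspring
    return total

random.seed(42)
for n in (0.6, 0.9, 1.2):  # branching factors below and above criticality
    sizes = [cascade_size(n) for _ in range(200)]
    print(f"branching factor {n}: mean cascade size ~ {sum(sizes) / len(sizes):,.0f}")
```

In this picture, moderation works by removing posts before they produce their follow-ups, effectively lowering the branching factor, which is how a takedown within 24 hours can still halve total reach even on a platform where the original post’s direct engagement is long over.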
According to Rizoiu and Schneider’s model, the more harmful the content, the greater the effect of removing it, which demonstrates that moderation effectively breaks the “word-of-mouth cycle” that would otherwise lead a post to spread exponentially.
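That finding has a simple back-of-the-envelope reading. In the branching picture above, a piece of content with branching factor n < 1 generates an expected 1 / (1 − n) posts in total, so if moderation intercepts posts early enough to scale the effective branching factor down to f·n, the expected cascade shrinks disproportionately for the most viral content. The value of f and the whole framing are illustrative assumptions of ours, not the paper’s exact model:

```python
# Back-of-the-envelope check: the more viral the content, the more an
# early takedown removes. The factor f (the share of follow-ups that
# still appear before moderation) is an illustrative assumption.

def expected_cascade(n: float) -> float:
    """Expected total posts in a cascade with branching factor n < 1."""
    return 1 / (1 - n)

f = 0.5  # assumed share of follow-ups appearing before moderation hits
for n in (0.3, 0.6, 0.9):  # increasingly viral content
    before = expected_cascade(n)
    after = expected_cascade(f * n)
    print(f"n = {n}: {before:4.1f} -> {after:4.1f} expected posts "
          f"({1 - after / before:.0%} reduction)")
```

The same intervention trims a mildly viral cascade by under a fifth but a highly viral one by more than four fifths, consistent with moderation breaking the cycle precisely where it matters most.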
The European Union’s new DSA thus shows real promise in tackling harmful content on social networks. And it is already inspiring creative solutions from the 2.0 giants to keep harmful content at bay: TikTok, for example, recently announced new measures to make it easier for users based in Europe to report illegal content.