The fundamental right to freedom of expression – an obsolete model?
Dr. Manfred Kölsch
The Digital Services Act (DSA), EU Regulation 2022/2065, will come into full force in Germany on 17 February 2024. Before this date, the Digitale-Dienste-Gesetz (DDG, Digital Services Act), which concretises the DSA, is to be passed by the Bundestag without fanfare.
This legislation on digital services is a Trojan horse: it presents a façade of respecting democratic principles. The EU Commission stresses that the DSA is intended to establish “strict rules to safeguard European values” and Article 1 of the DSA directly states: “Everyone has the right to freedom of expression”.
Behind this liberal façade, however, the exact opposite is happening: there is an attack on the constitutional order. Owing to the complexity of the matter and the general flood of information that diverts attention from it, this attack goes unnoticed. The DSA opens up the possibility of declaring entries that are not illegal subject to deletion on very large online platforms and search engines.
The recitals of the EU Regulation to be used to interpret the DSA make a clear distinction between the dissemination of unlawful and “otherwise harmful information” (Recital 5). Platform operators are required to “pay particular attention to how their services could be used to disseminate or amplify misleading or deceptive content, including disinformation” (Recital 84). Art. 34 DSA also makes a precise distinction between unlawful information and information with only “detrimental effects”.
However, the term “disinformation” is not defined in the DSA. The EU Commission clarified back in 2018, though, that disinformation includes information that can cause “public harm”. In doing so, it determined (p.4) that public harm is to be understood as “threats to democratic political processes and political decision-making as well as to public goods such as the protection of health, the environment and security”.
There can be no doubt that false, misleading or just inconvenient entries need not be unlawful. Nevertheless, they can be declared unlawful at any time on the basis of the DSA. The EU Commission sets the standard by which disinformation is judged. However, this means that politically unpopular opinions, even scientifically argued positions, can be deleted.
And that’s not all: if entries are categorised as unlawful, there are social consequences. Citizens subject themselves to internal pre-censorship. They are forced to align their content with what fits into the current political opinion corridor. Most of them will not run the risk of imminent social disadvantages. The vital element of a liberal basic order – constant intellectual and democratic debate between opposing opinions – will therefore wither away. Tutored thinking will take its place.
Another layer of censorship arises from the fact that the major platforms will have to analyse entries for any “systemic risks” they might entail, evaluate them accordingly and then take “risk mitigation measures”. Systemic risks are deemed to exist if there are “likely (or foreseeable) adverse effects” on “social debate”, “public safety” or “public health”. Such entries must be deleted or blocked.
However, these terms lack the substantive limitation required by the constitutional principle of certainty, even when taking into account the discretionary powers granted to the legislator. A statutory authorisation granted to the executive must be sufficiently defined and limited in terms of content, purpose and scope. This is the only way to make the actions of those authorised measurable, predictable and calculable to a tolerable extent for the citizen.
After the end of the coronavirus pandemic, the rule of suspicion is now being extended to all possible areas of public life. Thanks to the general clauses used in the DSA, the platforms concerned will always find a reason to delete inconvenient entries. The coordinator has the power to order sanctions, and the whistleblowers have unlimited possibilities to submit complaints for deletion.
Unjustified deletions are further encouraged by the use of automatic content recognition technologies, which is unavoidable given the flood of information. The European Court of Justice recently ruled (in a case concerning the General Data Protection Regulation; N.H.) that these technologies, which some platforms already use in 90% of cases, are not able to predict the likelihood of future behaviour.
The Advocate General at the ECJ has also explained why the available technologies are not capable of making the judgements required by the DSA, e.g. whether an entry will have a foreseeable detrimental effect on the “public debate” or “public health” that would justify deletion.
Due to the threat of fines of up to 6% of global turnover in the previous year for infringements, the platforms will practise so-called overblocking (i.e. the excessive deletion of permitted expressions of opinion and information or the restriction of their dissemination; N.H.) for economic reasons alone.
As a result, platform users will always see themselves as potential disruptors of public debate and electoral processes, or as a threat to public safety and public health. This deliberate vagueness will give rise to the fear of being targeted by the controllers. The public debates that underpin democracy will degenerate into sham debates within the predetermined opinion channel.
The monitoring obligations of all players are preventative in nature. It is always about “likely” or “foreseeable” detrimental effects on “social debate”, “public safety” or “public health”.
The Advocate General at the ECJ has said what is legally decisive in this regard: these are “particularly serious interferences with the right to freedom of expression” (…) “because, by restricting certain information even before it is disseminated, they prevent any public debate on the content and thus deprive freedom of expression of its actual function as an engine of pluralism” (para. 102f). The Advocate General rightly points out that preventive information controls in effect eliminate the right to freedom of expression and information, which is in principle unrestricted.
With the DSA, this right to freedom of expression will in future be granted (or withheld) by the authorities. To assess the future risk potential of billions of communication processes in the 27 EU states, e.g. for the “social debate”, as the DSA claims to do, a huge amount of coordination is required at the very least. A Europe-wide network of communications surveillance bureaucracy is therefore being installed. Effective monitoring and enforcement requires a “seamless exchange of information in real time” (Art. 85 DSA) between the actors in the network.
At the top is the EU Commission, which can take all decisions it deems essential. The other parties involved are the “Panel” (Art. 63ff. DSA), the national “Digital Services Coordinator” (Art. 49ff. DSA) and the civil society “whistleblowers” certified by the latter (Art. 22ff. DSA).
This monitoring bureaucracy contradicts the federalism enshrined in the constitution. Until now, media supervision has been a matter for the federal states.
According to the DSA, whistleblowers are to be considered “trustworthy” if they have already proven themselves in the past in recognising objectionable content (Art. 22 DSA). In plain language this means: in Germany, the whistleblowers already known under the regime of the previously applicable Netzwerkdurchsetzungsgesetz (Network Enforcement Act) will gratefully note that their position has gained monopoly status.
A careful look behind the façade of the rule of law reveals that the DSA undermines the right to freedom of expression and information guaranteed by Article 11 of the EU Charter of Fundamental Rights, Article 10 of the European Convention on Human Rights and Article 5 of the German Basic Law.
German version of this post