On 1 March, the European Commission published a press release announcing what it calls “a set of operational measures.” The measures and accompanying safeguards are to be taken by “companies and Member States” and are a ‘follow-up’ designed to ‘step up’ the work of “tackling illegal content online.” The explicit indication seems to be that, if these measures are not implemented, “it will be necessary to propose legislation.”

Although it is easy to see how these measures segue into the discussion on ‘fake news,’ it seems fair to note at the outset that, at least in theory, these measures and the campaign against ‘fake news’ are two separate things. About the only thing the EU is clear on when it comes to ‘fake news’ is that it is not illegal. The measures presented today, on the other hand, are said to apply to:

all forms of illegal content ranging from terrorist content, incitement to hatred and violence, child sexual abuse material, counterfeit products and copyright infringement.

The measures follow previous calls for action in September 2017 and February 2018. In September, the Commission presented ‘guidelines and principles for online platforms.’ The stated goals and reasons were:

the proactive prevention, detection and removal of illegal content inciting hatred, violence and terrorism online. The increasing availability and spreading of terrorist material and content that incites violence and hatred online is a serious threat to the security and safety of EU citizens. It also undermines citizens’ trust and confidence in the digital environment (…).

There was also an implicit threat. Vera Jourová, Commissioner for Justice, Consumers and Gender Equality, said something very peculiar:

The rule of law applies online just as much as offline. We cannot accept a digital Wild West, and we must act. The code of conduct I agreed with Facebook, Twitter, Google and Microsoft shows that a self-regulatory approach can serve as a good example and can lead to results. However, if the tech companies don’t deliver, we will do it.

There seems to be an inherent opposition between respecting the rule of law on the one hand, and using the rule of law as an excuse for what amounts to censorship on the other. The code of conduct asks private companies to ‘self-regulate’ on the basis of supra-national governance rules, whereas a course of action closer to the rule of law would be to let a court decide on the legality of utterances after they have been uttered. It is important to make this point, because the EU has packaged these measures in a way that suggests it is trying to pull a fast one. It lumps together propaganda for terrorism with child abuse material, counterfeit goods and hate speech. And while child abuse material and counterfeit goods are easily defined, that is much harder for propaganda and hate speech. Those require a far greater degree of interpretation, even if there were fool-proof judicial definitions in all EU countries. How one would automate that process and bring it in line with established jurisprudence is an open question. And that disregards the further question of whether it would even be a good thing to have such matters judged outside of court.

In effect, the EU is very openly stirring up fear of terrorist content online to push through an agenda that would maximise EU influence on what is deemed acceptable online.

The language employed is dodgy in more ways than one, and does not bode well. While the measures are presented as necessary to combat terrorism, Commissioner Jourová also said:

Online trolls and haters cannot limit our rights to express ourselves because of violent illegal messages they target people with. We are working with platform providers and civil society to fight the online injustice. These are the values we cherish.

There doesn’t seem to be a clear, logical argument in this quote, yet the EU selected it for one of its tweets. It is certainly true that online trolls and haters cannot limit our rights to express ourselves. She might mean that they cannot be allowed to do so, but that is not what the quote says. The suggestion, then, is that she is fighting ‘online trolls and haters’ who cannot limit rights by setting up a system that will, in effect, do exactly that, in the name of ‘online justice’. The press release makes this very clear:

Today’s communication is a first step and follow-up initiatives will depend on the online platforms’ actions to proactively implement the guidelines. The Commission will carefully monitor progress made by the online platforms over the next months and assess whether additional measures are needed in order to ensure the swift and proactive detection and removal of illegal content online, including possible legislative measures to complement the existing regulatory framework.

(…) In order to allow for the monitoring of the effects of the Recommendation, Member States and companies will be required to submit relevant information on terrorist content within three months, and other illegal content within six months.

What is also noteworthy is that, despite claims by the Commission that it is working with platform providers and civil society, both platform providers and civil society are upset by the announced measures. Both European Digital Rights (EDRi), an association of civil and human rights organisations from across Europe, and EDiMA, the European trade association representing online platforms and other innovative businesses, criticise the measures.

EDRi say in a press release:

Under human rights law, restrictions to our rights to privacy or freedom of expression need to be provided for by law and be necessary and proportionate. The European Commission’s short-cut, where it puts the focus on “voluntary” measures by internet companies, bypasses democratic accountability. Today’s Recommendation is part of a worrisome trend. The European Commission has focused heavily on using the ‘threat’ of legislation to force internet companies into more ‘voluntary’ policing activities.

EDRi has also provided an interesting Q&A detailing its opposition to the measures.

EDiMA, meanwhile, says it is “dismayed”: while it maintains that

our sector accepts the urgency it needs to balance the responsibility to protect users while upholding fundamental rights,

it claims that the EU’s measures actually harm the effectiveness of its response. All in all, it does not look good.