European Commission Recommends Measures to Tackle Illegal Content Online

Mar 06, 2018

In September 2017, the European Commission promised to monitor progress in tackling illegal content online and assess whether additional measures are needed to ensure the swift and proactive detection and removal of illegal content online, including possible legislative measures to complement the existing regulatory framework. As a follow-up, the Commission is now recommending a set of operational measures – accompanied by the necessary safeguards – to be taken by companies and Member States to further step up this work before it determines whether it will be necessary to propose legislation. These recommendations apply to all forms of illegal content, ranging from terrorist content, incitement to hatred and violence, and child sexual abuse material to counterfeit products and copyright infringement.

Terrorist content is most harmful in the first hours of its appearance because it spreads quickly and entails grave risks to citizens and society at large. Given this urgency, and in view of calls from EU leaders and international organizations such as the United Nations and the G7, the Recommendation puts particular emphasis on terrorist material: it should be removed within one hour of being flagged to the platforms by law enforcement authorities and Europol.

This builds on the ongoing work with the industry through various voluntary initiatives to ensure that the internet is free of illegal content, and reinforces actions taken under different initiatives.

What is Considered Illegal Content Online?

What is illegal offline is also illegal online. Illegal content is any content that does not comply with EU law or the law of a Member State. This includes terrorist content, child sexual abuse material (Directive on combating sexual abuse of children), illegal hate speech (Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law), commercial scams and frauds (covered, for example, by the Unfair commercial practices directive or the Consumer rights directive), and breaches of intellectual property rights (such as under the Directive on the harmonisation of certain aspects of copyright and related rights in the information society).

Terrorist content is any material which amounts to terrorist offenses under the EU Directive on combating terrorism or under national laws, including material produced by, or attributable to, EU- or UN-listed terrorist organizations.

What Does the Commission Expect From Online Platforms in Terms of Preventing, Detecting and Removing All Forms of Illegal Content Online?

The Recommendation specifies the mechanisms for flagging illegal content which online platforms should put in place, as well as details on how notices of illegal content should be processed (notice and action procedures). Online platforms are also encouraged to take proactive measures to identify and remove illegal content, including by automated means such as upload filters, where this is appropriate.

In addition, the Recommendation encourages online platforms to cooperate with Member States and trusted flaggers, and to work together among themselves to share best practices. This will in particular help smaller companies tackle illegal content.

The Recommendation also asks online platforms to adopt a range of transparency measures, including publishing their content policies and regularly reporting on the actions they take against illegal content. This will also allow regulators to assess whether the proposed measures are effective.

Why Does the Recommendation Differentiate Between Terrorist Content and Other Illegal Content Online?

Complementary recommendations specifically relating to terrorist content are needed in view of the particularly urgent risks to citizens and society associated with terrorist content. The dissemination of such content serves to radicalize and recruit, to procure funding for terrorist activities, and to prepare, instruct, and incite attacks.

This recommendation addresses the need for proactive measures as well as the required speed of assessment and action against terrorist content, which is particularly harmful in the first hours of its appearance online. This is in line with the Commission's view that sector-specific differences should be taken into account where appropriate and justified.

The Recommendation builds on and consolidates the progress already achieved under the EU Internet Forum – a key deliverable of the European Agenda on Security – while recognizing the urgent need for a swift and more comprehensive response. This was reiterated by the European Council of 22-23 June 2017, stating that it "expects industry to […] develop new technology and tools to improve the automatic detection and removal of content that incites to terrorist acts. This should be complemented by the relevant legislative measures at EU level, if necessary." Similarly, the European Parliament, in its resolution on Online Platforms of June 2017, urged platforms "to strengthen measures to tackle illegal and harmful content", while calling on the Commission to present proposals to address these issues.

What Should Be the Process of Removing Terrorist Content Online?

In line with actions agreed to under the EU Internet Forum, the recommendation identifies a number of measures to effectively stem the uploading and sharing of terrorist propaganda online. These include:

  • No hosting of terrorist content: Companies should explicitly state in their terms of service that they will not host terrorist content.
  • Improved referral system: Special mechanisms for the submission of and follow-up to referrals from competent authorities and Europol's Internet Referral Unit should be put in place, along with fast-track procedures to remove content within one hour of its referral. At the same time, Member States need to ensure they have the necessary capabilities and resources to detect, identify, and notify terrorist content to internet platforms.
  • One-hour rule for referrals: Because terrorist content is particularly harmful in the first hours of its appearance online, companies should, as a general rule, remove such content within one hour of its flagging by law enforcement authorities and Europol.
  • Faster proactive detection and effective removal: Proactive measures, including automated detection, are needed to effectively and swiftly detect, identify, and expeditiously remove or disable terrorist content and stop it from reappearing once it has been removed. Companies should share and optimize appropriate technological tools and put in place working arrangements for better cooperation with the relevant authorities, including Europol.
  • Safeguards: To accurately assess referred terrorist content, or content identified via automated tools, companies need to put in place the necessary safeguards, including a human review step before content is removed, so as to avoid the unintended or erroneous removal of content which is not illegal.

It is important to note that to fully address the challenge of terrorist content online, reducing the accessibility of terrorist propaganda is only one side of the response. The other consists of supporting credible voices to disseminate positive alternatives or counter narratives online. To this end, the Commission launched the Civil Society Empowerment Programme (CSEP) under the EU Internet Forum, providing capacity-building and funding to civil society partners to develop such narratives.

What Are the Safeguards to Ensure Fundamental Rights?

It is essential that any measures taken to tackle illegal content online are subject to adequate and effective safeguards, so that online platforms do not unintentionally remove content which is not illegal.

First, the recommendation calls on online platforms to act in a diligent and proportionate manner towards the content they host, especially when processing notices and counter-notices and deciding on the possible removal of, or disabling of access to, content considered to be illegal.

Particular safeguards, notably human oversight and verification, should be provided when online platforms use automated techniques to remove content (a 'human-in-the-loop' model), since determining whether content is illegal requires an assessment of the relevant context.

Second, the recommendation invites online platforms to give those whose content was removed the opportunity to contest the decision via a counter-notice. This allows content that was erroneously removed to be reinstated.

Third, the recommendation calls for online platforms to regularly publish reports explaining to the general public how they apply their content management policies.

What Are the Next Steps?

The Commission will closely monitor the actions taken by the online platforms in response to this Recommendation and determine whether additional steps, including legislation, are required. The Commission will continue analyzing the progress made and will launch a public consultation in the coming weeks.

Member States and companies are asked to submit relevant information on the removal of terrorist content within three months, and on other illegal content within six months. For terrorist content online, the EU Internet Forum will continue its voluntary cooperation to progress on its ambitious Action Plan to combat terrorist content online, covering the use of automated detection of such illegal content, sharing related technology and tools with smaller companies, achieving the full implementation and use of the database of hashes, and empowering civil society on alternative narratives.
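The "database of hashes" referred to above works by storing digital fingerprints of content that one platform has already removed, so that other platforms can detect and block attempted re-uploads. The sketch below illustrates the idea in Python; it is a simplified, hypothetical example using an exact-match cryptographic hash (SHA-256), whereas production systems typically rely on perceptual hashing that also matches re-encoded or slightly altered copies.

```python
import hashlib

def content_hash(data: bytes) -> str:
    # Digital fingerprint of a piece of content. SHA-256 is used here for
    # simplicity; it only matches byte-identical copies, unlike the
    # perceptual hashes used by real content-matching systems.
    return hashlib.sha256(data).hexdigest()

class HashDatabase:
    """Illustrative shared registry of fingerprints of removed content."""

    def __init__(self) -> None:
        self._known: set[str] = set()

    def register_removed(self, data: bytes) -> None:
        # Called when a platform removes content, so that all participating
        # platforms can recognize the same material later.
        self._known.add(content_hash(data))

    def is_known(self, data: bytes) -> bool:
        # Checked at upload time, before content goes live.
        return content_hash(data) in self._known

db = HashDatabase()
db.register_removed(b"previously removed propaganda video")
print(db.is_known(b"previously removed propaganda video"))  # exact copy is flagged
print(db.is_known(b"unrelated new upload"))                 # new content passes
```

In this simplified model, each participating platform consults the shared registry before publishing an upload and contributes a new fingerprint whenever it removes material, which is how a single removal can prevent reappearance across many services.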

On child sexual abuse material, the Commission will continue to work with Member States on the implementation of the child sexual abuse directive and will keep supporting the WeProtect Global Alliance to End Child Sexual Exploitation Online and the commitments of States and industry partners in that framework.

For More Information: