The regulation of social media

The growth of intolerant and racist manifestations in social media has led the European Union to urge technology companies to take a more active role against hate speech.

As a result of pressure from some Member States and the European Union, Facebook, Twitter, YouTube and Microsoft signed a Code of Conduct on illegal incitement to hatred on the internet in May 2016. In this agreement the companies undertook a series of commitments, among them to review requests to remove illegal content constituting incitement to hatred within 24 hours. They also undertook to establish clear procedures for examining reported content and evaluating the complaints they receive, in line with their own self-regulation rules but also taking into account, where appropriate, the national legislation transposing the Council Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law, which the agreement takes as its legal basis. The most recent platform to sign up, in 2020, was TikTok.

Under the Code of Conduct, the European Commission reviews the actions of these platforms annually in an effort to combat illegal incitement to hatred online. The Code, its application and its revisions have also informed the design of the EU’s future Digital Services Act. The European Commission presented a draft proposal in December 2020, and the European co-legislators are currently negotiating it.

Strategies to prevent hate speech on the Internet

Campaigns

Based on their content and specific orientation, these campaigns can be split into three categories: awareness-raising, affirmative and restrictive. Worth highlighting is the No Hate Speech Youth Campaign (No Hate campaign), launched by the Council of Europe (CoE) in March 2013, as it is an across-the-board campaign encompassing a wide range of strategies against online hate. The campaign occupies a leading position in the fight against hate speech on the internet, having served as an umbrella for a series of initiatives at national, regional and local level across European countries.

Strategies based on training and education

Although the problem is difficult to quantify overall, there is a general feeling internationally that hate speech is worryingly present on social media. Hate, anger and aggression have become common phenomena there, causing emotional harm to their targets and contributing to the stigmatisation and dehumanisation of certain individuals, for a variety of intolerance-based motives. Legal strategies that seek to repress and sanction hate speech pose a major dilemma, as they affect a right that is essential in democracies: freedom of expression.

Code of Conduct of the European Commission

Every year since 2016, the European Commission has evaluated hate speech on the platforms and social media that signed up to the Code of Conduct, analysing YouTube, Facebook, Twitter, Instagram, Jeuxvideo and, from the next evaluation, TikTok. The latest report, dated 20 June 2020, indicates that internet providers assess 90% of reported content within 24 hours and remove 71% of content considered to be illegal hate speech. However, the platforms still need to improve their transparency and their feedback to users. Věra Jourová, Vice-President for Values and Transparency, has said that “The Code of Conduct continues to be a success when it comes to countering illegal hate speech online. It has brought urgent improvements that fully respect fundamental rights and has created valuable links among civil society organisations, national authorities and internet platforms. Now is the time to ensure that all platforms have the same obligations throughout the Single Market and to clarify their responsibilities in legislation, so that users are more secure on the internet. What is illegal offline must be illegal online”.

The platforms evaluated 90% of flagged content within 24 hours, compared with only 40% in 2016. In 2020, 71% of content considered illegal hate speech was removed, as opposed to only 28% in 2016. The platforms responded to 67.1% of the notifications received, a higher proportion than in the previous monitoring exercise (65.4%). They avoid removing content that is not classified as illegal hate speech, but the appeal mechanisms available to those who consider themselves harmed do not offer full guarantees, and some issues remain unresolved from a legal point of view.

The European Union Digital Services Act

The European Parliament and the Council are negotiating the future Digital Services Act, based on a proposal presented by the European Commission in December 2020.
This Act will establish obligations that will serve to provide better protection for consumers and users, and particularly their basic rights. It will create a framework for online platforms to be more transparent and help maintain a healthy and secure digital ecosystem by applying clear rules. It will also help the single market to innovate, grow and remain competitive.

Under the Act, users will be able to issue alerts by flagging potentially illegal content, and platforms must withdraw illegal content. The proposal establishes a series of mechanisms and safeguards around this withdrawal.
The Act will create a legal framework for alert systems on illegal content, so that it will be removed quickly and effectively and so that users can challenge content decisions that could be considered unfair.

The grey areas of hate on the Internet

Experience has shown that platform self-regulation alone is not enough; for the reasons outlined above, public authorities and society itself must also take responsibility.

These grey areas harbour a considerable amount of harmful content. Any attempt to reduce its presence on the web should scrupulously respect people’s freedom of expression, without letting that guarantee become an open door to infringing the rights of others. Platforms should likewise avoid exercising any form of self-censorship, as that is not their purpose.

The debate is very much alive and raises many pertinent questions: should more be invested in users’ digital and media literacy? Are the authorities a model for good use of social media? Would a code setting out their aims help them use social media better? Should there be greater public democratic control over the platforms? Should authorities play a more active role, for example by issuing licences to platforms for certain tasks? Are European recommendations needed to standardise criteria on moderating web content? Would it be useful to have judges and mediators specialising in disputes with users who do not agree with their content being withdrawn? Would it be positive to reward exemplary stakeholders, websites or fact-checkers who prioritise transparency and veracity?

A healthy and secure digital ecosystem goes well beyond laws.

Notes on hate crime as a serious crime for the European Union

In her 2021 State of the Union speech, European Commission President Ursula von der Leyen emphasised that progress in the fight against racism and hatred remains “fragile” and that it is time to make changes to “build a truly anti-racist Union that goes from condemnation to action”.

It is in this framework that the Commission has taken the initiative of extending the list of serious crimes in the EU, contained in Article 83(1) of the Treaty on the Functioning of the European Union (TFEU), to include hate speech and hate crimes. The EU wants to respond to the data and analysis provided by the FRA and other researchers, which show that manifestations of hate are growing in both number and motivation and that society is increasingly embittered and polarised. At present, European criminal law offers only limited harmonisation, in terms of definitions and sentences, for hate speech and hate crimes motivated by racism or xenophobia, and none for other motives.

Including hate speech and hate crime among the particularly serious cross-border crimes would mean sharing common criteria that all the Member States would reflect in their criminal codes.

Prejudice, intolerance and hatred circulate at the speed of light, both inside and outside the EU.

Would society perceive a change like this as something positive?