By Julia Fioretti
BRUSSELS (Reuters) - Social media companies Facebook, Twitter and Google's YouTube have accelerated removals of online hate speech, reviewing more than two-thirds of complaints within 24 hours, new EU figures show.
The European Union has piled pressure on social media companies to increase their efforts to fight the proliferation of extremist content and hate speech on their platforms, even threatening them with legislation.
Microsoft, Twitter, Facebook and YouTube signed a code of conduct with the EU in May 2016 to review most complaints within a 24-hour timeframe. Instagram will also sign up to the code, the European Commission said.
The companies managed to review complaints within a day in 81 percent of cases, EU figures released on Friday show, compared with 51 percent in May 2017 when the Commission last monitored compliance with the code of conduct.
On average, the companies removed 70 percent of the content flagged to them, up from 59.2 percent in May last year.
EU Justice Commissioner Vera Jourova has said that she does not want to see a 100 percent removal rate because that could impinge on free speech.
She has also said she is not in favor of legislating as Germany has done. A law providing for fines of up to 50 million euros ($61.4 million) for social media companies that do not remove hate speech quickly enough went into force in Germany this year.
Jourova said the results unveiled on Friday made it less likely that she would push for legislation on the removal of illegal hate speech.
'NO FREE PASS'
"The fact that our collaborative approach on illegal hate speech brings good results does not mean I want to give a free pass to the tech giants," she told a news conference.
Facebook reviewed complaints in less than 24 hours in 89.3 percent of cases, YouTube in 62.7 percent of cases and Twitter in 80.2 percent of cases.
Of the hate speech flagged to the companies, almost half was found on Facebook, the figures show, while 24 percent was on YouTube and 26 percent on Twitter.
The most common ground for hatred identified by the Commission was ethnic origin, followed by anti-Muslim hatred and xenophobia, including expressions of hatred against migrants and refugees.
After pressure from several European governments, social media companies stepped up efforts to tackle extremist online content, including through the use of artificial intelligence.
"After a year and a half of intensive EU-wide monitoring, we welcome the European Commission's announcement today and the clear, demonstrable improvements from all companies," said Stephen Turner, Twitter's head of public policy.
"These latest results and the success of the code of conduct are further evidence that the Commission's current self-regulatory approach is effective and the correct path forward."
The Commission is likely to issue a recommendation at the end of February on how companies should take down extremist content related to militant groups, an EU official said.
The monitoring exercise was conducted over a period of six weeks in November.
(Reporting by Julia Fioretti; Additional reporting by Foo Yun Chee; Editing by Grant McCool and David Goodman)