Ireland’s media regulator has again put social media giant Meta on watch over terrorist content takedowns, issuing a decision against Facebook on Monday. Coimisiún na Meán said the tech giant would have to take “specific measures” to prevent its services from being used to disseminate terrorist content, and report back to the regulator on the measures taken.
The decision follows a similar determination by Coimisiún na Meán against Meta-owned Instagram in November, along with TikTok and X.
The Irish authority plays an outsized role in regulating tech giants’ compliance with a range of digital rulebooks because so many of them locate their regional headquarters in Ireland.
The relevant bit of Ireland’s online safety framework that Coimisiún na Meán is enforcing in this decision is a pan-EU law on terrorist content takedowns that was agreed by the bloc’s lawmakers back in 2021. It requires hosting service providers — in this case social media platforms — to remove terrorist content within one hour of it being reported. Penalties under the regime can reach up to 4% of global annual turnover.
“Under the Terrorist Content Online Regulation, hosting service providers which receive two or more final removal orders from EU competent authorities within the last 12 months may be determined as being exposed to terrorist content,” the Irish regulator wrote in a press release. “An Coimisiún has reached this decision [against Meta-owned Facebook] following the notification of two or more final removal orders in respect of this providers and following engagement with this provider.”
It’s not clear exactly which type of terrorist content was found on Facebook and notified to the regulator. We’ve asked for more details. Meta has been contacted for a response to the Coimisiún na Meán decision.
Update: Meta spokesman Ben Walters emailed a statement saying: “This designation means CnaM [Coimisiún na Meán] can assess the measures we have in place to deal with terrorist content.”
Meta, the parent company of Facebook and Instagram, has been put on watch under EU rules over concerns about the spread of terrorist content on its platforms. The bloc has been increasing pressure on tech companies to take responsibility for removing terrorist content, and Meta is the latest to be scrutinized.
Background
The EU has been grappling with the issue of terrorist content online for several years. The Terrorist Content Online Regulation, proposed in 2018 and adopted in 2021, requires hosting service providers to remove terrorist content from their platforms within one hour of receiving a removal order. The regulation also requires exposed providers to implement specific measures to prevent the spread of terrorist content.
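To illustrate what the one-hour rule implies operationally, a hosting provider has to track every incoming removal order against a hard deadline. Here is a minimal sketch in Python; the order fields and structure are hypothetical stand-ins, not the regulation’s actual data format (real orders follow a template annexed to the regulation):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical representation of a removal order, for illustration only.
@dataclass
class RemovalOrder:
    order_id: str
    content_url: str
    received_at: datetime  # when the provider received the order

    def deadline(self) -> datetime:
        # The regulation requires removal (or disabling of access)
        # within one hour of receiving the order.
        return self.received_at + timedelta(hours=1)

    def is_overdue(self, now: datetime) -> bool:
        return now > self.deadline()

order = RemovalOrder(
    order_id="RO-2024-001",
    content_url="https://example.com/post/123",
    received_at=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
)
print(order.deadline())  # 2024-01-01 13:00:00+00:00
```

In practice this kind of deadline tracking has to be automated end to end, since one hour leaves little room for manual triage.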
Meta’s Response
Meta has responded to the EU’s concerns by implementing several measures to prevent the spread of terrorist content on its platforms. These measures include:
1. Using AI to Detect Terrorist Content: Meta uses artificial intelligence (AI) to detect and remove terrorist content from its platforms. The company’s AI systems are trained to recognize patterns and keywords associated with terrorist content.
2. Implementing Proactive Measures: Meta has implemented proactive measures to prevent the spread of terrorist content on its platforms. These measures include using algorithms to detect and remove terrorist content before it is reported by users.
3. Partnering with Counter-Terrorism Experts: Meta has partnered with counter-terrorism experts to help identify and remove terrorist content from its platforms. The company works closely with these experts to stay up-to-date with the latest trends and tactics used by terrorist organizations.
4. Providing Transparency Reports: Meta provides transparency reports on its efforts to remove terrorist content from its platforms. These reports provide detailed information on the number of terrorist content removals, the types of content removed, and the sources of the removals.
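Meta’s actual detection systems are not public, but the keyword-and-pattern matching described in point 1 can be caricatured with a toy filter. The sketch below is purely illustrative — the patterns and routing logic are invented, and real systems combine machine-learned classifiers, hashes of known content, and human review:

```python
import re

# Invented patterns for illustration; these bear no relation to
# Meta's actual detection systems.
FLAGGED_PATTERNS = [
    re.compile(r"\bjoin\s+our\s+cause\b", re.IGNORECASE),
    re.compile(r"\bincite\s+violence\b", re.IGNORECASE),
]

def flag_for_review(text: str) -> bool:
    """Return True if any pattern matches, routing the post to human review."""
    return any(p.search(text) for p in FLAGGED_PATTERNS)

print(flag_for_review("Join our cause today"))  # True
print(flag_for_review("What a lovely day"))     # False
```

A filter this crude would produce far too many false positives and negatives on its own, which is why pattern matching is typically only a first-pass signal feeding into richer models and human moderation.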
Benefits of Meta’s Measures
Meta’s measures to prevent the spread of terrorist content on its platforms have several benefits, including:
1. Reducing the Spread of Terrorist Content: Meta says its AI systems and proactive measures help detect and remove terrorist content before it is reported by users, reducing its spread across the company’s platforms.
2. Improving Transparency and Accountability: Meta’s transparency reports provide detailed information on its efforts to remove terrorist content from its platforms. This transparency helps to hold the company accountable for its actions and provides a clear understanding of its efforts to prevent the spread of terrorist content.
3. Enhancing Collaboration with Counter-Terrorism Experts: Meta’s partnership with counter-terrorism experts helps to ensure that the company stays up-to-date with the latest trends and tactics used by terrorist organizations. This collaboration also helps to improve the effectiveness of Meta’s measures to prevent the spread of terrorist content.
Challenges and Limitations
While Meta has invested in measures to prevent the spread of terrorist content on its platforms, challenges and limitations remain. These include:
1. The Ever-Evolving Nature of Terrorist Content: Terrorist organizations are constantly evolving and adapting their tactics, making it challenging for Meta to stay ahead of the threat.
2. The Difficulty of Defining Terrorist Content: Defining terrorist content can be challenging, as it can be subjective and context-dependent. This can make it difficult for Meta to determine what content constitutes terrorist content.
3. The Need for Continuous Improvement: Meta’s measures to prevent the spread of terrorist content on its platforms require continuous improvement. The company must stay up-to-date with the latest trends and tactics used by terrorist organizations and adapt its measures accordingly.
Conclusion
Meta’s measures to prevent the spread of terrorist content on its platforms are an important step in the fight against terrorism. While there are still challenges and limitations to consider, Meta’s efforts demonstrate a commitment to preventing the spread of terrorist content and promoting a safer online environment.