Digital Regulation Platform

Social media platform regulation

20.08.2020

The moderation of online content, especially on social media platforms, is a pressing issue for consumers and citizens. The experience of the Facebook Oversight Board, a self-regulatory initiative, will affect the future not only of Facebook but also of other regulatory initiatives relating to social media and to platforms more generally, including self-regulatory[1] and co-regulatory initiatives.

Within any one platform, there may be tensions between straightforward commercial considerations and political or public-relations imperatives. To these may be added legal constraints to combat online harms, such as those that already exist in Germany[2] and are proposed in the United Kingdom.[3] A valuable overview of the situation in 12 Sub-Saharan African countries recommends, among much else, a human-rights-based approach. Potential penalties for legal infractions may sometimes be treated as a cost of doing business rather than as serious risks to be avoided; adverse verdicts in the court of public opinion may be seen as the bigger problem.

Technology itself has a large role to play. The overview below illustrates new and emerging safety technologies in the U.K. “safety tech” sector. Facebook and other online platforms already use similar technologies to identify and handle questionable content, and many further developments can be expected in the coming years.
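As a concrete illustration, the sketch below shows “keyword collation and monitoring”, one of the simpler techniques in the overview that follows. The term list, function name, and routing decision are hypothetical; production systems pair curated lexicons with machine-learning classifiers and human review, because bare keyword matching both over- and under-flags.

```python
# Minimal sketch of keyword collation and monitoring, one of the
# moderation techniques named in the overview below. The lexicon and
# the routing decision are hypothetical stand-ins.
import re

FLAGGED_TERMS = {"example-slur", "example-threat"}  # hypothetical lexicon


def needs_review(post: str) -> bool:
    """Return True if the post contains any term from the lexicon."""
    words = set(re.findall(r"[\w'-]+", post.lower()))
    return not FLAGGED_TERMS.isdisjoint(words)


# Matched posts would typically be routed to human moderators rather
# than removed automatically.
posts = ["this contains example-slur", "an ordinary post"]
review_queue = [p for p in posts if needs_review(p)]
print(review_queue)  # -> ['this contains example-slur']
```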

Each entry below gives the area and its description, the harms addressed, the approach taken, the benefit, and the associated technologies and services.

System-wide governance

Tracing, locating, and removing illegal content
  • Harms: terrorist content; extreme/revenge pornography; child sexual abuse and exploitation
  • Approach: detection of and action against illegal content at system level
  • Benefit: tackling online crime, protecting citizens, preventing abuse
  • Technologies and services: hashing; URL lists; takedown and domain alerts; keyword collation and monitoring

Platform governance

Platform-level response to illegal content, including preventing illegal content from being published
  • Harms: terrorist content; extreme/revenge pornography; child sexual abuse and exploitation; sharing of indecent images; encouraging or assisting suicide; harassment; hate crime
  • Approach: pre-moderation, detection, flagging, and removal of illegal content at platform level
  • Benefit: protection from illegal online content
  • Technologies and services: threat detection and reporting; platform monitoring; hashing; content filtering; automated and human moderation; image processing

Platform moderation and monitoring: prevention, detection, and action against harmful conduct and/or content
  • Harms: extremist content; cyberbullying; coercive behaviour; intimidation; violent content; toxic content; advocacy of self-harm
  • Approach: moderation and monitoring of harmful conduct and/or content
  • Benefit: protection from harmful online conduct and content
  • Technologies and services: computer vision; machine learning

Age-oriented online safety

Age-appropriate design
  • Harms: age-inappropriate content; unsafe spaces
  • Approach: safety by design
  • Benefit: design and development of user-centred online environments to keep children safe
  • Technologies and services: age-appropriate web services; consent management

Age assurance
  • Harms: age-inappropriate content; unsafe spaces
  • Approach: age detection and verification
  • Benefit: protection from age-inappropriate content
  • Technologies and services: age assurance mechanisms, including age estimation, e-IDs, and database matching / attribute exchange

User protection

User-initiated protection (user-, parental-, or device-based)
  • Harms: age-inappropriate content; unsafe spaces
  • Approach: safety by design; age-based safeguarding
  • Benefit: creating safe online experiences for children
  • Technologies and services: endpoint protection software and applications

Network filtering
  • Harms: extremist content; cyberbullying; coercive behaviour; intimidation; violent content; harmful instruction
  • Approach: detection and blocking of access to harmful or inappropriate content
  • Benefit: preventing access to harmful material within defined settings
  • Technologies and services: content filtering and monitoring

Information governance

Detecting and disrupting false, misleading, and/or harmful narratives
  • Harms: misinformation; disinformation
  • Approach: fact-checking, disinformation research, and disruption
  • Benefit: ensuring the accuracy of information available to citizens and trust in the information environment and wider society
  • Technologies and services: disinformation research; site assurance; AI/ML-enabled automated fact-checking

Online safety professional services

Compliance and professional services
  • Harms: all
  • Approach: compliance services, research, frameworks, and methodologies for auditing, evaluating, or mitigating potential harms
  • Benefit: enabling the development of safer online communities and embedding safety by default
  • Technologies and services: advisory support with implementing technical solutions


Source: Adapted from Safer Technology, Safer Users: The UK as a World-Leader in Safety Tech.
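Hashing, which appears in several of the rows above, compares a fingerprint of each uploaded file against fingerprints of material already known to be illegal, supplied by bodies such as the Internet Watch Foundation. A minimal sketch follows, assuming a hypothetical hash list and using exact-match SHA-256; deployed systems such as Microsoft’s PhotoDNA or Meta’s open-source PDQ instead use perceptual hashes that still match after resizing or re-encoding.

```python
# Minimal sketch of hash-list matching against known illegal content.
# SHA-256 only matches byte-identical files; deployed systems use
# perceptual hashes (e.g., PhotoDNA, PDQ) that tolerate re-encoding.
# The hash list here is hypothetical, built from a stand-in byte string.
import hashlib


def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


# Hypothetical block list; in practice supplied and updated by bodies
# such as the Internet Watch Foundation.
KNOWN_BAD_HASHES = {fingerprint(b"stand-in for known illegal file")}


def check_upload(data: bytes) -> str:
    """Classify an upload: block on a hash match, else pass it on."""
    if fingerprint(data) in KNOWN_BAD_HASHES:
        return "block-and-report"
    return "continue-moderation-pipeline"


print(check_upload(b"stand-in for known illegal file"))  # block-and-report
print(check_upload(b"ordinary holiday photo"))  # continue-moderation-pipeline
```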

Notes

  1. Examples include the Global Network Initiative and the Internet Commission. Both aim to provide independent assessments of industry members’ performance, though with different emphases and approaches, and both face the challenge of reconciling a level of commercial confidentiality acceptable to their members with enough transparency to achieve public credibility.
  2. The 2017 Netzwerkdurchsetzungsgesetz (Network Enforcement Act) is already under review. See “Government wants to change the Network Enforcement Act”, https://www.bundestag.de/dokumente/textarchiv/2020/kw19-de-netzwerkdurchsetzungsgesetz-692664
  3. See “Online harms regulation: Clarity awaited but reforms set to be delayed”.
Last updated on: 19.01.2022