#data #AI #onlinesafety
Online threats are increasingly varied, challenging, and widespread – ranging from hate speech to terrorism, from disinformation to child abuse.
These disruptive, unwanted and often illegal activities present a clear risk to the safety and wellbeing of individuals, platforms and societies. Despite incoming regulation in many territories, growing public support for action, and an expanding market of vendors offering safety products and services, these problems remain fundamentally difficult to solve. Every solution currently available raises issues of performance, free speech, proportionality, privacy or technological capability.
There are no silver bullets when dealing with a complex problem like online safety, but Artificial Intelligence (AI) has real potential to drive a step change in detecting and responding to online threats. It can make law and policy enforcement more efficient and effective by supporting, augmenting and, in some cases, replacing human-led interventions.
In recent years, AI has improved dramatically, and workflows for its use and deployment have been overhauled by the widespread adoption of large pre-trained models and transfer learning. However, these changes have not been fully leveraged in how AI is applied to online safety. In particular, the critical role played by data, and consequently the unique position and importance of data owners, has not been fully recognised. This article discusses these changes and their implications, and explores what they mean for the future of the online safety sector.