The growing presence of child sexual abuse material (CSAM) online is a significant global issue, highlighting the urgent need for innovative and effective prevention strategies.
Warning messages are an effective, low-cost method of deterring individuals from accessing or distributing CSAM. Technology providers have deployed deterrence warning messages for users attempting to access CSAM since 2013, when search providers Google, Bing and Yandex first introduced them.
Warning messages serve as a digital intervention aimed at preventing online child sexual abuse by targeting individuals at risk of engaging in such behaviour. These messages are designed to increase the perceived risk of detection and legal consequences, thereby deterring potential offenders while also raising their awareness of therapeutic support available through services like Stop It Now!
The use of warning messages aligns with situational crime prevention principles, which focus on increasing the effort and risks associated with offending, reducing the rewards, and removing excuses for criminal behaviour. Warning messages can be deployed in a broad range of contexts, with the potential to significantly reduce offending behaviour.
Research-Based Evidence for the Effectiveness of Warning Messages
Research has shown that warning messages can be effective in deterring individuals from engaging in harmful sexual behaviours online. These messages interrupt the pathway to offending and increase awareness of the legal consequences of such actions.
In randomised controlled trials, participants who were shown warning messages were less likely to attempt to access a website purporting to contain sexual material. These experiments examined the effectiveness of warnings shown to users attempting to (a) access a fake “barely legal” pornographic website and (b) share sexualised images to gain entry to a fake “revenge porn”-themed website.
The studies found that warning messages had a statistically significant effect on reducing the number of participants who attempted to view or share sexualised images after receiving a warning. Given that the participants in these studies were not attempting to commit an offence, the researchers concluded that real-life warnings shown to people attempting to view CSAM, where the legal consequences are real, would be even more effective.
The growing level of online victimisation of young people motivated a recent study exploring how adolescents perceive online warning messages designed to address problematic online behaviours such as the viewing or sharing of CSAM, image-based abuse (IBA), cyberbullying and sexual extortion. The study examined warnings from three perspectives: the young person causing harm, the victim, and the bystander. It found positive perceptions of, and engagement with, the warning messages across all three groups, particularly among victims and bystanders.
Evaluation at Scale on the Web
The rollout of warning messages by law enforcement and some technology companies is a promising sign that action is underway. For example, the warnings implemented by search providers in 2013 appear to have reduced the number of search attempts. However, public data on the volume and effectiveness of these interventions is very limited.
In 2021, Aylo, which operates Pornhub and other pornography websites, launched warning messages across its platforms. Then, in 2022, Aylo launched a chatbot in the United Kingdom on the warning page shown to users who attempted to locate CSAM on Pornhub, in partnership with the Internet Watch Foundation and the Lucy Faithfull Foundation. This project, called reThink, culminated in an evaluation of the chatbot’s effectiveness in deterring users from searching for CSAM, drawing on rich data from all three partners, with the results shared publicly.
The evaluation found that the chatbot had a significant deterrent effect on search volume: 82 per cent of users served the chatbot warning did not search for CSAM again in that session. Several of these users requested information on support services and went on to call the Stop It Now! UK and Ireland helpline.
This publicly reported evaluation is a prime example of the work needed to understand how warning messages function and how effective they can be at scale on mainstream platforms.
Ongoing evaluation and refinement of warning messages is crucial to ensure their continued effectiveness. This includes monitoring user behaviour and adjusting the content and delivery of messages based on feedback and new research findings. The iterative process of evaluating and improving warning messages helps maintain their relevance and potency in deterring online child sexual abuse.
At present, however, the data available from most warning-message deployments remains too limited to support this kind of evaluation.
The Path Ahead
The evidence underscores the promise of warning messages as a prevention strategy for online CSAM; however, a larger evidence base is still needed. The success of this strategy depends on careful design, implementation, and evaluation to ensure that the messages remain effective and relevant. Collaboration between technology companies, law enforcement, financial institutions, service providers, education and care providers, researchers, child sexual abuse prevention professionals, advocates and survivors is essential to refine and enhance these interventions, ultimately reducing the prevalence of CSAM and keeping children safe from harm.