Best Practice Guide: Implementing Warning Messages for CSAM Prevention

Technology platforms fight a constant battle against illicit content. Static content blocking leaves defensive gaps. Proactive behavioural interventions offer a stronger approach by disrupting attempts to access content and redirecting users to support that enables them to change their behaviour.

This guide provides a set of best practice principles for designing and deploying automated warning messages to disrupt and deter the viewing and distribution of child sexual abuse material (CSAM). These principles are based on the latest research being conducted worldwide with a range of technology companies.

The guide describes: when and where messages can be deployed; how they should be designed; the phrasing and wording to use; and, finally, several key pitfalls to avoid. Implementing these guidelines requires industry collaboration. The CSAM Deterrence Centre invites technology platforms to partner directly with our research and practice teams.

When and where messages should be deployed

Effectiveness depends on delivering the message precisely when a user initiates a high-risk action [1]. The situations in which a warning should be triggered extend beyond simple keyword matches to a broad range of situational contexts.

  • Diverse Trigger Points [2]: Implement triggers for flagged search terms, attempts to access known CSAM URLs, uploads of material matching known CSAM hashes, and activations of grooming-detection algorithms.
  • Conversational & Contextual Analysis [2, 3]: Use natural language processing (NLP) and multimodal analysis to identify risk signals in text, images, or metadata exchanged within interactive environments such as messaging platforms and comment threads.
  • Layered Defence [2]: Deploy warnings as part of a defence-in-depth strategy that includes content blocking, search-term suppression, and friction mechanisms (e.g. a delay before the next request after a trigger). A warning message on its own is not a complete strategy.
  • Escalation: Instead of static triggers, implement a sequenced intervention pathway. If risky behaviour persists, the system should trigger progressively more assertive responses rather than repeating the same message.
  • Differentiate Outcomes: When setting success criteria, distinguish between immediate outcomes (e.g., abandoning a search) and intermediate outcomes observed over time (e.g., longer delays between repeat attempts).
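The escalation and outcome-differentiation points above can be sketched as a small trigger-history tracker that maps repeated triggers within a time window to progressively more assertive responses. The response ladder, the 24-hour window, and all names below are illustrative assumptions, not a prescribed design.

```python
import time
from dataclasses import dataclass, field

# Hypothetical escalation ladder: each repeated trigger inside the window
# moves the user one step towards a more assertive response.
ESCALATION_LADDER = [
    "inline_warning",
    "full_page_interstitial",
    "delay_and_signpost",
    "account_review",
]

@dataclass
class TriggerHistory:
    timestamps: list = field(default_factory=list)

    def record(self, now=None):
        """Record one trigger event (defaults to the current time)."""
        self.timestamps.append(now if now is not None else time.time())

    def level(self, window_seconds=86400):
        """Count triggers within the window ending at the latest event
        and map that count to a step on the escalation ladder."""
        if not self.timestamps:
            return ESCALATION_LADDER[0]
        cutoff = self.timestamps[-1] - window_seconds
        recent = [t for t in self.timestamps if t >= cutoff]
        step = min(len(recent) - 1, len(ESCALATION_LADDER) - 1)
        return ESCALATION_LADDER[step]
```

Storing timestamps rather than a bare counter also supports the intermediate outcomes mentioned above, such as measuring whether delays between repeat attempts are lengthening over time.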

How they should be designed

A warning is only effective if it is noticed, understood, and acted upon.

  • Visual Salience [4, 5]: Use high-contrast colours (e.g. standard safety colours), bold typography, recognisable hazard symbols (e.g. "!"), and signal words (e.g. "Warning" or "Caution") to capture immediate attention.
  • Interruptive Formats [5]: Pair messages with a friction mechanism, such as full-page interstitials or pop-ups that require a user response (e.g. clicking "Exit") before proceeding, or a delay before similar actions can be attempted again.
  • Clarity [3, 6]: Keep messages clear, concise, and direct, using simple language. Research suggests that excessively long messages are likely to be ignored.
  • Message Variation [7]: Periodically vary the visual design, wording, and framing of messages to counter "warning fatigue" and habituation. Dynamic messaging increases novelty and helps capture user attention. Monitor message performance continuously and rotate underperforming messages.
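One way to operationalise rotation is a simple cycle over a pool of message variants, so repeat viewers never see the identical message twice in a row. The variant pool below is purely hypothetical; real wording and styling should follow the guidance in this document and be evaluated before deployment.

```python
import itertools

# Hypothetical variant pool: each entry varies design, signal word, and framing.
MESSAGE_VARIANTS = [
    {"style": "high_contrast_red", "signal_word": "Warning", "frame": "legal"},
    {"style": "high_contrast_amber", "signal_word": "Caution", "frame": "support"},
    {"style": "interstitial_dark", "signal_word": "Warning", "frame": "legal_plus_support"},
]

class MessageRotator:
    """Cycle through variants so consecutive impressions always differ."""

    def __init__(self, variants):
        self._cycle = itertools.cycle(variants)

    def next_message(self):
        # Return the next variant in round-robin order.
        return next(self._cycle)
```

In practice, a round-robin cycle would be combined with the performance monitoring described above, retiring variants whose abandonment rates decline over time.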

What wording to use

The tone and perceived authority of a message are as critical as its visual appearance [6]. The context in which the message is deployed affects how it is received. Partnering with us gives your team access to the latest research on framing wording for different online environments.

  • Source Credibility [5, 8]: Where appropriate, attribute messages to authoritative sources such as law enforcement agencies, regulators, or reputable NGOs (e.g. the Internet Watch Foundation).
  • Signpost to Support [9, 10]: Include non-punitive, anonymous links to help-seeking resources and therapeutic services for users concerned about their behaviour.
  • Legal Framing [6, 11]: Clearly state the illegality of the behaviour and, where contextually appropriate, mention the potential consequences.
  • Proportionate Response [3]: Ensure the tone matches the risk level, to preserve user trust and avoid unnecessary escalation for accidental or one-off triggers.
  • Interactive Engagement [11]: Integrating an interactive chatbot can strengthen behavioural interruption and provide a bridge to support services.

Key Pitfalls to Avoid

  • Avoid Sensationalism [12]: Do not use ambiguous, eroticised, or sensationalist imagery or language, as this can inadvertently trigger a "forbidden fruit" effect and increase curiosity.
  • Avoid Excessive Punishment [3, 13]: While legal warnings are effective, overly punitive or shaming language can trigger defensiveness and reduce help-seeking behaviour.
  • Minimise False Positives [2]: Use context-aware filtering (e.g. Bayesian classifiers) rather than simple keyword lists, to preserve user trust and avoid over-blocking benign content.
  • Avoid Deception [3, 13]: Users respond negatively to messages that attempt to deceive them. If using a chatbot, be transparent about its automated nature.
  • Static Deployment [7]: Avoid "set and forget" implementations. Failing to adapt to evolving user behaviour leads to declining effectiveness and the emergence of avoidance tactics.
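To illustrate the "context-aware rather than keyword list" point, a minimal multinomial naive Bayes scorer combines evidence from every word in a message with a class prior, instead of firing on any single term. The class name, the placeholder tokens, and the Laplace smoothing below are assumptions for illustration only; a production classifier would be trained and validated on curated data.

```python
import math
from collections import Counter

class NaiveBayesRiskScorer:
    """Minimal naive Bayes sketch: scores text by aggregating word-level
    evidence, so one isolated keyword is not enough to trip a warning."""

    def __init__(self):
        self.word_counts = {"risky": Counter(), "benign": Counter()}
        self.doc_counts = {"risky": 0, "benign": 0}

    def train(self, text, label):
        # Tally document and word counts per class.
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def risk_probability(self, text):
        # Log-odds of "risky" vs "benign" with Laplace (+1) smoothing.
        vocab = set(self.word_counts["risky"]) | set(self.word_counts["benign"])
        log_odds = math.log((self.doc_counts["risky"] + 1) /
                            (self.doc_counts["benign"] + 1))
        risky_total = sum(self.word_counts["risky"].values())
        benign_total = sum(self.word_counts["benign"].values())
        for w in text.lower().split():
            p_r = (self.word_counts["risky"][w] + 1) / (risky_total + len(vocab))
            p_b = (self.word_counts["benign"][w] + 1) / (benign_total + len(vocab))
            log_odds += math.log(p_r / p_b)
        # Convert log-odds back to a probability in (0, 1).
        return 1 / (1 + math.exp(-log_odds))
```

Thresholding this probability, rather than matching a keyword list, lets a platform tune the trade-off between missed detections and the false positives that erode user trust.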

Partner With Us

To effectively deter CSAM, we need to share knowledge and work together across the industry. When we act alone, our impact is limited. Together, we can create custom warning systems and measure their effectiveness in real-world settings.

References

  1. Wortley, R., & Smallbone, S. (2012). Internet child pornography: Causes, investigation, and prevention. Bloomsbury Publishing USA.
  2. Hunn, C., Watters, P., Prichard, J., Wortley, R., Scanlan, J., Spiranovic, C., & Krone, T. (2023). How to implement online warnings to prevent the use of child sexual abuse material. Trends and issues in crime and criminal justice, (669), 1-14.
  3. Prichard, J., Scanlan, J., Watters, P., Wortley, R., Hunn, C., & Garrett, E. (2022). Online messages to reduce users’ engagement with child sexual abuse material: a review of relevant literature for the reThink chatbot. University of Tasmania, Hobart. ISBN 978-1-922708-21-2.
  4. Wogalter, M. S., & Mayhorn, C. B. (2005). Providing cognitive support with technology-based warning systems. Ergonomics, 48(5), 522-533.
  5. Laughery, K. R. (2006). Safety communications: warnings. Applied ergonomics, 37(4), 467-478.
  6. Bailey, A., Allen, L., Stevens, E., Dervley, R., Findlater, D., & Wefers, S. (2022). Pathways and prevention for indecent images of children offending: A qualitative study. Sexual Offending: Theory, Research, and Prevention, 17, e6657.
  7. Kim, S., & Wogalter, M. S. (2009). Habituation, dishabituation, and recovery effects in visual warnings. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 53, No. 20, pp. 1612-1616). Sage CA: Los Angeles, CA: SAGE Publications.
  8. Wogalter, M. S., & Mayhorn, C. B. (2008). Trusting the internet: Cues affecting perceived credibility. International Journal of Technology and Human Interaction (IJTHI), 4(1), 75-93.
  9. Scanlan, J., Prichard, J., Hall, L. C., Watters, P., & Wortley, R. (2024). reThink Chatbot Evaluation. University of Tasmania, Hobart. ISBN 978-1-922708-67-0.
  10. Protect Children. (2025). Online Warning Messages for CSAM Prevention: Evidence and Practice Mapping.
  11. Prichard, J., Wortley, R., Watters, P., Spiranovic, C., & Scanlan, J. (2024). The effect of therapeutic and deterrent messages on Internet users attempting to access 'barely legal' pornography. Child Abuse & Neglect, 155, 106955.
  12. Bushman, B. J., & Stack, A. D. (1996). Forbidden fruit versus tainted fruit: Effects of warning labels on attraction to television violence. Journal of Experimental Psychology: Applied, 2(3), 207–226.
  13. Brehm, J. W. (2012). Psychological reactance. Control Motivation and Social Cognition.