
The Importance of Child Safety Red Teaming for Your AI Company

Generative AI has seen significant advances in the last two years and has critical implications for child safety. This white paper outlines Thorn's child safety red teaming service, which helps mitigate the misuse of generative AI technologies to perpetrate, proliferate, and further sexual harms against children.

Understand the potential misuses of gen AI and how to mitigate them

While the benefits of generative AI are being realized more every day, like any powerful tool it carries an equally high risk of misuse and capacity to cause harm. With the right approach to AI development, some of these threats can be mitigated and better contained. Red teaming is one key mitigation.

Why Red Teaming?

Our child safety red team sessions are designed to test AI models and identify risks and vulnerabilities related to child sexual abuse. Without child safety mitigations in place, bad actors can and do misuse generative AI technologies (across multiple data modalities) to: 

  • Exploit children

  • Generate photo-realistic sextortion material

  • Mass produce AI-generated child sexual abuse material (AIG-CSAM)

  • Mass produce grooming conversations

Learn more about how red teaming fits into a Safety by Design approach to AI development in this paper.


“Thorn is unique in its depth of expertise in both child safety and AI technology. The combination makes them an exceptionally powerful partner in our work to assess and ensure the safety of our models.”

CHELSEA CARLSON, CHILD SAFETY TECHNICAL PROGRAM MANAGER, OPENAI