OpenAI on April 8 published a Child Safety Blueprint outlining three priorities: modernizing laws to cover AI-generated and AI-altered child sexual abuse material (CSAM), improving provider reporting and coordination with law enforcement, and embedding safety-by-design into AI systems from the ground up. The framework was developed in consultation with the National Center for Missing & Exploited Children (NCMEC), the Attorney General Alliance, and the child-safety nonprofit Thorn. The announcement comes amid a reported 14% rise in AI-generated CSAM cases in the first half of 2025, with the Internet Watch Foundation logging more than 8,000 such cases in that period.