Stanford Study Exposes Child Safety Risks

A Stanford study revealed more than 1,000 illegal child sexual abuse images in an AI training database, raising concerns about AI’s role in child exploitation. The database, compiled by the nonprofit LAION, contains billions of images scraped from the web for training AI models. Experts attribute the problem to rapid innovation and a lack of regulation in AI development, and warn that similar material likely exists in other datasets. Yaron Litwin of Canopy, a company specializing in AI content filtering for children, expressed concern that these issues could escalate.

For the full article on Fortune: https://fortune.com/2023/12/21/ai-training-child-abuse-explicit-stanford/?hoq812

Ready to get started?

We built Canopy to empower families to enjoy a safer digital experience.

Discover Canopy!



You’re not in this alone.

Get helpful tips, stories, and resources from our network.
