A Stanford study found more than 1,000 images of child sexual abuse in a database used to train AI models, raising concerns about AI's role in child exploitation. The database, compiled by the nonprofit LAION, contains billions of images scraped from the web for AI model training. Experts attribute the problem to rapid innovation and a lack of regulation in AI development, and warn that similar material likely exists in other datasets. Yaron Litwin of Canopy, a company specializing in AI content filtering for children, expressed concern that such issues could escalate.
Full article on Fortune: https://fortune.com/2023/12/21/ai-training-child-abuse-explicit-stanford/?hoq812