Google has reportedly launched a new data collection project, hiring a contractor to gather facial recognition data from children, offering $50 to parents for their child’s participation.
404 Media reports that the project involves Google collecting specific data points, including the eyelid shape and skin tone of children. Parents are asked to film their children wearing various props, such as hats and sunglasses. TELUS, acting on Google’s behalf, is paying parents $50 for this data collection effort. The primary objective of the project is to build datasets for machine learning, artificial intelligence, and facial recognition technologies.
This method of data collection marks a drastic shift from the traditional approach of scraping existing images or analyzing material already posted on the internet. Instead, Google is directly engaging the public in its data gathering and compensating them for their contributions. Google clarified that the initiative is part of its efforts to verify users’ ages.
Participants in the project are required to be minors, specifically aged between 13 and 17 years. The process involves taking multiple short videos, each under 40 seconds, with the overall task expected to take 30 to 45 minutes. The project insists that the filming of children be done on private premises, not in public spaces, and requires parental consent.
TELUS, which also offers facial recognition products, stated that the purpose of this collection is to capture a broad cross-section of participants. This aims to ensure that the services and products derived from this data are representative of a diverse set of end-users. The data collection is intended to improve authentication methods, thereby offering more secure tools for users.
Google claims that TELUS’s involvement is limited to recruiting participants, with Google directly receiving the recorded videos. The tech giant has highlighted its commitment to age-appropriate experiences and compliance with laws and regulations, pointing to strict privacy protections that include the option for participants to delete their data at any time.
Google’s latest data collection scheme is particularly troubling given the revelation in December that AI training datasets include child pornography:
The Associated Press reports that The Stanford Internet Observatory, in collaboration with the Canadian Centre for Child Protection and other anti-abuse charities, conducted a study that found more than 3,200 images of suspected child sexual abuse in the AI database LAION. LAION, an index of online images and captions, has been instrumental in training leading AI image-makers such as Stable Diffusion.
This discovery has raised alarms across various sectors, including schools and law enforcement. The child pornography has enabled AI systems to produce explicit and realistic imagery of fake children and to transform social media photos of real teens into deepfake nudes. It was previously believed that AI tools produced abusive imagery by combining adult pornography with benign photos of kids; the direct inclusion of explicit child images in training datasets presents a more direct and disturbing reality.
Read more at 404 Media here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.