TEXAS, USA — Instagram wants to make sure people 17 years old and younger are having age-appropriate experiences.
That means protecting them from adults they don't know, as well as inappropriate online ads.
Verification methods include face-scanning AI applied to an uploaded video selfie. Kids can also select three mutual followers to vouch for their age, but those mutual followers must be at least 18 years old themselves.
The use of face-scanning AI, especially on teenagers, raised alarm bells, given the checkered history of Instagram's parent company, Meta, when it comes to protecting users' privacy.
Meta stressed that the technology used to verify people's age cannot recognize a person's identity, only their age. Once the age verification is complete, Meta said it and Yoti, the AI contractor it partnered with to conduct the scans, will delete the video.
Back in May, a lawsuit filed by Texas Attorney General Ken Paxton accused the company of misusing facial recognition technology.
In response to the lawsuit, Meta temporarily blocked certain augmented reality filters for Texans. The filters returned a week later.
Meta also disabled the AR filters in Illinois, which has a law similar to Texas' CUBI Act: the Illinois Biometric Information Privacy Act. Under that law, the filters can't be used within Illinois.
In April 2022, Facebook settled a class-action lawsuit covering upwards of 1.4 million users in Illinois. The suit claimed Facebook "collected and stored biometric data of Facebook users in Illinois without proper notice or consent," in violation of Illinois law.
Settlement payments were mailed to residents who fit the definition of "Facebook users located in Illinois for whom Facebook created and stored a face template after June 7, 2011."
The Associated Press contributed to this report.