Tech giant Meta Platforms is facing a lawsuit in the United States over alleged privacy breaches linked to its AI-powered smart glasses.
The case stems from reports that employees working for a contractor in Kenya reviewed sensitive footage captured by users, according to TechCrunch.
The lawsuit follows revelations by Swedish media that workers at a subcontracted firm in Kenya were assigned to examine images and videos recorded through Meta’s smart glasses. Some of the material reviewed reportedly contained highly sensitive personal content.
According to TechCrunch, the footage was part of a quality review process meant to help improve the glasses’ artificial intelligence capabilities. However, the reports have triggered concerns about how the company handles user data and whether customers were clearly informed that human reviewers might access their recordings.
The complaint was filed in the United States by Gina Bartone from New Jersey and Mateo Canu from California. The two are represented by Clarkson Law Firm, a legal group known for pursuing cases against large technology companies.
In the suit, the plaintiffs accuse Meta of violating privacy and consumer protection laws and misleading customers about the product’s privacy safeguards.
They argue that Meta promoted the smart glasses with phrases such as “designed for privacy, controlled by you” and “built for your privacy,” which they claim led buyers to believe their recordings would remain completely private. The plaintiffs say they were never told that overseas contractors could review their content.
Meta’s manufacturing partner, Luxottica, has also been named in the legal action.
The lawsuit also draws attention to how widely the product has been adopted. Reports indicate that more than seven million people bought Meta’s smart glasses in 2025 alone, raising questions about how much personal data could potentially be reviewed.
According to the complaint, footage recorded by the glasses may enter a data processing pipeline where it can be reviewed by human analysts to help train or refine AI systems. The plaintiffs claim users were not given the option to opt out of this process.
Meta has defended its practices, stating that human reviews occur only when users voluntarily share content with its AI tools. The company told the BBC that contractors may examine data submitted to Meta AI in order to enhance the system’s performance.

The company also pointed to its privacy policy and additional terms of service, which indicate that interactions with its AI technologies may be reviewed either automatically or manually. One version of the policy states that Meta may examine interactions with AI systems—including conversations and messages—through automated tools or human reviewers.
A spokesperson for Meta, Christopher Sgro, said that media captured using the glasses normally stays on the user’s device unless it is deliberately shared with Meta or other platforms.
“Ray-Ban Meta glasses help users interact with AI hands-free to answer questions about the environment around them,” he said.
Sgro added that the company sometimes relies on contractors to review certain shared data but takes measures to filter the material in order to protect privacy and remove identifying information.
The case highlights growing global concerns about wearable technologies such as smart glasses and other AI-driven devices that continuously collect data about users and their surroundings.