Meta is confronting a major legal challenge in the United States after privacy advocates and customers filed a class action lawsuit alleging that the company misled users about how its AI‑enabled smart glasses handle personal data.
According to reporting by TechCrunch, the lawsuit was filed in federal court in San Francisco by plaintiffs Gina Bartone of New Jersey and Mateo Canu of California, represented by the Clarkson Law Firm. The complaint claims that Meta and its hardware partner, EssilorLuxottica of America, violated consumer protection laws by advertising the Ray‑Ban Meta AI smart glasses as “designed for privacy” and “controlled by you,” despite evidence that footage recorded by users may be accessed by third‑party workers.
Investigations by Swedish media outlets revealed that workers employed by a subcontractor in Nairobi, Kenya, have been reviewing sensitive video and audio content captured by the smart glasses. Reports suggest that some of this material includes footage of private, intimate moments — such as individuals undressing or using the bathroom — which the lawsuit claims consumers were not told could be accessed off‑device. These allegations have heightened concerns about how wearable tech companies handle user privacy.
The San Francisco Chronicle/SFGate likewise reported that the plaintiffs accuse Meta of false advertising, fraud, and breach of contract. The lawsuit argues that Meta’s marketing created a reasonable expectation that recordings would remain private and controlled by the user, yet contractors hired to label data for AI training purposes reportedly had access to intimate and highly personal footage without users’ explicit understanding or consent. The plaintiffs are seeking class certification, an injunction against the current advertising claims, and punitive damages.
Privacy advocates and regulatory bodies have also taken notice of the wider implications. The Verge reported that the controversy has prompted the UK’s Information Commissioner’s Office (ICO) to contact Meta for clarity on its data‑handling practices, particularly on whether privacy protections such as face‑blurring are effective. While Meta maintains that media remains on a user’s device unless explicitly shared and that content review is standard practice for improving AI systems, critics argue that the lack of clear communication about human review undermines user trust.
Meta has stated that contractors are only involved when users choose to share content with Meta AI and that the company takes steps to protect privacy, including filtering data and blurring identifiable features. However, the lawsuit and accompanying investigations underscore ongoing questions about transparency and consent in the rapidly expanding market for AI‑enabled wearables.
