
Meta is once again embroiled in controversy, this time over claims that its AI smart glasses violate user privacy in disturbing ways. The company now faces a new class-action lawsuit that paints an unsettling picture of how personal data may be mishandled.
An investigation by Swedish newspapers found that workers at a subcontractor in Kenya were assigned to review footage recorded by users’ Meta smart glasses. Much of this footage reportedly captured private moments, including people undressing, having sex, or using the toilet. Meta has said that faces in the footage are blurred automatically to protect privacy, but sources close to the process revealed that the blurring often failed.
The investigation has alarmed regulators and triggered a response from the United Kingdom’s Information Commissioner’s Office, which is now looking into how Meta handles data from its smart glasses. In the United States, two customers, Gina Bartone of New Jersey and Mateo Canu of California, have filed a class-action lawsuit through Clarkson Law Firm, accusing Meta and its partner Luxottica of misrepresenting the privacy of their product and deceiving consumers.
The complaint claims that Meta’s marketing was built on false promises. Ads for the smart glasses used confident phrases such as “built for your privacy” and “controlled by you,” giving customers the impression that their personal lives would remain confidential. However, the lawsuit says that footage from the glasses is fed into a data pipeline where human reviewers can access the clips, and users are not given a choice to opt out.
Clarkson Law Firm, which has previously taken on tech giants such as Apple, Google, and OpenAI, argues that this case reveals the true scale of Meta’s data practices. More than seven million people bought these smart glasses in 2025, meaning millions of hours of personal recordings could have been exposed to human eyes without proper consent.
Meta responded by telling the BBC that it uses contractors to review data shared with Meta AI in order to improve user experience. The company said this process is described in its privacy policy and supplemental terms of service, though critics argue the information is buried in legal text few people ever read. Privacy advocates insist that vague policy statements cannot justify exposing people’s most private moments to human review.
The lawsuit also catalogs how Meta advertised the smart glasses as secure and user-controlled: marketing materials promised “an added layer of security” and said users could decide what to share. Customers relied on those assurances, unaware that human workers might see what their glasses recorded.
This growing scandal feeds into wider fears about smart gadgets that constantly watch and listen. Devices like AI glasses and always-on microphones are being criticized as tools of “luxury surveillance.” Some developers have even released apps that alert people when someone nearby is wearing smart glasses, reflecting growing public concern about being recorded without consent.
Meta has not commented directly on the lawsuit, but a spokesperson said that content remains private unless it is shared with Meta AI, and that the company filters data to protect privacy and remove identifying information. Still, many users feel those reassurances fall short after learning that human contractors could view their recordings.
The case raises urgent questions about the future of personal technology. If smart glasses can secretly expose what people do in their own homes, how much control do users really have in a world where even what they see might no longer be private?