Meta Platforms Inc. is facing a class-action lawsuit alleging that its AI-powered smart glasses, Ray-Ban Meta, violate user privacy. The legal action, filed in a U.S. federal court, claims the company’s marketing promised privacy and user control, while in practice, subcontractors reviewed sensitive customer footage without adequate consent.
Allegations of Misleading Marketing and Data Handling
According to the lawsuit, Meta’s promotional materials for its Ray-Ban Meta smart glasses emphasized user privacy. The company stated that recorded photos and videos would remain on the device unless a user chose to share them. The complaint alleges, however, that the company’s actual handling of that footage diverged sharply from those assurances.
The complaint details that third-party contractors, hired by Meta, were tasked with reviewing and annotating video data captured by the glasses. This footage, intended to train and improve Meta’s artificial intelligence models, reportedly included highly personal content.
Nature of the Reviewed Content
The legal filing states that subcontractors reviewed video clips containing nudity, intimate acts, and other private moments. Workers were reportedly shown footage of people in bathrooms, bedrooms, and other locations where privacy is expected. The lawsuit argues this practice directly contradicts Meta’s public assurances that users control their data.
Human review of AI training data is a common industry practice; the core allegation is that Meta failed to properly inform users or obtain meaningful consent for the collection and handling of such sensitive material.
Legal and Regulatory Context
The lawsuit accuses Meta of violating several laws, including Illinois’ Biometric Information Privacy Act (BIPA), which has stringent rules for collecting biometric data. It also alleges breaches of wiretapping statutes, unfair competition laws, and implied contract terms. The plaintiffs are seeking financial damages and a court order to halt the alleged practices.
This case arrives amid heightened global scrutiny of technology companies’ data collection methods, particularly concerning wearable devices with cameras and microphones. Regulatory bodies in multiple jurisdictions are increasingly focused on transparency and consent in AI development.
Potential Implications and Next Steps
The outcome of this litigation could have significant implications for the development and marketing of consumer AI hardware. A ruling against Meta may force stricter data handling protocols and more explicit consent mechanisms across the industry. It also raises questions about the ethical boundaries of using real-world data to train commercial AI systems.
Meta is expected to file a formal response to the allegations in court. The company may seek to have the case dismissed or argue that its user agreements provided sufficient disclosure. Legal experts anticipate a protracted process, with possible settlements or regulatory inquiries emerging in parallel. The court will likely set a schedule for hearings and evidence discovery in the coming months.
Source: GeekWire