NPR Host David Greene Sues Google Over AI Voice

David Greene, a longtime host of National Public Radio’s “Morning Edition,” has filed a lawsuit against Google, alleging that the company used his voice without permission to create an artificial intelligence narrator for its NotebookLM tool. The complaint was filed in a U.S. federal court, though a specific date was not immediately disclosed in initial reports.

The legal action centers on Google’s NotebookLM, an AI-powered research and writing assistant. Greene claims the default male voice option for the tool’s audio narration feature is based on a synthetic replication of his distinctive vocal characteristics. He asserts this constitutes a misappropriation of his voice and likeness, which are protected under right of publicity laws.

Basis of the Legal Claim

The lawsuit alleges that Google trained its AI voice model using audio recordings of Greene’s work from NPR’s extensive broadcast archives. His legal team contends that the resulting synthetic voice is immediately recognizable to listeners as an imitation of Greene’s professional delivery, which has been a staple of American morning radio for years.

Greene is seeking a court order to compel Google to cease using the voice model and is pursuing unspecified monetary damages. The claim highlights the growing legal tensions between individual rights and the data collection practices used to train large-scale artificial intelligence systems.

Google’s Position and Industry Context

Google has not issued a detailed public statement regarding the specific allegations in the lawsuit. The company typically states that it develops its AI products responsibly and in accordance with applicable laws. NotebookLM, launched as an experimental product, is designed to summarize and answer questions about documents uploaded by users, with the voice feature reading back generated responses.

This case enters a complex and largely untested area of law concerning AI and intellectual property. Similar legal questions are being raised by authors, artists, and other media professionals whose copyrighted works or personal attributes have been used as training data for generative AI models without explicit consent or licensing agreements.

Potential Implications for AI Development

The outcome of this lawsuit could have significant ramifications for the broader technology industry. It directly challenges a common practice in AI development: using publicly available data, including audio and video, to train machine learning models. A ruling in Greene’s favor might force AI companies to alter their data sourcing methods and implement more rigorous consent procedures for using individuals’ voices or likenesses.

Legal experts note that right of publicity laws, which vary by state, traditionally protect against the unauthorized commercial use of a person’s name, image, or likeness. Applying these statutes to a synthetic AI voice clone represents a novel legal frontier.

The court is expected to set a schedule for Google’s formal response to the complaint and subsequent proceedings. Observers anticipate that the case may take months, if not years, to resolve, potentially involving motions to dismiss, discovery, and expert testimony on AI technology. Its progression will be closely monitored by media organizations, AI ethicists, and legal scholars focused on the intersection of technology and personal rights.

Source: GeekWire