
Tech News

Google Denies Using NPR Host’s Voice in AI Podcast Tool


A lawsuit has been filed against Google alleging that its experimental AI tool, NotebookLM, used a synthetic voice mimicking that of former NPR host David Greene to narrate podcasts. Google has formally denied the claims.

Details of the Allegations

The core of the complaint centers on Google’s NotebookLM, an AI-powered research and writing assistant. The tool includes a feature that can generate audio summaries or “podcasts” from a user’s provided documents. According to the filing, the AI-generated narration for these audio summaries utilized a voice model allegedly trained on, and strikingly similar to, that of veteran radio journalist David Greene.

David Greene is well known to public radio audiences across the United States, having co-hosted NPR’s “Morning Edition” for nearly a decade. His distinctive vocal delivery is considered a signature element of his broadcasting career. The lawsuit suggests that using his voice likeness without consent for a commercial AI product constitutes a violation of his rights.

Google’s Official Response

In a statement addressing the allegations, a Google spokesperson issued a clear denial. The company stated that the voice in question is not David Greene’s and was not designed to imitate him. Google emphasized that the voices available within NotebookLM are created using its own proprietary text-to-speech technology and are not based on any specific individual’s voice without authorization.

The tech giant further explained that NotebookLM is an experimental product offered through its Labs platform, which is designed for user feedback and iterative development. The company maintains that all features are developed in accordance with its AI principles and ethical guidelines.

Broader Context of AI and Voice Rights

This incident occurs amid growing legal and ethical scrutiny of generative AI technologies. The ability of AI to clone or synthesize human voices with high accuracy has raised significant concerns about consent, copyright, and personal identity. Several high-profile cases have emerged in which actors, singers, and other public figures have challenged the unauthorized use of their voice or likeness in AI-generated content.

The legal landscape in this area is still evolving. Rights of publicity, which protect an individual’s name, image, and likeness, vary by jurisdiction. Applying these traditional laws to AI-generated synthetic media presents novel challenges for courts and legislators worldwide.

Potential Implications and Next Steps

The outcome of this case could have implications for how AI companies develop and train their voice synthesis models. A ruling against Google might necessitate more rigorous vetting of training data and explicit consent procedures for any voice data used. Conversely, a ruling in Google’s favor could clarify the boundaries of voice imitation under current law.

Legal experts anticipate that the discovery process will involve technical analysis to determine the origins of the AI voice model’s training data. The court may also examine internal Google communications related to the development of the NotebookLM audio feature. As the case proceeds, it is expected to contribute to the ongoing debate over clear regulatory frameworks for synthetic media and AI ethics.

Source: Mashable
