
Barry Diller Backs Sam Altman But Warns AGI Needs Controls

Media and technology executive Barry Diller offered a defense of OpenAI CEO Sam Altman this week while issuing a stark warning about the trajectory of artificial general intelligence (AGI). In a recent interview, Diller addressed both the leadership challenges at the AI company and the broader, unpredictable implications of the technology.

Diller, the chairman of IAC and Expedia Group, did not specify the exact venue or date of the statements but made clear his position on Altman. He described Altman as a capable leader facing immense pressure. However, Diller pivoted quickly to the core of his concern, stating that trust in any individual becomes secondary when considering the potential scale of AGI.

“Trust is irrelevant,” Diller said, according to reports. He argued that regardless of how much confidence one has in any specific CEO or company, the arrival of AGI presents a force that is inherently unpredictable. Diller stressed the necessity of building guardrails around the development of such powerful systems.

Defense of Altman Amidst Company Turmoil

Diller’s comments come after a period of significant internal instability at OpenAI. The organization, which develops the widely used ChatGPT platform, experienced a sudden and controversial change in leadership late last year. Altman was briefly removed from his position by the board before being reinstated following intense pressure from employees and investors.

The episode highlighted deep divisions within the company over the safety and speed of AI development. Diller's defense of Altman suggests respect for his handling of these complex internal dynamics. Diller acknowledged that Altman operates in a uniquely challenging environment, where commercial pressures frequently clash with existential safety considerations.

The Unpredictable Nature of AGI

The core of Diller’s warning focused not on OpenAI’s internal politics but on the fundamental nature of the technology itself. AGI refers to a hypothetical AI system that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond a human being. No such system currently exists.

Diller argued that once a technology reaches this level of capability, the potential for unforeseen consequences multiplies dramatically. He suggested that control mechanisms, potentially including government regulation, are essential. Diller did not propose specific policies but framed the need for guardrails as a matter of prudence rather than a lack of confidence in any particular company. He emphasized that the very nature of AGI makes it unmanageable through trust alone.

Industry Context and Implications

Diller’s perspective adds a voice from outside the core AI research community to an ongoing debate. Technologists, ethicists, and policymakers are actively discussing how to balance rapid innovation in AI with safety measures. Some argue for a pause in the training of the most powerful models, while others believe that voluntary industry standards are sufficient.

Diller’s statement aligns more closely with those who advocate for external oversight. By stating that trust is irrelevant, he implicitly rejects the idea that self-regulation by companies like OpenAI will be enough. His comments serve as a reminder that as AI capabilities advance, the conversation will increasingly move from technical details to broader societal and governance questions.

Looking Forward

There is no set timeline for the arrival of AGI. Experts remain deeply divided on whether it is years or decades away. However, companies including OpenAI continue to invest heavily in research to accelerate progress. The debate over safeguards is expected to intensify as those efforts yield more advanced systems. Diller’s intervention suggests that figures outside the technology sector are preparing for a future where the development of AGI requires a collective, rather than corporate, response. The discussions around appropriate governance structures and international cooperation are likely to become central topics in technology policy in the coming years.

Source: GeekWire