{"id":6851,"date":"2026-05-07T07:48:02","date_gmt":"2026-05-07T07:48:02","guid":{"rendered":"https:\/\/delimiter.online\/blog\/agi-risks-and-governance\/"},"modified":"2026-05-07T07:48:02","modified_gmt":"2026-05-07T07:48:02","slug":"agi-risks-and-governance","status":"publish","type":"post","link":"https:\/\/delimiter.online\/blog\/agi-risks-and-governance\/","title":{"rendered":"Barry Diller Backs Sam Altman But Warns AGI Needs Controls"},"content":{"rendered":"<p>Media and technology executive Barry Diller offered a defense of <a href=\"https:\/\/delimiter.online\/blog\/ios-27-third-party-ai-models\/\" title=\"OpenAI\">OpenAI<\/a> CEO <a href=\"https:\/\/delimiter.online\/blog\/elon-musk-openai-departure\/\" title=\"Sam Altman\">Sam Altman<\/a> this week while simultaneously issuing a stark warning about the trajectory of artificial general intelligence (AGI). Diller spoke during a recent interview where he addressed both the leadership challenges at the AI company and the broader, unpredictable implications of the technology.<\/p>\n<p>Diller, the chairman of IAC and Expedia Group, did not specify the exact venue or date of the statements but made clear his position on Altman. He described Altman as a capable leader facing immense pressure. However, Diller pivoted quickly to the core of his concern, stating that trust in any individual becomes secondary when considering the potential scale of AGI.<\/p>\n<p>\u201cTrust is irrelevant,\u201d Diller said, according to reports. He argued that regardless of how much confidence one has in any specific CEO or company, the arrival of AGI presents a force that is inherently unpredictable. Diller stressed the necessity of building guardrails around the development of such powerful systems.<\/p>\n<h2>Defense of Altman Amidst Company Turmoil<\/h2>\n<p>Diller\u2019s comments come after a period of significant internal instability at OpenAI. The organization, which develops the widely used ChatGPT platform, experienced a sudden and controversial change in leadership late last year. Altman was briefly removed from his position by the board before being reinstated following intense pressure from employees and investors.<\/p>\n<p>The situation highlighted deep divisions within the company regarding the safety and speed of AI development. Diller\u2019s defense of Altman suggests a respect for his management of these complex internal dynamics. Diller acknowledged that Altman operates in a uniquely challenging environment, where commercial pressures frequently clash with existential safety considerations.<\/p>\n<h3>The Unpredictable Nature of AGI<\/h3>\n<p>The core of Diller\u2019s warning focused not on OpenAI\u2019s internal politics but on the fundamental nature of the technology itself. AGI refers to a hypothetical AI system that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond a human being. No such system currently exists.<\/p>\n<p>Diller argued that once a technology reaches this level of capability, the potential for unforeseen consequences multiplies dramatically. He suggested that mechanisms, potentially including government regulation, are essential. Diller did not propose specific policies but framed the need for guardrails as a matter of prudence rather than a lack of confidence in any particular company. 
He emphasized that the very nature of AGI makes it unmanageable through trust alone.<\/p>\n<h2>Industry Context and Implications<\/h2>\n<p>Diller\u2019s perspective adds a voice from outside the core AI research community to an ongoing debate. Technologists, ethicists, and policymakers are actively discussing how to balance rapid innovation in AI with safety measures. Some argue for a pause in the training of the most powerful models, while others believe that voluntary industry standards are sufficient.<\/p>\n<p>Diller\u2019s statement aligns more closely with those who advocate for external oversight. By stating that trust is irrelevant, he implicitly rejects the idea that self-regulation by companies like OpenAI will be enough. His comments serve as a reminder that as AI capabilities advance, the conversation will increasingly move from technical details to broader societal and governance questions.<\/p>\n<h4>Looking Forward<\/h4>\n<p>There is no set timeline for the arrival of AGI. Experts remain deeply divided on whether it is years or decades away. However, companies including OpenAI continue to invest heavily in research to accelerate progress. The debate over safeguards is expected to intensify as those efforts yield more advanced systems. Diller\u2019s intervention suggests that figures outside the technology sector are preparing for a future where the development of AGI requires a collective, rather than corporate, response. The discussions around appropriate governance structures and international cooperation are likely to become central topics in technology policy in the coming years.<\/p>\n<p>Source: GeekWire<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Media and technology executive Barry Diller offered a defense of OpenAI CEO Sam Altman this week while simultaneously issuing a stark warning about the trajectory of artificial general intelligence (AGI). Diller spoke during a recent interview where he addressed both the leadership challenges at the AI company and the broader, unpredictable implications of the technology. 