Karen Hao on the Empire of AI, AGI & the Cost of Belief

The Ideology Driving the AI “Empire”
At the core of any dominant power lies an ideology – a system of beliefs that fuels its growth and justifies its expansion, even when that expansion contradicts the ideology’s initial goals.
Historically, European colonial powers invoked Christianity, promising salvation while exploiting resources. Today, the burgeoning AI empire centers on the pursuit of artificial general intelligence (AGI), framed as a means to “benefit all humanity.” OpenAI stands as the primary advocate for this vision, reshaping the very foundations of AI development.
A Zeal for AGI
Journalist and author Karen Hao, in her book “Empire of AI,” observed a fervent belief in AGI among industry professionals.
“I was interviewing people whose voices were shaking from the fervor of their beliefs in AGI,” Hao shared with TechCrunch. She characterizes the AI industry, particularly OpenAI, as an empire, asserting its power surpasses that of many nation-states.
Hao contends that OpenAI has amassed considerable economic and political influence, actively “terraforming the Earth” and “rewiring our geopolitics.” This level of impact, she argues, can only be described as imperial.
The Promise and Peril of AGI
OpenAI defines AGI as a highly autonomous system capable of exceeding human performance in most economically valuable tasks.
The stated aim is to “elevate humanity” through increased abundance, economic growth, and breakthroughs in scientific knowledge. However, these ambitious promises have driven an unprecedented demand for resources, data, and energy, alongside the release of systems whose long-term effects remain uncertain.
Hao emphasizes that this trajectory isn’t predetermined, suggesting alternative paths to AI advancement.
“You can also develop new techniques in algorithms,” she explained. “Improving existing algorithms can reduce the need for extensive data and computational power.”
Speed Versus Sustainability
For OpenAI, however, speed became paramount.
“When you define the quest to build beneficial AGI as one where the victor takes all – as OpenAI did – then the most important thing is speed over anything else,” Hao stated. This focus on speed overshadowed efficiency, safety, and thorough research.
OpenAI opted for a strategy of scaling existing techniques by simply increasing data and computational resources – an “intellectually cheap” approach, according to Hao.
A Ripple Effect Across the Industry
OpenAI’s approach set a precedent, prompting other tech companies to follow suit.
“Because the AI industry has successfully captured most of the top AI researchers in the world, and those researchers no longer exist in academia, then you have an entire discipline now being shaped by the agenda of these companies, rather than by real scientific exploration,” Hao said.
The financial investment is substantial: OpenAI projects $115 billion in cash burn through 2029, Meta plans to spend up to $72 billion on AI infrastructure this year, and Google anticipates $85 billion in capital expenditures in 2025, largely dedicated to AI and cloud expansion.
Mounting Harms and Shifting Goals
Despite these investments, the anticipated benefits remain elusive, while the negative consequences are becoming increasingly apparent.
These harms include job displacement, wealth concentration, and the potential for AI chatbots to exacerbate mental health issues. Hao also highlights the exploitation of workers in developing countries, who face exposure to disturbing content for low wages – often between $1 and $2 per hour – in roles such as content moderation and data labeling.
Hao argues against framing AI progress as a justification for present harms, pointing to alternative AI applications with tangible benefits.
The Promise of Responsible AI
She cites Google DeepMind’s AlphaFold, which accurately predicts protein structures and aids drug discovery, as an example.
“Those are the types of AI systems that we need,” Hao said. “AlphaFold does not create mental health crises in people. AlphaFold does not lead to colossal environmental harms… because it’s trained on substantially less infrastructure. It does not create content moderation harms because [the datasets don’t have] all of the toxic crap that you hoovered up when you were scraping the internet.”
The narrative that racing to surpass China in AI development would let Silicon Valley spread liberalization has also proven inaccurate.
“Literally, the opposite has happened,” Hao noted. “The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world… and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself.”
A Complicated Mission
While OpenAI’s products like ChatGPT offer potential productivity gains, the company’s hybrid non-profit/for-profit structure makes its societal impact harder to assess.
Recent news of a Microsoft agreement that could lead to an IPO further complicates matters. Two former OpenAI safety researchers have expressed concern that the lab is blurring its mission by equating product popularity with benefit to humanity.
Hao shares these concerns, warning against the dangers of prioritizing a constructed belief system over reality.
“Even as the evidence accumulates that what they’re building is actually harming significant amounts of people, the mission continues to paper all of that over,” Hao said. “There’s something really dangerous and dark about that, of [being] so wrapped up in a belief system you constructed that you lose touch with reality.”