In this episode of the Speaking Out of Place podcast, Professor David Palumbo-Liu talks with investigative journalist Karen Hao. She explains that OpenAI is anything but “open”: very early on, it left behind that marketing tag to become increasingly closed and elitist. Her massive study, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, had a rather different subtitle in its UK edition: Inside the reckless race for total domination. She fleshes out the overlap between these two points of emphasis. Hao argues that in general, the AI mission “centralizes talent around a grand ambition” and “centralizes capital and other resources while eliminating roadblocks, regulation, and dissent.” All the while, “the mission remains so vague that it can be interpreted and reinterpreted to direct the centralization of talent, capital, resources, however the centralizer wants.” Karen explains that she chose the word “empire” precisely to indicate the colonial nature of AI’s domination: the tremendous damage this enterprise does to the poor, to racial and ethnic minorities, and to the Global South in general in terms of minds, bodies, the environment, natural resources, and any notion of democracy. This is a discussion everyone should be part of.

Karen Hao is a bestselling author and award-winning reporter covering the impacts of artificial intelligence on society. She was the first journalist to profile OpenAI and wrote a book, Empire of AI, about the company and its global implications, which became an instant New York Times bestseller. She writes for publications including The Atlantic and leads the Pulitzer Center's AI Spotlight Series, a program that trains thousands of journalists worldwide on how to cover AI. She was formerly a reporter for the Wall Street Journal, covering American and Chinese tech companies, and a senior editor for AI at MIT Technology Review. Her work is regularly taught in universities and cited by governments. She has received numerous accolades for her coverage, including an American Humanist Media Award, an American National Magazine Award for Journalists Under 30, and a place on the TIME100 AI list. She received her Bachelor of Science in mechanical engineering from MIT.

DAVID PALUMBO-LIU

Thank you so much for being on the show. I know you've been incredibly busy. You've done a ton of these kinds of podcasts and interviews, so I really appreciate you taking the time to be on yet another one. Please explain to the audience the difference between AI and AGI.

KAREN HAO

So AI is… I'll give the long explanation. AI, as a term, was originally coined in 1956 to start a new discipline devoted to recreating human intelligence in computers. That was just a few years after Alan Turing asked the very famous question, "Can machines think?"

There was this effort where scientists came together at Dartmouth College, saying, "Why don't we form a new discipline to tackle this question?" There was actually a debate around what to name the discipline. Then John McCarthy, who was an assistant professor at Dartmouth at the time, decided he was going to name it artificial intelligence, much to the great chagrin of his mentor, who said, "This term makes no sense, and it's going to confuse people because we don't know what human intelligence is."

But it stuck anyway. Decades later, McCarthy said that he had invented the term artificial intelligence because he needed money for a summer study. So he was trying to fundraise for that gathering at Dartmouth and to attract more public and media attention.

He was very successful in doing that. AI ended up becoming effectively a marketing term. It was a way to be evocative about something that researchers were already doing under other disciplinary names. Back then, what AI meant was human-level intelligence. The problem is that over the decades, from 1956 to the present day, that term became cheapened, because as more and more industries started working on AI, they began claiming, "We have AI already."

What they meant was not that they had human intelligence already; they just meant they had products that were able to do certain types of tasks we might ascribe to intelligence. So then AI stopped meaning human-level intelligence and just started meaning products and services that exist today.

When OpenAI came along, they wanted to differentiate their quest from just building the products and services of today. They wanted to return to the original definition of AI, which is human-level intelligence. To make that distinction, they used the term artificial general intelligence, signaling that they were still pursuing the ultimate ambition of the field.

Now you can argue whether they are actually pursuing that ultimate ambition of the field. It seems like they might just be building products and services again. So potentially, over the next couple of decades, AGI will simply become synonymous with products and services, and people will need yet another term to demarcate the distinction between what exists today and the ambition.

PALUMBO-LIU

Already existing things like voice recognition are things that everybody uses and is comfortable with; we use them all the time. But AGI became the promise of something even more humanlike. If that becomes too mundane, they'll invest in something else. So, as you say, with AI and AGI, it was always marketing.

It was to get money invested under the promise that they'll deliver something that you can't even imagine. It's going to be good no matter what it is. On page 400 of your book, you very usefully list three distinct points about AI, and I'll read them back to you. First, you say the mission centralizes talent by rallying around a grand ambition. 

Second, the mission centralizes capital and other resources while eliminating roadblocks, regulation, and dissent. Finally, and most consequentially, you say the mission remains so vague that it can be interpreted and reinterpreted to direct the centralization of talent, capital, and resources however the centralizer wants.

I would just add something else, a fourth point that you emphasize throughout the book; in fact, you emphasize it in the title itself, which is the tremendous damage this enterprise does to the poor, to racial and ethnic minorities, and to the Global South in general in terms of minds, bodies, the environment, natural resources, and any notion of democracy.

This is a big question, and I'd like to have you take as much time as you want to unpack it in any way you want. I'll repeat: centralization of talent, centralization of capital and resources, the open-ended nature of aims and values, and the specific and regional nature of this new empire. Just riff on that any way you want because you summarize it so well.

HAO

Yeah. My book is called Empire of AI because I'm trying to articulate this argument and illustrate that these companies operate exactly like empires of old. I highlight four features that essentially encapsulate the three things you read; it's a framing I started using after writing the book.

The four features are: they lay claim to resources that are not their own, which is the centralization of resources; they exploit an extraordinary amount of labor, both in the development of the technology and in producing labor-automating technologies that then suppress workers' ability to bargain for better rights; and they monopolize knowledge production, which happens when they centralize talent.

In the last decade, the AI industry has successfully hired most of the top AI researchers in the world out of academia. That has distorted the scientific discipline of AI research in the same way that you would imagine climate science would be distorted if most climate scientists were bankrolled by oil and gas companies.

The final feature and parallel that I highlight is that they have this existential competition narrative. They have to be an empire and a good empire because there are evil empires that exist in the world that they need to be strong enough to combat. The evil empire in OpenAI's imagination shifts over time. Originally, it was Google, and then it became China. 

They quite literally engage in this competition narrative and invoke a religious and ideological dimension as well, which is exactly what empires of old did. They say, "We are the good empire. We are bringing progress and modernity to all of humanity. We are engaging in a civilizing mission and giving humanity the opportunity to go to heaven. If the evil empire wins, then humanity goes to hell." This kind of rhetoric has taken hold within Silicon Valley, and I talk quite a lot about the religious rhetoric that has been adopted to justify the belief systems that now undergird the entire enterprise of the AI industry and its imperial expansion.

I think that's the only way to fully understand the dynamics of how the industry operates and the implications of allowing them to continue operating this way. As you mentioned, this results in the undermining of democracy around the world. Empires and democracies are antithetical to one another.

Empires operate on a totally different logic, a hierarchical logic in which some people are born with rights that other people do not deserve, where some people are superior while others are inferior. Democracy, of course, is based on the complete opposite philosophy: that everyone is equal and therefore has an equal right to be part of collectively determining the future. 

Silicon Valley has never operated in that way, but they've used AI to aggressively expand their imperial ambitions, which is what I'm trying to highlight, point out, and critique in the book.

Photo credit: Shoko Takayasu

*

Speaking Out of Place, which carries on the spirit of Palumbo-Liu’s book of the same title, argues against the notion that we are voiceless and powerless, and that we need politicians and pundits and experts to speak for us.

Judith Butler on Speaking Out of Place:

“In this work we see how every critical analysis of homelessness, displacement, internment, violence, and exploitation is countered by emergent and intensifying social movements that move beyond national borders to the ideal of a planetary alliance. As an activist and a scholar, Palumbo-Liu shows us what vigilance means in these times.  This book takes us through the wretched landscape of our world to the ideals of social transformation, calling for a place, the planet, where collective passions can bring about a true and radical democracy.”

David Palumbo-Liu is the Louise Hewlett Nixon Professor and Professor of Comparative Literature at Stanford University. He has written widely on issues of literary criticism and theory, culture and society, race, ethnicity and indigeneity, human rights, and environmental justice. His books include The Deliverance of Others: Reading Literature in a Global Age, and Speaking Out of Place: Getting Our Political Voices Back. His writing has appeared in The Washington Post, The Guardian, The Nation, Al Jazeera, Jacobin, Truthout, and other venues.
Bluesky @palumboliu.bsky.social
Apple Podcasts · Spotify · Website · Instagram