Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI
By Karen Hao
Penguin Press, 2025
“I believe the future is going to be so bright that no one can do it justice by trying to write about it now,” said OpenAI CEO Sam Altman in 2024. “Although it will happen incrementally, astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace” (p. 19).
This type of grandiose, quasi-utopian claim about AI is often paired with hyperbolic statements about its inevitability in our lives—as if the companies have no choice in whether to press ahead, and as if average citizens have no choice in whether to use these technologies.
Karen Hao, author of Empire of AI, shows us that the rapid infusion of AI into every aspect of our lives is not inevitable; rather, it is happening because investors are choosing to pump hundreds of billions of dollars into the AI sector rather than into scientific research. It is happening because people are blithely deciding to offload their creativity and thought to chatbots to save a little time. It is happening because business leaders and school administrators are choosing to accept AI tools in hopes of improving “efficiency.”
Hao’s in-depth reporting on OpenAI shows that a very small group of AI business leaders and investors are making decisions that will reshape daily life and the economy. Empire of AI reveals how tech CEOs are driven by a mix of greed, utopian philosophy, and personal ambition, and how those visions and values are already causing downstream harms.
Early warning signs emerged several years ago when publishers, musicians, artists, and authors began to file hundreds of lawsuits claiming that AI companies had stolen troves of copyrighted intellectual property, which the companies used to train their models (p. 136).
AI companies’ unethical appropriation of intellectual property revealed a greater ambition: to monopolize all information and data.
“It’s not a coincidence that AI today has become synonymous with colossal, resource-hungry models that only a tiny handful of companies are equipped to develop, and that desire us to make their products into the foundations for everything,” writes Hao about our increasing acceptance of and dependence on AI (p. 94).
Sam Altman of OpenAI has spoken openly about his belief that “monopolies are good,” a view he shares with billionaire investor Peter Thiel (p. 39). Monopolies are antithetical to free markets, and a pro-monopoly worldview rests on the assumption that a handful of business leaders have the moral stature and wisdom to “rule the roost” while ensuring that the rest of society thrives.
Hao is skeptical of that belief. Her book shows how AI business leaders have deprioritized safety and ethics in order to pursue profits. She documents power struggles, resignations, and relational conflicts within OpenAI. All of these factors, she says, make it difficult to trust the visions and actions of AI companies.
“[The] quest for domination of that technology … ultimately rests on the polarized values, clashing egos, and messy humanity of a small handful of fallible people,” writes Hao.
One major concern has been the mistreatment and exploitation of tech workers in developing countries.
AI systems ingest massive amounts of data that they scrape indiscriminately from online sources. A lot of that data is polluted with the worst internet content imaginable, including horrific violence. The algorithms have no moral sense or knowledge of truth, so they cannot automatically identify and remove the sewage in the data pool.
To overcome this problem, AI companies have hired thousands of low-income workers in developing countries to search for, identify, and flag toxic content, thereby teaching the machines what to avoid or accept.
As you can imagine, these jobs require workers to spend thousands of hours each year viewing unspeakable horrors. Many become severely emotionally disturbed or clinically depressed. They lose sleep. Their relationships break down.
Meanwhile, the tech company CEOs, who run companies worth hundreds of billions of dollars, have chosen to pay these workers no more than a few dollars per hour.
Hao presents myriad other dangers of AI. Most of them, such as disinformation, hallucinations, classroom cheating, and possible widespread job losses, have already been reported in newspapers and magazines. But Hao also describes the immense stress that AI systems place on land, water, and energy infrastructure.
Construction of huge data centers around the world—all owned by just a few Big Tech companies—is booming. These “megacampuses” often occupy areas as large as college campuses. They house thousands of specialized processors that demand exorbitant amounts of electricity to run and stay cool.
“Developers and utility companies are now preparing for AI megacampuses that could soon require 1,000 to 2,000 megawatts of power,” writes Hao. “A single one could use as much energy per year as around one and a half to three and a half San Franciscos” (p. 275).
Hao adds that Microsoft and OpenAI have plans for a $100 billion data center that will, by itself, require 5,000 megawatts of power, comparable to the electricity demand of New York City (p. 280).
Cooling these data centers also requires unfathomable amounts of clean water. Researchers found that AI data centers could consume nearly two trillion gallons of fresh water per year by 2027, half the water consumed annually by the United Kingdom (p. 277).
In light of these facts, we must choose whether to adopt AI. To make that choice, we need to evaluate the philosophies and values that motivate Big Tech leaders. Then we need to decide whether to support or reject the technologies they design.
“The critiques that I lay out in this book of OpenAI’s and Silicon Valley’s broader vision are not by any means meant to dismiss AI in its entirety,” writes Hao. “What I reject is the dangerous notion that broad benefit from AI can only be derived from—indeed, will ever emerge from—a vision for the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labor and art, toward an ultimately imperial centralization project” (p. 413).
AI companies can only be profitable if people choose to use their products. As average citizens, we vote every day with how we spend our money. We can defund companies that lack good ethics or that neglect the well-being of society. We are not powerless. Empire of AI reminds readers that artificial intelligence is not inevitable.
Quote to Ponder
“Tech is impacting the whole world out of Silicon Valley, but the whole world is not getting a chance to impact tech.” — Timnit Gebru, founder of the Distributed AI Research Institute