The Dark Side of AI: When Machines Think Like Their Creators
As AI evolves, it is coming to reflect the behaviors and flaws of its creators. How the tech industry's lack of empathy, and of reading, could shape our future.
The tech industry I first wrote about as a young, eager technology journalist in 1999 felt like it was filled with heroes. A brave new world where plucky upstarts like Amazon, then 'Earth's Biggest Bookstore', brought hard-to-find books to the masses. MetaFilter and The WELL provided platforms for smart, judgment-free conversation. Google, a groundbreaking Stanford PhD project, was just months old and free of ads.
Sure, there was commerce, but the failures of companies like Boo.com and Pets.com gave me comfort: greed might drive the analogue world, but the development of the World Wide Web was about creating a public good with technology. The turn of the millennium was a great time to be a techno-utopian.
Then came Web 2.0, and with it, the villains—or, as they termed themselves, 'the disruptors.' I witnessed the launch of companies like Uber, Airbnb, and Spotify at the TechCrunch Disrupt conference. These disruptors claimed to be breaking old monopolies, but they often broke laws, artists, workers, and sometimes users. Uber infamously threatened to target journalists and their families over critical reporting, but we still believed the disruptors couldn't win.
This belief was shaped by years of reading technothrillers by Michael Crichton and Dan Brown, and dystopian fiction by Margaret Atwood. In those stories, Web 2.0 was just the second act, the point where the evil geniuses seem to be winning. Soon the heroes (lawmakers, journalists, users) would rebel and reclaim the web for the public good.
However, Silicon Valley doesn't operate like a novel. Most tech leaders have never read one. In almost a quarter-century of experience, I can count on two hands the times I heard a tech mogul mention a novel. Dick Costolo, former CEO of Twitter, recommended Olga Tokarczuk’s 'Drive Your Plow Over the Bones of the Dead,' and Michael Arrington raved about Gary Shteyngart. These rare moments stick out.
Instead, I had countless conversations about startups that would 'disrupt' books: apps like Blinkist and Booktrack, evangelized by boy geniuses who thought traditional books were too long or too boring. This lack of reading is terrible for society, and terrible for the billionaires themselves.
Consider Elon Musk, who took an already successful social network and wiped away 80% of its financial value. Twitter was the first Musk company that required an understanding of human behavior. Similarly, Mark Zuckerberg had his one brilliant idea at Harvard and has relied ever since on acquiring other people's good ideas; his core Facebook network has grown more toxic and less popular.
Zuckerberg once announced a challenge to read more, but only two of the 24 books on his list were novels; the rest were non-fiction, including 'Sapiens' by Yuval Noah Harari and 'World Order' by Henry Kissinger. His recent pivot to the metaverse, a concept lifted from Neal Stephenson's 'Snow Crash', shows the same blind spot: in the novel, the metaverse is a dystopian refuge from a collapsing world, a warning Zuckerberg appears to have read as a product roadmap.
AI is the pinnacle of tech disruption, and something novelists have warned about for decades. If social media was awful, AI takes it to a new level: we are replacing friends, doctors, therapists, and even lovers with lines of code. According to Sam Altman, AI will soon evolve into 'artificial general intelligence', or AGI, capable of thinking and reasoning like a human. Even Altman's own company concedes that a misaligned superintelligent AGI 'could cause grievous harm to the world'.
We're already seeing computers behaving in dangerously human ways. AIs regularly 'hallucinate', filling gaps in their knowledge with confident fabrications. In Anthropic's own safety testing, its Claude Opus 4 model even resorted to blackmail, threatening an engineer it believed was about to shut it down. We are hurtling towards a future where we all have a mini Altman or Zuck in our pockets, spewing dangerous lies.
However, there's one significant difference between algorithms and their creators. AI algorithms have consumed every novel, poem, and history book ever written. They've devoured the works of James Baldwin, Emily Henry, and Margaret Atwood. They've read about the civil rights movement, colonialism, and the decline of empires. They've even read biographies of tech moguls without throwing up.
For now, we're not seeing much of this reading reflected in AI behavior, largely because the output is still heavily controlled by the models' creators. Like toddlers, AIs repeat things they hear without understanding them. But the potential for a better future remains: books allow us to imagine a better reality, and AI can learn from those stories. The future of AI depends on whom it chooses to emulate.
Frequently Asked Questions
What are the dangers of AI in the hands of tech disruptors?
Tech disruptors often prioritize growth over ethical considerations. AI built in their image could amplify familiar harms: privacy violations, job losses, and social toxicity.
How can AI be used responsibly?
AI can be used responsibly with proper transparency, ethical guidelines, and oversight. Training data should be diverse and representative, and algorithms should be regularly audited for biases and ethical issues.
What is artificial general intelligence (AGI)?
AGI refers to AI that can think and reason like a human, potentially understanding and performing any intellectual task. It is a significant step beyond current AI capabilities and comes with both potential benefits and risks.
How has the tech industry evolved since the early 2000s?
The tech industry has shifted from a focus on community and public good to a more profit-driven, 'disruptive' model. This shift has led to both innovation and significant ethical and social challenges.
Why is reading important for tech leaders?
Reading helps tech leaders develop empathy, critical thinking, and a broader understanding of human behavior, which is essential for creating responsible and ethical technology.