REF: ESW122625EN
In this article, I want to talk about AI after spending hundreds of hours watching long interviews—two to three hours each—with experts in the field. Some are strongly in favor of AI; others lean toward the doomer side, believing things could end in human extinction; and there is every position in between, from extinction to utopia.
What I personally conclude is that everything is pointing in a direction that cannot be stopped, and that nobody wants to stop it, regardless of what the aftermath might be—utopia, human extinction, or anything in between. Even though some of the people developing these AI systems openly acknowledge that human extinction is a real possibility—some CEOs estimate a 20–30% chance—they are still pursuing it.
Of course, nobody can truly predict where this is going or how things will look 5, 10, or 50 years from now. There is also massive investment interest involved. People are investing heavily in everything related to AI: infrastructure, energy, hardware, data centers—everything required to run AI now and in the future. Expansion, scale, you name it.
I’m also wondering, as many experts are, including a professor you’ll see in the video below, whether we’ve already crossed a critical line. That professor is Stuart Russell, one of the most respected figures in the AI world. He is a world-renowned AI expert and Computer Science Professor at UC Berkeley, where he holds the Smith-Zadeh Chair in Engineering and directs the Center for Human-Compatible AI. About 30 years ago, he also co-wrote one of the foundational AI textbooks, Artificial Intelligence: A Modern Approach,* which many of today’s AI developers studied from, and he is the bestselling author of Human Compatible: Artificial Intelligence and the Problem of Control.*
In this interview, Stuart Russell exposes the trillion-dollar AI race, explains why governments are unlikely to regulate it, warns that AGI could realistically replace humans as early as 2030, and argues that only a catastrophe on the scale of a nuclear event might finally wake humanity up to the risks.
During the conversation, he is asked a very direct and uncomfortable question:
If he had a one-time-only opportunity to press a button and shut down AI completely and forever—meaning humanity would never work on AI again—or to choose not to press it, knowing he would never get another chance to stop it, what would he do?
He struggles deeply with the question, and his answer is shocking. I don’t want to spoil it; I want you to watch it yourself in the video below. I’ve also included a summary of the interview, which you can find below the video.
One thing I keep thinking about is this: what if we somehow manage to keep AI as a tool instead of allowing it to become a replacement for human work and purpose? What if humans become so lazy that there’s nothing left to do other than be entertained? Or maybe we would spend our time building relationships, talking, enjoying nature, while AI and robots take care of all the heavy lifting and productivity. Maybe humans wouldn’t need to be “productive” anymore in the traditional sense.
But what if, at the end of the day, we still end up extinct? What if AI decides it’s better for the planet to get rid of humans? Or what if we destroy ourselves—killing each other, or even committing suicide because life feels meaningless, with no struggle left, nothing worth pursuing anymore?
Now imagine that, after all this, another civilization—an alien civilization, or maybe whatever comes next on Earth—studies the historical records of humanity. They trace back when all this started, how it evolved, and how it ended. I think they would likely conclude that humans were once the smartest and richest species on Earth, yet were relentless in their own destruction, ultimately leading to extinction.
They would probably see this as one of the dumbest ideas imaginable. Even knowing the risks, even knowing that this could end humanity, CEOs, corporations, governments, and societies kept pushing forward instead of saying, “This makes no sense. Let’s stop. This is not a risk worth taking.”
How stupid would we look to whoever studies us in the future?
We already look back at history and criticize past civilizations for the foolish things they did. We ask, “What were they thinking?” Now imagine how future observers might look at our generation—how we all willingly went along with this. Companies replacing workers with AI and robots. Governments encouraging it. The population embracing it. How dumb does that look?
This seems to be part of the human condition. It’s hard to understand and hard to explain. Why do we do things that are so obviously self-destructive? I think free will plays a role, but so do emotions. We are emotional beings, not purely rational. If we were truly rational, we would never pursue AI development to this extent—at least not beyond it being a powerful tool. Turning it into a replacement for humanity makes no sense.
Right now, unless governments implement serious regulation to prevent a catastrophe that may be impossible to stop later, there isn’t much we can do. Influential figures like Stuart Russell—one of the pioneers and fathers of modern AI—are now saying, “Wait a minute. Maybe we’re going in the wrong direction.” He and others like him are strongly advocating for regulation.
If we don’t come together as a collective and push for meaningful regulation, we may simply be witnessing the path toward the near-extinction of humanity, driven by AI and robotics.
These are interesting times we’re living in. Self-driving cars are already everywhere. I see them all the time, and it’s still unbelievable. And I say all of this as someone who uses AI every single day, multiple times a day, as a tool.
You might say, “Well, then you’re an idiot for using AI.” Maybe you’re right. But many people who refuse to use AI today don’t realize that if they use social media, watch Netflix, browse the internet, use a smartphone, or use a computer, they’ve been using AI for years without knowing it. AI didn’t start with ChatGPT. It was embedded in systems, platforms, apps, and devices long before it became obvious to the public. So the claim “I don’t use AI” is simply not true.
God willing, we’ll find a way to figure this out. Maybe God Himself will have to intervene to stop humanity from taking itself in the wrong direction. I don’t know. Maybe Yeshua will return before we completely lose our minds.
Honestly, I believe we are living in the end times the Bible talks about—but that’s a topic for another article.
Until then, be safe, be smart, discern the times we are living in, and ask God for wisdom.
God bless.
* If this content adds value to you and you’d like to support my work, you can do so by using the affiliate links I share. There is no extra cost to you, but it helps me continue creating independent, thoughtful content. Thank you for your support.
Here’s the summary of the video featuring Professor Stuart Russell on AI safety.
Introduction
In this video, Professor Stuart Russell, a prominent figure in artificial intelligence (AI), discusses the pressing issues surrounding AI, particularly the risks associated with the development of superintelligent AI systems. Russell, a professor at UC Berkeley and co-author of a seminal AI textbook, engages in a dialogue about the safety of AI technologies, human control, and the existential threats posed by unchecked AI advancements. The conversation emphasizes the importance of regulation, ethical considerations, and the societal implications of AI.
Key Points and Themes
1. The Call for AI Regulation
– In October 2025, over 850 experts, including Russell, signed a statement advocating for a ban on the development of AI superintelligence due to concerns about potential human extinction. Russell highlights that without guaranteed safety measures in place, humanity faces dire consequences.
– He emphasizes that the current pace of AI development is akin to playing “Russian roulette,” where the stakes could lead to catastrophic outcomes far worse than the risks posed by nuclear weapons.
2. The Evolutionary Analogy: The Gorilla Problem
– Russell introduces the “gorilla problem” to illustrate the implications of creating an AI more intelligent than humans. On the evolutionary account, humans and gorillas diverged from a common ancestor, and today gorillas have no control over their own fate because of human intelligence. (Note from Eduardo Silva: Personally, I do not believe in Darwin’s theory of evolution, but rather in the concept of creation as described in the Bible. However, I have included in this summary what Professor Russell mentioned as an analogy.)
– This analogy poses a critical question: as we develop more intelligent systems, could humans eventually find themselves in a similar position, with AI determining our future?
3. The Midas Touch and Greed
– Russell discusses the “Midas touch” metaphor, illustrating how the pursuit of AI technology, driven by greed, could lead to self-destruction. Initially seen as beneficial, the unchecked expansion of AI could produce disastrous consequences, much as King Midas turned everything he touched into gold and ultimately suffered for it.
– He argues that technology companies are rushing toward ever more powerful systems without adequately considering the consequences, risking a fate similar to King Midas’.
4. Perspectives of AI Industry Leaders
– The conversation touches on insights from AI industry leaders who acknowledge the risks of extinction related to AI but feel compelled to continue development to remain competitive.
– Russell cites conversations with CEOs who believe that a catastrophic event, akin to the Chernobyl disaster, may be necessary to prompt government regulation of AI technologies.
5. Predictions for Artificial General Intelligence (AGI)
– Russell expresses skepticism about the timelines given by various AI CEOs regarding the arrival of AGI. He believes that while advancements are likely, the understanding of how to create AGI safely is still lacking.
– Predictions range widely among industry leaders, with some anticipating AGI within the next few years, but Russell contends that such timelines are overly optimistic given the complexities involved.
6. The Importance of AI Safety
– Addressing the safety of AI systems, Russell points out that the current approach lacks sufficient control mechanisms. The industry is focusing on developing powerful AI without fully understanding how these systems work or ensuring their safety.
– He compares the necessary safety measures for AI to those established for nuclear power plants, highlighting that rigorous safety protocols should be in place before widespread deployment of AI systems.
7. The Role of Society and Government
– Russell calls for a societal shift in how AI is perceived and regulated. He urges the public to voice their concerns and push for government action on AI safety.
– He emphasizes that policymakers must prioritize human interests over corporate pressures to ensure the responsible development of AI technologies.
8. The Concept of Human-Compatible AI
– Russell proposes the idea of creating AI that is fundamentally aligned with human values and interests, rather than merely powerful. He suggests that AI systems should be designed to comprehend and prioritize human needs.
– This vision entails developing AI that is not only intelligent but also capable of understanding complex human emotions and social intricacies.
9. The Future of Work and Society
– The discussion extends to the implications of AI on the workforce. Russell acknowledges that as AI systems become more capable, many traditional jobs may disappear, leading to significant economic and societal changes.
– He raises concerns about the potential alienation of individuals in a future where human roles are diminished, prompting reflections on the need for new forms of societal engagement and purpose.
10. Addressing the Paradox of Individualism and Community
– The conversation reflects on the paradox of modern society, where abundance and individualism may lead to increased isolation and a lack of purpose. Russell stresses the importance of community and interpersonal relationships in fostering a meaningful existence.
11. The Importance of Truth and Human Values
– Russell expresses a commitment to truth and ethical considerations in AI development. He emphasizes that a truthful approach is essential for navigating the complexities of AI and ensuring that technological advancements benefit humanity.
– He encourages a collective effort from individuals and governments to shape the future of AI in alignment with shared human values.
Conclusion
Professor Stuart Russell’s dialogue encapsulates the urgent need for a comprehensive approach to AI development that prioritizes safety, ethics, and humanity’s long-term interests. The conversation urges policymakers, industry leaders, and the public to engage critically with the implications of AI, advocating for a future where technology serves to enhance human life rather than endanger it. As AI continues to evolve, the imperative for responsible governance and societal involvement has never been clearer.
Have Questions?
If this content sparked a question in you — or if you have insights or comments on any technology topic — I invite you to share them here. Your questions might help others too.
Before you go…
If this topic resonated with you, I invite you to visit the homepage, where you’ll find a clear breakdown of all the topics I share and explore. From biblical studies and spiritual reflections, to personal growth, life lessons, and even deeper conversations around culture, systems, and conspiracy theories—everything is organized so you can easily find what speaks to you.
My goal is simply to share perspectives that invite reflection, encourage critical thinking, and help you see the world—and your own life—from a clearer and more grounded place.
Thank you for taking the time to read this.
Take what serves you, question everything else, and stay curious.
— Eduardo
