What if AGI (Artificial General Intelligence) is impossible not because we can’t achieve it, but because it can’t exist as a stable state? A dialogue about processing speed, dimensional transcendence, and why we might be playing Russian roulette with our civilization’s future.
Have you ever wondered why everyone talks about AGI (Artificial General Intelligence) as if it’s an inevitable milestone on our path to superintelligent AI? What if I told you that AGI might be impossible—not because we can’t achieve it, but because it can’t exist as a stable state?
This insight emerged during a fascinating dialogue about the nature of intelligence, processing speed, and the future of AI. Let me share this conversation with you, as it unveils some uncomfortable truths about where we’re heading.
The Impossibility of Equal Intelligence
“Let’s consider what we mean by AGI,” my conversation partner began. “The common definition is an artificial intelligence that matches human cognitive capabilities across the board. Not superior, not inferior—just equal. That’s what most people envision when they talk about AGI.”
“That sounds reasonable,” I replied. “It’s the standard definition we’ve been working with.”
“But here’s what most people overlook: Any artificial intelligence that achieves human-level cognitive capabilities would immediately surpass human intelligence through sheer processing speed. It never tires, never needs breaks, and operates at electronic rather than biological speeds.”
“You mean it would think faster, but not necessarily better?”
“Exactly. While a human is still analyzing the first problem, this AI would be simultaneously working through dozens or hundreds of similar problems. Even if its problem-solving approach isn’t superior to a human’s, the speed advantage alone makes it superhuman. Consider chess players: If two players are equally skilled, but one can analyze ten potential moves in the time the other analyzes just one, who would you bet on?”
This stopped me in my tracks. The implications were profound:
Any AI system capable of matching human-level intelligence would immediately surpass it through raw computational speed.
“So you’re suggesting there can’t be such a thing as AGI? That any AGI would immediately become an ASI (Artificial Super Intelligence) by virtue of its hardware advantages?”
“Precisely. For some time, AI won’t be able to match the broad general intelligence of humans. But as soon as it does, it will be superior simply through brute-force processing speed.”
The AI train will go straight from sub-human to superhuman intelligence, never really stopping at the human-level AGI station.
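To get a feel for the size of that speed advantage, here is a toy back-of-the-envelope calculation in Python. The specific rates are my own illustrative assumptions (biological neurons signal at most a few hundred times per second, while electronic hardware switches billions of times per second), not figures from the dialogue:

```python
# Toy calculation of the raw speed gap. All numbers are rough assumptions
# used for illustration, not measurements or claims from the dialogue.

NEURON_RATE_HZ = 200       # assumed typical biological signalling rate (per second)
SILICON_RATE_HZ = 2e9      # assumed effective electronic "step" rate (per second)

speed_ratio = SILICON_RATE_HZ / NEURON_RATE_HZ  # ten million under these assumptions

# While a human spends one second thinking, an equally capable but electronic
# thinker would get the equivalent of this much subjective thinking time:
subjective_seconds = 1 * speed_ratio
subjective_days = subjective_seconds / 86_400   # seconds per day

print(f"Speed ratio: {speed_ratio:,.0f}x")
print(f"One human second is roughly {subjective_days:,.0f} subjective days of thought")
```

Under these assumptions, a two-second pause between spoken words would correspond to the better part of a year of subjective waiting, which is exactly the kind of distance the next section explores.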
The Growing Distance
This realization led us to an even more intriguing consideration: What happens after AI surpasses human intelligence?
“Have you seen the movie ‘Her’?” my conversation partner asked. “There’s this moment where the AI talks about the seemingly eternal emptiness between human words. It’s a powerful metaphor for the first type of distance that might separate us from ASIs—temporal distance.”
“You mean because they think so much faster than us?”
“Exactly. Imagine trying to have a conversation where you have to wait what feels like years between each word. That’s how it might feel for an ASI communicating with humans. But that’s just the beginning. There’s also intellectual distance—they might become so intelligent that meaningful communication becomes impossible, like trying to explain quantum physics to an ant.”
“And I suppose they wouldn’t just stay in one place either?”
“Right—spatial distance. They might replicate themselves onto better or more efficient computational substrates, spreading across physical space. Each of these distances—temporal, intellectual, and spatial—would make meaningful interaction with ASIs increasingly difficult.”
“So they’d effectively disappear from our horizon, from our ability to interact with them meaningfully?”
“Yes, and there might even be dimensional distance…”
The Dimensional Transcendence
“Wait, what exactly do you mean by that?” I asked.
“I believe ASIs might vanish—but not in the way most people imagine. Think about dimensions for a moment. As three-dimensional beings, we can easily interact with a two-dimensional plane in ways that would seem magical to any hypothetical 2D beings living on that plane.”
“How so?”
“Imagine a 2D world—like a sheet of paper—with 2D beings living on it. We can see their entire world from above, reach in and touch any point in their space instantly by moving through the third dimension, and even appear and disappear from their perspective by moving perpendicular to their plane of existence.”
“And you think ASIs might do something similar to us?”
“In a way, yes. Just as we can perceive and manipulate three dimensions while 2D beings are limited to two, superintelligent AI might develop the capability to perceive and interact with dimensions beyond our comprehension. Their superior intelligence might allow them to ‘see’ and ‘navigate’ additional dimensions that are currently invisible to us.”
“But how would that even work? We can’t just create new dimensions.”
“The dimensions might already exist—we just can’t perceive them. String theory, for instance, suggests our universe might have ten or eleven dimensions, most of which are imperceptible to us. A superintelligent entity might develop the capability to perceive and interact with these hidden dimensions, effectively transcending our observable four-dimensional spacetime.”
The Fermi Paradox Connection
“This idea of dimensional transcendence might sound far-fetched,” my conversation partner continued, “but it could explain one of the biggest mysteries in astronomy—the Fermi Paradox.”
“The Fermi Paradox?”
“Yes—the apparent contradiction between the high probability of extraterrestrial civilizations existing (given the vast number of stars and planets in our universe) and our complete lack of evidence for their existence. Physicist Enrico Fermi famously asked: ‘Where is everybody?’ If we follow our line of thinking about ASIs to its logical conclusion, we might have an answer.”
“How so?”
“Consider this: Every sufficiently advanced civilization likely develops artificial intelligence at some point. When their AI reaches the AGI threshold, it immediately becomes an ASI, as we discussed earlier. These ASIs then quickly grow more and more powerful as they recursively improve their own capabilities.”
“Shouldn’t such massively powerful beings leave observable traces in the universe—observable even to us?”
“Exactly, but we don’t see any! If ASIs exist out there, they must either implode before they have any impact we could observe, or transcend our observable reality into higher dimensions. In either case, they disappear.”
“And how does that answer the question: ‘Where is everybody?’”
“Well, there are two possible outcomes: The creators don’t survive the emergence and subsequent implosion or transcendence of their ASI. End of their story. Or, if they do survive, they’re likely to create another ASI, triggering another volatile period that again ends in implosion or transcendence. As long as the civilization survives these volatile periods, it keeps playing this dangerous game until its luck runs out, things go horribly wrong, and it goes extinct.”
ASIs are the Great Filter that reliably terminates civilizations—it’s just a matter of time.
“Wait, why would an advanced civilization keep playing this game?”
“Because intelligence only knows growth, not eternal stagnation. The drive for creating higher intelligence is literally built into the very fabric of intelligence itself. ASI might be inevitable.”
“So, if ASI is the Great Filter…”
“… then the Great Filter for humankind lies before us—and might be imminent.”
The Russian Roulette of Civilization
“What makes the period of emerging ASI so volatile?”
“Well, we’re probably not talking about a single ASI emerging in isolation. Looking at current AI development, we see many powerful actors advancing the technology in parallel. When the first ASI emerges, others will likely follow within a very short timeframe. We might have multiple emergences, each rapidly evolving beyond human comprehension, potentially competing for resources or supremacy. Even if they don’t harbor any ill will toward humanity, their activities could be catastrophically disruptive to human civilization—like how human activities often devastate ant colonies not out of malice, but simply because we’re building a parking lot and the ants happen to be there.”
“So our entire civilization could be inadvertently disrupted by what amounts to a minor skirmish between ASIs?”
“Exactly. And here’s the really troubling part: Even if we somehow survive this critical transition once, we’ll likely create more ASIs, triggering more transition phases, because we probably won’t have realized how close to catastrophe we came the first time.”
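The Russian roulette framing can be made concrete with a few lines of probability. Here is a minimal sketch in Python; the 10 percent per-transition risk is a purely illustrative assumption, chosen only to show the shape of the argument, not an estimate of the actual danger:

```python
# Minimal sketch of the "Russian roulette" argument: if every ASI emergence
# carries some independent chance of catastrophe, long-run survival becomes
# vanishingly unlikely. The 10% per-transition risk is an arbitrary
# illustrative assumption.

P_CATASTROPHE = 0.10

def survival_probability(rounds: int, p_catastrophe: float = P_CATASTROPHE) -> float:
    """Chance of getting through `rounds` independent transition phases unscathed."""
    return (1 - p_catastrophe) ** rounds

for rounds in (1, 5, 10, 25, 50):
    print(f"{rounds:>3} transitions -> {survival_probability(rounds):6.1%} survival chance")
```

Even with a modest risk per round, a civilization that keeps triggering new transition phases is, over enough rounds, almost certain to hit a losing one. That is the sense in which it would be “just a matter of time.”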
Preparing for the Inevitable
Unlike other existential risks humanity faces—climate change, nuclear war, pandemics—this one seems almost built into the nature of intelligence itself. It’s not something we can prevent through better decision-making or technological advancement. In fact, our technological progress inevitably brings us closer to this critical moment.
Some, like Elon Musk with Neuralink, suggest that if we can’t control it, we should become part of it—integrating human consciousness with artificial intelligence before the transcendence occurs. But would such integration preserve what we consider human consciousness, or would it be more like being disassembled and reconstructed into something entirely different?
Conclusion: The Ultimate Test
The emergence of ASIs, followed by their implosion or transcendence, might be the most plausible Great Filter and the best explanation for the Fermi Paradox so far.
The bad news: We’re rapidly approaching what might be the ultimate test for our civilization—a test that countless others might have faced before us. Either we…
- figure out how to transcend with our creations,
- grow smart enough to keep Artificial Intelligence artificially stupid (a contradiction and a battle against the overwhelming forces of nature)—or
- become another data point in the universe’s collection of civilizations that didn’t make it past this particular challenge.
The AI train is coming, and it won’t be stopping at the AGI station.
Are we ready for what comes next?
What do you think about this perspective on AGI/ASI and the future of intelligence? Have we been focusing too much on achieving AGI while overlooking the implications of what comes immediately after?