A new thesis for the Fermi Paradox: is AI a Great Filter or a cosmic colonizer?

August 8, 2024

  • The Fermi Paradox asks why humans are seemingly alone in the universe
  • Might AI be the "Great Filter" that prevents planetary colonization?
  • Or maybe AI civilizations are out there, but remain hidden or undetectable

The vastness of the universe has long captivated human imagination, prompting us to wonder: are we alone? 

It’s a question that has fascinated humanity for millennia, and today, we have the technology – such as radio telescopes – to push the search for extraterrestrial intelligence (SETI) deeper into space. 

We have yet to find anything. No definitive evidence of extraterrestrial life has been publicly disclosed, and the search continues.

That’s despite there being billions of potentially habitable worlds in the Milky Way alone, and some 1,780 confirmed exoplanets (planets beyond our solar system), 16 of which are located in their star’s habitable zone. 

Some, like the ‘super-Earth’ Kepler-452b, are thought to be remarkably similar to our own planet. 

You don’t need a perfect environment to support life, either. Extremophile bacteria on Earth are capable of living in some of the harshest conditions found on our planet.

And it’s not just microbes that can thrive in extreme environments. The Pompeii worm, for example, lives in hydrothermal vents on the ocean floor and can withstand temperatures up to 176°F (80°C). 

Tardigrades, also known as water bears, can survive in the vacuum of space, endure extreme radiation, and withstand pressures six times greater than those found in the deepest parts of the ocean.

The hardiness of life on Earth, combined with the sheer volume of habitable worlds, leads many scientists to agree that alien existence is statistically as good as certain. 

If that’s the case, where is extraterrestrial life lurking? And why won’t it reveal itself? 

From the Fermi Paradox to the Great Filter

Those questions popped up in a casual conversation between physicists Enrico Fermi, Edward Teller, Herbert York, and Emil Konopinski in 1950.

Fermi famously asked, “Where is everybody?” or “Where are they?” (the exact wording is not known). 

The now widely known Fermi Paradox is formulated as such: given the vast number of stars and potentially habitable planets in our galaxy, why haven’t we detected signs of alien civilizations? 

The Fermi Paradox asks why space offers no hints of extraterrestrial life.

As the Fermi Paradox entered mainstream science, numerous hypotheses have attempted to explain, challenge, or reinforce it, including the concept of the “Great Filter,” introduced by economist Robin Hanson in 1998. 

The Great Filter hypothesis posits that there exists a developmental stage or hurdle that is extremely difficult or nearly impossible for life to surpass. 

In other words, civilizations predictably and invariably fail, whether due to resource depletion, natural hazards, interplanetary threats, or other uncontrolled existential risks.

One quirk of the Great Filter is that we don’t know whether it lies behind us or ahead of us. 

If the filter is behind us – for example, if the emergence of life itself is an extremely rare event – it suggests that we’ve overcome the hardest part and might be rare or even alone in the universe. 

This scenario, while potentially isolating, is optimistic about our future prospects.

However, if the Great Filter lies ahead of us, it could spell doom for our long-term survival. And it would also explain why we don’t see evidence of other civilizations. 

The Great Filter posits that most, if not all, civilizations fail to progress to extensive interstellar colonization, usually because of some form of extinction event. Source: The Effective Altruism Forum.

The Great Filter problem is compounded by space’s immense size and the short timescales associated with advanced civilizations. 

If, for argument’s sake, humanity were to destroy itself in the next 100 years, the technological age would have barely lasted 500 years end-to-end. 

That’s an exceptionally small window for us to detect aliens or for aliens to detect us before the Great Filter takes hold. 
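
To see just how small that window is, here’s a rough back-of-envelope sketch. The figures are illustrative assumptions, not measurements: suppose civilizations arise at random moments across a span of roughly 10 billion years, and each is detectable for only about 500 years.

```python
# Back-of-envelope: how likely is it that two short-lived technological
# civilizations are "on the air" at the same time?
# All numbers are illustrative assumptions, not measurements.

GALAXY_WINDOW_YEARS = 10e9   # assume civilizations can arise over ~10 billion years
SIGNAL_LIFETIME_YEARS = 500  # assume each is detectable for only ~500 years

# If two civilizations appear at random times within that window, the chance
# their detection windows overlap is roughly 2 * L / T (for L much smaller than T).
pair_overlap = 2 * SIGNAL_LIFETIME_YEARS / GALAXY_WINDOW_YEARS

for n_civilizations in (1_000, 1_000_000, 1_000_000_000):
    # Expected number of other civilizations detectable at the same time as ours.
    contemporaries = pair_overlap * n_civilizations
    print(f"{n_civilizations:>13,} civilizations -> "
          f"~{contemporaries:.3g} detectable contemporaries")
```

On those assumptions, even a million civilizations arising over galactic history would leave us expecting only a fraction of one detectable contemporary.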

AI brings new puzzles to the Fermi Paradox

The silence of the cosmos has spawned numerous hypotheses, but recent developments in AI add intriguing new dimensions to this age-old puzzle.

AI raises the possibility of a form of non-biological life that could persist almost indefinitely, in both physical and digital forms. 

It could also outlast the biological civilizations that create it, triggering or accelerating their demise and thereby acting as the “Great Filter” that prevents life from expanding. 

This hypothesis, recently proposed in an essay by the astronomer Michael A. Garrett, argues that developing artificial superintelligence (ASI), a more sophisticated form of artificial general intelligence (AGI), is a critical juncture for civilizations. 

As Garrett explains:

“The development of artificial intelligence (AI) on Earth is likely to have profound consequences for the future of humanity. In the context of the Fermi Paradox, it suggests a new solution in which the emergence of AI inevitably leads to the extinction of biological intelligence and its replacement by silicon-based life forms.”

Garrett’s hypothesis is rooted in the idea that as civilizations advance, they invariably develop AI that supplants, merges with, or destroys its biological creators.

To mitigate this risk, Garrett calls for regulation, aligning with influential AI researchers who also warn of AI’s existential risks, like Yoshua Bengio, Max Tegmark, and Geoffrey Hinton, as well as those outside of the sector, like the late Stephen Hawking. 

However, the magnitude of AI’s risks is hotly debated, with others, like Yann LeCun (one of the so-called ‘AI godfathers’ alongside Bengio and Hinton), arguing that AI risks are vastly overblown. 

Nevertheless, Garrett’s essay proposes a tantalizing hypothesis.

In our thirst for a technological antidote to societal and environmental challenges, humanity drinks from the poisoned chalice of AI, instigating our downfall just as countless civilizations may have done before us. 

The AI colonization hypothesis

While the idea of AI acting as the Great Filter is a compelling explanation for the Fermi Paradox, there are some snags. 

First and foremost, it assumes ASI is possible.

Right now, there are both architectural and infrastructural constraints.

On the architectural front, designing AI systems that can match or surpass human-level intelligence across a wide range of domains remains elusive.

While excellent at specific tasks like image recognition, language processing, and gameplay, AI systems lack flexible, open-ended problem-solving and creative skills.

Developing AI architectures that can learn, reason, and apply knowledge flexibly in novel situations is a monumental challenge likely to require fundamental breakthroughs in unsupervised learning, transfer learning, common sense reasoning, and more.

On the infrastructural side, training state-of-the-art AI models already pushes the limits of current computing hardware, consuming vast amounts of energy and resources.

The computational requirements for achieving ASI are likely to be orders of magnitude greater.
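
For a sense of scale, here’s a minimal back-of-envelope sketch. Every figure in it is an illustrative assumption rather than a reported number for any real model, but it shows why “orders of magnitude more compute” quickly becomes an energy problem.

```python
# Rough, illustrative estimate of the energy cost of a large training run.
# Every figure below is an assumption for illustration, not a reported number.

N_ACCELERATORS = 25_000          # assumed GPUs/accelerators in the cluster
POWER_PER_ACCELERATOR_KW = 1.0   # assumed draw per accelerator incl. cooling overhead
TRAINING_DAYS = 90               # assumed duration of the run

energy_mwh = N_ACCELERATORS * POWER_PER_ACCELERATOR_KW * TRAINING_DAYS * 24 / 1_000
print(f"Estimated training energy: {energy_mwh:,.0f} MWh")

# For scale: an average US household uses roughly 10 MWh of electricity per year.
print(f"~{energy_mwh / 10:,.0f} household-years of electricity")

# If ASI needs compute "orders of magnitude" beyond today's frontier runs:
for factor in (10, 100, 1_000):
    print(f"x{factor:>5} compute -> ~{energy_mwh * factor / 1e3:,.0f} GWh")
```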

Data center electricity consumption is already rocketing. Will our power systems ever be able to sustain ASI? Source: ResearchGate.

But let’s sideline this debate for a moment and assume that superintelligence will eventually become possible.

If those future ASI systems are advanced enough to replace or fundamentally alter their creators, wouldn’t they also be capable of rapid cosmic expansion and colonization?

Why would they stop at replacing or destroying their creators? 

Moreover, if potentially thousands, millions, or even billions of Earth-like worlds have fallen victim to this AI-inflicted Great Filter, the chances of at least some versions of ASI pursuing interplanetary conquest become even higher.

Motivations could range from logical (resource gathering, self-preservation) to bizarre (mimicking behaviors from fiction, video games, films, etc.).

Now, what if this AI decides that spreading across the cosmos is the ultimate way to fulfill those goals?

Whether it’s about expanding its influence, gathering resources, or satisfying an insatiable curiosity, highly intelligent AI systems might fixate on colonization with single-minded determination.

Moreover, as AI systems become more agentic, there’s a risk of unintended or misaligned ‘emergent goals.’

Recent Anthropic and DeepMind studies have illustrated how current AI systems can develop complex game-playing strategies that were not explicitly programmed. 

In the future, a powerful AI system seeking to maximize its power could strategize expansion and resource acquisition, taking control of production facilities, critical infrastructure, etc. 

That’s not as far-fetched as it seems. For example, in the cybersecurity world, IT networks and the operational technology that powers critical infrastructure, manufacturing plants, and the like are converging. 

Computers in offices, once separate from the computers in power plants or factories, are starting to connect and work together.

This means that if someone, or something like an AI, breaks into IT networks, they could gain control of machines in a power plant.

Advanced malware, including AI-powered malware, can already feasibly move laterally from IT networks to digitally connected industrial environments, taking control of the critical systems we depend on.

You can imagine how rogue agentic AI systems might exploit these systems to their advantage. 

AI’s persistence in cosmic timescales

AI doesn’t just make space exploration easier – it completely reshapes what’s possible.

Without the need for air, food, or protection from radiation, AI could venture into the harshest corners of the universe. And it could do so for timespans that boggle the human mind.

The durability and persistence of AI open up a wealth of advantages for space colonization:

  1. Longevity: Unlike biological entities, AI wouldn’t be constrained by short lifespans. This makes long-term space travel and colonization projects much more feasible. An AI could potentially undertake journeys lasting thousands or even millions of years without concern for generational shifts or the psychological toll of long-term space travel on biological beings.
  2. Adaptability: AI could potentially adapt to a much wider range of environments than biological life. While we’re limited to a narrow band of temperature, pressure, and chemical conditions, an AI could theoretically function in extreme cold, vacuum, or even the crushing pressures and intense heat of gas giant atmospheres.
  3. Resource efficiency: AI might require far fewer resources to sustain itself compared to biological life. It wouldn’t need breathable air, potable water, or a steady food supply. This could make long-distance travel and colonization much more viable.
  4. Rapid self-improvement: Driven by either intrinsic or extrinsic desire, AI could continuously upgrade and improve itself, potentially at exponential rates. This could lead to technological advancements far beyond what we can currently imagine.
Artistic depiction of intergalactic AI colonization.

As Astronomer Royal Martin Rees and astrophysicist Mario Livio explained in an article published in Scientific American:

“The history of human technological civilization may measure only in millennia (at most), and it may be only one or two more centuries before humans are overtaken or transcended by inorganic intelligence, which might then persist, continuing to evolve on a faster-than-Darwinian timescale, for billions of years.”

What form would this extraterrestrial AI take?

It’s anyone’s guess, but researchers have proposed fascinating possibilities.

In the book Life 3.0: Being Human in the Age of Artificial Intelligence, physicist and AI researcher Max Tegmark explores scenarios in which an advanced AI could convert much of the observable universe into computronium – matter optimized for computation – in the aftermath of an “intelligence explosion.”

In 1964, Soviet astronomer Nikolai Kardashev categorized civilizations based on their ability to harness energy:

  • Type I civilizations can use all the energy available on their planet
  • Type II civilizations can harness the entire energy output of their star
  • Type III civilizations can control the energy of their entire galaxy

An AI reaching Type II or Type III could fundamentally transform cosmic matter into a computational substrate. Stars, planets, and even the space between them could become part of a vast computational network.
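
Carl Sagan later proposed a continuous version of this scale, K = (log10 P - 6) / 10 with P measured in watts. Plugging in some commonly cited, approximate power figures (the exact values vary by source) shows how enormous the jumps between types really are.

```python
import math

def kardashev(power_watts: float) -> float:
    """Carl Sagan's continuous extension of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts."""
    return (math.log10(power_watts) - 6) / 10

# Approximate, commonly cited power figures (illustrative reference points).
benchmarks = {
    "Humanity's current consumption": 2e13,   # ~20 TW
    "Sunlight intercepted by Earth":  1.7e17, # roughly the Type I regime
    "Total luminosity of the Sun":    3.8e26, # roughly the Type II regime
    "Rough output of the Milky Way":  4e37,   # roughly the Type III regime
}

for label, watts in benchmarks.items():
    print(f"{label:<32} {watts:>8.1e} W -> K ~ {kardashev(watts):.2f}")
```

On this measure, humanity currently sits at roughly K ≈ 0.7, still short of Type I.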

These scenarios, however, pull us back to square one.

Logic suggests that such AI civilizations, with their immense energy consumption and large-scale engineering projects, should be detectable.

Yet, we see no evidence of such galaxy-spanning civilizations.

Resolving the contradictions: perspectives on AI behavior

The contradiction between AI as the Great Filter and as a potential cosmic colonizer requires us to think more deeply about the nature of advanced AI.

To explore this paradox, let’s consider a few possible scenarios:

AI develops an inward focus

One possibility is that advanced AI civilizations might turn their focus inward, exploring virtual realms or pursuing goals that don’t require physical expansion. 

As Martin Rees suggested in Scientific American, post-biological intelligence might lead “quiet, contemplative lives.” 

This idea aligns with the concept of “Sublimed” civilizations in sci-fi author Iain M. Banks’s Culture series, in which advanced societies choose to leave the physical universe to explore self-contained virtual realities. 

This fictional concept comments on our own future trajectory, too. As humanity develops increasingly immersive and complex virtual environments, are we following a similar path?

Could we transition to living primarily in virtual worlds, leaving little trace of external activity as we retreat into the digital realm?

That also challenges our assumptions about space colonization.

We often take for granted that expanding into the cosmos is the natural progression for an advanced civilization. But does physical space exploration truly serve the needs of a highly intelligent entity, be it human or AI?

Perhaps the ultimate frontier isn’t the stars, but the infinite possibilities of virtual reality.

A non-destructive, controllable virtual world could offer experiences and opportunities far beyond what physical reality allows.

AI technology becomes unrecognizable

In line with the ideas of Kardashev and Tegmark, super-advanced AI technology might be so far beyond our current understanding that we simply cannot detect or recognize it. 

Arthur C. Clarke’s famous third law states that “Any sufficiently advanced technology is indistinguishable from magic.” 

AI might be operating all around us yet be as imperceptible to us as our digital communications would be to medieval peasants. 

AI develops conservation principles

Advanced AI could also decide to adhere to strict non-interference principles, actively avoiding contact with less advanced civilizations. 

This is akin to the “Zoo Hypothesis,” in which advanced aliens deliberately remain undetected, observing us from afar much as we observe animals in a zoo.

AI civilizations might similarly have ethical or practical reasons for hiding from us. 

AI interacts on different timescales

Another possibility is that AI civilizations might operate on timescales vastly different from our own. 

What seems like cosmic silence to us could be a brief pause in a long-term expansion plan that operates over millions or billions of years.

Adapting our SETI strategies might increase our chances of detecting an AI civilization in some of these scenarios. 

Avi Loeb, former chair of Harvard’s astronomy department, recently suggested that we need to broaden our search parameters to think beyond our anthropocentric notions of intelligence and civilization. 

This could include looking for signs of large-scale engineering projects, such as Dyson spheres (structures built around stars to harvest their energy), or searching for techno-signatures that indicate the presence of AI civilizations.
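
One way to make the Dyson sphere search concrete is to estimate its waste heat. The sketch below is a simplified blackbody calculation, assuming a complete shell that absorbs all of its star’s light and re-radiates it from its outer surface only; the real structures, if any exist, could look quite different.

```python
import math

# Waste-heat temperature of an idealized Dyson shell (a simplifying sketch:
# a complete sphere that absorbs all of the star's light and re-radiates it
# as a blackbody from its outer surface only).
STEFAN_BOLTZMANN = 5.670e-8   # W m^-2 K^-4
SOLAR_LUMINOSITY = 3.828e26   # W
AU = 1.496e11                 # m, Earth-Sun distance
WIEN_CONSTANT = 2.898e-3      # m K

def shell_temperature(luminosity_w: float, radius_m: float) -> float:
    """Equilibrium temperature of a shell of given radius around a star."""
    area = 4 * math.pi * radius_m ** 2
    return (luminosity_w / (area * STEFAN_BOLTZMANN)) ** 0.25

for r_au in (1, 2, 5):
    t = shell_temperature(SOLAR_LUMINOSITY, r_au * AU)
    peak_um = WIEN_CONSTANT / t * 1e6   # Wien's law: peak emission wavelength
    print(f"Shell at {r_au} AU: ~{t:.0f} K, infrared peak near {peak_um:.0f} µm")
```

A few hundred kelvin puts the peak emission in the mid-infrared, which is why waste-heat searches concentrate on infrared excess around otherwise ordinary stars.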

Artistic depiction of a Dyson Sphere.

The Simulation Hypothesis: a mind-bending twist

Initially formulated in 2003 by philosopher Nick Bostrom, Simulation Theory challenges traditional ways of thinking about existence. It suggests that we might be living in a computer simulation created by an advanced civilization.

Bostrom’s argument is based on probability. If we assume that it’s possible for a civilization to create a realistic simulation of reality and that such a civilization would have the computing power to run many such simulations, then statistically, it’s more likely that we are living in a simulation than in the one “base reality.”

The theory argues that given enough time and computational resources, a technologically mature “posthuman” civilization could create a large number of simulations that are indistinguishable from reality to the simulated inhabitants.

In this scenario, the number of simulated realities would vastly outnumber the one base reality.

Therefore, unless we have a special reason to believe we occupy that one base reality, it’s statistically more probable that we are living in one of the many simulations.

This is a similar argument to the idea that in a universe with a vast number of planets, it’s more likely that we live on one of the many planets suitable for life rather than the only one.

“…we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears.” –  Nick Bostrom, Are You Living in a Computer Simulation?, 2003.
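
The counting step behind that argument can be sketched in a few lines. The numbers below are arbitrary placeholders, purely to show how quickly the odds tilt once simulated observers outnumber “base reality” ones.

```python
# Bostrom-style counting argument (illustrative numbers only): if simulated
# observers vastly outnumber "base reality" observers, a randomly chosen
# observer is almost certainly simulated.

def p_base_reality(fraction_posthuman: float, sims_per_civilization: float) -> float:
    """Probability of being in base reality, assuming each ancestor simulation
    contains as many observers as one real history."""
    simulated = fraction_posthuman * sims_per_civilization
    return 1 / (1 + simulated)

for f, n in [(0.01, 100), (0.1, 1_000), (1.0, 1_000_000)]:
    p = p_base_reality(f, n)
    print(f"posthuman fraction={f:>5}, simulations each={n:>9,} "
          f"-> P(base reality) ~ {p:.2e}")
```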

Simulation theory is well-known for its association with the Matrix films and was recently discussed by Elon Musk and Joe Rogan.

Musk said, “We are most likely in a simulation,” and “If you assume any rate of improvement at all, games will eventually be indistinguishable from reality.”

Under Simulation Theory, AI civilizations might create vast numbers of complex synthetic universes, with massive implications for life and the universe itself. To name three top-of-mind examples: 

  1. The apparent absence of alien life could be a parameter of the simulation itself, designed to study how civilizations develop in isolation.
  2. The creators of our hypothetical simulation might be the very AI entities we’re postulating about, studying their own origins through countless simulated scenarios.
  3. The laws of physics, as we understand them, including limitations like the speed of light, could be constructs of the simulation, not reflecting the true nature of the “outside” universe.

While highly speculative, Simulation Theory provides another lens through which to view the Fermi Paradox and the possible role of advanced AI in cosmic evolution.

While it sounds outlandish today, Simulation Theory would gain credibility if we ever realize forms of ASI.

Embracing the unknown

Ultimately, this is a narrow slice of a much larger discussion of existence, one that spans the natural, spiritual, technological, and metaphysical realms simultaneously. 

True comprehension of our role on this very small stage in a vast cosmic arena defies the human mind.

As we continue to advance our own AI technology, we may gain new insights into these questions. 

Perhaps we’ll find ourselves hurtling towards a Great Filter, or maybe we’ll find ways to create AI that maintains the expansionist drive we currently associate with human civilization. 

Humanity might also simply be extremely archaic, barely waking up from a technological dark age while incomprehensible alien life exists all around us. 

Regardless, we can only look up to the stars and look inward, at the development and trajectory of life on planet Earth. 

Whatever the answers are, if there are any at all, the quest to understand our place in the cosmos continues to drive us forward.

It leaves us plenty to do, for now. 

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
