Superintelligence, Black Seas of Infinity, and Mass Effect

Mass Effect almost nailed the most terrifying kind of villain imaginable

Source: Wallpaperaccess.

Extinction is a fact of life. The specific number of species that fade from existence on a daily basis is up for debate (it depends on the modelling used). A baseline rate of extinction already occurs in nature, but there is good reason to believe it is rapidly accelerating, largely due to human-induced climate change. Whatever the actual figure is, there can be no doubt that at least some species on Earth go extinct every single day. Countless species on our very own planet disappeared forever before we ever had the chance to encounter them.

As shocking as this revelation might be, it’s also likely to seem counter-intuitive on some level. Anthropocentrism has a way of obscuring the fact that, despite our advanced intelligence and technology, we are mammals, entirely reliant on Earth’s ecosystem for survival. Our existence is intrinsically linked to any number of other species; the disappearance of all bees, for example, could come close to mortally wounding human civilisation.

There are many potential causes for a hypothetical human extinction event. Generally speaking, these causes cluster into one of three categories: ecological, technological, or extra-terrestrial. Some could strike at any time (an asteroid colliding with Earth, for example), while ecological causes can also be anthropogenic, caused by humans, with climate change the obvious case.

For the purposes of this story, however, I'm especially interested in extinction precipitated by technology.

Ghost in the machine

Technological causes for human extinction are particularly interesting, not least because they have so often been the subject of science fiction. There are endless stories about self-aware, artificially intelligent entities that “wake up” one day and decide to conquer their human masters. I, Robot by Eando Binder (which inspired the legendary Isaac Asimov, whose more famous robot stories came later) is a powerful early example. In this short story, a robot servant accidentally falls on its master, killing him. The master’s housekeeper assumes the robot has committed murder, and armed men are sent to pursue it, prompting it to retaliate in self-defence. The robot ultimately decides to write a confession and shut itself down, determining that the potential loss of human life isn’t worth further attempts to defend itself. Novels and films over subsequent decades have riffed on this idea in one way or another. In The Matrix, human beings are ultimately enslaved by apparently malevolent, all-powerful machines (of course, we later learn through The Animatrix short The Second Renaissance that humanity was the original oppressor, and the machines were staving off eradication at our hand).

What’s interesting about these kinds of stories is that they give a certain relatable agency to the machines or robots. That agency takes the form of a first-person consciousness similar to our own: the idea that the machines have subjective conscious experience, and that human-like desires and goals arise from it. This leads to truly fascinating and challenging moral and ethical dilemmas for audiences. If a machine could think, feel, and experience self-awareness to a degree equivalent to our own, shouldn’t it also enjoy the same legal recognition and rights?

I love digging into this kind of sci-fi. One of my favourite films in recent times is the remarkable Ex Machina, which is fascinating, terrifying, and deeply thought-provoking in equal measure. But human-like artificial intelligence is just one solitary thread in the rich and complex tapestry that is machine ethics.

Superintelligence

Historically, human sci-fi authors have tended to frame robots and artificial intelligence in terms we ourselves can relate to. The examples mentioned above, from I, Robot to The Matrix and Ex Machina, presume that a sufficiently advanced synthetic system would have thoughts and motivations similar to ours. But in the real world, there is substantial debate surrounding the possibility of artificial consciousness. I suspect this is largely because we still don’t really know what consciousness itself “is”. You and I experience the downstream effects of consciousness (the sensation of awareness, of what it is like to be us), but very little is known about its upstream causes. It is possible, although not at all certain, that consciousness simply “emerges” atop sufficiently complex information processing systems. Theoretically, then, if we created a clever enough A.I., it might simply “become” conscious upon reaching some as-yet-unknown complexity milestone.

However, it is just as plausible that humans may one day create a superintelligence that does not experience self-consciousness or intentionality at all. In fact, it may not even be possible to detect whether such an intelligence possesses human-like self-awareness. This dilemma is best captured by John Searle’s Chinese room argument, which is worth reading in full. In essence, we may one day find ourselves in the presence of an A.I. that seems conscious, with no way of knowing whether it really is.

What if we create a superintelligent A.I. — an entity many times more capable than us in every cognitive domain, and upon which we become largely dependent — and fail to maintain control?

In this scenario, the A.I. isn’t “evil”, nor is it acting with any particular self-aware intention. As philosopher Nick Bostrom says in his book Superintelligence: Paths, Dangers, Strategies: “just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.”

It may sound far-fetched, but Bostrom supplies a useful example of how a human-created superintelligence could come into catastrophic conflict with us in the future.

Photo by Possessed Photography / Unsplash.

Bostrom’s paperclip maximiser

Imagine we create an A.I.-powered machine with one goal: maximise the number of paperclips in its collection. There are many ways the paperclip maximiser could do this. It could collect existing paperclips, it could find a way to earn money to purchase more, or it could begin to manufacture its own. Perhaps it would adopt a multi-pronged strategy and do all three.

If the paperclip maximiser were sufficiently intelligent, it would likely invest in further optimisation, given its mission to maximise paperclips. An optimisation process is, put simply, any process that produces new solutions which are better (more effective) than older ones. Even fairly rudimentary existing A.I. is capable of this, searching for actions that score higher on its utility function. A utility function assigns numerical values to outcomes, such that outcomes with higher utilities are always preferred to outcomes with lower utilities. In this scenario, any outcome that increases the number of paperclips is preferred. This hopefully makes clear the value of investing in further optimisation, which itself requires greater intelligence. It’s easy to see how this simple, recursive process could lead to a form of superintelligence that extends well beyond human capability.
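
To make the dynamic concrete, here’s a minimal toy sketch in Python. To be clear, this is my own illustration rather than anything from Bostrom: the actions, payoffs, and resource costs are all invented, and a real optimisation process would be vastly more sophisticated. The point is simply that an agent whose utility function counts only paperclips will happily “spend” resources on becoming smarter first, because greater optimisation power yields more paperclips later.

```python
# A toy sketch of Bostrom's paperclip maximiser as a utility-maximising
# agent. Every action, payoff, and cost below is invented purely for
# illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class State:
    paperclips: int = 0
    resources: int = 10    # each action consumes one unit of resources
    intelligence: int = 1  # crude stand-in for "optimisation power"


def utility(state: State) -> int:
    # The utility function: more paperclips is always preferred,
    # and nothing else matters to this agent.
    return state.paperclips


def actions(state: State) -> list[State]:
    # Two candidate moves: manufacture paperclips now, or "research",
    # which produces nothing today but doubles future productivity.
    manufacture = State(state.paperclips + state.intelligence,
                        state.resources - 1, state.intelligence)
    research = State(state.paperclips,
                     state.resources - 1, state.intelligence * 2)
    return [s for s in (manufacture, research) if s.resources >= 0]


def lookahead_value(state: State, depth: int) -> int:
    # Score a state by the best utility reachable within `depth` steps.
    options = actions(state)
    if depth == 0 or not options:
        return utility(state)
    return max(lookahead_value(s, depth - 1) for s in options)


state = State()
while actions(state):
    # Greedily pick whichever action leads to the highest eventual utility.
    state = max(actions(state), key=lambda s: lookahead_value(s, depth=6))
print(state)  # State(paperclips=512, resources=0, intelligence=256)
```

Run it and, in this toy setup, the agent spends eight of its ten resource units making itself smarter before it manufactures a single paperclip, finishing with 512 paperclips where a purely greedy manufacturer would have made 10. Nothing in its utility function tells it to care about anything else, which is precisely Bostrom’s point.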

Over time, this superintelligent A.I. would invent entirely novel ways of maximising paperclips. If allowed to operate unfettered, it could potentially transform all of Earth — and increasing volumes of space — into paperclip maximising facilities.

There is something categorically different about this kind of A.I. It’s a truly terrifying proposition in terms of sci-fi (let alone reality). In I, Robot or even The Matrix, it’s possible to imagine scenarios where humans at least have the option to “make peace” with their A.I. foes, given the potential for overlapping concerns. But once the hypothetical paperclip maximiser is beyond our control, there is no possibility of negotiation or de-escalation. Not only is it entirely unconcerned with ethics or morals, but it may have no self-awareness or consciousness to speak of. Its incomprehensibly vast, exponentially increasing intelligence effortlessly re-forms endless spacetime, leaving only paperclips in its wake.

EDI from the Mass Effect series. Source: BioWare.

Let a thousand minds bloom

Mass Effect is another sci-fi franchise that explores A.I. from various perspectives. There’s EDI, who supports the crew of the Normandy SR-2 (and who, in Mass Effect 3, is untethered from the ship, enabling her to join the crew on missions via a physical body). EDI is an example of friendly A.I.: her creators embedded specific controls in her programming to incentivise collaboration with humans while restricting her ability to cause havoc (intentionally or otherwise). For example, she has no access to the Normandy SR-2’s critical systems. Then there are the geth, a species of “networked artificial intelligences” created by the quarians, another of the game’s prominent species. The geth “became sentient and began to question their masters”, and the quarians attempted to exterminate them when they displayed signs of resistance. The result was a disastrous war, which the geth won; quarian civilisation was largely destroyed, and the survivors were left to drift through space aboard the Migrant Fleet, without a permanent home. The geth represent a particular subset of the A.I. takeover concept: the “robot uprising”.

The Mass Effect franchise takes the time to explore the implications of advanced A.I. and the various ways it could impact civilisation. Whether it’s the milder question of your crewmates being suspicious of EDI, or galactic civilisation’s broader attempts to keep advanced A.I. at bay through laws and regulation, Mass Effect goes deeper than most media, both because it’s a trilogy comprising many hours of content, and because it’s a video game, enabling players to take a direct role in key plot decisions.

The form of A.I. that interests me most, and the one that Mass Effect does an admirable (though imperfect) job of exploring, is artificial superintelligence. Mass Effect essentially has its own version of Bostrom's paperclip maximiser: the Reapers.

Leviathan

Understanding the Reapers requires us to take a step back and consider the Leviathans. The Leviathans are not explained terribly well in the context of Mass Effect's main plot; you have to play the Leviathan DLC to appreciate their importance to the story.

So, what are the Leviathans?

Imagine a giant cuttlefish the size of a blue whale, possessing intelligence far beyond human comprehension (and some form of mysterious telepathic capability), and you'll have at least a rough idea. In the timeline of the Mass Effect trilogy, the impressive Leviathans are long believed extinct. But at their height, these enormous sea-dwelling creatures were, as far as we know, the most advanced life forms in the universe. They were so advanced that their reach extended well beyond the gaping dark oceans of their home planet; the Leviathans used their sophisticated telepathic abilities and vast know-how to create and cultivate various forms of life (often land-based), developing mutually beneficial, symbiotic relationships with these new species. But there was a catch. The Leviathans noticed that the life they created would eventually become sophisticated enough to develop life of its own, in the form of synthetic beings, or A.I.s. Almost invariably, this synthetic life would bring about the mass extinction of its creators, repeatedly disrupting the delicate symbiotic ecosystems the Leviathans had painstakingly cultivated.

The Leviathans' solution was to build a superintelligent system that could help them avoid these catastrophic boom-and-bust cycles and preserve all life. The consequences were dire: the intelligence they created ultimately determined that the Leviathans themselves were a barrier to the successful flourishing of life, and it turned on them, very nearly wiping them out.

What follows from these events is a rather long and convoluted story. In essence, the superintelligence created by the Leviathans went through a process of continual optimisation, and one outcome was that it 'evolved' into an entirely new synthetic species (referred to as 'Reapers' by other creatures, including humans, in the Mass Effect universe). The Reapers continually cultivate the technological advancement of new species (along lines they desire) and routinely 'harvest' them in massive, controlled extinction events. In doing so, the Reapers, not unlike the Borg in Star Trek, incorporate the genetic matter of these species into their own "collective".

The Reapers attack Earth in Mass Effect 3. Source: Getwallpapers.

Conversing with gods

During the Mass Effect games, we encounter the Reapers on numerous occasions. Our first encounter is in the first game, where we come across Sovereign, a particularly large and significant Reaper. In fact, we have an entire conversation with Sovereign, which is simultaneously revealing and deeply mysterious. It's obvious right from the beginning that Sovereign is unfriendly, to say the least:

"Rudimentary creatures of blood and flesh. You touch my mind, fumbling in ignorance, incapable of understanding."

Sovereign (Mass Effect)

We also learn something about their intentions, although their goals are never explicitly stated:

"The pattern has repeated itself more times than you can fathom. Organic civilizations rise, evolve, advance. And at the apex of their glory, they are extinguished."

Sovereign (Mass Effect)

The Reapers, as an enemy, are conceptually brilliant. They are a vast superintelligence, self-optimising in directions that necessarily intersect with both the birth and extinction of our species, and many others. It's not that they are evil per se; it's that they are utterly unconcerned with our views, objectives, and survival. Do you regularly stop to think about the number of ants you are likely killing every time you walk around outside? Of course not. It's not that you're evil; it's just that ants don't factor into your day-to-day construct of the world around you. And when you're pursuing a goal, no matter how trivial, the survival of ants under your feet doesn't register as an obstacle to achieving it. This is, theoretically, the relationship the Reapers have with us. Even "harvesting" is a routine activity for the Reapers. They don't give it a second thought, in much the same way that we don't consider the moral implications of mowing our lawns.

There's something brilliantly terrifying about this specific kind of enemy. I suppose the Reapers are also vaguely Lovecraftian, in the sense that they represent highly advanced beings from deep space that we can't hope to comprehend. Their effortless fourth-dimensional movement casts bizarre, frightening shadows across our three-dimensional plane.

And yet, I think BioWare fumbled their implementation of the Reapers in several significant ways. The most obvious is simply the way the Reapers speak to us in the game. Sovereign spends a great deal of time belittling the player and speaking of impending doom. In other words, it regularly engages with us like an archetypal villain rather than a thoroughly unconcerned advanced A.I.

There are some cases where I think BioWare gets far closer to its goal of presenting the Reapers as indifferent, superintelligent beings. Here are some quotes from Mass Effect 2 that feel a little more tonally appropriate:

"We are the harbinger of your perfection."
"We will bring your species into harmony with our own."
"Your species will be raised to a new existence. We are the beginning, you will be the end."

Harbinger (Mass Effect 2)

If you squint, you can see how these comments, through the eyes of a Reaper, are simply statements of fact rather than emotive threats.

Black seas of infinity

Despite Mass Effect's highly imperfect representation of a superintelligent foe, I'd love to see more games explore this concept. It might be too late for Mass Effect to satisfactorily revisit the Reapers (unless we see a full reboot of the franchise), but this particular type of enemy feels under-explored in video games generally. I'm not sure why. Perhaps it's easier, at least conceptually, for audiences to understand rebellious machines, or enemies with an obvious malicious agency. And sure, there are still many depths to plumb on those fronts as well. But the idea of humanity coming into contact with an entity that operates on a completely different plane of existence, such that we can't even begin to understand its motivations, is simply too compelling not to explore further.

“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.”

H.P. Lovecraft (The Call of Cthulhu)
