“Multiplicity” Meets “The Matrix”: Mind Uploading and Forking in the Cloud (Part 1)

In their book “The Rapture of the Nerds”, award-winning science fiction writers Cory Doctorow and Charles Stross imagine a future in which anyone who tires of their “meat body” existence can be uploaded to the cloud. It ties into the notion of the “singularity”: the eventual advancement and convergence of technology to the point that we can simply upload a complete copy of our consciousness to a computer if we so desire. The technological singularity has been gaining mainstream attention for more than a decade, but the idea of humans moving into virtual realms has its conceptual roots in culture and literature spanning back centuries.

Vernor Vinge, science fiction writer and former SDSU professor of maths and computer science, said in 1993: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Whether or not that brings to mind visions of an apocalyptic Matrix-style future, others, like Ray Kurzweil, interpret it as a serious opportunity for technological change and the advancement of the human race.

Often, science fiction predicts the present rather than the future, given the impact of contemporary events on literary predictions of what is to come. Romantic-era author Mary Shelley had her own contemporary reflections on consciousness, and wrote about runaway technology mastering humans as well as the reanimated dead flesh stuff. New Deal supporter Isaac Asimov wrote about worlds transformed by social programmes in his Foundation series. Now we have proponents and opponents of mind uploading and technological immortality who are themselves coming to grips with the explosion of online social networks, virtual environments and human-computer symbioses.

Saying that people who are happy to literally put their brains into computers are technophiles and the rest are luddites is pretty simplistic: the reality is that people – all types of people – have contemplated moving themselves into virtual worlds for centuries, built on foundations and religious doctrines thousands of years in the making. Lapsarianism, the idea that things were better back in the old days and are only getting worse (maybe even to the point of an apocalypse), was turned on its head by the Age of Enlightenment, which proposed that we all undergo continual human improvement. Following such continual improvement through to its ultimate outcome, we may end up with an “inverse apocalypse” where anything is possible. For example, as in the book, this could result in righteous true believers mind uploading into a paradisiacal AI (artificial intelligence) heaven while the technophobes are left behind in their clumsy, physical earthly forms.

But when we develop a human-equivalent AI, what are the logical consequences? If we can have a piece of software that is the equivalent of a human brain running on a certain piece of hardware, then we should be able to run that same human software faster on better hardware, and even parallelize it. (For parallelization, think of a single person doing a search for a keyword in a library full of books with a single pen, and then imagine the speed improvement if you could have multiple people with pens doing the same work in parallel.) Then we get faster thinking: maybe five years of thinking can be carried out in five months, or five minutes, or even five seconds if the parallel hardware is powerful enough. The AI is like us, but it gets a lot more thinking done.
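The library analogy above can be sketched in a few lines of code. This is a toy illustration only: the “books”, the keyword and the worker pool are invented for the example, and real speedups would of course depend on the hardware doing the parallel work.

```python
# Toy version of the parallel-search analogy: many "books" searched for a
# keyword, first by one "person with a pen", then by a pool of workers.
from concurrent.futures import ThreadPoolExecutor

def count_keyword(book: str, keyword: str = "upload") -> int:
    """One 'person with a pen' scanning a single book."""
    return book.lower().split().count(keyword)

def serial_search(books: list[str]) -> int:
    # One reader works through the whole library alone.
    return sum(count_keyword(book) for book in books)

def parallel_search(books: list[str], workers: int = 4) -> int:
    # Several readers scan different books at the same time. The total work
    # is identical, but the wall-clock time shrinks with more workers
    # (given genuinely parallel hardware underneath).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_keyword, books))

library = ["the upload began", "no matches here", "upload upload complete"]
print(serial_search(library), parallel_search(library))  # same answer either way
```

The point of the analogy survives the toy: parallelizing doesn't change *what* is computed, only how much of it fits into a given stretch of wall-clock time, which is exactly the "five years of thinking in five months" claim.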

It’s not beyond the realms of logic to posit that there may be other forms of cognition in the future. We already share the planet with many animals whose brains are less powerful than ours – cats, frogs, etc. – so why should we always have the most powerful type of cognition possible? Perhaps we just haven’t found a better one yet. Just as we have produced increasingly capable AI algorithms, parallelized human-equivalent AIs may find a faster cognitive model in a time period that will appear very short by human standards. As the AI runs faster and faster, something much brighter than us could eventually take over, to the point that we are no longer in the driving seat of human progress (much like a pet cat that doesn’t have to go to the shop to buy cat food, or indeed care where it comes from).

Science fiction visionaries in the eighties and nineties came up with various components of the singularity later popularised by Kurzweil, including molecular nanotechnology, mind uploading, cryonics, space colonisation, and more, but mind uploading bears further explanation. In this essentially Matrix-style scenario, posited by transhumanist Hans Moravec, a conscious human brain undergoes surgery performed by a nanoscale robot. The robot takes a single neuron, maps its connections to neighbouring neurons, and observes which afferent signals trigger an efferent state transition. This allows the neuron’s behaviour to be simulated and substituted, for example via nanoelectrodes. Finally, the neuron is killed off once the simulated version works in the same way. Then, simply iterate 100 billion times. Easy, huh? The patient is supposed to be fully conscious the whole time, but if you were able to do such an operation, the human identity would now reside somewhere new.
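The replacement loop at the heart of Moravec’s thought experiment can be written down as a sketch. Everything here is invented for illustration – the one-parameter “neuron”, the behaviour check and the tiny brain bear no resemblance to real neuroscience – but the shape of the procedure (map, simulate, verify, replace, repeat) is the one described above.

```python
# A hedged sketch of Moravec's gradual-replacement procedure. The neuron
# model below is a deliberately crude stand-in: one threshold parameter
# instead of a real cell's dynamics and connectivity.

class Neuron:
    def __init__(self, threshold: float):
        self.threshold = threshold
    def fire(self, signal: float) -> bool:
        # Afferent signal in, efferent state transition out.
        return signal >= self.threshold

class SimulatedNeuron(Neuron):
    """Same transfer function, different (silicon) substrate."""

def simulate(neuron: Neuron) -> SimulatedNeuron:
    # "Mapping the connections" is reduced here to copying one parameter.
    return SimulatedNeuron(neuron.threshold)

def behaves_identically(a: Neuron, b: Neuron, probes=(0, 1, 2, 3)) -> bool:
    # Verify the same afferent signals trigger the same efferent transitions.
    return all(a.fire(s) == b.fire(s) for s in probes)

def upload(brain: list[Neuron]) -> list[Neuron]:
    # Replace neurons one at a time; in the thought experiment this loop
    # runs ~100 billion times while the patient stays conscious.
    for i, neuron in enumerate(brain):
        sim = simulate(neuron)
        if behaves_identically(neuron, sim):
            brain[i] = sim  # the biological neuron is "killed off"
    return brain

brain = upload([Neuron(t) for t in (0.5, 1.0, 2.5)])
print(all(isinstance(n, SimulatedNeuron) for n in brain))  # fully uploaded
```

The interesting philosophical move is that no single iteration feels like a discontinuity – each step preserves observed behaviour – yet by the end the identity runs on entirely new hardware.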

This sounds way too far-fetched, but something in this vein was carried out in the late 1990s, when researchers at UCSD took a California spiny lobster, removed a biological neuron from its natural circuit and replaced it with an electronic neuron built from $7.50 of parts from a Radio Shack store. The neuron in question was one of 14 that controlled the lobster’s digestive system. When it was removed, an unhealthy set of oscillations arose in the other neurons, but when the electronic neuron was added back in, the system returned to normal.

So, let’s hypothesise that we now have a mind in a bottle running in some virtual world, and our bottled brain can be provided with a virtual body to interact with (software) objects. What does it all mean, and what can we do with it?

In the next part, we’ll look at the use and abuse of uploaded minds, and more! Sincere thanks to Charles Stross and Cory Doctorow for providing me with notes from their talk.
