The Mutual Enhancement Society: Superintelligence in Machines…*and* Humans?

Photo credit: JD Hancock / Foter / CC BY
Reading Nick Bostrom’s *Superintelligence*, and having read James Barrat’s *Our Final Invention*, as well as consuming a lot of other writing on the dangers of rapidly advancing artificial intelligence, I was beginning to feel fairly confident that unless civilization collapsed relatively soon, more or less upending most technological progress, humanity was indeed doomed to become the slaves of, or fuel for, our software overlords. It is a rather simple equation, after all, isn’t it? Once the machines undergo a superintelligence explosion, there’s really nothing stopping them from taking over the world, and quite possibly, everything else.

You can imagine, then, how evocative this piece in *Nautilus* by Stephen Hsu was, an article that explains that actually, it’s going to be okay. Not because the machines won’t become super-advanced – they certainly will – but because humans (or some humans) will advance right along with them. For what the Bostroms and the Barrats of the world are not (or may not be) taking into account is the rapid advance of human genetic modification, which will allow for augmentations to human intelligence that we, with our normal brains, can’t even imagine. Writes Hsu, “The answer to the question ‘Will AI or genetic modification have the greater impact in the year 2050?’ is yes.”

First off, Hsu posits that humans of “normal” intelligence (meaning unmodified at the genetic level, not dudes off the street) may not even be capable of creating an artificial intelligence sufficiently advanced to undergo the kind of explosion of power that thinkers like Bostrom foresee. “While one can imagine a researcher ‘getting lucky’ by stumbling on an architecture or design whose performance surpasses her own capability to understand it,” he writes, “it is hard to imagine systematic improvements without deeper comprehension.”

It’s not until we really start tinkering with our own software that we’ll have the ability to construct something so astonishingly complex as a true artificial superintelligence. And it’s important to note that there is no expectation on Hsu’s part that this augmentation of the human mind will be something enjoyed by the species as a whole. Just as only a tiny handful of humans had the intellectual gifts sufficient to invent computing and discover quantum mechanics (Turings and Einsteins and whatnot), so will it be for the future few who are able to have their brains genetically enhanced, such that they reach IQs in the 1000s, and truly have the ability to devise, construct, and nurture an artificial intelligence.

It is a comforting thought. Well, more comforting than our extinction by a disinterested AI. But not entirely comforting, because it means that a tiny handful of people will have such phenomenal intelligence, something unpossessed by the vast majority of the species, that they will likely be as hard to trust or control as a superintelligent computer bent on our eradication. Just how much will these folks care about democracy or the greater good when they have an IQ of 1500 and can grasp concepts and scenarios unfathomable to the unenhanced?

But let’s say this advancement is largely benign. Hsu doesn’t end with “don’t worry, the humans got this,” but rather goes into a line of thought I hadn’t (but perhaps should have) expected: merging.

Rather than the standard science-fiction scenario of relatively unchanged, familiar humans interacting with ever-improving computer minds, we will experience a future with a diversity of both human and machine intelligences. For the first time, sentient beings of many different types will interact collaboratively to create ever greater advances, both through standard forms of communication and through new technologies allowing brain interfaces. We may even see human minds uploaded into cyberspace, with further hybridization to follow in the purely virtual realm. These uploaded minds could combine with artificial algorithms and structures to produce an unknowable but humanlike consciousness. …

New gods will arise, as mysterious and familiar as the old.

We’re now in transhumanist, Kurzweil territory. He’s not using the word “Singularity,” but he’s just shy of it, talking about human and computer “minds” melding with each other in cyberspace. And of course he even references “gods.”

This strikes me, a person of limited, unmodified intelligence, as naïve. I’ve criticized transhumanists like Zoltan Istvan for this Pollyanna view of our relationship with artificial intelligences. Where those who think like Istvan assume the superintelligent machines will “teach us” how to improve our lot, Hsu posits that we will grow in concert with the machines, benefiting each other through mutual advancement. But what makes him so certain this advancement will remain in parallel? At some point, the AIs will pass a threshold, after which they will be able to take care of and enhance themselves, and then it won’t matter if our IQs are 1000 or 5000, as the machines blast past those numbers exponentially in a matter of, what, days? Hours?

And then, how much will they care about the well-being of their human pals? I don’t see why we should assume they’ll take us along with them.

But, what do I know? Very, very little.

Let’s Build Our Own Gods and Hope They Like Us: Reservations about Transhumanism

"My fellow Americans..."
Transhumanist philosopher Zoltan Istvan is “running for president.” No, he’s not a supervillain, but good-god-DAMN that’s a good supervillain name. Seriously, he’s not a crank, and he knows he won’t win. And I respect the transhumanist movement even if I’m not all the way on board. Here’s part of his platform:

I’ve only focused on one thing through it all—the same thing I’ve focused on with all my work for much of the last decade: I don’t want to die.

He’s already speaking my language! Tell me more.

Like most transhumanists, it’s not that I’m afraid of death…

Oh. Well, I am. Very much so. But please continue:

…but I emphatically believe being alive is a miracle. Out of two billion planets that might have life in the universe, human beings managed to evolve, survive, and thrive on Planet Earth—enough so the species will probably reach the singularity in a half century’s time and literally become superhuman.

This is where I run into problems with transhumanism in general. I think, all things being equal, I could with very few reservations plaster the label onto myself: I feel very strongly about investment in technology directed specifically to the common good, and I believe that as the only creatures we know of who can contemplate our place in the universe, we have an obligation to overcome our burdensome meat sacks and aspire to become something more. And I love this part of his platform:

We want to close economic inequality by establishing a universal basic income and also make education free to everyone at all levels, including college and preschool. We want to reimagine the American Dream, one where robots take our jobs, but we live a life of leisure, exploration, and anything we want on the back of the fruits of 21st Century progress.

But this business about being “literal” superhumans within 50 years is an issue for me. Transhumanists espouse what they call an “optimism” about the future that sounds to me a lot like magical prophecy. Here’s Istvan again:

[T]ranshumanists … want to create an artificial superintelligence that can teach us to fix all the environmental problems humans have caused.

He might as well say he wants to ask space aliens to come and solve our problems with replicator technology, or he wants to pray to the angels to sweep away all our pollution with their fiery swords. This is not a plan.

Too often, when I hear transhumanists talk about the future, it sounds like they want us to build our own gods and then hope (fingers crossed!) that these beings, deliberately designed to be superior to us, will want us to somehow merge with them.

Look, no one wants an Immortal Robot Body™ more than me. Death scares me shitless, and the idea of transcending it is, I think, a highly worthwhile goal. But this sounds like something else. This sounds like an attempt to create gods where none exist. It’s becoming a cargo cult even though we know exactly where the cargo is coming from.

The Mouse in the Machine: Scientists Make a Virtual Mouse Brain


In Switzerland, they’ve got themselves a virtual mouse (like, a rodent that exists in a computer, not like a peripheral that controls a cursor), complete with a software reconstruction of a mouse brain. Remember how mere hours ago I was writing about how New Zealand’s new law declaring animals to be sentient was tied, in my mind, to what we might have to consider in terms of artificial intelligence, or put another way, software-based animals? Well, it’s all here.

Reuters reports:

Scientists around the world mapped the position of the mouse brain’s 75 million neurons and the connections between different regions. The virtual brain currently consists of just 200,000 neurons – though this will increase along with computing power. [Scientist Mark-Oliver] Gewaltig says applying the same meticulous methods to the human brain could lead to computer processors that learn, just as the brain does. In effect, artificial intelligence.

GEWALTIG: “If you look at the neurobotics platform, if you want to control robots in a similar way as organisms control their bodies; that’s also a form of artificial intelligence, and this is probably where we’ll first produce visible outcomes and results.”
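To make “a software reconstruction of a mouse brain” a little more concrete, here’s a minimal sketch of my own – emphatically not the project’s actual code, and far cruder than whatever Gewaltig’s team runs. It simulates a tiny population of leaky integrate-and-fire point neurons, the kind of simplified unit that large-scale simulations stack up by the hundreds of thousands; every number in it is an illustrative assumption.

```python
import numpy as np

# Toy illustration only: a handful of leaky integrate-and-fire neurons driven by
# noisy input. All parameters are made up for demonstration; real simulations use
# far more neurons and far more carefully fitted models.

rng = np.random.default_rng(seed=0)

n_neurons = 200            # the virtual mouse brain uses ~200,000; this is a toy
dt = 0.001                 # timestep (seconds)
tau = 0.020                # membrane time constant (seconds)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # membrane potentials (mV)

v = np.full(n_neurons, v_rest)          # current membrane potential of each neuron
spikes = np.zeros(n_neurons, dtype=int)

for _ in range(1000):                   # simulate one second of activity
    drive = rng.normal(20.0, 5.0, n_neurons)     # noisy input drive (mV)
    v += dt / tau * (-(v - v_rest) + drive)      # leak toward rest, push from input
    fired = v >= v_thresh                        # which neurons crossed threshold
    spikes += fired
    v[fired] = v_reset                           # reset the neurons that spiked

print(f"mean firing rate: {spikes.mean():.1f} Hz over {n_neurons} neurons")
```

Even this toy makes the Reuters numbers legible: 200,000 simulated neurons out of 75 million mapped is well under one percent of the mouse brain, which is why Gewaltig ties further progress to raw computing power.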

For shits and giggles, let’s say this isn’t in Switzerland, but in New Zealand. You know where I’m going with this.

At what point is that virtual mouse no longer “virtual,” but sentient…sentient under the law?

Is it already?