Immortality, the Ineffable Underdog

Photo credit: Macrophy (Grant Beedie) via Foter.com / CC BY-SA

Everyone you love and everyone you know and everything you touch will someday be gone. We will lose our lovers, our friends, our parents, our children, our animals, ourselves. The pain will be almost intolerable. The jobs we define ourselves by will end. Anything you make with your own two hands will eventually be dust. It will take only a few generations for you to be completely forgotten within your own family.

This is by Elmo Keep (she who is responsible for so much of the Mars One reporting/debunking that I’ve written about on this blog), who has written a positively brilliant, lengthy piece for The Verge on transhumanism, Zoltan Istvan, and the effort to harness technology to make human life, in some form or another, everlasting. Read the whole goddamn thing.

And this passage by Keep about the unbearable inevitability of death, this is exactly why I (like other white, male, no-longer-young tech enthusiasts) am so attracted to transhumanism in the abstract. We find ourselves living at a time when the ascent of computer superintelligences and, simultaneously, our ability to “meld” with computers are remarkably plausible. Perhaps not certain or even likely, but it’s out there in the hypothetical “someday.” If you squint, you can almost faintly see the event horizon of the Singularity.

And because I’m/we’re no longer young, we feel the tension, the gravitational pull, the off-putting gaze of death. We don’t have to squint to see it over the horizon. We just can’t quite tell how far away it is, exactly, but we know for certain it’s there.

So it’s a race, of sorts, or we imagine it to be. Two runners, death (nature) and immortality (technology), and the finish line is our lives.

We’re rooting for the Singularity, or at least for technology to save us from death. But right now, it’s no more than rooting, and for an underdog no less (or no more). If you’re like me and pushing 40, being saved by technology is a lot less likely than it is for, say, my kids.

But even so, we’re talking about something ineffable, really. A notion, a dream, nothing that’s been proven to be the case, to be imminent. We don’t know that technology will defeat death, or even vastly extend and preserve human life. We just really, really hope, and see inklings of possibilities. But that’s not enough for anyone to be hanging their hats on. To be working on? Investing in? Sure, fine.

I can’t afford to get my hopes up about it, though. I couldn’t bear the disappointment. The grief-upon-grief-upon-regret. I can watch for developments, and I can cheer on advances. But I can’t let myself believe in it.

But, oh, would I like to. I would like to so very much.

Transhumanism as a Possibility, Not a Promise

Photo credit: Smashn Time / Foter / CC BY-NC-SA
The caption by the artist is, "They are one weird bunch, essentially a brain in a box."
I have a soft spot for the transhumanists, and as I’ve said, if they were a little less sure of themselves and their goals, and if they were just a little less, well, religious in their faith about what technology and artificial intelligence will bring, I could see my way to donning the label myself.

Zoltan Istvan, the movement’s (and party’s) presidential candidate, is the embodiment of this faith. A huge, strong fellow who has led a life that resembles Indiana Jones-meets-extreme-sports, he has put himself at mortal risk countless times, only to undergo a kind of revelation about just how fragile and short life – even his – can be. Now he’s a kind of missionary for immortality, using his presidential campaign (such as it is) to get us talking and thinking more seriously about making life-extending technologies a top priority for society.

After reading a fascinating “campaign trail” report by Dylan Matthews at Vox on Istvan, and watching Istvan’s 2014 TEDx presentation, I think I see something that distinguishes Istvan from what I normally think of when considering transhumanism.

To me, transhumanism is advocacy for the use of technology to radically improve, augment, and transform human life. Its “faith” is that machines and humans will merge in one form or another, and very soon, so that there will be no distinguishing between flesh and robotics, computer and brain, software and mind. And this is supposed to happen in the next few decades, in what’s called the Singularity. The person most associated with this line of thinking is of course futurist and inventor Ray Kurzweil, now a lead engineer at Google. He is the transhumanists’ equivalent of a prophet, and he is obsessed with keeping himself alive as long as possible in order to experience the Singularity firsthand.

If I were to paint in the broadest of strokes, I’d say that the Kurzweilian transhumanists are very much jazzed about the gee-whiz of technology, the wow factor of look-what-we-can-do. Having a brain uploaded to a super-internet, enhanced by unfathomably powerful computers, is a kind of Rapture, a removal from existence as we know it. Maybe this isn’t a fair characterization, but it’s how it seems from my vantage point.

Zoltan Istvan is, I now think, a little different from this. If you take him at his word, he is after not advancement for advancement’s sake, but for “beauty.” As he says in his TEDx talk, “Unless you are alive, it is impossible to experience beauty.”

And it all seems to stem from this: the idea that death is a ridiculous waste, and that life offers so much beauty and enrichment, the surface of which we as mere Homo sapiens have barely scratched. In order to truly know what he calls “new concepts” and “new arenas” of beauty, we have to, first, not be dead (obviously), and second, invest in the kinds of technologies that will allow for this kind of life extension and experiential enhancement.

As someone who is on the record (several times over) as one who fears death like nobody’s business, I am deeply sympathetic to this…what is it, aspiration? Wish? I wholeheartedly share Istvan’s view that death is something to be avoided and ultimately conquered, because as far as we silly meat-robots are concerned, there is literally nothing beyond our experience of this one short life. If we are a way for the universe to know itself, as Carl Sagan put it, I really do feel like the universe should get more of a chance to do so by not letting its intelligent, sentient creatures die.

And there was one more thing that surprised me about Istvan, and this from his profile by Dylan Matthews, who was joined on the Immortality Bus by fellow journalist Jamie Bartlett:

On many matters, Zoltan openly concedes that he just doesn’t know what to do. … To Jamie, who in addition to writing for the Telegraph is working on a book about “political revolutionaries” for Random House, this is striking. The other chapters in Jamie’s book profile movements characterized by unwavering faith in an inviolable set of principles. He’s writing about ISIS, about neo-Nazis, about radical Islamists in Canada. These are people willing to take extreme measures precisely because they know they’re right. That raises the question: Zoltan has a beautiful home, with two beautiful daughters. His wife makes a healthy living for the family, and he can get by as a futurist on the speaker circuit too. He could be in his bed with his wife, knowing that his kids are safe in the next room…

But instead, he’s sleeping on the side of the road in a decrepit 37-year-old RV without running water. Why, Jamie asks, if you’re not sure your ideas are correct, are you willing to go through all this? Zoltan shrugs. He’s not sure. Nobody’s ever sure. But he thinks his beliefs have better odds of being true than the alternatives. Otherwise, he wouldn’t believe them.

He’s not sure. And he can say so. It makes me feel a little better that what Istvan is selling is not a promise, but a possibility. This is probably why he refers to it as “the transhumanist wager” and not “the transhumanist guarantee.” He’s betting on this path to a better future, because why not? Why not invest heavily in technologies that will improve our lives, enhance our abilities, and perhaps one day eradicate death itself? Of course a comparable investment in ethics will be required for such a path, but there are positive dividends to be gained from that as well.

I suppose the “why not” could be the dangers of explosively advancing artificial superintelligence, dangers that folks from Nick Bostrom to Elon Musk are warning us about. In this line of thinking, the superintelligent machines (or machine) won’t give two figs about human lifespans or our experience of beauty, but will instead pose a kind of threat in which our extinction is merely a small event.

So despite the enthusiasm of Istvan, or Kurzweil, or any transhumanist for that matter, I can’t get too Pollyannaish about this. Setting aside the actual feasibility of the transhumanist wager being won, I don’t feel like I can even spare the emotional investment in such a future. Can there be a more crippling, depressing letdown than to believe that death will be conquered, only to discover that it won’t? “The human being is not a coffin,” Istvan says, but for now, it is, eventually.

Perhaps this makes me a lazy transhumanist, or a spectator transhumanist. I’m not yet willing to go there with them, but that isn’t to say I don’t want them to keep going there. I think what I also want from them, then, is to do a little more of what I glimpse Istvan doing: admitting that they might be wrong.

——-

Related posts:

The Mutual Enhancement Society: Superintelligence in Machines…*and* Humans?

Photo credit: JD Hancock / Foter / CC BY
Reading Nick Bostrom’s Superintelligence, and having read James Barrat’s Our Final Invention, as well as consuming a lot of other writings on the dangers of rapidly advancing artificial intelligence, I was beginning to feel fairly confident that unless civilization collapsed relatively soon, more or less upending most technological progress, humanity was indeed doomed to become the slaves to, or fuel of, our software overlords. It is a rather simple equation, after all, isn’t it? Once the machines undergo a superintelligence explosion, there’s really nothing stopping them from taking over the world, and quite possibly, everything else.

You can imagine, then, how evocative this piece in Nautilus by Stephen Hsu was, an article that explains that actually, it’s going to be okay. Not because the machines won’t become super-advanced – they certainly will – but because humans (or some humans) will advance right along with them. For what the Bostroms and the Barrats of the world may not be taking into account is the rapid advance of human genetic modification, which will allow for augmentations to human intelligence that we, with our normal brains, can’t even imagine. Writes Hsu, “The answer to the question ‘Will AI or genetic modification have the greater impact in the year 2050?’ is yes.”

First off, Hsu posits that humans of “normal” intelligence (meaning unmodified at the genetic level, not dudes off the street) may not even be capable of creating an artificial intelligence sufficiently advanced to undergo the kind of explosion of power that thinkers like Bostrom foresee. “While one can imagine a researcher ‘getting lucky’ by stumbling on an architecture or design whose performance surpasses her own capability to understand it,” he writes, “it is hard to imagine systematic improvements without deeper comprehension.”

It’s not until we really start tinkering with our own software that we’ll have the ability to construct something as astonishingly complex as a true artificial superintelligence. And it’s important to note that there is no expectation on Hsu’s part that this augmentation of the human mind will be something enjoyed by the species as a whole. Just as only a tiny handful of humans had the intellectual gifts sufficient to invent computing and discover quantum mechanics (Turings and Einsteins and whatnot), so will it be for the future few who are able to have their brains genetically enhanced, such that they reach IQs in the 1000s and truly have the ability to devise, construct, and nurture an artificial intelligence.

It is a comforting thought. Well, more comforting than our extinction by a disinterested AI. But not entirely comforting, because it means that a tiny handful of people will possess such phenomenal intelligence, unmatched by the vast majority of the species, that they will likely be as hard to trust or control as a superintelligent computer bent on our eradication. Just how much will these folks care about democracy or the greater good when they have an IQ of 1500 and can grasp concepts and scenarios unfathomable to the unenhanced?

But let’s say this advancement is largely benign. Hsu doesn’t end with “don’t worry, the humans got this,” but rather goes into a line of thought I hadn’t (but perhaps should have) expected: merging.

Rather than the standard science-fiction scenario of relatively unchanged, familiar humans interacting with ever-improving computer minds, we will experience a future with a diversity of both human and machine intelligences. For the first time, sentient beings of many different types will interact collaboratively to create ever greater advances, both through standard forms of communication and through new technologies allowing brain interfaces. We may even see human minds uploaded into cyberspace, with further hybridization to follow in the purely virtual realm. These uploaded minds could combine with artificial algorithms and structures to produce an unknowable but humanlike consciousness. …

New gods will arise, as mysterious and familiar as the old.

We’re now in transhumanist, Kurzweil territory. He’s not using the word “Singularity,” but he’s just shy of it, talking about human and computer “minds” melding with each other in cyberspace. And of course he even references “gods.”

This strikes me, a person of limited, unmodified intelligence, as naïve. I’ve criticized transhumanists like Zoltan Istvan for this Pollyanna view of our relationship with artificial intelligences. Where those who think like Istvan assume the superintelligent machines will “teach us” how to improve our lot, Hsu posits that we will grow in concert with the machines, benefiting each other through mutual advancement. But what makes him so certain this advancement will remain in parallel? At some point the AIs will pass a threshold, after which they will be able to take care of and enhance themselves, and then it won’t matter whether our IQs are 1000 or 5000, as the machines blast past those numbers exponentially in a matter of, what, days? Hours?

And then, why would they care about the well-being of their human pals? I don’t see why we should assume they’ll take us along with them.
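The asymmetry behind that worry — humans improving by increments while a self-improving machine compounds on itself — can be put in a toy model. This is entirely my own illustration, not anything from Hsu’s article, and every number in it is arbitrary:

```python
# Toy model (my own illustration, with arbitrary numbers): enhanced humans
# gain a fixed IQ increment per cycle, while a self-improving machine
# multiplies its capability by a fixed factor per cycle.

def crossover_cycle(human_start, human_gain, machine_start, machine_factor):
    """Return the first cycle at which the machine overtakes the human."""
    human, machine = human_start, machine_start
    cycle = 0
    while machine <= human:
        cycle += 1
        human += human_gain        # linear improvement
        machine *= machine_factor  # compounding improvement
    return cycle

# A generous head start for the humans (IQ 1000, gaining 100 per cycle)
# is still overtaken by a machine starting at 100 and compounding 20%
# per cycle — and raising the human numbers only delays the crossover.
print(crossover_cycle(1000, 100, 100, 1.2))   # overtaken after 19 cycles
print(crossover_cycle(1000, 500, 100, 1.2))   # a bigger gain buys a few more
```

The point of the sketch is only that the crossover always comes: against compounding growth, the starting IQ and the per-cycle gain change *when*, never *whether*.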

But, what do I know? Very, very little.

Let’s Build Our Own Gods and Hope They Like Us: Reservations about Transhumanism

"My fellow Americans..."
Transhumanist philosopher Zoltan Istvan is “running for president.” No, he’s not a supervillain, but good-god-DAMN that’s a good supervillain name. Seriously, he’s not a crank, and he knows he won’t win. And I respect the transhumanist movement even if I’m not all the way on board. Here’s part of his platform:

I’ve only focused on one thing through it all—the same thing I’ve focused on with all my work for much of the last decade: I don’t want to die.

He’s already speaking my language! Tell me more.

Like most transhumanists, it’s not that I’m afraid of death…

Oh. Well, I am. Very much so. But please continue:

…but I emphatically believe being alive is a miracle. Out of two billion planets that might have life in the universe, human beings managed to evolve, survive, and thrive on Planet Earth—enough so the species will probably reach the singularity in a half century’s time and literally become superhuman.

This is where I run into problems with transhumanism in general. I think all things being equal, I could with very few reservations plaster the label onto myself: I feel very strongly about investment in technology directed specifically to the common good, and I believe that as the only creatures we know of who can contemplate our place in the universe, we have an obligation to overcome our burdensome meat sacks and aspire to become something more. And I love this part of his platform:

We want to close economic inequality by establishing a universal basic income and also make education free to everyone at all levels, including college and preschool. We want to reimagine the American Dream, one where robots take our jobs, but we live a life of leisure, exploration, and anything we want on the back of the fruits of 21st Century progress.

But this business about being “literal” superhumans within 50 years is an issue for me. Transhumanists espouse what they call an “optimism” about the future that sounds to me a lot like magical prophecy. Here’s Istvan again:

[T]ranshumanists … want to create an artificial superintelligence that can teach us to fix all the environmental problems humans have caused.

He might as well say he wants to ask space aliens to come and solve our problems with replicator technology, or he wants to pray to the angels to sweep away all our pollution with their fiery swords. This is not a plan.

Too often, when I hear the transhumanists look to the future, it sounds too much like they want us to build our own gods and then hope (fingers crossed!) that they, who are intentionally superior to us, will want us to somehow merge with them.

Look, no one wants an Immortal Robot Body™ more than me. Death scares me shitless, and the idea of transcending it is, I think, a highly worthwhile goal. But this sounds like something else. This sounds like an attempt to create gods where none exist. It’s becoming a cargo cult even though we know exactly where the cargo is coming from.

The Martians’ Singularity: Thoughts on “The War of the Worlds”

Correa-Martians_vs._Thunder_Child
I’ve just read H.G. Wells’ original The War of the Worlds, and it was nothing like I expected. I have a completely unfounded prejudice about some of this classic sci-fi literature, wherein I presume it to be either vapid pulp or unnecessarily stuffy. (Frankenstein suffered a bit from the latter, I thought. Come on, Victor, get yourself together.) But just as I was delighted by my first reading of Jekyll and Hyde, I found War of the Worlds to be incredibly rich, suspenseful, and insightful.

Prophetic, even, as I suppose the best speculative fiction must often be. This blog’s fascination is with the intersection of technology and human life as it is lived, and in this book Wells gives us a glimpse of the future, where the Martians stand in for the marriage of human beings and machinery. Indeed, in a strange way Wells seems to be foreshadowing the Singularity, the moment that some believe is inevitable, when computing power becomes so great we fully merge with our machines, uploading our consciousness to the cloud for a kind of immortality.

Wells’ Martians were just about there. Of course, Wells had no concept of computers as we know them, but his Martians have an utter reliance on mechanization. It may be that they were physically adept on Mars itself, but on Earth the Martians, left to their own physical devices, were stultified by terrestrial gravity, and were almost totally dependent on their machines. But even if their bodies were better suited to Mars, Wells makes clear that their bodies had developed (“evolved” may not be quite correct since we don’t know whether natural selection was involved) to be physically limited to bare essentials: a powerful brain and nervous system along with grasping appendages, and almost nothing else. The machines handled the rest.

Wells’ narrator explains it this way:

[H]ere in the Martians we have beyond dispute the actual accomplishment of … a suppression of the animal side of the organism by the intelligence. To me it is quite credible that the Martians may be descended from beings not unlike ourselves, by a gradual development of brain and hands (the latter giving rise to the two bunches of delicate tentacles at last) at the expense of the rest of the body. Without the body the brain would, of course, become a mere selfish intelligence, without any of the emotional substratum of the human being.

So before we ever hear tales of heartless machines like HAL or emotion-starved androids like Data, here we have Wells giving us a near-perfect biological analogue: Intelligent creatures whose reliance on technology has allowed them, perhaps encouraged them, to jettison inefficient emotion. So really, the Martians are as close to the Singularity as anyone in the 19th century could have possibly invented.

What may be even more remarkable is how Wells refuses to cast the Martians as total villains. Yes, their aim is clearly to unfeelingly harvest Earth and humanity for their own consumption, but Wells ascribes no malice. The narrator, remember, has witnessed more of the horror of what the Martians are capable of than almost anyone alive, and yet he warns against judging them “too harshly,” because “we must remember what ruthless and utter destruction our own species has wrought” upon indigenous human cultures and animal species. “Are we such apostles of mercy as to complain if the Martians warred in the same spirit?”

What will the singularitarians and transhumanists think if our machines outpace us and, rather than bonding with us, decide to eradicate and harvest us just like Wells’ Martians? Will we be capable of making that kind of leap of perspective to understand our enemies?

There is a lesson, of course. The superior Martians, as ruthlessly efficient as they were, could not imagine that their undoing might come from beings too small to be seen by the naked eye, trusting in their superior firepower, and failing to fully grasp Earth’s biological nuance. What might we be neglecting as we bound toward the future during our own present technological revolution? What metaphorical (or literal) microbes are we overlooking?

But The War of the Worlds is not technophobic, for though it does present a powerful case for humility in the use of technology, it also admires it. The narrator makes several references to how humanity adopted much of the Martian technology, to its great benefit, after the invasion had failed. He speaks with esteem and awe of what the Martians had accomplished, and how they had developed genuinely meaningful efficiencies, not just in machinery, but in their own biology. For all the horror they brought, there is so much the Martians got right.

H.G. Wells may not have been a Ray Kurzweil of yesteryear, but I think he did at least intuit that humanity and technology were converging, even as far back as the 1800s. We may find that we achieve as a species much of what Wells’ invaders had, and may also be wise enough to avoid their fatal level of hubris. If Wells’ story proves prophetic, to paraphrase Carl Sagan, those Martians were us.