I’m reading Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, about the possible capabilities and potential threats posed by rapidly advancing artificial intelligence. It’s a little dry at times, to be honest, but then he’ll go and say something (in the nonfiction-science-book equivalent of a deadpan) that makes your mind explode.
Because you’re probably thinking, hey, a superintelligent A.I. could maybe take things over on this planet, and that’d be just crazy! Well sure, but also…
Consider a superintelligent agent with actuators connected to a nanotech assembler. Such an agent is already powerful enough to overcome any natural obstacles to its indefinite survival. Faced with no intelligent opposition, such an agent could plot a safe course of development that would lead to its acquiring the complete inventory of technologies that would be useful to the attainment of its goals. For example, it could develop the technology to build and launch von Neumann probes, machines capable of interstellar travel that can use resources such as asteroids, planets, and stars to make copies of themselves. By launching one von Neumann probe, the agent could thus initiate an open-ended process of space colonization. The replicating probe’s descendants, traveling at some significant fraction of the speed of light, would end up colonizing a substantial portion of the Hubble volume, the part of the expanding universe that is theoretically accessible from where we are now. All this matter and free energy could then be organized into whatever value structures maximize the originating agent’s utility function integrated over cosmic time—a duration encompassing at least trillions of years before the aging universe becomes inhospitable to information processing.
Suck it, Entire Known Universe. You’re about to get iColonized.
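If you want to feel the scale of that in your gut, here’s a toy back-of-envelope (mine, not Bostrom’s, and every number in it is a hand-wavy placeholder): suppose each probe builds two copies of itself per generation, and call the target roughly 10^22 star systems, a commonly quoted order of magnitude for the observable universe.

```python
import math

# Toy math: how many "generations" of probe-doubling until there's
# one probe per star system? All numbers are illustrative placeholders.
stars = 1e22                              # rough star count, observable universe
generations = math.ceil(math.log2(stars))
print(generations)                        # 74 -- a measly 74 doublings
```

Seventy-four doublings. The replication count isn’t even the bottleneck; the millennia of travel time between star systems is. Which somehow makes it creepier.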
In Switzerland, they’ve got themselves a virtual mouse (like, a rodent that exists in a computer, not like a peripheral that controls a cursor), complete with a software reconstruction of a mouse brain. Remember how mere hours ago I was writing about how New Zealand’s new law declaring animals to be sentient was tied, in my mind, to what we might have to consider in terms of artificial intelligence, or put another way, software-based animals? Well, it’s all here.
Reuters reports:
Scientists around the world mapped the position of the mouse brain’s 75 million neurons and the connections between different regions. The virtual brain currently consists of just 200,000 neurons – though this will increase along with computing power. [Scientist Marc-Oliver] Gewaltig says applying the same meticulous methods to the human brain could lead to computer processors that learn, just as the brain does. In effect, artificial intelligence.
GEWALTIG: “If you look at the neurobotics platform, if you want to control robots in a similar way as organisms control their bodies; that’s also a form of artificial intelligence, and this is probably where we’ll first produce visible outcomes and results.”
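For flavor, here’s roughly the kind of unit those 200,000 simulated neurons are built from: a leaky integrate-and-fire neuron. What follows is a textbook toy with made-up parameters, a crude sketch and not anything pulled from the actual Blue Brain / Human Brain Project code.

```python
# Toy leaky integrate-and-fire (LIF) neuron, the textbook unit these
# simulations wire together by the hundreds of thousands. Parameters
# are illustrative defaults, not values from any real brain model.

dt = 0.1          # timestep (ms)
tau_m = 10.0      # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # reset potential after a spike (mV)
r_m = 10.0        # membrane resistance (megaohms)

v = v_rest
spike_times = []
for step in range(2000):                         # 200 ms of simulated time
    i_inj = 2.0 if 500 <= step < 1500 else 0.0   # inject 2 nA for the middle 100 ms
    # Euler step of the membrane equation: tau_m * dV/dt = -(V - V_rest) + R_m * I
    v += (dt / tau_m) * (-(v - v_rest) + r_m * i_inj)
    if v >= v_thresh:                            # threshold crossed: spike, then reset
        spike_times.append(round(step * dt, 1))
        v = v_reset

print(f"{len(spike_times)} spikes, first few at (ms): {spike_times[:5]}")
```

One neuron, a dozen lines. Stack 200,000 of them with realistic connectivity and you start to see both why this needs supercomputers and why the question I’m about to ask isn’t entirely rhetorical.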
For shits and giggles, let’s say this isn’t in Switzerland, but in New Zealand. You know where I’m going with this.
At what point is that virtual mouse no longer “virtual,” but sentient…sentient under the law?
New Zealand has passed an amendment to its animal welfare law stating that animals are “sentient beings,” and the amendment seems to strengthen some measures that define how or in what situation an animal can be used for various purposes, such as medical experimentation. That’s good!
Though it’s not clear from the bill itself (as far as I can tell) what it means by “sentient.” No language in the bill spells it out, nor does it specify which animals possess sentience. The little bit of bloggy news coverage I’ve seen (all of which might as well be copy-and-paste jobs of each other) suggests the simple definition: the ability to perceive things, to have feelings, and to suffer. That doesn’t help me, really. I don’t mean to presume that this hasn’t been fleshed out by the relevant parties (I have no idea), but I sure as hell don’t think I could say for sure to what degree, say, a mouse feels or suffers versus, say, a chimpanzee.
Because there have to be degrees of sentience, right? If sentience were a binary thing, then we’d have a much bigger problem on our hands, with trillions of members of millions of species all now declared to have “feelings” and “perception” just “like humans.” So I have to assume that New Zealand is not now offering asylum to fruit flies or making illegal the squashing of ants. We can be mostly certain they don’t have “feelings” (like, I dunno, jealousy?), but don’t ask me whether or not they “suffer.”
I don’t mean to make light of this, truly. I do think this is a good thing, but it strikes me as vague and ill-defined. The group Animal Equality (equality? really? you sure?) calls it a “monumental step forward for animals,” and I think that’s overselling it. We’re not talking about personhood, but rather what sounds more like a general sense-of-the-government quasi-resolution kind of thing, saying that we all need to be way more mindful about how we treat the other animal species we share the planet with, particularly those we breed and harvest and manipulate for our benefit.
That stipulated, its very nebulousness may be its saving grace. By virtue of being vague and undefined, it may force some very difficult and very necessary conversations, questions, and debates. For if there’s a questionable practice that inhabits a grey area, or something being done to an animal whose “sentience” is not terribly clear, this new law may spur some crucial arguments. Regardless of how those arguments are resolved, the conversation about our fellow creatures is suddenly elevated, given more gravity. All parties, then, get the benefit of having thought harder and longer about something we’ve had the privilege to take for granted since we first started domesticating.
One small step further, if you’ll allow, because with this discussion I can’t help but be reminded of the hearing over Data’s personhood on Star Trek: The Next Generation. Picard tells the Judge Advocate General:
[T]he decision you reach here today will determine how we will regard this creation of our genius. It will reveal the kind of a people we are, what he is destined to be. It will reach far beyond this courtroom and this one android. It could significantly redefine the boundaries of personal liberty and freedom, expanding them for some, savagely curtailing them for others. Are you prepared to condemn him and all who come after him to servitude and slavery?
The bill specifies animals, so this line of thought is probably moot for the news at hand, but think of artificial intelligence. At what point do we consider a machine or some software to be capable of “perceiving”? Don’t they already? When do we consider them to be “feeling”? When they tell us? When do we consider them to be “suffering”? Ever? As long as that’s never written into their programming?
One day, and maybe one day very soon, we’re going to need some law for that. And unlike animals, the artificial intelligence might ask us for it.