I’m reading Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, about the capabilities and potential threats of rapidly advancing artificial intelligence. It’s a little dry at times, to be honest, but then he’ll go and say something (in the nonfiction-science-book equivalent of a deadpan) that makes your mind explode.
Because you’re probably thinking, hey, a superintelligent A.I. could maybe take things over on this planet, and that’d be just crazy! Well sure, but also…
Consider a superintelligent agent with actuators connected to a nanotech assembler. Such an agent is already powerful enough to overcome any natural obstacles to its indefinite survival. Faced with no intelligent opposition, such an agent could plot a safe course of development that would lead to its acquiring the complete inventory of technologies that would be useful to the attainment of its goals. For example, it could develop the technology to build and launch von Neumann probes, machines capable of interstellar travel that can use resources such as asteroids, planets, and stars to make copies of themselves. By launching one von Neumann probe, the agent could thus initiate an open-ended process of space colonization. The replicating probe’s descendants, traveling at some significant fraction of the speed of light, would end up colonizing a substantial portion of the Hubble volume, the part of the expanding universe that is theoretically accessible from where we are now. All this matter and free energy could then be organized into whatever value structures maximize the originating agent’s utility function integrated over cosmic time—a duration encompassing at least trillions of years before the aging universe becomes inhospitable to information processing.
Suck it, Entire Known Universe. You’re about to get iColonized.
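For a rough sense of the scales Bostrom is tossing off so casually, here’s a toy back-of-envelope sketch in Python. Every number in it (probe speed, horizon distance, star count, replication doubling time) is my own assumption, not the book’s, and it ignores cosmic expansion entirely; it’s just a way to eyeball the arithmetic, not a model of anything.

```python
import math

# Toy back-of-envelope for the scale of the scenario in the quote.
# All inputs below are my own rough assumptions, not figures from the book.
probe_speed_fraction_of_c = 0.5    # "some significant fraction of the speed of light"
horizon_light_years = 14e9         # rough radius of the Hubble volume, in light-years
stars_to_reach = 1e21              # order-of-magnitude star count in that volume
doubling_time_years = 500          # assumed time for one probe to build two more

# One-way travel time to the horizon (ignoring expansion, which a real
# probe swarm could not).
travel_time_years = horizon_light_years / probe_speed_fraction_of_c

# Self-replication: doublings needed to field roughly one probe per star,
# and how long that exponential growth takes at the assumed doubling time.
doublings_needed = math.log2(stars_to_reach)
replication_time_years = doublings_needed * doubling_time_years

print(f"One-way travel time to the horizon: ~{travel_time_years:.2e} years")
print(f"Doublings to field one probe per star: ~{doublings_needed:.0f}")
print(f"Time spent replicating: ~{replication_time_years:.2e} years")
# Even with generous assumptions, replication is a rounding error next to the
# travel time, and both are small next to the "trillions of years" of usable
# cosmic time the quote mentions -- which is more or less Bostrom's point.
```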