Barry Dainton

On Singularities and Simulations

If we arrive at a stage where artificial intelligences (or AIs) that we have created can design AIs that are more powerful than themselves, and each new generation of AI rapidly creates still more powerful AIs, then the 'intelligence explosion' — or singularity — foreseen by Good, Vinge and others could easily become a reality. Since the arrival of superintelligent machines would be a momentous, world-changing occurrence, we would be wise to consider how best to deal with this eventuality should it occur; we should also attempt to ascertain whether the singularity is as imminent as some of its proponents maintain. David Chalmers' 'The Singularity: A Philosophical Analysis' contains much that is valuable on both fronts. With regard to the key issue of whether a singularity is possible at all, I think Chalmers is right in saying that it is certainly not out of the question. As for how to minimize the dangers posed by an emergent superintelligence, the measures Chalmers proposes — implanting the right values, isolating the first superintelligent systems in virtual universes — look to be promising avenues. My focus in what follows will be on some of the consequences of a computer-based intelligence explosion, assuming we can survive it. The combination of superintelligence and massive power will make it possible for computers to create and sustain virtual environments of a size and complexity far beyond anything we are currently capable of devising. Will it be possible — or desirable — to 'upload' ourselves into these virtual worlds? Chalmers has interesting things to say on this issue; I will be suggesting a slightly different take on it.

Journal of Consciousness Studies, 19, No. 1–2, 2012, pp. 42–85
Correspondence: Email: bdainton@liverpool.ac.uk
Copyright (c) Imprint Academic 2013. For personal use only -- not for reproduction.