A.I. think therefore A.I. am.

I wrote this one in September 2008 for Philosophy 305: Philosophical Logic at the University of Canterbury in Christchurch.

I really enjoyed writing this, largely because I got to reference works of fiction (and even a computer game) as much as I referenced philosophical writings.  It gets more interesting in the second half when I start delving into them more.

If you enjoy this and are hungry for more, I definitely recommend reading Philip K. Dick’s article which I reference in here. Blows your mind.

THE SIMULATION ARGUMENT AND ITS IMPLICATIONS FOR ONTOLOGY, THEOLOGY, MORALITY, CREATIVITY AND EVERYDAY LIFE
by Caleb Anderson

Blessed are the legend-makers with their rhyme
of things not found within recorded time.
– J.R.R. Tolkien, Mythopoeia

“Those artsy-fartsy twerps next door create living, breathing, three-dimensional characters with ink on paper,” he went on. “Wonderful! As though the planet weren’t already dying because it has three billion too many living, breathing, three-dimensional characters!”
– Kurt Vonnegut, Timequake

The first matrix I designed was quite naturally perfect. It was a work of art. Flawless. Sublime. A triumph only equalled by its monumental failure.
– ‘The Architect’, The Matrix Reloaded

There have been many works of fiction in recent years in which characters are forced to confront the disquieting knowledge that their surroundings, or their entire world, are not as real as they were led to believe. Stories like these are popular because of how they affect the reader, who begins to question whether he could be in a similar situation. It occurs to him that although he “accept[s] the reality of the world with which [he is] presented” (The Truman Show), he could in fact be “living in a dream world” (The Matrix). Nick Bostrom’s Simulation Argument takes this possibility one step further, using an indifference principle to show that if such simulations will ever exist, they probably already do, and chances are we are in one of them right now.

The Simulation Argument as formulated by Bostrom (2003, 243-255) holds that at least one of the following hypotheses is true:

(1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation (Bostrom 2003, 243).

Essentially Bostrom is saying that if we believe humans will some day have the ability and inclination to simulate consciousness, it is logical to assume that they will do so multiple times; enough that the simulated worlds greatly outnumber the real one(s). Statistically speaking, then, it is far more likely that the world we find ourselves in is a simulation rather than the real world. This goes against many people’s intuition, which allows for the eventual possibility of artificial intelligence but insists that right here, right now, this is real. According to Bostrom’s argument this is an irrational position to take; it is, in a sense, having one’s cake and eating it too. “Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation” (Bostrom 2003, 255). We are not forced to believe that we are living in a simulation (a possibility hereafter referred to as the “simulation menace”, following Dainton). But if we cannot accept option 1 or 2, the menacing option 3 is the only one left.

The argument rests on two assumptions which Bostrom refers to as the Bland Indifference Principle and substrate-independence. The Bland Indifference Principle (Bostrom 2003, 248-249) is the idea that if we don’t know whether we have a certain property, but we do know what percentage of our population has the property, we should treat that percentage as our chance of having the property (Bostrom draws analogies to the chance of having a symptomless disease). Applied to this situation: if the number of simulated worlds is far greater than the number of real ones, and we don’t know whether we are simulated or real, we must conclude that we are far more likely to be simulated. It seems impossible to reject the Bland Indifference Principle without throwing out all our common sense surrounding probability calculations. Weatherson, in his critique of the Simulation Argument, tries to reject the Bland Indifference Principle, or at least its application to the Simulation Argument, but his critique appears to attack a straw man, responding to a more general form of the indifference principle than the one Bostrom actually proposes (Bostrom 2005, 92). In addition, Weatherson (431) claims that it is perfectly rational to be “very confident that [one] is human, even while knowing that most human-like beings are Sims”, because everyone does it. He misses the point that our confidence proves nothing, as we would be equally confident of our own ‘reality’ even if we were in a simulation.
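The arithmetic behind the Bland Indifference Principle is simple enough to sketch. The numbers of minds below are entirely hypothetical illustrations (Bostrom commits to no such figures); the sketch only shows how the credence calculation works:

```python
# Hedged sketch of the Bland Indifference Principle as applied to the
# simulation argument. All mind-counts here are hypothetical, chosen
# purely for illustration.

def credence_simulated(real_minds: int, simulated_minds: int) -> float:
    """Return the fraction of all human-like minds that are simulated.

    Under the Bland Indifference Principle, an observer who cannot
    tell which class she belongs to should set her credence that she
    is simulated equal to this fraction.
    """
    total = real_minds + simulated_minds
    return simulated_minds / total

# If posthumans run many ancestor-simulations, simulated minds could
# vastly outnumber real ones, and the rational credence that one is
# simulated approaches 1:
print(credence_simulated(real_minds=10**10, simulated_minds=10**15))
```

Under any assumptions on which simulated minds vastly outnumber real ones, the fraction approaches 1; this is all the statistical machinery the argument needs.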

The other assumption, substrate-independence, is the one that could make or break the argument. Substrate-independence is the idea that consciousness can exist across a variety of different media or ‘substrates’, not just carbon-based lifeforms like us. Ergo, “a computer running a suitable program would be conscious” (Bostrom 2003, 244). Bostrom admits that this assumption is “not entirely uncontroversial” but proposes that “we shall here take it as a given”. He says it is “very widely accepted” in the philosophy of mind, but does not provide any references in favour of it. He also seems to have coined the phrase ‘substrate-independence’ himself (Bostrom 2005, 90), which makes it difficult to scour the literature for support. Following Dennett, Bostrom holds a weak view of what constitutes self or consciousness, in which nothing metaphysical, such as a soul, is required for consciousness – it simply appears when “the right sort of computational structures and processes” (Bostrom 2003, 244) are present. But other philosophers of mind resist jumping to the conclusion that artificial consciousness is possible, such as Searle, whose Chinese Room argument illustrates how a computer could act exactly as if it were thinking without really thinking at all. To deny substrate-independence, and say that simulated consciousness will never happen, would be the easiest way to escape the simulation menace; but this would not be disproving the simulation argument, it would merely be selecting option 1.

Another factor which cannot prove or disprove the simulation argument, but can affect which option we choose, is the current level of technology with respect to artificial intelligence. Bostrom (2003, 244-247) argues that computers will very soon be powerful enough to create universe-simulations. Futurist Ray Kurzweil estimates that “by approximately the year 2050, for the then equivalent of $1,000, you will be able to purchase a computer with greater processing power than that of all the brains of all humans that have ever lived” (Jenkins 24). Yet even if it takes thousands of years for us to get there, or even if we never get there (option 1), the simulation menace (option 3) is still not ruled out; Bostrom is careful to specify that his three options are not mutually exclusive; “at least one” (Bostrom 2003, 243) must be true. Of course, if the simulation menace is merely an uninteresting outside chance akin to a ‘brain in a vat’ scenario, the simulation argument is not very interesting. The interesting part is the idea that if multiple conscious simulations exist, it follows that we’re in one. In Sam Hughes’ narrative retelling of the argument, I don’t know, Timmy, being God is a big responsibility, when the protagonists invent a universe simulation, they realise that they are simulated too. Current technological trends would seem to indicate that this stage in our own story – invention ergo realisation – is not as far away as we may assume.

The likelihood of the simulation menace is hard to measure. Bostrom (2008) argues that we lack sufficient evidence to decide on any of the three options, and observes that many people think it is obvious which option is true, but they all have a different ‘obvious answer’. He personally assigns a probability of about 20% to option 3, and has a hunch that option 2 is the most likely, though he admits that this is merely a subjective opinion. Personally I find option 2 – the idea that we will be able to make a large number of conscious simulations but will not, due to lack of interest or moral reasons – very naïve. As far as I am aware, humans have always been creative and curious, and I can’t see us losing interest in art and science right at the point where our creative and experimental potential reaches its technological zenith. Even if post-humans come to believe that playing around with conscious ‘Sims’ (The Sims) is unethical, that would not stop naughty programmers from doing it. Modern humans believe that murder is unethical, but despite conscience and laws it still happens frequently. There is a chance that our enlightened descendants will prefer “direct stimulation of the brain’s reward centers” (Bostrom 2003, 253), perhaps with drugs and efficiency à la Brave New World (Huxley), but this opens up all sorts of other debates about utilitarianism and what truly motivates us; Brave New World is not classified as a dystopia for nothing. The most likely causes for option 2 are provided by Peter Jenkins (36-37), who says that self-aware simulations would defeat the purpose of running them, and that multiple simulations would overload the simulating computers. He argues that this proves that any simulations will necessarily be discontinued as soon as their creatures develop the ability to create their own simulations. But this is a fairly big conclusion to jump to.

Therefore my own subjective opinion leads me to believe that options 1 and 3 are the most likely. I am sympathetic to option 1 due to my skepticism about substrate-independence; the possibility that the human race will either die out soon (Bostrom 2002) or start going backwards technologically also lends it credence. Yet I also believe that option 3 – the simulation menace – is very possible. What I find most bizarre about the argument is that it seemingly forces me to turn to science to answer deep metaphysical questions about my own reality. And it is not even science about myself, but science about things as obscure as robots, bombs and weather. The simulation menace has huge implications, not only for my personal ontology, but for concepts such as theology, free will, creativity and ethics.

The very nature of reality itself is called into question by the simulation menace, which seems to point towards ‘levels of reality’ (Bostrom 2003, 253). We tend to think that if we are not computer simulations we are real, because we inhabit the privileged position of the ‘top level’ of reality. But what does this mean for theists? If God exists, does that mean we are not real, because he has kicked us off our pedestal? And if we still claim that we are real, would not any created or simulated beings have the right to claim the same? The world around us may seem real; indeed, thinkers such as John Locke held that our sense-impressions of the world are our only ultimate source of knowledge. But we have no other level of reality to compare our world to; perhaps the world inhabited by God is on a far higher level of reality than ours. Indeed, perhaps to God we are as unreal as the computer game The Sims is to us. The film Men in Black portrays the idea that there are multiple universes within universes, with marble-sized universes found within ours, and ours being marble-sized within another. Perhaps this kind of picture can describe levels of reality as well as physical size; the reality of created worlds could be dwarfed by the superior reality of their creators. Bostrom usually talks of “ancestor-simulations” (Bostrom 2003, 247), which implies that the simulations are similar to the original world. Yet perhaps it is not possible to create a simulation as extensive or real as one’s own reality, but it is possible to create smaller and ‘less real’ simulations. Even though some ‘reality leakage’ would occur, the inhabitants of each level would be unable to compare their own reality with the higher one, and therefore would not know what they were missing.

With this in mind, even the strongest theism seems to lose much of its potency. Even belief in a God who (a) is on a higher plane of reality than ours, (b) created our world and (c) controls and interacts with it does not necessitate believing in anything more than another being like us, who is probably a simulation too (Bostrom 2003, 252). He could be merely step 472 to our step 473, in which case it is hard to see anything special or real about him as compared to us. Correspondingly, it is hard to see anything special or real about us on step 473, as compared to our own simulations on step 474. It is easy to think that if we are not in a simulation we are ‘real’, and if we are, we are ‘not real’, but this simplistic view is complicated when levels of reality are brought into it. If we give God the ability to bestow reality onto us, but don’t give ourselves the right to bestow reality onto our own creations, we are ignoring the fact that – sans proof of where the top level is – there is no functional difference between God creating conscious carbon-creations and us creating conscious computer simulations.

Perhaps the greatest significance of the simulation argument for theology is that it gives us a ‘God’s-eye view’, allowing us to theorise that humans are creators just like God, and God is a creation just like us. This humanises God and can make him seem more plausible; one atheist reader remarked that the Simulation Argument was “the best argument for God’s existence he had ever heard” (Bostrom 2008) and became an agnostic. But it also has implications for our ideas of free will and omnipotence. God may be omnipotent with respect to our world, which we inhabit, but a slave to a higher god within his own. He may be a child playing computer games, whose only chance at free will is tinkering with our world. But even then he may be enslaved to the choices of the ‘first cause’ who set the ball rolling in the very first universe. In Hughes’ short story, the characters simulate a universe that is exactly the same as theirs, right down to the simulated versions of themselves creating their own simulation-within-a-simulation. They begin to manipulate their simulation, and the same manipulations appear on all levels, including their own. They discover that they are not on the ‘top level’ and therefore are not really making any choices, and they discover simultaneously that since whatever they do to their simulation is repeated on all levels, they have near-limitless ‘magical powers’ in their own world. In other words, they realise that they have omnipotence but no free will, showing that the two supposed opposites can co-exist.

Another question that has huge implications for theology is: if there are different layers of reality, can there be communication or crossover between them? There are examples in myth and fiction of crossover, and even shifting between layers. A creator may enter his creation à la Jesus Christ, Hindu Avatars or novelist Kurt Vonnegut, who enters the narrative of Breakfast of Champions (1973) and communicates with his characters. Conversely, protagonists like Neo (The Matrix) or Truman (The Truman Show), who break out of their simulations into reality, mirror the idea of a human ‘going to heaven’ or ‘achieving nirvana’. Can religion be simply described as an attempt to transcend one’s layer of reality?

An alternative idea of the relationship between religion and simulation/creation is provided by J.R.R. Tolkien in his concept of sub-creation. He wrote not just to tell a tale and entertain people, but to perform a sacred act of creation, to echo the creation of God. “Only by myth-making, only by becoming a ‘sub-creator’ and inventing stories, can Man aspire to the state of perfection that he knew before the Fall” (quoted in Morris). According to Tolkien, the highest truth is found in myth, which allows “a splintered fragment of the true light, the eternal truth” to shine through from God. He felt so strongly about this that his poem on the subject (Tolkien) all but converted his friend C.S. Lewis to Christianity and fantasy writing (Morhan). To Tolkien, creativity equals religion. Given current technological limitations, writing fiction (whatever the medium) is the closest we can get to creating computer simulations, and perhaps fiction authors best understand the philosophy of simulated worlds and people [Footnote: This may explain why I am finding more relevant references in fiction than in academic literature]. Science fiction author Philip K. Dick says “it is an astonishing power: that of creating whole universes, universes of the mind. I ought to know … It is my job to create universes” (Dick 1978). Shakespeare (actually one of his characters, an important distinction) said “All the world’s a stage, and all the men and women merely players” (Shakespeare) – raising the question of how many ‘reality levels’ this is true for.

Of course, the obvious difference between the universes we create and the one we inhabit is that our characters are not conscious. A character cannot say ‘cogito ergo sum’ (Descartes), she can only mimic consciousness, when the author wills it. But could the same not be said about us, if we are simulated or created? Dick (1978) says “Reality is that which, when you stop believing in it, doesn’t go away”. Created worlds stop being real when the creator stops writing them or thinking about them. But the same can be said even if there is only one world; Norbert Elias argues that our meaning as people consists solely in what we mean to each other (33-34); when people stop remembering us after we die, we become as meaningless as a deleted computer simulation. Moreover, just as it does not occur to Dick’s characters that they may be androids (Dick 1968), it also doesn’t occur to them that they are just characters in a book; they are under the impression that they are conscious and real, just as we are under the impression that we are. How can we be so certain that they are wrong, and yet so certain that we are right?

In any case, our created characters will certainly become conscious as soon as technology facilitates it. Just as story-telling moved with the advance of technology from oral traditions to cave-drawing to writing to film to computer games, we should certainly assume that it will embrace AI technology as soon as it comes along.

Creation of characters, especially conscious ones, also has serious ethical implications. How much moral worth shall we give creatures once we know them to be mere simulations? The pro-life stance has not yet been extended from unborn foetuses to created characters, but it surely would be if AI were invented. There are already warnings against “substrate chauvinism” (Virtual Worldlets Network): the idea that only biological beings carry moral worth. If for no other reason, we ought to respect simulated characters because there is a good chance that we are simulations too. Authors often harm their characters for the sake of the story, but we would not appreciate it if God harmed us for the sake of his story [Footnote: The idea of the world being ‘God’s story’ is found in theological literature, such as The Drama of Scripture (Bartholomew and Goheen), which refers to the Bible as a six-act play, and people today as improvisers in act 5]. This could be an answer to, or at least an explanation for, the Problem of Evil (Dainton 14-15). If we were only ever nice to our simulations, it would severely restrict our boundaries for art and experiment. Advising how to write effective fiction, Kurt Vonnegut said “be a sadist … make awful things happen to [your characters]–in order that the reader may see what they are made of” (Vonnegut 1999, 9-10). Yet from the inside, through one of his characters, he said that it was best to avoid creating “living, breathing, three-dimensional characters” (Vonnegut 1997, 62). Which is correct depends on perspective.

There is also the question of honesty. If we decide that we have moral obligations to our conscious creations, should we not be honest and tell them that they are simulations? Or would that defeat the purpose of it all; would “breaking the fourth wall” destroy the illusion and the magic of creation? It would be difficult to run a successful simulation if the characters spent all their time having existential crises about reality, free will and sub-creation; they may prefer being left blissfully unaware of their fictional plight. The creators of our simulation – assuming we are in one – may have the same philosophy about us, although if they do, one would expect them to have deleted Nick Bostrom by now. The management of The Truman Show certainly got rid of anyone who tried to spill the beans to Truman, but to be fair, maybe God works in more mysterious ways than human television executives. In any case, is it not somewhat contradictory to say to one’s invented character ‘To tell you the truth, you’re a lie’?

It seems Bostrom has not considered these manifold implications of his argument when he says that “the implications are not all that radical” (Bostrom 2003, 254) and that the discovery of the argument has not changed his life much (Bostrom 2008). It is indeed hard to see how to respond to the argument in one’s day-to-day life, but that is because the issues it raises are so huge that it takes time to think them through. Robin Hanson, who accepts the simulation menace, does not seem to think it will change much either, giving the same kind of advice that could be found in any narcissistic pop-philosophy bestseller: “live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you” (Hanson 1). Peter Jenkins, a lawyer, explores more fully the “Motivational, Ethical and Legal Issues”, and comes to the sensational conclusion that “long range planning beyond 2050 would be futile” (Jenkins 37), as we are bound to be deleted as soon as we learn to create consciousness. But it seems that, rather than philosophers, it is authors of fiction (from Shakespeare to pulp sci-fi) who have explored the implications most fully so far.

The Simulation Argument is difficult to deny. Bostrom is correct in identifying that either there will never be multiple conscious simulations, or we are statistically very likely to be in one right now. It is a quirk of this situation – the fact that it concerns entire ‘realities’, including our own – that there is no middle ground: either simulations won’t happen at all, or they will and we are probably a good example of one. This means that the possibility that we are in some kind of simulation is a product of the possibility of artificial intelligence. As the possibility of AI may be a reasonably likely one, the Simulation Argument has fascinating consequences for our views on ontology, theology, morality and creativity, for our relationships with worlds ‘above’ and ‘below’ us, and for our relationship with our own world.

References

Bartholomew, Craig G. and Michael W. Goheen. The Drama of Scripture: Finding Our Place in the Biblical Story. Grand Rapids: Baker Academic, 2004.

Bostrom, Nick. ‘Are You Living In a Computer Simulation?’. Philosophical Quarterly 53:211 (2003), 243-255.

Bostrom, Nick. ‘Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards’. Journal of Evolution and Technology 9 (2002).

Bostrom, Nick. The Simulation Argument FAQ, Version 1.6. 2008. September 15, 2008
<http://www.simulation-argument.com/faq.html>

Bostrom, Nick. ‘The Simulation Argument: Reply to Weatherson’. Philosophical Quarterly 55:218 (2005), 90-97.

Dainton, Barry. Innocence Lost: Simulation Scenarios: Prospects and Consequences. 2002. September 15, 2008
<http://www.simulation-argument.com/dainton.pdf>

Dennett, Daniel C. ‘The Self as a Center of Narrative Gravity’. Self and Consciousness: Multiple Perspectives, ed. Kessel, F., P. Cole and D. Johnson. Hillsdale, NJ: Erlbaum, 1992.

Descartes, René. The Principles of Philosophy. 1644. Project Gutenberg. Pr. Harris, Steve and Charles Franks. September 16, 2008
<http://www.gutenberg.org/etext/1523>

Dick, Philip K. Do Androids Dream of Electric Sheep?. Garden City, NY: Doubleday, 1968.

Dick, Philip K. How to Build a Universe That Doesn’t Fall Apart Two Days Later. 1978. September 16, 2008
<http://deoxy.org/pkd_how2build.htm>

Elias, Norbert. The Loneliness of the Dying. New York: Basil Blackwell, 1985.

Hanson, Robin. ‘How to Live in a Simulation’. Journal of Evolution and Technology 7 (2001).

Hughes, Sam. I don’t know, Timmy, being God is a big responsibility. 2007. September 15, 2008
<http://qntm.org/?responsibility>

Jenkins, Peter S. ‘Historical Simulations – Motivational, Ethical and Legal Issues’. Journal of Futures Studies, 11:1 (2006), 23-42.

Locke, John. An Essay Concerning Humane Understanding, Volume 1. 1690. Project Gutenberg. Pr. Harris, Steve and David Widger. September 15, 2008
<http://www.gutenberg.org/etext/10615>

Morhan, Clotilde. ‘Paganism and the Conversion of C.S. Lewis’. Ignatius Insight November 2005. September 15, 2008
<http://www.ignatiusinsight.com/features2005/cmorhan_cslewis_nov05.asp>

Morris, Paul W. ‘The Lord, and the Rings’. Killing the Buddha 2001. September 15, 2008
<http://www.killingthebuddha.com/critical_devotion/lord_rings.htm>

Searle, John R. ‘Minds, Brains, and Programs’. The Behavioral and Brain Sciences, vol. 3. Cambridge: Cambridge University Press, 1980.

Shakespeare, William. As You Like It. 1600. Project Gutenberg. Pr. Loewenstein, Joseph E. September 16, 2008
<http://www.gutenberg.org/etext/1523>

Sonnenfeld, Barry et al. Men in Black. DVD. United States: Columbia Pictures, 1997.

Tolkien, J.R.R. Mythopoeia. 1931. September 15, 2008
<http://home.ccil.org/~cowan/mythopoeia.html>

Virtual Worldlets Network. ‘Substrate Chauvinism’. Virtual Dictionary. September 15, 2008
<http://www.virtualworldlets.net/Resources/Dictionary.php?Term=Substrate+Chauvinism>

Vonnegut, Kurt. Bagombo Snuff Box: Uncollected Short Fiction. New York: G.P. Putnam’s Sons, 1999.

Vonnegut, Kurt. Breakfast of Champions. New York: Rosetta Books, 1973.

Vonnegut, Kurt. Timequake. New York: G.P. Putnam’s Sons, 1997.

Wachowski, Larry and Andy et al. The Matrix. DVD. United States: Warner Bros. and Australia: Village Roadshow Pictures, 1999.

Wachowski, Larry and Andy et al. The Matrix Reloaded. DVD. United States: Warner Bros. and Australia: Village Roadshow Pictures, 2003.

Weatherson, Brian. ‘Are You a Sim?’. Philosophical Quarterly 53 (2003), 425-431.

Weir, Peter et al. The Truman Show. DVD. United States: Paramount Pictures, 1998.

Wright, Will et al. The Sims. CD-ROM. United States: Electronic Arts, 2000.
