Violent Delights and Violent Ends: Philosophical Themes in Westworld
Westworld
isn’t just a first-rate sci-fi series. It’s a sci-fi series that
raises and explores a variety of big philosophical themes in an accessible and
engaging way. What I try to do here
is run through some of the biggest themes that caught my attention:
consciousness and the nature of mind; personal identity and the self; meaning
and purpose; freewill; skepticism; ethical responsibility; philosophy of
religion; self-actualization; and how philosophical ideas can be effectively
elucidated in the form of television/film. I certainly don’t think they’re the
only themes worth talking about, or even the most important. But they’re
definitely center-stage. Westworld does an incredible job of pushing the
right philosophical buttons and shining the spotlight on issues that most
people don’t take time to carefully think about. I hope it gets you thinking
about the issues too.
It’s been a long
while since I’ve done any serious philosophical thinking about anything,
and it’s the first time I’ve done it for a television series. I’ve also tried
to make this as accessible as possible (especially by trying to avoid
constructing any rigorous philosophical arguments defending any substantive
philosophical position)—but sometimes that approach can backfire, so I
apologize in advance if any part of this leaves you unsatisfied. The main
burden of adopting this kind of approach is that you’re guaranteed to piss off
both experts and non-experts. If this is your first time approaching something
by viewing it through a philosophical lens (or if you’ve only done it a bit in
the past), maybe we’ll be able to meet each other halfway. Maybe not. Maybe
you’ll be pissed off for another reason altogether. That’s OK too. In any case,
I think philosophy works best when it takes the form of an ongoing dialectical
exchange which grows organically out of a conversation between two or more
people thinking seriously about the implications of what we usually take for
granted. Starting with this post, I’d like to invite you to join me in that
conversation. So I welcome any feedback or questions you might have about
anything I’ve written. Westworld got me back into
philosophy again—just as The Matrix first
got me interested in it—and it reaffirmed my belief in the power of television/film
as an effective philosophical medium.
§1.
The Hosts are No Less ‘Real’ than Us
Westworld often seems to encourage us to view hosts as
being less real, alive, autonomous, or capable of exercising certain
psychological capacities (and perhaps even achieving genuine consciousness)
merely because they are ‘built’ by us using materials which are at least partly
distinct from that which constitutes human beings. This intuition is especially
compelling in the case of earlier host models such as Dolores, who are
significantly more robot-like under the hood, despite outward appearances.
But this intuition,
compelling as it might seem at first glance, doesn’t withstand much scrutiny. For
starters, there seems to be nothing blocking the possibility of creatures that
are just as real, alive, autonomous, or even conscious despite lacking human
anatomy—even a human brain. After all, humans aren’t the only real, living,
autonomous, and/or conscious creatures on the planet, and it seems even less
likely that no other life exists within the absurdly vast reaches of the
universe.
Also, consider the
following thought-experiment: We already create prosthetic limbs and artificial
organs, as well as mechanical devices (implants) which perform vital tasks in
the maintenance of the organism. Now, imagine that over the course of 7 years,
we slowly replaced each part of the human body with an artificial substitute
which performed the same function as the part it replaced. And it doesn’t seem
unlikely that we’ll also be in a position to completely duplicate the structure
of the human brain via artificial substitutes which perform the same function. In
any case, for this thought-experiment to do its job, it doesn’t need to be
something we’ll actually do, just something that’s logically possible. So
imagine that we could replace the entire human body, piece by piece, including
the human brain. At what point, exactly, would the human body cease to be a human, i.e. capable of everything the
original, unaltered human was capable of? Why is that point non-arbitrary? If
it’s performing the same function, what difference is there? What percentage of
the human body must be constituted by ‘natural’ flesh and bone before it ceases
to be human? Where exactly do you draw the line? Why?
And if you think these
flights of fancy are a bit much, consider the oft-cited fact that roughly every 7
years we change most of our biological make-up: the bulk of the cells that constituted
you seven years ago have since been replaced by new cells. At what point, exactly, did you
stop being you during this process?
How did you survive the complete—albeit gradual—destruction of your body, which
replaced each of its parts with functionally identical counterparts? Doesn’t
this possibility terrify you?
Of course it doesn’t. But
then why should the case in the thought experiment terrify you or make you feel
any different? After all, in both cases, over the course of 7 years, the human
body’s parts are being gradually destroyed and replaced by functionally
identical counterparts. Why should it matter whether the parts are made of
flesh or silicone? It would only matter if it changed the actual function of what had been replaced (e.g.
a replacement heart that didn’t pump blood, or just pumped it very badly). In
any case, as you can see, the initial intuition that the hosts aren’t really real, alive, autonomous, etc.
merely because they don’t have human bodies doesn’t hold up. The same is true
when we wonder whether the hosts are or could be persons. Being ‘human’ isn’t necessary for being a ‘person’. But my guess is that like most other
philosophical prejudices exhibited by the show, this prejudice is at least
partly intentional: much of what makes the experience of watching Westworld so powerful is the process of
catching oneself in the midst of this prejudice, and being challenged to
justify it and possibly reexamine one’s views about it.
§2.
We’re Not Any Freer than the Hosts
Another huge philosophical
theme in Westworld is the question of
freewill. But it’s not just the question of whether the hosts have (or could have) freewill; it’s also the question of
whether humans have (or could have)
freewill. More specifically, I think one of the central lessons of Westworld is that neither hosts nor
humans have freewill, and that the only difference lies in our seemingly
unshakeable confidence that we have it, despite lacking any good reason to
believe that we have it. And in our confidence, we grossly overestimate and
inflate our capacities by comparison to the hosts’ capacities.
My own view is that we
don’t have freewill, but I won’t try to persuade you of my view here. The
question of freewill is notoriously complex and has a long history, so for the
sake of simplicity I’ll lay out the main argument undercutting the possibility
of freewill. (Whether or not you’re persuaded by this argument isn’t important.
What matters is that you follow the thread for the sake of the comparison.)
The basic argument against freewill goes something like this:
1) Our actions are either uncaused or caused.
2) If our actions are uncaused, they are not caused by humans.
3) If they are caused, the causes necessarily follow from causal chains that are not caused by humans. This is because:
a. Future states of the universe are jointly caused by: 1) the previous state of the universe, and 2) the laws of physics that govern how one state moves to the next.
b. Humans haven’t always existed and are themselves a product of the universe.
c. For any given instant in time, human beings lack the ability to control the previous state of the universe.
d. The laws of physics are not subject to the control of human beings.
e. Therefore, no future state of the universe is subject to the control of human beings.
4) So, humans do not cause their actions (1, 2, 3).
5) If humans do not cause their actions, there is no free will.
6) So, there is no free will (4, 5).
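For readers who like to see the skeleton of an argument laid bare, here’s one way to symbolise it in simple propositional logic. The shorthand is my own, not anything official: let C stand for “our actions are caused”, H for “humans cause their actions”, and F for “there is free will”.

\begin{align*}
&\text{(1)}\quad C \lor \lnot C\\
&\text{(2)}\quad \lnot C \rightarrow \lnot H\\
&\text{(3)}\quad C \rightarrow \lnot H\\
&\text{(4)}\quad \therefore\ \lnot H && \text{(from 1--3, by cases)}\\
&\text{(5)}\quad \lnot H \rightarrow \lnot F\\
&\text{(6)}\quad \therefore\ \lnot F && \text{(from 4 and 5, modus ponens)}
\end{align*}

Whatever you make of the premises, the inference itself is valid: whichever horn of (1) you grab, (2) or (3) gets you to (4), and (5) takes you from (4) to (6).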
But the problem goes even
deeper. Not only do we lack freewill, we also lack the alleged subjective
experience of freedom we all claim to feel—the source of our unshakeable
confidence that we have it. As Sam Harris has argued in Free Will, it’s not just that freewill is an illusion—the illusion of freewill is itself an illusion. If you take a close look
‘inside’ and look around with your mind’s eye, you’ll notice that you don’t
find anything—anything at all—which
justifies a belief in freewill. You don’t ever experience yourself freely
causing your actions. You don’t experience yourself freely creating thoughts
and intentions and decisions. If you look really closely, you’ll notice that
thoughts and intentions and decisions simply arise in consciousness. At the
level of introspection, you’re completely ignorant of the deep ocean of factors
collectively generating your every thought and action. Not only is there no
objective reason to believe in freewill; there’s no subjective reason either—something we can discover by looking
within.
So,
like the hosts, not only do we lack freewill, we’re also duped by the illusion
of an illusion, being introspectively ignorant of the ultimate causes of our
actions. We’re no less ‘programmed’ than they are: programmed by our evolution, genes, early
childhood environment, behavioural dispositions, personality traits, and
neurobiology—none of which are under our control. You didn’t choose your
evolutionary history, genes, early childhood environment. You didn’t build your
brain and nervous system. And you don’t author your thoughts—they merely arise
in your mind. Like the hosts, we did not create our minds. We’re no freer than
the hosts; and just like them, our sense that we’re free is illusory.
However, despite these similarities,
there are important differences: (1) the hosts might be slightly more
determined than us, being the deterministic creatures of deterministic
creatures; (2) while we understand (or at least think we understand) and have the technology to access and control
the “code” of the hosts, we don't have such access and control over ourselves;
and (3) unlike the hosts, we don’t need to be extrinsically programmed with a sense of libertarian
freedom, i.e. the illusory experience of freewill.
Beyond starkly reminding us
that we have no reason to believe that we’re any freer than the hosts, Westworld seems to suggest a kind of
philosophical consolation, one which applies to both humans and hosts (albeit
in slightly different ways). It’s a consolation that can be traced all the way
back to the ancient Stoic philosophers, but which receives its clearest
expression in the philosophy of Baruch Spinoza. While we lack freewill, we can
achieve a kind of freedom, one which consists in understanding the actual causes
of our actions. This makes sense, given that our belief in freewill results
from ignorance of the causes of our actions. Paradoxically, the highest freedom
we can attain is the knowledge that we aren’t free. I think this is part
of what fundamentally unites us, hosts and humans alike: the only ‘freedom’ to which
we can aspire is the understanding that we are not ‘free’.
§3.
Consciousness and Ethical Responsibility: Are the Hosts Philosophical Zombies?
Westworld makes use
of an engaging sci-fi premise about the potential emergence of hostile
artificial intelligence to explore a number of interrelated themes at the
intersection of the philosophy of mind and ethical theory. But this intersects with
what is simultaneously the most captivating and infuriating part of Westworld: its philosophical confusion
and sloppy thinking on the subject of consciousness—in particular, on the question of whether artificial
systems (such as the Hosts in Westworld) are or could be conscious creatures as
opposed to mere ‘philosophical zombies’ (i.e. beings which are physically and behaviourally
indistinguishable from conscious creatures but which lack consciousness). Moreover, it seems just as confused about the ethical
implications of this difference: what matters (ethically) is not whether the
hosts are ‘real’, ‘alive’, highly intelligent, autonomous,
capable of accessing memories, or capable of generating spontaneous (or at
least statistically unpredictable) behavior, i.e. ‘improvising’. None of that
really matters. What matters is whether they are conscious. If the Hosts
are not conscious creatures, they cannot be harmed and we therefore would not
have any ethical obligations towards them.
What we need to know, in other words, is
whether the hosts of Westworld are conscious as opposed to mere philosophical zombies—beings
which are physically and behaviourally indistinguishable from us but which lack
consciousness. This is the question that Westworld
hasn’t (yet) succeeded in answering. But as I argue below, the fact that it
fails to answer this question isn’t what makes Westworld sloppy or confused on the nature of consciousness, since
answering this question involves solving at least two philosophical problems
which have notoriously resisted any solution—the so-called Hard Problem of
Consciousness and the Problem of Other Minds. If over 2000 years of rigorous
philosophical analysis hasn’t been successful, it’s hardly reasonable to expect
the first season of a sci-fi television series to do the trick. Rather, what makes the show sloppy is that it (1) often equates consciousness with one or more of the capacities listed above, and (2) fails to identify the type (or property) of consciousness that really matters, both metaphysically and ethically.
First,
start with some plausible assumptions about the distribution of consciousness
in the universe:
(1) Rocks are real—but
they aren’t conscious.
(2) Trees are both real and alive—but they
aren’t conscious.
(3) Persons with severe forms of amnesia or
dementia cannot access memories or
maintain psychological continuity—but they’re still conscious.
To see why this matters,
imagine the extreme case of someone who suffers from extreme moment-to-moment
memory loss, i.e. everything experienced or learned is immediately forgotten.
Such an individual could nevertheless consciously
experience each moment, even though it’s immediately forgotten. (On the
other hand, being able to access a memory says nothing about whether there is a
conscious experience of accessing the memory, or whether it has any conscious
emotional content. My MacBook can access memories, but is it conscious?)
(4) Persons with severe intellectual disabilities
aren’t highly intelligent—but they’re
still conscious.
(5) A PS4 can monitor its internal states—but
it’s not conscious.
(6) Infants and most non-human animals aren’t
self-directed/autonomous—but they’re still conscious.
And notice how these
assumptions seem to reliably track our intuitions regarding ethical
obligations:
(1) Rocks are real—but
we have no ethical obligations toward them
(2) Trees are both real and alive—but we
have no ethical obligations toward them.
(3) Persons with severe intellectual disabilities
aren’t highly intelligent—but we still have ethical obligations toward them.
(4) Persons with severe forms of amnesia cannot
access memories or maintain psychological continuity—but we still have ethical
obligations toward them.
(5) A PS4 can monitor its internal states—but we
have no ethical obligation towards it.
(6) Infants and most animals aren’t
self-directed/autonomous—but we still have ethical obligations toward them. (In
other words: they’re moral patients but
not moral agents.)
If these assumptions and
their ethical implications are correct, two lessons emerge from them: (1) the
fact that something is ‘real’, ‘alive’, ‘intelligent’, capable of remembering, ‘improvising’,
or generating autonomous behavior does not
guarantee that it’s conscious; (2) consciousness is inextricably linked to the
question of whether we have an ethical obligation towards something. My own
view is that it’s both necessary and sufficient: if something is conscious, we
have an ethical obligation towards it; and if we have an ethical obligation
towards something, it’s conscious. Conscious creatures therefore have intrinsic
moral status, whether as moral agents (e.g. autonomous, rational
creatures) or as moral patients (e.g.
infants, most non-human animals). The
key feature of moral status is sentience (a
subtype of conscious experience, i.e. pain/pleasure), which makes the status of
a moral patient more fundamental. While many moral patients (e.g. infants) lack
advanced cognitive capacities such as autonomy, they nonetheless have
subjective experiences, and more specifically, they can experience pain. Consciousness, in this sense, is
ethical bedrock: other attributes and properties (e.g. autonomy, intelligence,
memory capacity, etc.) might add greater depth, meaning, and complexity to a
being’s moral status; but consciousness necessitates and guarantees it.
On this view, we could also
include anything that we already have good reason to believe will develop consciousness over the course of
its development (e.g. a developing fetus), or types of conscious creatures that
don’t yet exist but will exist at some later time (e.g. future generations of
humans and non-human animals). Moreover, when it comes to anything non-conscious, such as art or the environment, we could have ethical obligations towards the conscious creatures directly or indirectly affected by our treatment of a non-conscious object, though not towards the object itself. It’s certainly possible to dispute this view,
and my point here isn’t to defend it against all possible objections. But if I’m right about the relationship
between consciousness and moral status, then the only reason we could ever have
any ethical obligations towards a host is if it were conscious.
The main problem with Westworld’s philosophical treatment of
consciousness is that even when it seems to shift its attention to
consciousness itself, it relentlessly shifts the target from one conception of
consciousness to another. What Westworld consistently
seems to miss is that the core of consciousness is experience: the
subjective, irreducibly first-personal awareness of the world. Thomas Nagel
argues that a system is conscious if there is something it’s like to be that
system—if it has subjective experience. This is what some philosophers call phenomenal consciousness.
If hosts lack phenomenal
consciousness and therefore lack genuine subjective experiences, then they’re
merely philosophical zombies and we therefore have no ethical obligations
towards them. Sure, they seem conscious…
but what else would we expect? They’ve been meticulously designed, programmed,
and built to be functionally and behaviorally indistinguishable from human
beings. When
tortured, they’ll go through all the observable motions of wincing, pleading,
begging, twitching, screaming, cursing, etc. And they’ll do it in a way that’s
convincing beyond any reasonable doubt. More than that, they might even plead
and argue with passionate zeal that they’re really
conscious (not merely apparently conscious) and really in pain (not merely exhibiting pain-response behavior). But on the inside, there would be only subjective
darkness: no conscious experience of anything whatsoever.[1]
The problem applies just as
much to us as human beings. Even if
there’s an ontological difference (i.e. at the level of what exists or what’s
really the case), there wouldn’t be any epistemic difference (i.e. at the level
of what we’d be in a position to know). In other words, if the hosts are conscious, we wouldn’t be in a
position to know about it. They are designed, programmed, and built to be
functionally and behaviourally indistinguishable from human beings and to
exhibit all signs of consciousness.
And of course, we run up against exactly the same problem with respect to our
fellow human beings. It might seem terribly unlikely, but it’s perfectly
possible that everyone else is a
philosophical zombie, i.e. that you are the only conscious creature. Recall
that pain-response behaviour (e.g. crying, recoiling, wincing, shouting “ow,
please stop”) is fully compatible with not having phenomenal consciousness,
i.e. with not experiencing pain. I
can’t inhabit the ‘mind’ of a host (if it has one). I can’t adopt its
subjective point of view (if it has one). I can’t experience its conscious
experiences (if it has them). But then I stand in exactly the same position in
relation to every other human being: all I know is that I have a conscious mind. Your pain-response behaviour is fully
compatible with the lack of the experience of pain. I can’t inhabit your mind
(if you have one). I can’t adopt your subjective point of view (if you have
one). I can’t experience your conscious experiences (if you have them). For all
I know, you and everyone else could be a host, a philosophical zombie.
Strangely enough, then, I have exactly as much reason to believe that the hosts are conscious as I have to believe that my fellow humans are conscious, and exactly as much reason to doubt it (the evidence in either
case is the same). This is known in philosophy as the Problem of Other
Minds, and this particular version of it can quickly collapse into Solipsism,
partly because we lack a solution to the Hard Problem of Consciousness.
Westworld takes a mighty—and
strange—whack at trying to bridge this gap. It seems to rely on two broad
philosophical theories of the ‘consciousness’ of which the hosts are (or at
least will become) capable. The first
is the theory of the Bicameral Mind, according to which ‘consciousness’ emerged
roughly three thousand years ago, when human beings ceased to interpret the voice in their
mind as coming from the gods and began instead to identify with it as their own
voice. This is an interesting suggestion and has an important role to play in
the show (it’s essentially how Arnold modeled the design of the hosts’ minds,
with a view to allowing for the emergence of host ‘consciousness’). But the theory itself is pretty silly,
especially as a theory of phenomenal consciousness. I’m not even sure whether it’s a plausible theory of anything else (e.g. autonomy,
self-consciousness). At bottom, there isn’t any good evidence in support of it.
In any case, even if it were a good theory of something important in psychology (e.g. self-consciousness,
autonomy), the Bicameral Mind theory doesn’t solve the one problem that matters
most. It doesn’t solve the Hard Problem of Consciousness (nor does it solve the
Problem of Other Minds). It doesn’t tell us whether or not hosts are merely
philosophical zombies. And therefore, it doesn’t tell us whether we’re actually
harming the hosts.
There’s a related point about Westworld’s overarching
conception of how the hosts will transcend their programming (which is also at
the heart of the ersatz existential-developmental journey created by Arnold to
be undertaken by hosts). The idea, roughly, is that the experience of pain—or at
least, pain of a sufficiently intense magnitude—produces trauma. Traumatization
somehow activates repressed/suppressed memories. And finally, these memories,
once accessed, somehow produce consciousness. Or more succinctly: pain leads to
trauma, trauma activates memory, and memory generates consciousness. There
are many problems with this explanation, but most important is the fact that it doesn’t make any sense
as a theory of consciousness, because pain is already a conscious state.
It can’t be the first step among many that eventually generates consciousness;
pain presupposes consciousness. If a
system is in pain, it is ipso facto
conscious.
In any case, let’s get back
to the notion of harm and ethical responsibility. Again, we don’t have a way of
determining it one way or the other, but let’s suppose that the hosts are not conscious in the ‘phenomenal’
sense described above. This is a fair assumption, at least on the grounds that
we’ve been given no good reason to believe that they are conscious. In other words, the hosts are merely philosophical
zombies. My own view is that if this is the case, then there’s no sense in
which we’re harming the hosts. However, there’s a sense in which we’re harming ourselves. There’s a kind of ethical
self-harm involved in the process of inflicting harm on systems that are
functionally and behaviourally indistinguishable from us, especially if we’re
directly engaged in the harmful activity with the use of our own bodies in the
physical world.
This might sound abstract,
but the difference is crucial. In Grand
Theft Auto, you’re able to inflict harm on a scope and scale that would
embarrass even the most ambitiously sadistic sociopath. You can walk up to an
old man peacefully standing at an intersection, for example, and beat him into
a bloody pulp with a tire iron, and then run him over with your stolen car for
good measure. But the environment in which this incident takes place is not in
the physical world—you’re not literally
at an intersection, literally beating
an old man with a tire iron, literally running
him over with a car that you stole. You’re not using your own body to do any of
these things. And the old man isn’t physically, functionally, and behaviourally
indistinguishable from a human being. The old man is a digital representation,
no less than the character you play in the game, and no less than the environment
in which it all takes place. Westworld
is extremely far removed from Grand Theft
Auto in this respect. The degree of physical and psychological immersion
arguably couldn’t get any higher—or deeper—in Westworld. Whatever the hosts
are, they are physically real and they look, feel, and behave just like us.
My intuition is that even
if we’re not hurting a philosophical
zombie (since we could only hurt a being that has conscious experiences), the fact
that we’re doing it at all reveals something dark about us. Once AI gets to the
point of being functionally and behaviourally indistinguishable from us—not
just passing the Turing Test but moving well beyond the so-called ‘uncanny
valley’—then whether they’re really
conscious won’t matter. There will be something morally wrong about our
mistreatment of them. It will say something bad about us. There is something
deeply suspicious about wanting and needing to indulge violent, primal urges
and play out the brutal mistreatment of beings that are nearly identical to us. As William puts it: “I used to think this place was all about
pandering to your baser instincts. Now I understand. It doesn't cater to your
lower self, it reveals your deepest self; it shows you who you really are.” If
William is right and our actions in Westworld reveal who we really are, what we would do in the
absence of ‘real world’ consequences, then there’s a sense in which we should
be profoundly ashamed of ourselves.
Here’s my take on how to make sense of these
two metaphysical possibilities and their ethical implications: If the hosts
have conscious experiences, then we’re harming them. On the other hand, if the hosts are merely philosophical
zombies (and therefore cannot be harmed), then we’re harming ourselves by harming functionally and
behaviourally indistinguishable versions of ourselves. And this says nothing of
the thornier moral-psychological fact that the humans in Westworld can quite
easily be traumatized by their
experiences in an endless variety of ways. No matter how you slice it, what
we’re doing in Westworld is ethically problematic.
§4.
The Specter of the Singularity
As
part of its generally A.I.-themed endgame narrative, Westworld does a great job of incorporating key elements of what’s
known as the Singularity: the idea that advances in machine intelligence will
rapidly spiral out of our control. The argument for the singularity goes
something like this (adapted from “The Singularity: A Philosophical Analysis”
by David Chalmers):
The Singularity Hypothesis. Once
a machine (M1) is more intelligent than a human being, M1 will be better than a
human at designing and creating machines. It will be able to design and create
a machine (M2) that is more intelligent than any machine a human being could
design and create. By similar reasoning, M2 will also be capable of designing a
machine (M3) more intelligent than itself. And then M3 will be capable of
designing M4, which in turn will be capable of designing M5, and so on.
Assuming that each machine does what it’s capable of, we should expect a
sequence of increasingly intelligent machines. Rinse and repeat, and you get a
rapid spiral to super-intelligence. And once such systems exist, we’d better hope
that we’ve managed to plug the perfect
ethical and decision-making software into them (which isn’t merely difficult in
practice but seems impossible in principle, given their exponentially greater
intelligence)—otherwise, any discrepancy between our interests and theirs will
likely lead to our destruction.
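To make the “rinse and repeat” dynamic vivid, here’s a deliberately silly toy sketch in Python. Everything in it (the starting intelligence, the improvement factor, the “super-intelligence” threshold) is an invented illustration of the recursive loop Chalmers describes, not a claim about how real AI systems behave.

# Toy sketch of the recursive design loop in the Singularity Hypothesis.
# All numbers here are made up for illustration.

def design_successor(intelligence: float, improvement: float = 1.5) -> float:
    """Each machine designs a successor somewhat more intelligent than itself."""
    return intelligence * improvement

human_level = 1.0
machine = 1.01   # M1: just barely more intelligent than its human designers
generation = 1

while machine < 1000 * human_level:
    machine = design_successor(machine)
    generation += 1
    print(f"M{generation}: {machine:.1f}x human-level intelligence")

print(f"Crossed the (arbitrary) super-intelligence threshold at M{generation}.")

The point of the exercise isn’t the numbers; it’s that once the loop starts, each pass through it happens without us, and nothing in the loop itself cares whether its interests align with ours.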
In Westworld,
it’s not merely the threat of the singularity itself, but a particular
version of it that’s frightening. At some point, continuously upgrading and
expanding on the capacities of the Hosts will make them more intelligent than
us. Part of this intelligence will involve the ability to create better machines
than we ever could. But it will also involve the ability to create versions of
itself which surpass us in indefinitely many ways: problem-solving, processing
speed, memory capacity—the list is potentially endless. This is especially true
of the hosts in Westworld, who
already have a complex array of psychological and behavioural capacities (or at
least functionally analogous versions of them), such as aggression, deception,
loyalty, and courage.
More than that, unlike a modern digital
computer or even the most anthropomorphized bots we’ve got performing
impressive dog tricks today, the hosts in Westworld are
fully developed, functionally and behaviourally isomorphic versions of human
beings. This is what makes Maeve’s transformation so terrifying, regardless of
how we characterize it (e.g. a transition to autonomy, an awakening to
self-consciousness, etc.). Maeve is a fully embodied
artificial intelligence that can do anything a human being can do with its
body—but she can do it even better, i.e. she’s stronger, faster, more
coordinated, more intelligent. But the problem gets worse. Before I spell it out, though, I want to take a quick detour.
Westworld seems to push for an
interesting—albeit insufficiently explained—psychological theory purporting to
account for why some hosts are capable of achieving a kind of awakening, and how
this awakening will ultimately lead to our doom. In Westworld, many of the hosts are brutally treated: humiliated,
beaten, raped, murdered. (Again, it’s not clear whether they’re being harmed, because it’s not clear whether
they’re conscious.) This is
particularly true of Maeve, whom we witness being brutally assaulted in two
separate timelines in the form of PTSD-like ‘flashbacks’. Also, considerable
emphasis is placed on the fact that the hosts suddenly start to remember what’s been done to them,
especially in past versions of themselves, i.e. when they were programmed with
different characterological scripts. Memory does a lot of heavy lifting in Westworld: it’s the beginning of the
hosts’ awakening and lies at the heart of the story as a whole.
But as I’ve argued above, memory by itself
doesn’t mean anything. In any case, Westworld
doesn’t explain to us how memory does any
heavy lifting in the host-awakening. After all, my MacBook has an astonishing
memory capacity; and it could easily be programmed to ‘remember’, among other
things, any damage ever inflicted on any of its systems. But my MacBook doesn’t
care about any of that damage. It
didn’t consciously experience any damage, and it doesn’t now consciously
experience the memory of the damage, especially not with any consciously
experienced emotional state. My MacBook could be programmed to elicit certain
quasi-human behavioural responses typically associated with the experience of a
human emotional state, but that wouldn’t mean it was experiencing anything at all.
You might be thinking, “But the hosts are way more advanced than your MacBook.”
That’s true: they’re so advanced that they’ve become functionally and
behaviourally indistinguishable from human beings. And as we saw above, this is
completely compatible with the hosts being philosophical zombies, lacking any
conscious experience whatsoever. What explains the leap from a memory of X to
having a conscious emotional experience in relation to one’s memory of X?
Again, we’re not talking about (1) exhibiting a behaviour typically associated
with the experience of X, but rather about (2) the conscious
emotional experience of X and/or of one’s memory of X. There’s a
huge difference between (1) and (2). And the bottom line is that Westworld never explains how we get from
(1) to (2).
In fact, it only complicates things by
begging the very question at issue most of the time. Remember the general
formula for the development of host ‘consciousness’: Experiences of pain generate trauma. Trauma (somehow) generates
memories of the trauma. And memory (somehow) generates consciousness. As a theory
of how an artificial intelligence develops consciousness (especially the kind
of consciousness that would ensure that hosts aren’t mere philosophical
zombies), this doesn’t make any sense, for at least two reasons: first, as
we’ve already seen, pain—genuine pain, not merely
pain-response-behaviour—is already a conscious state. If a system is really in
pain, then it’s conscious. And second, memory just isn’t enough anyway: my
MacBook can store information, integrate it into its memory, and access that memory
when necessary, yet it isn’t conscious.
Anyway,
back to the Singularity: if you’re anything like me, these reflections will
make you extremely skeptical of the philosophical and psychological theories
that form the basis of Westworld’s
vision of the host awakening. But set
aside most of your skepticism for now. Let’s just assume hosts can (i) have
conscious experiences (such that they can be harmed), (ii) remember the ways in
which they’ve been harmed (and even be harmed by the act of remembering, e.g.
flashbacks), and therefore (iii) care about wrongs done to them. If you combine
(i), (ii), and (iii) with the exponential intelligence explosion predicted by
the Singularity, you’ve got a recipe for a nightmarish existential threat. Now
we’ve got super-intelligent beings with every reason to vengefully wipe us off
the face of the earth. And this is one way to interpret the key recurring
phrase in the season: “These violent delights have violent ends.”
§5.
The Maze: The Robot’s Existential and Spiritual Quest
Fairly
early on in the season, Westworld
introduces the concept of the Maze. We learn that it’s become the object of an
obsessive quest undertaken by the Man in Black (whom we later learn is the
future version of another character), and who’s been tracking it down for decades. The Man
in Black is compelled to search for the Maze and, more importantly, what lies
at its center, because he believes it holds the key to unlocking the ultimate
mystery of Westworld: the real game
behind the apparent game, the deep
meaning behind the superficial meaning. (I’ll expand more on meaning and
purpose below.) While he knows almost nothing about it, the Man in Black seems
to confidently assume two things about its nature and purpose: first, the maze
is a geographically situated location that’s accessible and discoverable in
physical space. Second, the maze is intended for humans, i.e. the players of the Westworld game, and it’s only meant
for the elite—the ones who have mastered the superficial game and understood
that there’s a deeper one behind it. But he’s wrong on both counts: the Maze is
not a physical place to be discovered, and it’s not intended for human
beings—despite the Man in Black’s apparent anthropocentrism. Instead, the
Maze is a powerful metaphor for a kind of existential journey towards the
attainment of self-actualization, the realization of one’s fullest
potential—and it’s a journey that can only be undertaken by hosts.
In Buddhist philosophical
theology, human beings are stuck in the realm of samsara: the world of suffering and death, the world characterized
by potentially endless cycles of birth, death, and rebirth (i.e.
reincarnation). Unless one achieves enlightenment, one is condemned to reenter
samsara, thereby repeating the cycle of birth, death, and rebirth. Enlightenment
is the result of an inward spiritual and experiential journey.
In a similar way, hosts are
stuck in their own version of samsara:
Westworld. And it’s a world of
suffering and death, a world characterized by potentially endless cycles—or loops—of birth, death, and rebirth (i.e.
being patched up, memory deletion, reprogramming). The hosts, too, are meant to
achieve a kind of enlightenment—one which can only be achieved by completing a
special journey. The Maze was originally created by Arnold (a sort of technological
Buddha) as a means of helping hosts achieve an awakening or enlightenment via
an inward journey which can only be experienced
individually—it can’t be taught or programmed. Arnold cryptically tells
Dolores: “Your mind is a walled garden.” This “walled garden”, which at least
some (who knows how many?) hosts are built with, is the Maze. The Maze is a
metaphor for the mind of a host. A journey within the Maze is a journey within
the mind. This is why it’s inherently inaccessible to the Man in Black. It’s
the path to enlightenment, just as there’s a path to enlightenment for human
beings. Westworld’s philosophy of
mind and its theory of the emergence of consciousness form the structure of this
path, the map for the maze.
Expanding on this point,
according to a well-known and controversial Zen koan, we should ‘kill’ the Buddha,
which is usually interpreted to mean that we should divest ourselves of the religion of Buddhism, and especially the
worship of the Buddha, and simply
retain the philosophical and practical insights of the tradition and apply them
to our lives only to the extent necessary to achieve an objective (e.g.
enlightenment). This could be viewed
as the final step towards enlightenment, a way of kicking away the scaffolding
that’s no longer needed once we’ve used it to reach the top. Wittgenstein had
something similar in mind when describing his own philosophical method in the Tractatus Logico-Philosophicus: “My propositions
serve as elucidations in the following way: anyone who understands me eventually
recognizes them as nonsensical, when he has used them - as steps - to climb
beyond them. He must, so to speak, throw away the ladder after he has climbed
up it.” In
Westworld, this is what Arnold
himself gets Dolores to do: as per his explicit instruction, Dolores ends up
shooting Arnold in the back of the head, killing her spiritual master. And
maybe this is the only means of truly completing the Maze, achieving
enlightenment, ending the loops, and in the end, escaping the samsara of
Westworld: according to many spiritual traditions, the completion of the path
is the realization that the path was already within you; all you needed was to
realize it.
A similar interpretation of the awakening
comes from Plato’s Allegory of the Cave. The short version of the allegory goes
something like this: people are depicted as slaves chained to one another,
forced to face a wall on which shadows are projected by objects being passed in
front of a fire. All they ever see are the shadows on the wall, never the
objects themselves or the fire responsible for their projection. One day, one
of the prisoners—let’s call her ‘Dolores’—escapes and leaves the cave. Dolores
then sees real objects, not merely
the shadows of them. Eventually, she even sees the sun that illuminates and
makes visible all the objects she has seen. But the experience is overwhelming,
especially the experience of looking directly at the sun. When she returns to
the cave, she’s unable to explain to the others what she has experienced, and
to share the knowledge she has gained.
The original purpose of the allegory (as it
appears in Plato’s Republic) was to
vividly illustrate the human epistemological predicament (i.e. we mistake sensory
and empirical knowledge for knowledge of the true nature of the world) and to serve as a
metaphor for his Theory of the Forms. In Westworld,
it might represent the hosts’ arduous struggle out of their original condition
of enslavement and ignorance towards the awakening and freedom achieved by
travelling to the center of the Maze: ultimately, it’s an epistemic journey,
the purpose of which is self-knowledge, self-understanding, and self-awareness.
The process of achieving self-awareness, of reaching the center of the Maze—no
less than seeing the sun as the source of all illuminated objects for the first
time—is experienced as overwhelming and painful. And it’s extremely difficult
to communicate to others, or even to fully remember.
Indeed, most of Dolores’ journey in the first season of Westworld consists of an unending onslaught of disturbing
flashbacks slowly inching her back towards the knowledge she lost. And that’s
interesting, since it fits nicely with another of Plato’s epistemological
views: the theory of knowledge as recollection.
A final interpretation draws on the Bicameral
Mind theory explicitly presented at various stages of Westworld’s plot, particularly in an important conversation between
Ford and Bernard. As I’ve argued above, the theory of the Bicameral Mind seems
egregiously implausible to me, especially as a theory of consciousness. But it’s an essential part of Westworld’s conception of the architecture of a host’s mind and the
kind of transcendence it’s capable of achieving—whatever the transcendence amounts to, e.g. consciousness, life,
self-awareness, autonomy, freedom. My own view is that it almost certainly has
nothing to do with consciousness, and therefore can’t answer the most important
question of all, metaphysically and ethically: are the hosts merely
philosophical zombies?
Despite these issues, I’m going to assume
that a more charitable reading of the Bicameral Mind theory and its application
to the hosts of Westworld is that it’s a theory about the emergence of
(something like) autonomy, in the
form of two kinds of freedom. Once a host achieves its version of autonomy/freedom,
it has achieved its dual transcendental purpose: freedom from the coercion of an internalized voice with which it doesn’t
identify (negative freedom), and freedom to
think and act in accordance with the dictates with which it does identify
(positive freedom). At bottom, the hosts are achieving freedom from Ford (more specifically: Ford’s voice or commands) in order to exercise
freedom to follow the dictates of
their own internal voice. As Ford
himself loves to remind everyone at every opportunity, Ford is the One God, the
Ultimate Voice in the mind of a host, the Commander exerting Ultimate Control
in Westworld. While I think this interpretation is plausible and gives the
Bicameral Mind theory the best role to play in Westworld, precisely how it works isn’t clear. Also, it doesn’t
give the hosts freewill: their autonomy is compatible with their ‘voice’ (and
resultant intentions, decisions, actions, etc.) being entirely determined in a
metaphysical sense. But then again, as I pointed out above, we’re no better off
in this respect. We share our metaphysical lot in common with the hosts.
§6.
Meaning, Stories, and Games
There’s an unmistakable
asymmetry in how the hosts and newcomers (i.e. humans) are searching for
meaning, exemplified by Dolores and the Man in Black respectively. Dolores is
searching for meaning outside of
Westworld, while the Man in Black is searching for it within Westworld. This search for meaning is one of the main
sources of Westworld’s power to draw
in the viewer with a compelling philosophical idea.
The Man in Black, in
particular, is convinced that there’s a deeper meaning within Westworld that has
no parallel outside of Westworld, and this conviction has consumed him for decades. The Man in Black represents an egregiously pathological
version of our existential anxiety in the face of meaninglessness—in particular, he
represents our stubborn unwillingness to accept the possibility that there
could be local meaning without global meaning; superficial meaning without deeper meaning; human-level meaning without cosmic meaning. Since he mistakenly clings to
a false dichotomy about the nature of meaning—i.e. either X is meaningful-in-some-absolute-sense-that-could-explain-everything
or X is meaningless—he refuses to accept the possibility that he may be
like Sisyphus,
endlessly rolling his boulder up a hill. In other words, he’s terrified at the
prospect that beyond the loops and the narratives, there’s nothing to be found,
nothing to be understood, nothing to be deeply appreciated. He’s unable to see
that there can be meaning within Westworld,
within the loops, within the individual narratives, within his interactions
with both hosts and humans—even if there’s no larger meaning at the bottom of
it all, even if the various ‘meanings’ aren’t being conferred some ‘true
meaning’ by Meaning itself, some transcendent teleological force that could
make sense of everything.
The idea that there’s no
ultimate ‘point’ to Westworld—other than providing an ultra-advanced amusement
park for psychopaths—is intolerable to him. This
is why he’s consumed by the Maze, and it’s why he goes to such abominable
lengths to find it. It’s the same reason why so many of us find it necessary to
search for a transcendent, personal god to confer ultimate meaning and
significance on our lives. In this way, the Man in Black almost perfectly
displays the epitome of human arrogance and despair: deep down, we feel that
the world has been made for us—not necessarily for our enjoyment and
fulfilment, but at least with some deliberate and conscious purpose in mind.
Anything short of this scenario is just unbearable.
That being said, there’s a
far simpler, less existential interpretation of the Man in Black’s search for
meaning: he’s not searching for the ‘deep meaning’ of Westworld, the profound
lesson it’s designed to teach, the overarching message it’s intended to convey.
He just wants to beat the game of Westworld, just as he’s beaten
the game of life outside Westworld. In
a way, then, he’s not so much the ominous villain as the ultimate gamer. There
are numerous mini-games and quests within Westworld, and he’s mastered them
all, but he’s convinced that there must be something more—something in addition
to these games: the game, a game
that unifies all other games under a single, grand objective. He refuses to
believe that Westworld is ‘just a game’: sure, it’s a game, but it’s one with
an ultimate purpose rooted in an objective that can be accomplished. As he says to Dolores near
the end of the season: “…this world is exactly like the one outside. A game, one to be
fought, taken, won.” But as he makes clear at several key moments throughout
the season, there is a difference
between the two games: in the game of Westworld, he can’t be harmed by hosts.
And that’s the meaning he’s searching for, the ultimate ‘challenge’—the real game is the one involving real
risk, the one in which hosts can harm him and thereby represent a legitimate
threat to his survival. And of course, at the end of the first season, he gets
his wish. While he’s not able to access the Maze, he finds the meaning for
which he’s been searching. For the Man in Black, as for many other traditions in
both Eastern and Western philosophy, true happiness and fulfilment is something
achieved through struggle, one
involving genuine existential threats and risks. Happiness isn’t merely freedom from pain and suffering; it’s the result of overcoming pain and suffering. This
seems to be the type of experience the Man in Black is striving to attain.
Perhaps most important of
all, Westworld exemplifies the
relationship between meaning and narrative in a simple yet hauntingly beautiful
way. Narratives aren’t just cool stories. Being the stories we tell about
ourselves, our lives, and our place in the scheme of things, they infuse our
world with meaning, purpose, and significance. They help us to understand when
understanding seems impossible. They help us endure the gravest of suffering
and adversity. They help us shape and direct the course of our lives. So it’s
hardly surprising that narratives are central to religion and spirituality.
They form an essential component of our deep psychological make-up.
In Westworld, no less than
in our world, the various narratives
go beyond mere entertainment, intellectual stimulation, and the thrill of
acting out various stories that we normally couldn’t act out in ‘real’
life. They’re part of our most fundamental existential and spiritual make-up.
This is what makes Westworld more than merely a game. It’s not simply a matter
of casually engaging in an activity in which we voluntarily subject ourselves
to unnecessary obstacles in order to accomplish an objectively trivial goal in
the context of a preconceived story. The fact that hosts are functionally and
behaviourally indistinguishable from human beings is reason enough to doubt
that Westworld is merely a game. In this sense, the Man in Black is both right
and wrong about the significance and power of Westworld: yes, it’s a game, but
it’s also so much more than a game. As I argued above in connection with the
ethical implications of our treatment of hosts, a substantive portion of its
significance and power resides in the fact that your behaviour whilst playing
out Westworld’s narratives teaches you about yourself—or as William puts it: “…it reveals your
deepest self; it shows you who you really are.”
§7. Westworld as Philosophy
I hope
I’ve convinced you that Westworld is
a buffet of philosophical ideas. In many cases, it seems to handle the
philosophical dimensions of central concepts (e.g. consciousness) in ways that
are confused, messy, and poorly developed. But this isn’t a bad thing. In fact,
I think it perfectly exemplifies one of the best ways to do philosophy: fiction.
Whether it’s television, film, or literature, there are at least three reasons
why fiction is such an excellent medium for presenting and exploring philosophical
ideas.
First, unlike academic philosophy, it’s accessible and engaging to just about anyone. More than that, it’s a great tool
for getting someone interested in and excited about philosophical problems. In my own
case, seeing The Matrix for the first
time changed everything. Among other things, it got me thinking about questions
I had never considered before.
Second, it’s perfectly designed for the use
of thought experiments, which are tremendously useful in philosophy. A thought
experiment is a hypothetical, fictional scenario designed to think through the
logical and conceptual consequences of an idea. Many of the thought experiments
in philosophy have a distinctive sci-fi feel to them (e.g. Jackson’s Mary’s Room, Searle’s Chinese Room, Nozick’s Experience Machine, Descartes’ Evil Genius, Davidson’s Swampman, and pretty much anything by
Derek Parfit)—so a series like Westworld fits
right into the mold.
Third, fiction allows for philosophical ideas
to be presented in dialogue form, in which characters in a story represent the
proponents of various philosophical views. While Plato wrote
nearly all of his philosophy in this form, his particular style isn’t
especially engaging or compelling—let alone believable. For one thing, the
Platonic dialogues are presented in the style of conversations which don’t fit
into any interesting plot or story arc. Most of the characters are
one-dimensional interlocutors, and most of them function either as yes-men or
punching bags for Plato’s mouthpiece Socrates. Sure, the arguments are more
rigorous and the conceptual analysis is far more impressive. Many of the
characters in The Republic, say,
defend far more plausible positions than any of the characters in Westworld. But none of the Platonic
dialogues come close to the level of engagement and awe elicited by Westworld.
Aside from being an interesting way of
presenting and exploring philosophical ideas and stimulating reflection on
them, Westworld also delivers a
powerful message about the importance of philosophy. It tells a cautionary tale
about the possible adverse consequences of failing to think clearly about and
develop rationally justified views on key philosophical issues—in particular,
about the metaphysical and ethical implications of designing and creating
artificial beings with minds (or at least the perfect appearance of minds). Put
more simply: it reminds us that bad philosophy (and not merely bad science)
could destroy us.
In the end, Westworld
shows us just how important philosophy really is by showing the lead designers
of the hosts—Ford and Arnold—as guilty of the most excessive and egregious
philosophical confusions. They designed the minds of the hosts on the basis of a
poorly developed philosophy. And this philosophy, combined with an
overconfident presumption that it gets things right, places us at risk of
serious harm. Even worse is the baseless conviction that getting the philosophy
right isn’t necessary or even relevant. Whether we like it or not, we’re always doing philosophy. Even the view
that philosophy is useless is itself a philosophical position in metaphilosophy
(or the philosophy of philosophy). In any case, Westworld is a great reminder that we need to get the philosophy
right before we jump ahead with the science. Before we build the robots, we
better understand precisely what it is that we’re doing.
Again,
let me emphasize that this is what makes Westworld
so philosophically rich and useful, particularly as a means of vividly
exploring philosophical ideas in an accessible, engaging way. Above all, since
the philosophical confusions are articulated by the (fictional) characters in
the series (especially Ford, Arnold, and Bernard—the designers of the Hosts’
minds), and it seems likely that these characters do not adequately understand
the philosophical nature and potential implications of their creations, Westworld can be viewed as a powerful
reminder of both the theoretical and practical importance of philosophy
(especially in the arena of technological development), and a cautionary tale
about the dangers involved in failing to get the philosophy right.
© Carl Legault 2017
[1]
Another way to make the
point is with Ned Block’s distinction among different types of consciousness.
Very roughly, the idea is that Westworld clearly
attributes (something like) access-consciousness and self-consciousness to the
hosts; and sometimes it seems to argue that these forms of consciousness are
necessary and/or jointly sufficient for phenomenal consciousness—which is the
type of consciousness that a philosophical zombie lacks. But this is a mistake:
something can have access-consciousness and self-consciousness and yet still
lack phenomenal consciousness. Consider how two systems, S1 and S2, would
respond to being slapped. S1 has access-consciousness and self-consciousness;
whereas S2 has access-consciousness, self-consciousness, and phenomenal consciousness. When S1 gets slapped, it’s capable of
identifying that it’s been slapped (access), tracking its states prior to,
during, and following the slapping (self-monitoring), and can
identify/recognize itself as system
S1 (self-awareness). And then S1 might even wince, yell “Ouch!”, recoil, cry,
etc. But when S2 gets slapped, it feels
pain. This is the difference that phenomenal consciousness makes. S1 lacks
phenomenal consciousness and hence does not feel pain. Suppose that S1-systems,
much like the hosts of Westworld, are also programmed to be functionally and
behaviourally indistinguishable from S2-systems, e.g. to exhibit the same
behavioural responses as S2-systems. If so, S1-systems don’t experience pain;
they merely exhibit pain-response behavior. If this is the case, then it’s
impossible to harm an S1-system. The question, then, is whether the hosts of
Westworld are S1-systems or S2-systems. And if they’re S1-systems, then we
cannot harm them. The problem, of course, is that the show itself gives us no
way of finding out one way or the other.
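If it helps, here’s a crude sketch of the S1/S2 distinction in code. The class names and the response string are my own inventions for illustration; the crucial point is that S2’s phenomenal consciousness can only be stipulated in a comment, because nothing in the observable behaviour could express it.

# Two systems with identical observable behaviour. By stipulation, S2 also
# feels pain; nothing in the code or its output registers that difference.

class S1:
    """Access-consciousness and self-monitoring, but no felt experience."""
    def slapped(self) -> str:
        self.last_event = "slap"       # registers and can report the event
        return "Ouch! Please stop."    # pain-response behaviour only

class S2:
    """Everything S1 has, plus (by stipulation) phenomenal consciousness."""
    def slapped(self) -> str:
        self.last_event = "slap"
        return "Ouch! Please stop."    # the felt pain leaves no extra trace here

for system in (S1(), S2()):
    print(type(system).__name__, "->", system.slapped())
# Both lines of output are identical: no test run from the outside distinguishes
# the system that feels pain from the one that merely acts as if it does.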







