Tuesday, June 2, 2015
Building Self-Aware Machines
The public mood toward the prospect of artificial intelligence is dark. Increasingly, people fear the results of creating an intelligence whose abilities will far exceed our own and whose goals are not compatible with ours. See Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies for a good summary of those arguments. I think resistance is a mistake (and futile), and that we should be actively striving toward the construction of artificial intelligence.
When we ask “Can a machine be conscious?,” I believe we often miss several important distinctions. With regard to the AI project, we would be better off distinguishing at least between qualitative/phenomenal states, exterior self-modeling, interior self-modeling, information processing, attention, sentience, executive top-down control, self-awareness, and so on. Once we make these distinctions, it becomes clear that we have already created systems with some of these capacities, that others are not far off, and that still others present the biggest challenges to the project. Here I will focus on just two, following Drew McDermott: interior and exterior self-modeling.
A cognitive system has a self-model if it has the capacity to represent, acknowledge, or take account of itself as an object in the world among other objects. Exterior self-modeling requires treating the self solely as a physical, spatial-temporal object among other objects. So you can easily locate yourself spatially in a room, and you have a representation of where you are in relation to your mother’s house, or perhaps to the Eiffel Tower. You can also easily locate yourself temporally. You represent Napoleon as a 19th-century French emperor, and you are aware that the segment of time you occupy comes after the segment of time he occupied. Children swinging from one bar to another on the playground are employing an exterior self-model, as is a ground squirrel running back to its burrow.
Exterior self-modeling is relatively easy to build into an artificial system compared to many other tasks facing the AI project. Your phone is technologically advanced enough to locate itself in space relative to other objects with its GPS system. I built a CNC (Computer Numerical Control) cutting machine in my garage that I “zero” out when I start it up. I designate a location in a three-dimensional coordinate system as (0, 0, 0) for the X, Y, and Z axes, and the machine then keeps track of where it is in relation to that point as it cuts. When it’s finished, it returns to (0, 0, 0). The system knows where it is in space, at least in the very small segment of space that it is capable of representing (about 36" × 24" × 5").
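To make the idea concrete, here is a minimal sketch, in Python, of the kind of exterior self-model such a machine maintains. The class, its method names, and the bounds check are my illustration, not any real controller’s firmware; all it shows is a system locating itself within the small region of space it can represent.

```python
# A minimal sketch of an exterior self-model: a system that tracks its own
# position relative to a designated origin, as a CNC machine does after zeroing.
# The names and envelope dimensions are illustrative assumptions.

class ExteriorSelfModel:
    # Work envelope in inches, matching the machine described above.
    ENVELOPE = (36.0, 24.0, 5.0)

    def __init__(self):
        # Zeroing: the current location is designated (0, 0, 0).
        self.position = [0.0, 0.0, 0.0]

    def move(self, dx, dy, dz):
        """Update the self-model as the tool head moves."""
        proposed = [p + d for p, d in zip(self.position, (dx, dy, dz))]
        # The model only covers the small region of space it can represent.
        if any(not (0.0 <= p <= limit)
               for p, limit in zip(proposed, self.ENVELOPE)):
            raise ValueError("move would leave the representable envelope")
        self.position = proposed

    def return_to_zero(self):
        """Go back to the designated origin when the job is finished."""
        self.move(*(-p for p in self.position))

machine = ExteriorSelfModel()
machine.move(10.0, 5.0, 1.5)   # the machine "knows" it is at (10, 5, 1.5)
machine.return_to_zero()       # and how to get back to (0, 0, 0)
```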
Interior self-modeling is the capacity to represent yourself as an information-processing, epistemic, representational agent. That is, a system has an interior self-model if it represents the state of its own informational, cognitive capacities. Loosely, it is knowing what you know and knowing what you don’t know. Such a system is able to locate the state of its own information about the world within a range of possible states. When you recognize that watching too much Fox News might be contributing to your being negative about President Obama, you are employing an interior self-model. When you resolve not to make a decision about which car to buy until you’ve done some more research, or when you wait until after the debates to decide which candidate to vote for, you are exercising your interior self-model. You have located yourself as a thinking, believing, judging agent within a range of possible information states. Making decisions requires information. Making good decisions requires being able to assess how much information you have, how good it is, and how much more (or better) information you need in order to decide within the tolerances of your margins of error.
So in order to endow an artificial cognitive system with an interior self-model, we must build it to model itself as an information system, much as we’d build it to model itself in space and time. Hypothetically, a system can have no information, or it can have all of the relevant information. And the information it has can be poor quality, with a high likelihood of being false, or high quality, with a high likelihood of being true. Those two dimensions are like a spatial-temporal framework, and the system must be able to locate its own information state within that range of possibilities. Then, if we want it to make good decisions, the system must be able to recognize the difference between the state it is in and the minimally acceptable information state it should be in. Finally, ideally, we’d build it with the tools to close that gap.
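Here is a minimal sketch of that two-dimensional information space, again in Python. The field names and the numeric thresholds are assumptions of mine for illustration; the point is just that the system represents its own state as a point in the space and can measure the gap between that point and the minimally acceptable one.

```python
# A minimal sketch of an interior self-model: the system locates its own
# information state along two dimensions -- how much information it has
# (coverage) and how likely that information is to be true (reliability) --
# and compares that state to the minimum it needs before deciding.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class InformationState:
    coverage: float     # 0.0 = no information, 1.0 = all relevant information
    reliability: float  # 0.0 = likely false, 1.0 = likely true

@dataclass
class InteriorSelfModel:
    state: InformationState
    required: InformationState  # minimally acceptable state for this decision

    def fit_to_decide(self) -> bool:
        """Am I in a good enough information state to decide?"""
        return (self.state.coverage >= self.required.coverage and
                self.state.reliability >= self.required.reliability)

    def gap(self) -> InformationState:
        """How far the current state falls short of the required one."""
        return InformationState(
            coverage=max(0.0, self.required.coverage - self.state.coverage),
            reliability=max(0.0, self.required.reliability - self.state.reliability),
        )

model = InteriorSelfModel(
    state=InformationState(coverage=0.4, reliability=0.9),
    required=InformationState(coverage=0.7, reliability=0.8),
)
if not model.fit_to_decide():
    shortfall = model.gap()  # coverage is 0.3 short; reliability is fine
    # ...trigger whatever tools close the gap (search, ask, wait)...
```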
Imagine a doctor who is presented with a patient with an unfamiliar set of symptoms. Recognizing that she doesn’t have enough information to diagnose the problem, she does a literature search so that she can responsibly address it. Now imagine an artificial system with reliable decision heuristics that recognizes the adequacy or inadequacy of its information base, and then does a medical literature review that is far more comprehensive, consistent, and discerning than a human doctor is capable of. At the first level, our AI system needs to be able to compile and process information that will produce a decision. But at the second level, our AI system must be able to judge its own fitness for making that decision and rectify the information-state shortcoming if there is one.
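A compressed sketch of that two-level loop, with the doctor’s literature search as the gap-closing step. The functions decide, fitness, and gather_more_evidence are hypothetical stand-ins, and the evidence scores are invented; the structure is what matters: the second level refuses to let the first level decide until the evidence base is adequate.

```python
# A sketch of the two-level architecture described above.
# decide(), fitness(), and gather_more_evidence() are hypothetical stand-ins.

def decide(evidence):
    """First level: compile and process information into a decision."""
    # e.g., pick the hypothesis best supported by the evidence so far
    return max(evidence, key=evidence.get)

def fitness(evidence, threshold=3):
    """Second level: judge whether the evidence base is adequate to decide."""
    return len(evidence) >= threshold

def gather_more_evidence(evidence):
    """Close the gap, e.g. by a comprehensive literature review."""
    evidence["literature: condition C"] = 0.9  # stand-in for real retrieval
    return evidence

evidence = {"symptom A suggests X": 0.6, "symptom B suggests Y": 0.5}
while not fitness(evidence):           # not yet fit to decide
    evidence = gather_more_evidence(evidence)
diagnosis = decide(evidence)           # decide only once the state is adequate
```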
Representing itself as an epistemic agent in this fashion strikes me as
one of the most important and interesting ways to flesh out the notion of being
“self-aware” that is often brought up when we ask the question “Can a machine
be conscious?”
McDermott, Drew. “Artificial Intelligence and Consciousness.” In The Cambridge Handbook of Consciousness, edited by Zelazo, Moscovitch, and Thompson, 117–150. 2007. Also available at: http://www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf