Monday, June 8, 2015
Artificial Intelligence and Conscious Attention--Jesse Prinz's AIR theory of Consciousness
Jesse Prinz has argued that consciousness is best understood as mid-level attention.
Low-level representers in the brain are neurons that perform
simple discrimination tasks such as edge or color detection. They are activated early in the processing of
stimuli from the sensory periphery.
(Image: a poorly taken, copyright-violating picture from Michael Gazzaniga's Cognitive Neuroscience textbook.)
The activation of a horizontal edge detector, by itself, doesn’t
constitute organized awareness of the object, or even the edge.
Neuron complexes in human brains are also capable of very
high-level, abstract representation. In
a famous study, “Invariant visual representation by single neurons in the human brain,” Quiroga, Reddy, Kreiman, Koch, and Fried discovered the so-called
Halle Berry neuron using sensitive detectors inserted into different
regions of the brains of some test subjects.
This neuron’s activity was correlated with activation patterns for a
wide range of Halle Berry images.
What’s really interesting here is that this neuron became
active with quite varied photos and line drawings of Halle Berry, from
different angles, in different lighting, in a Cat Woman costume, and even,
remarkably, in response to the text “Halle Berry.” That
is, this neuron plays a role in the firing patterns for a highly abstract
concept of Halle Berry.
Prinz is interested in consciousness conceived as mid-level
representational attention that lies somewhere between these two extremes. “Consciousness is intermediate level
representation. Consciousness represents
whole objects, rich with surface details, located in depth, and presented from
a particular point of view.” During the
real time moments of phenomenal awareness, various representations come to take
up our attention in the visual field. Prinz
argues that, “Consciousness arises when we attend, and attention makes
information available to working memory. Consciousness does not depend on
storage in working memory, and, indeed, the states we are conscious of cannot
be adequately stored.”
When you look at a Necker cube, you can first be aware of
the lower left square as the leading face.
Then you can switch your
awareness to seeing the upper right square as the leading face. So your attention has shifted from one
representation to another.
That is the level at which Prinz locates the mercurial
notion of consciousness, and where he tries to develop a predictive theory based on the
empirical evidence. And Prinz goes to
some lengths to argue that consciousness in this sense is not what gets moved into
working memory; it is not necessarily the contents that have become available to
the global workspace, such as when they are stored for later access. These contents may or may not be accessible
later for recall. But at the moment they
are the contents of mind, part of the flow and movement of attention.
Here I’m not interested in the question of whether Prinz
provides us with the best theory of human consciousness, but I am interested in
what light his view can shed on the AI project.
I’m particularly interested in Prinz here because it’s arguable that we
already have artificial systems that are capable, more or less, of doing the
low level and the high level representations described above. Edge detection, color detection, simple
feature detection in a “visual” field are relatively simple tasks for
machines. And processing at a high level
of conceptual abstraction has been accomplished in some cases. IBM’s Jeopardy playing system Watson
successfully answered clues such as, “To push one of these paper products is to
stretch established limits,” answer:
envelope. “Tickets aren’t needed
for this “event,” a black hole’s boundary from which matter can’t escape,”
answer: event horizon. “A thief, or the bent part of an arm,”
answer: crook. Even Google search algorithms do a remarkable
job of divining the intentions behind our searches, excluding thousands of
possible interpretations of our search strings that would be accurate to the
letters, but have nothing to do with what we are interested in.
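To make the low-level half of that claim concrete, here is a minimal sketch of the kind of edge detection machines do easily: a Sobel filter run over a tiny synthetic grayscale image. Everything in it is invented for illustration; real vision pipelines use optimized libraries rather than hand-rolled loops like this.

    # A minimal edge-detection sketch: a Sobel filter over a synthetic image.
    import numpy as np

    def sobel_edges(img):
        """Return the gradient magnitude for a 2D grayscale array."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # responds to left/right contrast
        ky = kx.T                                                          # responds to up/down contrast
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                patch = img[i:i + 3, j:j + 3]
                out[i, j] = np.hypot(np.sum(patch * kx), np.sum(patch * ky))
        return out

    # A 10x10 "image" that is dark on the left and bright on the right.
    image = np.zeros((10, 10))
    image[:, 5:] = 1.0
    print(sobel_edges(image).max(axis=0))  # strong responses sit right at the dark/bright boundary

The strong responses line up with the boundary, and that, by itself, is all a low-level edge detector has to deliver.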
So think about this. Simple
feature detection isn’t a problem. And we
are on our way to some different kinds of high level conceptual abstraction. Long term storage for further analysis also
isn’t a problem for machines. That’s one
of the things that machines already do better than us. But what Prinz has put his finger on is the
ephemeral movement of attention from moment to moment in awareness. During the course of writing this piece, I’ve
been multi-tasking, which I shouldn’t have.
I’ve been answering emails, sorting out calendar scheduling, making
plans to get kids from school, and so on.
And now I’m trying to recall what all I’ve been thinking about over the
last hour. Lots of it is available to me
now. But there were, no doubt, a lot
of mental contents, a lot of random thoughts, that came and went without leaving much of a trace. I say, “no doubt,” because if they didn’t
go into memory, if they didn’t become targets of substantial focus, then even
though I had them then I won’t be able to bring them back now. And I say, “no doubt,” because when I am
attending to my conscious experience now, from moment to moment, and I’m really
concentrating on just this point, I realize that I’m aware of the feeling of
the clicking keyboard keys under my fingers, then I notice the music I’ve got
playing in the background, then I glance at my email tab, and so on. That is, my moments are filled with
miscellaneous contents. I’ve made those
particular ones into a bigger deal in my brain because I just wrote about them
in a blog post. But in lots of our
conscious lives, maybe most, those
contents come and go, like hummingbirds flitting in and out of the scene. And once they are gone, they are gone.
Now we can ask the questions: Do we want an AI to have that? Do we need an AI to have that? Would it serve any purpose?
Bottom Up Attention
That capacity in us served an evolutionary purpose. At any given time, there are countless zombie
agents, low level neuronal complexes, that are doing discriminatory work on
information from the sensory periphery and from other neural structures. The outputs of those discriminators may or
may not end up being the subject of conscious attention. In many cases, those contents become the
focus of attention from the bottom up.
A lower-level system deems the content important enough to call your
attention to it, as it were. So when
your car doesn’t sound right when it’s starting up, or when a friend’s face
reveals that he’s emotionally troubled, it jumps to your attention. Your brain is adept at scanning your
environment for causes for alarm and then thrusting them into the spotlight of
attention for action.
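Here is a toy sketch of that bottom-up dynamic, with everything (detector names, salience scores, the threshold) made up for illustration: lots of low-level “zombie agent” detectors emit signals, and only the ones that cross a threshold of concern get promoted into a small attention buffer.

    # Bottom-up attention as a threshold filter over many low-level detectors (toy model).
    import random

    THRESHOLD = 0.7          # the minimal "threshold of concern"
    ATTENTION_CAPACITY = 3   # the spotlight is narrow

    detectors = ["engine_sound", "friend_face", "keyboard_feel",
                 "background_music", "email_tab"]

    def read_detectors():
        # Stand-in for the outputs of low-level discriminators.
        return {name: random.random() for name in detectors}

    signals = read_detectors()
    salient = sorted(((s, name) for name, s in signals.items() if s >= THRESHOLD), reverse=True)
    attended = [name for _, name in salient][:ATTENTION_CAPACITY]
    print("promoted to attention:", attended)

Most of what the detectors register never makes it into the buffer, which mirrors the point above: the outputs of those discriminators may or may not end up as the subject of conscious attention.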
Top Down Attention
But we are able to
direct the spotlight as well. We can
focus our attention and sustain mental awareness on a task or some phenomenon in order to
suss out details, make extended plans, anticipate problems, model out
possible future scenarios, and so on. You can go to work finding Waldo.
(A linked source gives a more detailed account of the evolutionary functions of consciousness.)
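And here is the same toy model with a top-down twist: an executive goal re-weights the low-level signals so that task-relevant content wins the competition for the spotlight, Waldo-search style. The feature names and weights are, again, invented for illustration.

    # Top-down attention as goal-driven re-weighting of bottom-up signals (toy model).
    bottom_up = {"red_white_stripes": 0.35, "blue_hat": 0.8,
                 "beach_ball": 0.9, "striped_scarf": 0.4}

    goal_bias = {"red_white_stripes": 3.0, "striped_scarf": 2.0}  # features relevant to finding Waldo

    def attended_item(signals, bias):
        weighted = {k: v * bias.get(k, 1.0) for k, v in signals.items()}
        return max(weighted, key=weighted.get)

    print(attended_item(bottom_up, goal_bias))  # the goal pulls attention toward red_white_stripes

Without the bias, the beach ball would win on raw salience; with it, the goal steers the spotlight.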
Given what we saw above about the difference in Prinz
between conscious attention and short- and long-term memory, conscious
attention can be seen as a sort of screening process. A lot of ordinary phenomenal consciousness is
the result of low-level monitoring systems crossing a minimal threshold of
concern: this, right here, is important enough to take a closer look at.
Part of the reason that the window of our conscious
attention is temporally brief and spatially finite is that resources are
limited. Resources were limited when
evolution was building the system. It’s
kludged up from parts and systems that were re-adapted from other functions. There was no long view or deliberate
planning in the process, just the slow
pruning of mutation branches on the evolutionary tree. And that pruning modifies the gene pool according to
the rates at which organisms, equipped as they are, manage to meet survival
challenges.
Kludge: Consider two
different ways to work on a car. You
could take it apart, analyze the systems, plan, make modifications, build new
parts, and then reassemble the car. While
the car is taken apart and while you are building new parts, it doesn’t
function. It’s just a pile of parts on
the shop floor.
But imagine that the car is in a race, and there’s a bin of
simple replacement parts on board, some only slightly different than the ones
currently in the car, and modifications to the car must be made while the car
is racing around the track with the other cars.
The car has to keep going at all times, or it’s out of the race for
good. Furthermore, no one gets to choose
which parts get pulled out of the bin and put into the car. That’s a kludge.
Resources are also limited because evolution built a system
that does triage. The cognitive systems
just have to be good enough to keep the organism alive long enough to bear its
young, and possibly make a positive contribution toward their survival. The monitoring systems that are keeping track
of its environment just need to catch the deadly threats, and catch them only
far enough in advance to save its ass.
It’s not allowed the luxury of long term, substantial contemplation of
one topic or many to the exclusion of all others. Furthermore, calories are limited. Only so many can be scrounged up during the
course of the day. So only so many can
be dedicated to the relatively costly expenditure of billions of active neural
cells.
The evolutionary functions of consciousness for us give us
some insight into whether it might be useful or dangerous in an AI. First, AIs can be better planned, better
designed than evolution’s brains. An AI
need not be confined to triage functions, although we can imagine modeling
human brains to some extent and using them to keep watch on bigger, more
complex systems where more can go wrong than human operators could keep track
of. An AI might run an airport better,
or a subway system, or a power grid, where hundreds or thousands or more
subsystems need to be monitored for problems.
The success of self-driving Google cars already suggests what could be
possible with widespread implementation on the street and highway
systems. So bottom-up monitoring of this kind could clearly be useful in an AI system.
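As a rough sketch of what that bottom-up monitoring might look like in a big engineered system, imagine scanning thousands of subsystem readings and surfacing only the ones that deviate sharply from their baselines. The subsystem names, baselines, and tolerance below are all invented for illustration.

    # Toy bottom-up monitoring: surface only the anomalous subsystems to "attention."
    import random

    baselines = {"substation_%d" % i: 50.0 for i in range(1000)}  # nominal load, arbitrary units

    def read_telemetry():
        readings = {name: random.gauss(50.0, 2.0) for name in baselines}
        readings["substation_42"] = 95.0  # simulate a fault
        return readings

    def alerts(readings, tolerance=10.0):
        # Promote only out-of-range subsystems to the operator's attention.
        return {name: value for name, value in readings.items()
                if abs(value - baselines[name]) > tolerance}

    print(alerts(read_telemetry()))  # typically just the faulty substation

The operator, or the AI, only ever attends to a handful of items, even though thousands are being watched.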
Top-down, executive-directed control of the spotlight of
attention, and the deliberate investment of processing resources into a
representational complex, with longer-term planning and goal-directed activity
driving the attention, could clearly be useful for an AI system too. “Hal, we want you to find a cure for
cancer. Here are several hundred
thousand journal articles.”
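A crude sketch of that kind of top-down resource allocation: given a goal, score a pile of documents for relevance and spend deep processing only on the top few. The goal terms, documents, and scoring rule are all made up for illustration; a real system would use far richer representations than keyword overlap.

    # Toy top-down allocation: a goal decides which documents get "attention."
    goal = {"cancer", "tumor", "therapy", "cure"}

    articles = {
        "doc_001": "novel tumor suppressor pathway in breast cancer therapy",
        "doc_002": "migration patterns of arctic terns",
        "doc_003": "immunotherapy outcomes for late stage cancer patients",
    }

    def relevance(text):
        return len(goal & set(text.split()))

    budget = 2  # only this many articles get deep processing
    queue = sorted(articles, key=lambda d: relevance(articles[d]), reverse=True)[:budget]
    print("process first:", queue)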
The looming question, of course, is this: what about
the dangers of building mid-level attention into an AI? Bostrom’s Superintelligence has been looming
in the back of my mind through this whole post.
It’s a big topic. I’ll save that
for a future post, or 3 or 10 or 25.
1 comment:
“First, AIs can be better planned, better designed than evolution’s brains.”
I think you mean something stronger than what you're saying here, given that brains were neither planned nor designed. The crappiest neural net that I coded as an undergrad research project was "better" (viz. "more") planned and designed than any brain wrought by natural evolution.
On the other hand, if you're saying "better planned/designed" to mean "better at doing the kinds of things that brains do", you're making a not uncontroversial statement (one I pretty much agree with, mind you).