System 1 | System 2
Unconscious reasoning | Conscious reasoning
Judgments based on intuition | Judgments based on critical examination
Processes information quickly | Processes information slowly
Hypothetical reasoning | Logical reasoning
Large capacity | Small capacity
Prominent in animals and humans | Prominent only in humans
Unrelated to working memory | Related to working memory
Operates effortlessly and automatically | Operates with effort and control
Unintentional thinking | Intentional thinking
Influenced by experiences, emotions, and memories | Influenced by facts, logic, and evidence
Can be overridden by System 2 | Used when System 1 fails to form a logical/acceptable conclusion
Prominent since human origins | Developed over time
Includes recognition, perception, orientation, etc. | Includes rule following, comparisons, weighing of options, etc.
Friday, June 19, 2015
Why Would an AI System Need Phenomenal Consciousness?
In my last post on Jesse Prinz, we learned about the
distinction between immediate, phenomenal awareness and the more
deliberative consciousness that operates on the contents
of short-term and longer-term memory. From
moment to moment in our experience, there are mental contents in our
awareness. Not all of those contents
make it into the global workspace and become available to reflective,
deliberative thought, memory, or other cognitive functions. That is, there are contents in phenomenal
awareness that are experienced and then simply lost. They cease to be anything to you, or part of
the continuous narrative of experience that you reconstruct in later moments,
because they never reach the neural processes that would capture them and
make them available to you at later times.
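As a toy illustration of that bottleneck, here is a minimal sketch in Python. The class, the salience scores, and the threshold are all invented for illustration; this is not a claim about how any real cognitive architecture implements the global workspace.

```python
from collections import deque

class GlobalWorkspace:
    """Toy model of the broadcast bottleneck: every percept is briefly
    'in awareness', but only some get broadcast and stored."""

    def __init__(self, salience_threshold=0.7):
        self.salience_threshold = salience_threshold
        self.memory = deque(maxlen=100)  # contents available to later recall

    def experience(self, percept, salience):
        # Every percept is momentarily in phenomenal awareness...
        if salience >= self.salience_threshold:
            # ...but only salient contents reach the workspace, where
            # memory and deliberation can pick them up.
            self.memory.append(percept)
            return "broadcast"
        # Sub-threshold contents are experienced, then simply lost.
        return "lost"

ws = GlobalWorkspace()
print(ws.experience("red flash in periphery", salience=0.3))  # lost
print(ws.experience("charging predator", salience=0.9))       # broadcast
```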
We also know that these contents of phenomenal consciousness
are most closely associated with the qualitative feels from our sensory
periphery. That is, phenomenal awareness
is filled with the smells, tastes, colors, feels, and sounds of our sensory
inputs. It is filled
with what some philosophers call qualia.
Let me add to this account and see what progress we can make
on the question of building a conscious AI system.
Daniel Kahneman won the Nobel Prize for the work he did with Amos Tversky
uncovering what has come to be called dual process theory in the human mind. We possess a set of quick, sloppy cognitive
functions called System 1, and a more careful, slower, more deliberative set of
functions called System 2.
In short, System 1 trades accuracy for speed, and System 2 trades speed for a
reduction in errors.
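To make the division concrete, here is a rough sketch of how such a dual-process architecture might look in code. The reflex table, confidence scores, and threshold below are invented for illustration, not drawn from any actual cognitive model:

```python
import time

def system1(stimulus):
    """Fast, heuristic pattern match: cheap but error-prone."""
    # Hypothetical lookup table standing in for learned associations.
    reflexes = {
        "looming object": ("duck", 0.95),
        "long shape in grass": ("freeze", 0.55),
    }
    return reflexes.get(stimulus, (None, 0.0))

def system2(stimulus):
    """Slow, deliberative analysis: expensive but more accurate."""
    time.sleep(0.1)  # stand-in for costly step-by-step reasoning
    return "inspect carefully"

def decide(stimulus, confidence_threshold=0.8):
    action, confidence = system1(stimulus)
    if action is not None and confidence >= confidence_threshold:
        return action  # fast path wins when the heuristic is confident
    return system2(stimulus)  # otherwise escalate to deliberation

print(decide("looming object"))       # 'duck' via System 1
print(decide("long shape in grass"))  # escalates to System 2
```

The design choice worth noticing is that the slow path only runs when the fast path is unavailable or unconfident, which mirrors the table's point that System 2 is used when System 1 fails to produce an acceptable conclusion.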
The evolutionary influences that led to this bifurcation are
fairly widely agreed upon. System 1 gets
us out of difficulties when action has to be taken immediately, so we don't get
crushed by a falling boulder, fall from the edge of a precipice, get eaten by a
charging predator, or get smacked in the head by a flying object. But when time and circumstance allow for
rational deliberation, we can think things through, make longer-term plans,
strategize, problem solve, and so on.
An AI system, depending on its purpose, need not be
similarly constrained. An AI system may
not need to have both sets of functions.
And the medium of construction of an AI system may not require tradeoffs
to such an extent. Signals are conducted
along neural fibers at most at around 100 to 150 meters per second. By the time the information about the
baseball that is flying at you gets through your optic nerve, through the V1
visual cortex, and up to the prefrontal lobe for serious contemplation, the
ball has already hit you in the head. Signals
in silicon circuitry propagate at a substantial fraction of the speed of light. We may not have to give up accuracy for speed
to such an extent. Evolution favored
false positives over false negatives in the construction of many systems. It's better to mistake a boulder for a bear,
as they say, than a bear for a boulder.
A better-safe-than-sorry strategy is more favorable to your contribution
to the gene pool in many cases.
We need not give up accuracy for speed with AI systems, and we need not construct
them to make the systematic errors we do.
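The better-safe-than-sorry logic is just expected-cost minimization under asymmetric losses. A minimal sketch, with made-up costs, shows why a designer (or evolution) should tolerate many false alarms when a miss is a hundred times costlier:

```python
def should_flee(p_bear, cost_false_alarm=1.0, cost_missed_bear=100.0):
    """Flee when the expected cost of staying exceeds the expected
    cost of fleeing. The costs are illustrative numbers only."""
    expected_cost_stay = p_bear * cost_missed_bear        # miss a real bear
    expected_cost_flee = (1 - p_bear) * cost_false_alarm  # flee a boulder
    return expected_cost_stay > expected_cost_flee

# With a 100:1 cost asymmetry, the break-even point is p = 1/101,
# so fleeing is already rational at barely 1% certainty of a bear:
for p in (0.005, 0.02, 0.5):
    print(f"p(bear)={p:.3f} -> flee: {should_flee(p)}")
```

An AI system with different cost structures, or with enough speed to gather more evidence before acting, could set that threshold very differently and avoid the systematic bias.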
The neural processes that are monitoring the multitude of
inputs from my sensory periphery are hidden from the view of my conscious
awareness. The motor neurons that fire,
the sodium ions that traverse the cell membranes, and the neurotransmitters that
cross the synaptic gaps when I move my arm are not events that I can see or
detect in any fashion as neural events.
I experience them as the sensation
of my arm moving. From my perspective,
moving my arm feels one way. But the
neurochemical events that are physically responsible are not available to me
as neurochemical events. A particular
amalgam of neurochemical events, from my perspective, tastes like sweetness, or
hurts like a pin prick, or looks like magenta.
It would appear that evolution stumbled upon this sort of condensed, shorthand
monitoring system to make fast work of categorizing certain classes of
phenomenal experience for quick reference and response. If the physical system in humans is capable
of producing qualia that are experienceable from the subject's point of view (it's
important to note that whether qualia are even real things is a hotly debated
question), then presumably a physical AI system could be built that
generates them too. Not even the
fiercest epiphenomenalist or modern property dualist denies mind/brain
dependence. But a legitimate question
is: do we want or need to build an AI system with them? What would be the purpose, aside from
intellectual curiosity, of building qualia into an AI system? If AI systems can be better designed than the
systems that evolution built, and if AI systems need not be constrained by the
tradeoffs, processing speed limitations, or other compromises that led to the
particular character of human consciousness, then why put them in there?
1 comment:
There's no such thing as "unconscious reasoning" because when one is unconscious one is "out of it." They meant "subconscious reasoning."