Wednesday, June 3, 2015

Turing and Machine Minds

In 1950, the mathematician Alan M. Turing proposed a test for machine intelligence in his paper "Computing Machinery and Intelligence."  If a human interrogator could not reliably distinguish between the responses of a real human being and those of a machine built to hold conversations, then we would have no reason, other than prejudice, to deny that the machine was in fact conscious and thinking.  http://orium.pw/paper/turingai.pdf

I won’t debate the merits or sufficiency of the Turing Test here.  But I will use it to introduce some clarifications into the AI discussion.  Turing thought that if a machine could do some of the things we do, like hold conversations, that would be an adequate indicator of the presence of a mind.  But we need to get clear on the goal in building an artificial intelligence.  Human minds are what we have to work with as a model, but not everything about them is worth replicating or modeling.  For example, we are highly prone to confirmation bias, we have loss aversion, and we can hold only about seven items, give or take (roughly a phone number), in short-term working memory.  Being able to participate in a conversation would be an impressive feat, given the subtleties and vagaries of natural language.  But it’s a rather organic, idiosyncratic, and anthropocentric task.  And we might invest substantial effort and resources into replicating contingent, philosophically pointless attributes of the human mind instead of fully exploring some of the possibilities of a new, artificial mind of a different sort.  Japanese researchers, for example, have invested enormous amounts of money and effort into replicating subtle human facial expressions on robots.  Interesting for parties, maybe, but we shouldn’t get lost in side tributaries as we move up the river to the source of mind.

One of the standard objections to Turing’s thesis is this:  but a machine/artificial intelligence system can’t _______________ or doesn’t have _______________, where we insert one of the following:

a. make mistakes  (Trivial to build in, as the sketch following this list shows, but inessential and unimportant.)
b. have emotions  (Inessential, and philosophically and practically uninteresting.)
c. fall in love  (Yawn.)
d. care/want  (Maybe this is important.  Perhaps having goals is essential and interesting.  No one has yet shown that this cannot be built into such a system.  More on goals later.)
e. freedom  (Depends on what you mean by freedom.  Short answer: there don’t appear to be any substantial a priori reasons why an artificial system cannot be built that has “freedom” in the sense that’s meaningful and interesting in humans.  See Hume on free will: http://www.iep.utm.edu/freewill/)
f. produce original ideas  (What does original mean?  A new synthesis of old concepts, contents, forms, styles?  That’s easy.  Watson, IBM’s Jeopardy!-dominating system, is being used to create new recipes and lots of innovative, original solutions to problems.)
g. creativity  (What does this mean?  Produce original new ideas?  See above.  Complex systems such as Watson have emergent properties; they are able to do lots of new things that their creators and programmers did not foresee.)
h. do anything that it’s not programmed to do.  (“Programmed” is outdated talk here.  More later on connectionist systems.  Can sophisticated AI programs do unpredictable things now?  Yes.  Can they now do things that the designers didn’t anticipate?  Yes.  Will they do more in the future as the technology advances?  Yes.) 
i. feel pleasure or pain  (I’ll concede, for the moment, that building an artificial system with this capacity is a long way off technologically.  And I’ll concede that it’s a very interesting philosophical question.  I won’t concede that building this capacity in is impossible in principle.  And we must also ask: why is it important?  Why do we need an AI to have this capacity?)
j. intelligence
k. consciousness
l. understand
m. qualitative or phenomenal states  (See Tononi, Koch, and McDermott)
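
On point (a), it’s worth seeing just how trivial “building in mistakes” is.  Here is a minimal sketch in Python, with an invented error model and hypothetical names; wrap any correct responder in something like this and it will err like a distracted human:

```python
import random

# A toy "mistake injector": wrap any correct answer source and occasionally
# corrupt its output the way a distracted human might. That a dozen lines
# suffice is the point: "machines can't make mistakes" is trivially false.

TYPO_SWAPS = {"a": "s", "e": "r", "i": "o", "o": "p", "t": "y"}

def humanize(answer: str, error_rate: float = 0.1) -> str:
    """Return the answer unchanged most of the time; sometimes add a typo."""
    if not answer or random.random() > error_rate:
        return answer
    i = random.randrange(len(answer))
    ch = answer[i].lower()
    return answer[:i] + TYPO_SWAPS.get(ch, ch) + answer[i + 1:]

if __name__ == "__main__":
    for _ in range(5):
        print(humanize("The capital of France is Paris.", error_rate=0.5))
```

The triviality is the point: making mistakes demands no deep architecture, so the objection tells us nothing about minds.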

I think objections a-h miss the point entirely.  For a-h, the denial that a system can be built with the attribute is either simply false, will be proven false, or concerns an attribute that isn’t interesting or important enough to warrant the attention.  Objections i through m, however, are interesting, and there’s a lot more to be said about them.  For each, we will need more than a simple denial without argument.  We need an argument with substantial, principled, non-prejudicial reasons for thinking that these capacities are beyond the reach of technology.  (In general, history should have taught us to be very skeptical of grumbling naysaying of the form “This new-fangled technology will never be able to X.”)  One of the things I’m going to be doing in this blog in the future is cashing out in much more detail what the terms intelligence, consciousness, understanding, and phenomenal states should be taken to mean in the context of the AI project, and working out the details of what we might be able to build.

But more importantly, I think the list of typical objections to Turing’s thesis raises this question:  just what do we want one of these things to do?  Maybe someone wants to simulate a human mind to a high degree of precision.  I can imagine a number of interesting reasons to do that.  Maybe we want to model the human neural system to understand how it works.  Maybe we want ultimately to be able to replicate or even transfer a human consciousness into a medium that doesn’t have such a short expiration date.  Maybe we want to build helper robots that are very much like us and that understand us well.  Maybe a very close approximation of a human mind, with some suitable tweaks, could serve as a good, tireless, optimally effective therapist.  (See ELIZA, the early AI experiment with a therapy program.)
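
For flavor, here is a minimal sketch of the keyword-and-reflection trick behind that early therapy program (the rules below are invented for illustration, not the original script):

```python
import re

# A toy ELIZA-style responder: match a keyword pattern, then reflect the
# user's own words back as a question. The illusion of understanding comes
# entirely from pattern matching; there is no model of meaning underneath.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel anxious about my job"))  # Why do you feel anxious about your job?
    print(respond("My brother ignores me"))        # Tell me more about your brother ignores you.
```

The apparent empathy is pure pattern matching, which is why such programs can impress in conversation while understanding nothing.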

But the human brain is a kludge.  It’s a messy, organic amalgam of a lot of different modules and functions that evolved under one set of circumstances and later got repurposed for doing other things.  The path from point A to point B, where B is the set of cognitive capacities we now have, is convoluted and circuitous, full of fits and starts, false starts, tradeoffs, unintended consequences, byproducts, and the like.

A partial list of endemic cognitive fuckups in humans from Kahneman and Tversky (and me):  Confirmation Bias, Sunk Cost Fallacy, Asch Effect, Availability Heuristic, Motivated Reasoning, Hyperactive Agency Detection, Supernaturalism, Promiscuous Teleology, Faulty Causal Theorizing, Representativeness Heuristic, Planning Fallacy, Loss Aversion, Ignoring Base Rates, Magical Thinking, and Anchoring Effect. 
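
To make one of these concrete, here is ignoring base rates worked out with invented numbers.  A highly accurate test for a rare condition still produces mostly false alarms, which is precisely the arithmetic our intuitions botch:

```python
# Base-rate neglect, worked out with hypothetical numbers. Suppose a disease
# affects 1 in 1,000 people, and a test catches 99% of cases but also
# false-alarms on 5% of healthy people. Most people guess
# P(disease | positive) is near 99%; Bayes says it is under 2%, because the
# 999 healthy people generate far more false positives than the 1 sick
# person generates true positives.

prior = 0.001           # P(disease): the base rate
sensitivity = 0.99      # P(positive | disease)
false_positive = 0.05   # P(positive | healthy)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.019, about 2%
```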


So with all of that said, again, what do we want an AI to do?  I don’t want one to make any of the mistakes on the list just above.  And I think we shouldn’t even be talking about mistakes, emotions, falling in love, caring or wanting, freedom, or feeling pleasure or pain.  What these systems show incredible promise at is understanding complex, challenging problems and then devising remarkable and valuable solutions to them.  Watson, the Jeopardy!-dominating system built by IBM, has been put to use devising new recipes.  Chef Watson is able to interact with would-be chefs, compile a list of preferred flavors, textures, or ingredients, and then create new recipes, some of which are creative, surprising, and quite good.  The tamarind-cabbage slaw with crispy onions is, I hear, excellent.  But within this seemingly frivolous application of some extremely sophisticated technology, there is a more important suggestion.  Imagine Watson’s ingenuity put to work in a genetics lab, in a cancer research center, in an engineering firm building a new bridge, or at the National Oceanic and Atmospheric Administration predicting the formation and movement of hurricanes.  I submit that building a system that can grasp our biggest problems, fold in all of the essential variables, and create solutions is the most important goal we should have.  And we should be injecting huge amounts of our resources into that pursuit.
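
To be clear, nothing below is Watson’s actual machinery.  It’s a toy sketch, with invented ingredients and pairing data, of the bare recombination step, which is cheap to mechanize; what a system like Watson adds is an enormous knowledge base and judgment about which combinations are worth keeping:

```python
import itertools

# A toy illustration of "new synthesis of old contents": recombine known
# ingredients under simple pairing constraints and a user's stated
# preferences. The pairing table and pantry are invented for illustration.

PAIRS_WELL = {
    ("tamarind", "cabbage"), ("cabbage", "onion"),
    ("tamarind", "onion"), ("lime", "cabbage"), ("lime", "tamarind"),
}

def compatible(combo):
    """Every pair of ingredients in the combo must be a known good pairing."""
    return all(
        (a, b) in PAIRS_WELL or (b, a) in PAIRS_WELL
        for a, b in itertools.combinations(combo, 2)
    )

def invent_dishes(pantry, preferences, size=3):
    """Yield novel ingredient combos that respect pairings and preferences."""
    for combo in itertools.combinations(sorted(pantry), size):
        if compatible(combo) and preferences & set(combo):
            yield combo

pantry = {"tamarind", "cabbage", "onion", "lime", "chocolate"}
for dish in invent_dishes(pantry, preferences={"tamarind"}):
    print(dish)  # e.g. ('cabbage', 'onion', 'tamarind')
```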
