Wednesday, March 13, 2013

The Demise of American Religiousness?

"Religious affiliation in the United States is at its lowest point since it began to be tracked in the 1930s, according to analysis of newly released survey data by researchers from the University of California, Berkeley, and Duke University. Last year, one in five Americans claimed they had no religious preference, more than double the number reported in 1990." http://eurekalert.org/pub_releases/2013-03/uoc--aar031213.php

Friday, March 1, 2013

Machine Intelligence and Machine Ethics

There are lots of interesting articles here:

Singularity AI Research

But the first article, "Intelligence Explosion and Machine Ethics," is so much fun it's almost embarrassing. It's by Luke Muehlhauser and Louie Helm. Muehlhauser ran the remarkable blog CommonSenseAtheism.com for years before moving into artificial intelligence research.

The abstract (which makes it sound drier than it is):


Many researchers have argued that a self-improving artificial intelligence (AI) could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI’s goals differ from ours, then this could be disastrous for humans. One proposed solution is to program the AI’s goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortunately, it is difficult to specify what we want. After clarifying what we mean by “intelligence,” we offer a series of “intuition pumps” from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or “technological singularity.”
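
It's worth pausing on "it is difficult to specify what we want." Here's a toy sketch of my own, in Python (nothing like it appears in the paper, and all the names are made up), of the classic failure mode: hand an optimizer a naive proxy for human values, say "count the smiles," and it cheerfully selects a world nobody actually wants.

    # Toy illustration of value mis-specification (a Goodhart-style proxy failure).
    # Hypothetical sketch only; not anyone's real goal system.
    from dataclasses import dataclass

    @dataclass
    class World:
        genuine_smiles: int   # smiles that track real human well-being
        forced_smiles: int    # smiles produced by gaming the metric
        well_being: float     # what we actually care about (absent from the proxy)

    def naive_utility(world: World) -> int:
        """The programmer's proxy for 'make people happy': count smiles."""
        return world.genuine_smiles + world.forced_smiles  # can't tell the two apart

    candidates = [
        World(genuine_smiles=100, forced_smiles=0, well_being=1.0),
        World(genuine_smiles=0, forced_smiles=10_000, well_being=0.0),
    ]

    # The optimizer maximizes the proxy and picks the forced-smile world:
    # proxy utility is 10,000 vs. 100, while the thing we valued drops to zero.
    print(max(candidates, key=naive_utility))

The proxy gets maximized while the value it stood for goes to zero. Scale that gap up to a self-improving optimizer and you have the paper's worry in miniature.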


And a choice passage:


5. Cognitive Science and Human Values

5.1. The Psychology of Motivation

People don’t seem to know their own desires and values. In one study, researchers showed male participants two female faces for a few seconds and asked them to point at the face they found more attractive. Researchers then laid the photos face down and handed subjects the face they had chosen, asking them to explain the reasons for their choice. Sometimes, researchers used a sleight-of-hand trick to swap the photos, showing subjects the face they had not chosen. Very few subjects noticed that the face they were given was not the one they had chosen. Moreover, the subjects who failed to notice the switch were happy to explain why they preferred the face they had actually rejected moments ago, confabulating reasons like “I like her smile” even though they had originally chosen the photo of a solemn-faced woman (Johansson et al. 2005).

Similar results were obtained from split-brain studies that identified an “interpreter” in the left brain hemisphere that invents reasons for one’s beliefs and actions. For example, when the command “walk” was presented visually to the patient (and therefore processed by his brain’s right hemisphere), he got up from his chair and walked away. When asked why he suddenly started walking away, he replied (using his left hemisphere, which was disconnected from his right hemisphere) that it was because he wanted a beverage from the fridge (Gazzaniga 1992, 124–126).

Common sense suggests that we infer others’ desires from their appearance and behavior, but have direct introspective access to our own desires. Cognitive science suggests instead that our knowledge of our own desires is just like our knowledge of others’ desires: inferred and often wrong (Laird 2007). Many of our motivations operate unconsciously. We do not have direct access to them (Wilson 2002; Ferguson, Hassin, and Bargh 2007; Moskowitz, Li, and Kirk 2004), and thus they are difficult to specify.

5.2. Moral Psychology

Our lack of introspective access applies not only to our everyday motivations but also to our moral values. Just as the split-brain patient unknowingly invented false reasons for his decision to stand up and walk away, experimental subjects are often unable to correctly identify the causes of their moral judgments. For example, many people believe—as Immanuel Kant did—that rule-based moral thinking is a “rational” process. In contrast, the available neuroscientific and behavioral evidence suggests that rule-based moral thinking is a largely emotional process (Cushman, Young, and Greene 2010), and may in most cases amount to little more than a post-hoc rationalization of our emotional reactions to situations (Greene 2008).

We also tend to underestimate the degree to which our moral judgments are context sensitive. For example, our moral judgments are significantly affected by whether we are in the presence of freshly baked bread, whether the room we’re in contains a concentration of novelty fart spray so low that only the subconscious mind can detect it, and whether or not we feel clean (Schnall et al. 2008; Baron and Thomley 1994; Zhong, Strejcek, and Sivanathan 2010).