Saturday, April 4, 2009

The Forbidden Conclusion

Human beings are psychologically incapable of simultaneously holding two contradictory propositions in mind and having an attitude of assent or belief towards both of them. That is, we can't believe an outright contradiction. Nor do we have much voluntary control over which claims we believe. We can't, through the force of our will, start believing something new or stop believing something that we previously assented to. Try it. Get yourself to actually believe that Barack Obama is not the president of the United States, that 2 + 2 = 5, or that Matthew McConaughey is not the sexiest man alive. Ok, just kidding on the last one.

"But people believe contradictory, irrational things all the time!" will be the objection. Here's what's really going on. An occurrent belief is one that is in conscious awareness now. In many cases, we have an occurrent belief that contradicts some other claim that we believe or believed, but that other belief is not occurrent. There are all sorts of things buried in my mind that I am not currently thinking about that could conflict with something that I am thinking about, but I just haven't made the connection and seen the problem. In other cases, a pair of beliefs we have is only implicitly contradictory. The two claims are not an outright contradiction, but were we to supply some other beliefs and draw out some of their logical implications, a genuine contradiction would emerge. But again, if I haven't thought it all out, I won't have an occurrent contradictory belief pair.

There are other sources of cognitive tension. Some groups of propositions are probabilistically contradictory--one asserts what another declares to be exceedingly improbable. If I won the lottery three days in a row, it would strike me as exceedingly improbable that the lottery was really a fair, million to one game.
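The arithmetic behind that intuition is easy to check. Here is a quick sketch; the million-to-one odds and the assumption of three independent draws are simply taken from the example above:

```python
# Probability of winning a fair, million-to-one lottery three days in a row,
# assuming each day's draw is independent of the others.
p_single = 1 / 1_000_000
p_three = p_single ** 3  # independent events: probabilities multiply

# On the order of 1e-18: one chance in a quintillion. Far more improbable
# than the alternative hypothesis that the game isn't fair.
print(p_three)
```

The point of the comparison is that believing both "the lottery is a fair million-to-one game" and "I won it three days running by chance" puts the two claims in probabilistic tension, even though they are not a formal contradiction.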

Now some relevant points from neuroscience and philosophy of mind. The current picture of the world that occupies consciousness from moment to moment is a construction where what I am seeing, hearing, feeling, or thinking now is threaded together with my memories of the recent and not so recent past. What I am sensing now with the keyboard under my fingers is not the same as what I was feeling 10 minutes ago, but consciousness builds a continuous narrative that bridges those sensations and makes causal and psychological sense of the transitions. There was an entity, the I, who was in the kitchen then, and after that, I walked up the stairs and sat down at the computer. A world of continuous, causally regular objects that is also inhabited by the continuous subject is constructed (unbeknownst to me) out of the meaningless raw feels of thought and sensation.

This fleeting, perpetually forward-rolling window of consciousness is discontent, as we have seen, with a picture of the world and the self that doesn't make sense. It abhors contradictions in the story of the world and the self that it creates. Concerning the world, this disposition manifests as a set of expectations that events and objects are regular, predictable, and sensible. The aversion to contradiction creates a forbidden conclusion about the self. It is profoundly disturbing for us to think of ourselves as irrational. Seeing contradictions in the words and actions of others comes easily--consider how starkly some conflict between two claims from your husband or girlfriend leapt out at you the last time you were in an argument. Now consider how it felt when he or she accused you of first saying X was true and then contradicting yourself 5 minutes later by saying not-X. When you heard those two claims put together, your mind scrambled for an account that would diminish the seeming conflict. You quickly found an explanation whereby saying both of those things makes perfect sense. See? It is profoundly difficult, if not impossible, to acknowledge that one's own beliefs are at deep logical odds with each other. "I am irrational" is the forbidden conclusion that we cannot face, even when it is plainly obvious.

Experimentally, the refusal to accept it displays itself in revisions, confabulations, memory editing, and misrepresentations. In one study, subjects were asked to pick the most attractive person from a stack of pictures. Then the subject's choice was swapped for another picture without their knowledge. The researchers then asked the subjects why they picked that picture instead of the others. Without missing a beat, and without even knowing they did so, subjects promptly confabulated a justification for why the new picture was of their most attractive original pick. The fact they cannot accept: I made a mistake--the picture I picked is not the most attractive one in the stack. Johansson, P., Hall, L., Sikström, S., & Olsson, A. (2005). Failure to detect mismatches between intention and outcome in a simple decision task. Science, 310(5745), 116-119.

In another study, high school students were tested to determine their attitude about a topic. Then a confederate in the study discussed the topic with them and subtly effected an attitude change in them. The subjects were assessed again, and it was found that, as a result of the confederate, they had a different view about the topic. But when asked to recall their view from before, they misremembered their original position so as to make it consistent with their new one. Unknowingly, they made recall errors that rendered the new view the one that they had believed all along. Goethals, G. R., & Reckman, R. F. (1973). The perception of consistency in attitudes. Journal of Experimental Social Psychology, 9(6), 491-501.

In a now famous study by Nisbett and Wilson, shoppers were asked to evaluate various clothing items for quality. Multiple trials and randomization revealed that no matter what the arrangement of the articles of clothing, the subjects had a bias for the right-hand items. When they were asked to justify their choice, however, they would construct an explanation on the basis of various features of the item. That is, subjects tended to pick the right-hand item no matter which one was put there, and then they would make up a story about why it was the best one. Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231-259.

Other studies show that when people have a strong conviction that a claim is true, they will heavily filter the evidence to make it fit. They will accept evidence that appears to confirm their view readily and with little scrutiny, but they will subject disconfirming evidence to high levels of criticism and analysis. So when they encounter evidence contrary to what they believe, it tends to actually reinforce their view and polarize them further into it. Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098-2109.

Ziva Kunda, a psychologist at Princeton, describes two impulses that are at odds in us. We have a motivation to be accurate--to form correct views about the world. But we also possess substantial motivation to arrive at particular favored conclusions. She says, "There is considerable evidence that people are more likely to arrive at conclusions that they want to arrive at, but their ability to do so is constrained by their ability to construct seemingly reasonable justifications for these conclusions." Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.

What's the relevance of all of this to religious beliefs? Nowhere in our lives is the powerful conflict between a set of desires or psychological needs on one side and the goal of having an accurate, coherent, justified description of the world on the other more evident than it is about God. We want God to exist. We want the religious doctrine of our childhood to be the correct picture of reality. From the numbers, it is nearly impossible for us to shake off the transcendental temptation. But we can't accept about ourselves that this belief is the product of some invisible forces in our nature, or that it is driven more by desire than by reason. So we go through logical gymnastics and contortions of reason to fabricate "seemingly reasonable justifications." If we find ourselves believing it, we can't help but construct a back story that renders that belief a reasonable one. If we didn't, we'd have to face the forbidden conclusion. (Thanks to Randy Mayes for the idea and the title.)


Teleprompter said...

Another excellent writing.

Matt McCormick said...

Thanks dude. I've been thinking about this one for a while. MM

M. Tully said...


I had often wondered why abandoning theism was so easy for me and not others (I knew I was missing something but for the life of me I couldn't point to it). The studies you cite and your relating them to religious associations really help to make that clear. I was raised in a religious tradition, but I was also raised in a tradition that valued truth and honesty.

In fact, it was only when a conflict between the two arose that I ever began to question either (it was a short journey once begun).

I agree with Teleprompter, I think this one is excellent and I think it deserves its own chapter.

"How much do you value truth (or honesty or reality or ...)?"

Luke said...

Great stuff. Thanks for the citations. Citations citations citations!

Ketan said...

Instinctively, I had an idea of why people still stick to their beliefs, but by giving those examples, you really made things so clear. The results of the experiments you cited almost seem impossible--humans can't be that dishonest with themselves (!)--except that they have to be true; otherwise I'd have to draw the forbidden conclusion ;) Really nice article. Thanks, again!

Bryan Goodrich said...

I really enjoyed the selection of articles. It provides a nice gateway into some of these human foibles of which we tend to remain unaware.

Now, I know this is my mathematics background, but one thing I rarely see brought up is the fact that humans tend to be linear thinkers. Stochastic processes, nonlinear dynamics, and vague values are not intuitive to humans, at least not without a whole lot of training. What gets me is that we rarely see these in our common way of perceiving things.

Take the idea of inconsistent beliefs. Logically, we could easily throw out the excluded middle and work with any number of many-valued logics (e.g., some forms of trivalent logics are used in quantum mechanics or fuzzy logic to capture "vague" meanings). The question is, do humans actually think in binary or in something else? I would find it rather hard to believe we actually operate like a binary computer processing "yes" and "no" of some sort. We can have inconsistent beliefs because we weigh beliefs differently (by significance or risk, say), and our justifications vary and, as research shows, can be rather ad hoc and completely insufficient to the actual reality of the situation.
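The many-valued idea can be made concrete with strong Kleene three-valued logic, in which a third value stands for "unknown." The following is only an illustrative sketch; the numeric encoding (1 for true, 0 for unknown, -1 for false) is a choice made for this example, not anything from the discussion above:

```python
# Strong Kleene three-valued logic, with truth values encoded as
# 1 (true), 0 (unknown), -1 (false) so that AND is min and OR is max.

def k_not(a):
    return -a

def k_and(a, b):
    return min(a, b)

def k_or(a, b):
    return max(a, b)

T, U, F = 1, 0, -1

# In classical logic "p and not-p" is always false. With an unknown
# value, the conjunction can itself come out unknown rather than false.
print(k_and(U, k_not(U)))  # unknown (0), not false
print(k_and(T, k_not(T)))  # -1: still a flat contradiction
```

The interesting case is the first one: if the truth value of p is genuinely unsettled, then so is the truth value of "p and not-p," which is one formal way of modeling how a thinker can carry unresolved tension without assenting to an outright falsehood.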

Do you know, in any research on the philosophy of mind, epistemology and neuroscience, of research or theories about how we formulate beliefs, ideas and reasoning that operate upon a more robust and dynamic framework from, say, classical logic? I'm aware of these kinds of approaches in other sciences, but the brain isn't my forte. I thought I would ask.

Richard T said...

Clearly, you never had an Edinburgh grandmother who could reconcile any number of contradictory beliefs by the use of the magic word nevertheless. She was thus able to manipulate your behaviour according to her wishes.

Matt McCormick said...

Bryan, recent research on this question has focused on connectionist networks as models for human neural tissue. Paul Churchland and others have some really interesting work about what a belief is when it forms across the distributed weighted nodes of a parallel processing network.
Good question.


Bryan Goodrich said...


Thanks, I have heard of Churchland and some of the work in that area. Do you happen to know what a network model of the brain would signify a belief to be (ontologically)? Or at least an idea in that direction? Then, and only then, would we really be able to compare that entity with one that we claim may be inconsistent, and how the mind handles it.

Matt McCormick said...

Paul Churchland and the connectionists are arguing, reasonably I think, that what we call a belief is actually a set of activation potentials distributed across a neural network that behaves like a connectionist system. The term "belief," I've been arguing for a while, just needs to be jettisoned. It maps as well onto the empirical information we have about the human nervous system as "demon" maps onto the modern germ theory of disease. That is, "belief" as we use it just doesn't seem to be a natural kind that lines up neatly with anything that we find in the world.

Back to your question: If we adopt a parallel, distributed processing model to describe minds, as we should, and if we think of beliefs as sets of activation potentials across a neural network, then it's a lot easier to reconcile what we know empirically about the brain with people's verbal and physical behavior. And seeing a mind as built up with these activation potentials (which morph and develop over time) fits nicely with some of the multi-valued or fuzzy logic points you were making earlier. Brain functions don't map cleanly onto Aristotelian logic, as you suggested. Our language and our folk psychology are still behind the curve on all this, however.
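The distributed-representation picture can be sketched in a few lines of code. This is a toy illustration, not Churchland's actual models; the weights and inputs are arbitrary numbers chosen for the example:

```python
import math

def activate(inputs, weights, bias):
    """One node: weighted sum of inputs squashed through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A tiny two-node "network": the same stimulus produces a graded,
# distributed pattern of activation across the nodes, rather than a
# single stored yes/no sentence-like token.
stimulus = [0.9, 0.1, 0.4]
node_a = activate(stimulus, [1.5, -0.5, 0.2], bias=-0.3)
node_b = activate(stimulus, [-0.8, 1.1, 0.9], bias=0.1)

print(round(node_a, 3), round(node_b, 3))  # graded values in (0, 1)
```

On this picture a "belief" corresponds to a whole pattern of graded activations like (node_a, node_b, ...), which is why it resists being carved into the discrete true/false propositions of folk psychology.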

I'll get flak for this, but the eliminative materialist program in philosophy of mind is the right direction. There really aren't any such things as beliefs, minds, consciousness, will, etc. Or at least when the dust settles from what neuroscience is doing right now, what those terms will end up meaning will be radically different than what common sense tells us, just like what happened with "heat," "phlogiston," "demons," "caloric," and so on.

All of this has incredibly interesting implications for religious belief of course.


Ketan said...

Hi, Matt!

I'm very tempted to point out that what you referred to as "ACTIVATION potential" is very likely "ACTION potential". The action potential is a characteristic fundamental to excitable tissues--nervous and muscle tissue. It means a propagated disturbance in the potential difference across and along a cell membrane. Well, of course, the exact terminology doesn't make much difference as long as the basic idea gets conveyed. TC.

Ketan said...

Matt, I don't disagree at all with what you've openly speculated to be the nature of what we call beliefs, will, emotions, etc. I'm no expert in psychology/neurology, but what we call the "mind" is not a structure, but a complex FUNCTION of the human brain. In light of that fact, no wonder the mind would come to be understood more as a mesh of complex neural networks bathed in chemicals (neurotransmitters and ions) than as an abstract, incomprehensible entity. But where I disagree is that these findings will have implications ONLY for religious beliefs. To think of it, I'd long ago drawn the very same (nihilistic--lack of free will, beliefs being nothing but a play of chemicals, all pointing to just one thing--lack of extrinsic purpose to life) conclusions that you are waiting for neurologists to announce some day.

I'd concede that that realization was one of the factors in my considering the nonexistence of God. But that made me more of a deist rather than an out and out atheist. Embracing atheism is more about the courage to accept what one is convinced to be the truth rather than being convinced of the truth (which of course has been the subject of numerous of your posts, and more so the previous one). I'm only trying to say that "such" facts will not just affect those who are religious, but also those who're totally irreligious. Believe me, for me it was much more difficult to accept the possibility of a lack of purpose to life and of genuine free will than to accept that of a supreme manager and creator of life. TC.

Matt McCormick said...

TC, I'll assume that you're not merely tempted to point out the mistake, but you're actually pointing it out. And you're right. It's "action" not "activation." That was composed in haste.

There's a pattern of people misunderstanding a simple point here. If I claim that a study, or argument, or experiment has an implication for religious beliefs, that clearly does not imply that it ONLY has an implication for religious beliefs. I did not say the latter. Look, if I say that winning the lottery is a way to get rich, it isn't an objection to come back with, "But you can get rich through Internet marketing too!" The two claims are obviously compatible.

Having taught logic for decades, I don't use the term "only" lightly or carelessly unless it's a typo. And in the case in question I didn't use it at all.



Ketan said...

I'll be honest on two counts here--I was pretty sure of the action potential thing. Just didn't want to sound snobbish.

The second thing is harder to be honest about--I thought your comment about "religious implications" had an element of sadistic pleasure, as in the pleasure one gets on knowing they were right and their rival wrong, while totally forgetting in the process how much pain the truth could have caused to the one learning the truth de novo. "Sadistic" may not be the most appropriate adjective here, and excuse me for an inability to think of a better one. My reluctance to put this suspicion in perspective should be obvious--I'd be assuming too much about your nature as a person from a mere statement, and accusing someone of deriving a sadistic pleasure is something very serious, and hence I wanted to avoid it. But I realized, because of your pointing it out, that in the process I inadvertently ended up charging you with something as much, if not more, grave (at least to a logician)--of using "only" loosely. I think I made the mistake of reading too much into the concluding lines about religious implications, when possibly you were stating things in a matter-of-fact manner, without any emotion. Sorry if you felt hurt, and even otherwise for assuming that statements in personal matters like purpose of life and liberty (free will) cannot be made dispassionately. TC.

Ketan said...

Once again, two things more. My name is not "TC" :) . It's merely an abbreviation I use for "take care". You can call me "Ketan".

Second, I'm no longer a deist. I turned an atheist, the moment I realized that the complexity of the Universe was no reason for an intelligent creator to have created it.

One last thing, which you may not answer if you find it too personal--why don't you include anything remotely emotional/personal in your blog? It is after all a blog! I for one would be very curious about your intellectual journey from whatever you were to begin with to this rigorous rationalist (being atheist being one of the many consequences of it). Again, I've taken a liberty of dwelling on something personal to you, which I hope doesn't backfire (upset you). TC.

Teleprompter said...

Wow! I have really learned a lot from all of these comments. Philosophy of mind intrigues me.

I agree with Ketan that I have had many restless moments considering a few of the implications under discussion here.

This issue is probably the most difficult thing I am facing as a relatively new (within the last year) deconvert from Christianity.

What is meaning? What is purpose? What is motivation? What is responsibility?

Perhaps these are the wrong questions. Perhaps I have insufficient information. Nevertheless, I eagerly look forward to trying to answer these questions (or any others which may arise) more explicitly.

Bryan Goodrich said...


Yeah, that sounds about what I recall seeing. I have to disagree with part of your conclusion, however. Eliminative reductions don't seem to make much sense if the manifest entity has an efficacious nature.

(e.g., a rainbow is often given as an example of a valid elimination because a rainbow has no substantial existence beyond the reduced--what I'll call structural--substratum which qualifies the term. On the other hand, people like Searle, with whom I agree on this matter, do argue that things like consciousness or intentionality have emergent properties qualified by the very reductions that make the term precise, analogously to how we do not eliminate tables or water for the fact that the properties that make them what they are exist entirely in terms of the reduction or reduced structure.)

Your conclusion is that the term "belief" doesn't match what our reductive scientific investigations indicate it to be. That simply means our concept is inaccurate, needs refinement and we need a new theory about beliefs. An elimination, as you espoused here, sounds like we're throwing the baby out with the bathwater. The real difference is a difference in description, level of analysis and what I think can be described analogous to structures.

I think any meaningful reductionist programme should make clear when and why something should be eliminated, but most meaningful reductions--especially in the sciences--are not eliminative. I would call them symmetrical. Symmetric, in this sense, because the scientific analysis qualifies the emergent descriptions and properties we talk about at a wholly higher level of description.

(e.g., we talk about pistons in car engines, and solidity of tables and objects, qua engines and objects, while qualifying those meanings based on the science of their reduced structures and physics, which may depend upon our understanding of unobservables as described by, say, quantum mechanics.)

The simple reason I caution against eliminative reductions is that it is the same bias we see when people say that "the natural sciences are more pure than the social sciences" when, in point of fact, you will never study economics in terms of physics, chemistry, or biology, even though certain facets of behavior are strongly correlated with and described by such physical factors. The social sciences are no less "pure" than the natural sciences; it is just a different kind of "critter" to deal with, just as in mathematics we deal with "abstract critters" different from the kind, say, biology deals with. It is a difference in content, but not some different ontological kind. There's also a good cartoon that jokes how mathematics is the only real pure science, far removed from even physics!

I won't run off on that tangent, but there is no fear of introducing any mystical "mind" or social factors which "transcend" reality in any meaningful way. It is simply the way we deal with those objects as those objects and not their component parts--just as in the examples parenthesized above. I'll simply add that this is where I reference a lot about structures, as I have already, because the model of the reduced "level" and the higher "level" do share properties, and we can analyze and inform each of the structures by what we discover about one or the other. That would be made possible by some kind of homomorphism between the two structures. Of course, it requires a more precise definition of structure in this ontological sense, what is meant by levels, and how to construct and justify said morphisms. However, those are all things needing to be qualified that go far beyond the scope of this comment.

M. Tully said...


I'll reiterate that I don't know squat about philosophy, but you'll really get flak for asserting that human action is about brain states?

Fascinating. You and Eric got me reading Quine, and the most striking thing I'm getting from it is: why is this guy arguing so hard for what the evidence dictates (then I remind myself he was writing decades ago)?

I guess I have a hard time understanding why people view neuroscience so differently from chemistry or physics. Granted, it is a younger specialty, but with that comes the advantage of being able to build on an already existing, well-developed framework.

Originally I felt surprised that I hadn't come across Quine as a philosopher of science. The more I have read, I no longer feel surprised. He wasn't a philosopher of science; he was a scientist of philosophy.

So now it's decades later, with confirming data building up day after day, and you're still trying to justify it.

O.K., I officially rescind my previous conviction that the philosophers' purpose is only to come up with the right questions to ask. After those of us in the data collection world find the answers to the questions, philosophers then must explain it to humanity. Upon reviewing the data, that latter function demonstrates itself to be much more difficult.

Thank goodness you guys are there (my end is much more fun).

Luke said...

Matt, are you on reddit? If so, I want to friend you. If not, sign up real quick and I will friend you. :)

Jon said...

Considering the first sentence, you're almost right. "Split-brain" patients--those who have had their corpus callosum cut--tend to have P on one hemisphere while having ~P on the other. A good example is "I'm male" on the right hemisphere and "I'm female" on the other, in some experiments.

You're right if we consider people in those cases to have become two people in one body after the operation.

Anonymous said...

"Human beings are psychologically incapable of simultaneously holding two contradictory propositions in mind at once and having an attitude of assent or belief towards both of them"

I think this may be false...

"Kripke invites us to imagine a French, monolingual boy, Pierre, who believes the following: “Londres est jolie.” (“London is beautiful.”) Pierre moves to London without realising that London = Londres. He then learns English the same way a child would learn the language, that is, not by translating words from French to English. Pierre learns the name “London” from the unattractive part of the city he lives in, so he comes to believe that London is not beautiful. If Kripke’s account is correct, Pierre now believes both that London is beautiful and that London is not beautiful. This cannot be explained by coreferring names having different semantic properties. According to Kripke, this shows that attributing additional semantic properties to names will not explain what it is supposed to explain."

Ketan said...

Anonymous, the contradiction you cited could be explained by one simple reason--the boy, in the first place, had no basis to believe "Londres est jolie". What mental effort did he put in before drawing that conclusion? Had he tried to look up on a map where Londres and London were, he'd have been left with only one conclusion--London (=Londres) is NOT beautiful (as far as his experience would be concerned).

Extending this analogy to the question of existence or nonexistence of God, and its nature, looking up a map would be analogous to taking a detached view of one's conclusions. Meaning, trying to look at the Universe (more so the world around) as if from some faraway galaxy, rather than from "within the self". I just hope my last sentence makes sense, if not, the problem is with my explanatory power. When one is able to do this, duality of possible conclusions (about the SAME Universe) will become apparent, and one would supplant the other, for it would provide a comprehensive, and consistent account of the Universe.

*Universe=everything that verifiably lends itself to human perception.

Take care.

Anonymous said...

Ketan, I don't think you understand the dilemma. Saul Kripke struggled with providing a solution to Pierre having contradictory beliefs. Pierre absolutely has a basis for his contradictory beliefs, I might add...

Gawd, where's that Bryan guy who understands logic? I'm sure he can explain what's going on...

Matt McCormick said...

Anonymous, the alleged contradictory belief account you cite says that IF Kripke's account is correct, then Pierre believes a contradiction. The simple way to answer this is just to assert that Kripke's account of beliefs is not correct, and I wouldn't be alone in doing that, and then we don't have a problem. Furthermore, if the alleged contradictory belief story is misrepresenting Kripke's view, then it also won't work. It's also easy enough to say that Pierre has got one thing in mind when he is thinking that London is not beautiful, and that he has another in mind when he is thinking about the French sentence. My claim is that people cannot believe something and believe its denial, as they grasp the claim and its denial, at the same time.

But I'm actually not especially wedded to the point. If it turns out that in some circumstances we have convincing empirical evidence that people do believe contradictions, then I'd just accept that. The point, of course, would be that they SHOULDN'T because doing so is irrational. Surely you don't think that's especially controversial. And the real point of the post, which you're not really addressing, isn't about whether or not people believe contradictions.


Bryan Goodrich said...

The problem is not epistemological. Kripke is showing a semantic weakness in certain logics, especially when it comes to names and reference. One can have a referent explaining certain semantic properties one accepts, like "Clark Kent is a dork." On the other hand, he might have a certain referent explaining other semantic properties he accepts, like "Superman is not a dork." The problem is in trying to evaluate the semantics extensionally because it just so happens that "Clark Kent" is the same person as "Superman" and yet we do not have "x is a dork" and "x is not a dork" because the referents are different to the person and their beliefs.

I agree with Searle's speech acts approach to this, and we have to assess the reference to the person that is uttering it, or at the very least obtains said intentional stance. There are other logics, e.g., intensional logic, which deal with these kinds of issues since classical logics have no way to address them.

The point is that the contradiction comes as a semantical contradiction and not any sort of real (or ontological) contradiction, i.e., London is not, in fact, beautiful and not beautiful no more than the sun exists and the sun does not exist. The issue is whether or not the belief in said facts of the matter are taken to be semantically true or not.

I would say that the problem is simply resolved, say in the Clark Kent/Superman case, or any other intensional conflicting case, by showing that there does not exist a model which satisfies both statements at the same time, i.e., we do not have the intensional content or interpretation or model that makes semantically true--at the same time--both X and ¬X, even if independently the intensional content may be the same thing.

Can people hold (semantically) contradictory beliefs? Sure, because they're ignorant about certain intensional relations that would be weeded out with an adequately robust intensional logic that addresses those relations. One simple "trick" is to see if it is possible to interpret extensionally. In the case of Superman/Kent, we know that is not the case, because if we treated them both, as they really are, the same person, we do not keep the same intensional truth status since then Superman/Kent is both X and ¬X. If someone is ignorant of those relations, however, then they will have contradictory beliefs. As long as there is ignorance there is such a possibility.
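The extensional "trick" described above can be sketched as a toy program: index beliefs by the name used, then re-check consistency once two names are identified as co-referring. This is only an illustration of the idea, not any standard piece of logic machinery; the data and helper are invented for the example:

```python
# Toy illustration: beliefs indexed by name look consistent until two
# names are discovered to co-refer, at which point a contradiction surfaces.

beliefs = {
    ("Clark Kent", "is a dork"): True,
    ("Superman", "is a dork"): False,
}

def consistent(beliefs, identities):
    """Check for 'x is P' and 'x is not-P' about one referent, given
    a dict mapping names to the canonical name they co-refer with."""
    def referent(name):
        return identities.get(name, name)
    seen = {}
    for (name, predicate), value in beliefs.items():
        key = (referent(name), predicate)
        if key in seen and seen[key] != value:
            return False  # same referent assigned both P and not-P
        seen[key] = value
    return True

print(consistent(beliefs, {}))                          # names kept apart: no clash
print(consistent(beliefs, {"Superman": "Clark Kent"}))  # co-reference exposes one
```

As long as the believer is ignorant of the identity, the belief set passes the check; supplying the identity is exactly what collapses the two intensional "slots" into one referent and exposes the semantic contradiction.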

I pose a more challenging question. How does reasoning under uncertainty provide a basis for saying, as MM does, that holding such contradictory beliefs counts as irrational? Such a standard of assessment begs the question. At any given time it only comes up against a set of knowledge we already collectively agree upon and would assess the semantics based on that, but that set is never complete, homogeneous in quality, nor static. (Ir)rationality is, at best, a relative and fleeting concept.

It would be irrational iff someone accepts that "x is P" and "x is ¬P" while also understanding that A satisfies the first statement, B satisfies the second statement, and "A=B". But we don't know the status of that last condition, which is precisely the condition that would make or break this status. Thus, we have propositional content P and we have models, M, which may satisfy those statements P. But they are always going to be interpreted based on certain conditions C. Thus, we cannot just ask whether "M ╞ P"; we need "{M,C} ╞ P". There are heavier concepts I could introduce, but they get far more abstract than needed, and the notation required to make it easier is not exactly "nice" in ASCII or anything Blogger would permit.

I recommend my truth blog linked above, since it is general enough to apply.

Anonymous said...


"Anonymous, the alleged contradictory belief account you cite says that IF Kripke's account is correct, then Pierre believes a contradiction. The simple way to answer this is just to assert that Kripke's account of beliefs is not correct, and I wouldn't be alone in doing that"

Right, you wouldn't be alone. Just about every philosophy student who first encounters this puzzle makes the same claim. However, this doesn't buy you much credibility as a philosophy professor.

There is a gang of philosophers who have pondered Kripke's dilemma, and they aren't beginning philosophy students.

It is also self-refuting and senseless to claim that a person can't have a belief that they know is false. This is nothing more than expressing the law of excluded middle in a linguistic utterance other than its raw form.

A lecture from a philosophy of language course:

"""The well known puzzle is based on the assumption that our speaker is normal non omniscient, sincere, reflective and not conceptually confused. The two principles used are the Disquotational Principle (DP) and the Translation Principle (TP):

(DP) If a speaker of a language L assents to p and "p" is a sentence of L, then he believes that p.

(TP) If a sentence of one language expresses a truth in that language, then any translation of it into another language also expresses a truth in that other language.

Pierre, a Frenchman, heard in Paris about London's beauty. He therefore assents to the sentence:

(1) Londres est jolie.

Having emigrated to England, he learns English by exposure, takes up residence in London, and, after observing the surroundings, assents to:

(2) London is not pretty

He does not realize that the town where he lives is the town depicted in the nice pictures he saw in Paris; he has not updated his earlier belief expressed once as "Londres est jolie". Then, given that "Londres" and "London" (just as the old-fashioned "Hesperus" and "Phosphorus") have the same reference - or the same semantic value (the object referred to by the names which are rigid designators), it follows that Pierre believes that London is pretty and he believes that London is not pretty.
A similar puzzle may arise also in the homophonic case, when Pierre meets Paderewski on two occasions, once in a music hall and another time at a political conference. He does not realize that he met the same person, and he assents to two different sentences:
Paderewski has musical talent
Paderewski has no musical talent

In both cases, we are compelled to admit that this supposed rational person holds contradictory beliefs, therefore this person is not as rational as supposed. Is it a real puzzle? If it is, either we have to reject the causal theory of reference, or we have to find an answer to the puzzle. Some answers could say that if the puzzle works, then it is worse for the causal theory of reference. A more precise answer could be that, as in the case of the reductio ad absurdum of Mill's theory via the traditional argument, we may have a reductio ad absurdum criticizing the validity of the disquotational principle. Beyond the difficulty of abandoning an apparent acceptable principle, it has been suggested (by Sosa) that even that principle may be dispensable in building up the puzzle. Before rejecting such a principle, or rejecting the causal theory of reference, it should be shown that other theories can solve the puzzle.
But it is not so clear that a descriptive theory of reference can do better. A Fregean could say that the contents of Pierre's beliefs (thoughts) are senses: the way in which London is presented to Pierre the first time fits with the way in which the concept "pretty" is given; the way in which London is presented to Pierre the second time does not fit. This could be correct. The problem is that we have no idea what these "modes of presentation" are. We could try something like this: "Pierre believes that the town depicted in a nice picture he heard about in Paris is pretty" and "Pierre believes that the town he lives in is not pretty". Even given these expressions, we cannot avoid the fact that in both cases Pierre believes of London that it is pretty, and he believes of the same town that it is not pretty."""

Anonymous said...

Nice, well-thought-out response, Bryan. However, Kripke's dilemma differs from previous semantic dilemmas in that the properties of the two beliefs are identical in extension and intension, whereas Superman = Clark Kent involves two beliefs that are not intensionally and extensionally the same. Quine showed this to be the case in his solution to an earlier semantic dilemma whose name I can't recall - it has to do with a spy and a guy he knows or something...

Bryan Goodrich said...


That depends on what we mean by intension. Intensional content is provided in the same language, while what we have in Kripke's example is two different languages. The assumption is that any model for a given sentence requires that the structure share the same language in which the sentence is uttered (or is a wff).

The two statements are not extensionally the same because they come under two different languages. However, the difference is that there is a translation from one language (formal system) to another. Likewise, we can transfer (by, say, a homomorphism) the model of one language to another and maintain the truth of the sentence between languages.

The problem is you're assuming the disquotational property. While useful, I think it is utterly lacking. I bring that up in my truth blog.

Disquotation: "p" is true iff p

There is no scientifically sound basis for the rule that if one "assents that p" and "'p' is true in some language L", then one "believes that p". Why?

The reason is that "assent" and "belief" are non-truth-functional, as well as not scientifically founded, and yet we're making empirical claims with these logical statements. Viewed purely abstractly, though, there is no basis for some metalogical principle like that.

One error in it is that "p" may be constructed in L, but "q" is constructed in L*. While p=q intensionally (and we have two models M and M* which satisfy them in L and L*, respectively), there is no reason to suppose that one's assent to q and assent to ¬p should pose a problem. The reason is that the person never assents to p. Therefore, even on the same principle it does not apply, but we still haven't qualified what this "assent" relation or operation is.

Therefore, the confusion only arises over what one assents to. However, as I said, this is a language issue, not a logical issue. Assent in this case is immediately adjoined to the semantics, because as the principle was used it basically says "if one assents to p, then there is a model for 'p'", where 'p' is expressed in the language in which the person understands that p. You cannot really separate assent to p and belief in "p" from the language in which p is described. Now, a person may understand more than one language and can easily translate between L and L*, but if they do not, or have only a partial understanding, then that is the error. It is still semantical, because the assent and the semantics are intertwined. The issue is in the language and in what one assents to in a language.

If we are to take anything important from this, it is that however you qualify assent, you need to make it language-contextualized; otherwise you'll get a lot of nonsense, because you'll take the semantics and statements and "move them around" as if they were independent of the language, when clearly they are not.

Mike :D said...

I would say that what MM is trying to explain is that the mind has an intuitive sense of the law of non-contradiction (at the risk of putting words in his mouth). In order to know anything, we must be capable of sorting the propositions in our current consciousness into their proper categories. The ability to discern between p and ~p is therefore fundamental to our mind's ability to sort said propositions/information.

You might also note the "accidental nature" of Kripke's case, which can well be characterized by Alsten's critique of justified belief, and which is cast light on by the series of papers that followed attempting to explain it (such as those by Goldman).

Instead, try to focus on the point that in order to sort information we must be able to sort between P & ~P; this is the only point required for the above assertions.

Anonymous said...

Eeeep, by Alsten, I mean Gettier. =p.

Bryan Goodrich said...


To presume our brains work with bivalence is a pretty big presumption. There is absolutely no reason I can see why we ought to (epistemic norm) semantically qualify our propositions with the law of non-contradiction. There is absolutely no reason I can see why our brains would (empirically) qualify our propositions with the law of non-contradiction.

It just does not make sense as an empirical fact, nor is it logically necessary. Hell, we can reason with fuzzy logic just fine, much less with a trivalent logic.
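To make that fuzzy-logic point concrete, here is a minimal sketch assuming the standard min/max (Zadeh) connectives, which Bryan's comment does not specify; with partial truth values, p AND not-p need not come out false:

```python
# Standard (Zadeh) fuzzy connectives: negation and conjunction.
def f_not(v):
    return 1.0 - v

def f_and(a, b):
    return min(a, b)

v_p = 0.6                     # p is 0.6 true
lnc = f_and(v_p, f_not(v_p))  # truth value of (p AND not-p)
print(lnc)                    # 0.4 -- nonzero, so non-contradiction fails
```

Under bivalence (v_p restricted to 0 or 1) the same expression is always 0, which is exactly the contrast being drawn.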

Now, it might be true that we categorize classes of propositions so that the law of non-contradiction holds at the level of those generalizations or more general statements, but then those would be classes of propositions. By these classes I mean that they are higher-order objects relative to the propositions or the details of their content themselves. Think of the difference between atomic particles or chemical compounds and their aggregate as some sort of object we experience, like a table or water. The properties of the aggregate are clearly not those of the lower-level substances.

The analogy is that our propositions can work in much the same way. We can have a whole host of relations around our propositions that do not operate with bivalence, but we can class them together (make more compound statements, or generate terms as composites of other terms and meanings) in ways such that our propositions emerge as more "simple" and obtain the law of non-contradiction.

One way to recognize this is to consider that in just about any proposition, short of an analytic tautology, every word in our speech act will have "deeper meaning" that we can break down. In that sort of reduction we are doing precisely what I was getting at in the analogy.

Of course this doesn't mean there is a clear objective separation between these classes and some supposed "bottom level" of semantical content. The point is that our statements generally can be broken down to the point that what was said only obtains a bivalent truth condition when we remove the vagueness of the semantic content at the reduction by moving up in the propositional class. It would be precisely the class at which the semantical content has become bivalent by ignoring the variation in terms or vagaries that cause a separation from bivalence.

(Noting that fuzzy logic is the idea of capturing "vagueness" in terms. Though, we certainly do not need to go as far as a continuum of truth values, we might suppose at some "bottom level" all terms end up that vague or at least are presupposed to certain degrees in that kind of "hierarchy" we might imagine.)

So, to your point about accidental knowledge. I don't know that the Gettier problem fully applies here, because the issue wasn't so much that someone "got it right by accident" and then we question whether he really knew what he claimed to know. The person can be considered justified in their belief; what is lacking is the information that ties certain propositions or terms together with other information. If we count that as 'accidental', then we would have to make almost every bit of knowledge fall under a Gettier problem, because that use is so generic--so broad--that unless we had absolute knowledge, we would always have gaps, missing information, or missing relations between propositions and semantic content about which we could be misinformed.

I would propose a more abstract model of what is going on like this:

x believes that P, where P obtains when the set {a, b, c, ..., p} is satisfied. In other words, x has a belief about P because x has some stance affirming a, b, c, ..., and p. The degree of strength with which he asserts these propositions varies. For instance, x might strongly believe that P while only weakly believing that p.

Also, x believes that ¬Q, where Q={a', b', c', ..., ¬q}.

What is not known to the person is that we have a translation rule φ:Q→P, by translating each element in Q one-one to each element in P. Of particular interest here is that "φ(q)=p". Thus, since x does not know or does not believe that φ, x does not know that he believes ¬p and p. We might also say that the strength of these assertions carries over. Thus, if we want to give an arbitrary quantification, we might say x believes that p with 0.2 confidence. Also, x believes that ¬q with 0.6 confidence. Thus, unknown to x, x believes that ¬p, by φ(¬q)=¬p, with 0.6 confidence.
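This model can be sketched directly in Python; the sets, the translation, and the confidence values below are the illustrative ones from the comment, while the variable names and data structures are my own:

```python
# x assents to p and to not-q with differing confidence; an unknown
# one-one translation phi : Q -> P links the two sets element-wise.
P = ["a", "b", "c", "p"]
Q = ["a'", "b'", "c'", "q"]
phi = dict(zip(Q, P))        # one-one translation; in particular phi("q") == "p"

belief_for = {"p": 0.2}      # x believes that p with confidence 0.2
belief_against = {"q": 0.6}  # x believes that not-q with confidence 0.6

# Carrying confidence through the translation: since phi(q) = p,
# x believes that not-p with confidence 0.6 -- without knowing it.
hidden_beliefs = {phi[s]: c for s, c in belief_against.items()}
print(hidden_beliefs)        # {'p': 0.6}: x holds p (0.2) and not-p (0.6) at once
```

As the comment notes, the confidence need not carry over unchanged through phi; the dictionary comprehension above simply assumes that it does.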

This model can provide us with some semblance of how non-contradiction will fail because we're not dealing with bivalence. In fact, we're dealing with inductive reasoning and can use a number of logical frameworks for this (such as fuzzy, where we can group propositions with a characteristic function detailing their relative confidence in the set--though, I presumed the translation carries over the same confidence, it does not need to since it is in a new set, i.e., it went from 0.6 in Q to something else in P).

Also, it is not particularly a Gettier problem, because x's belief is justified to a degree of confidence--however we measure it. Furthermore, x's belief is not justified incorrectly. It is justified! The problem is the lack of other relevant information, such as a translation rule for knowing that q in Q is equivalent to p in P (by an injective transformation from Q to P). x is justified in the belief that p and the belief that ¬q; x is not wrong to hold either belief, but is mistaken about what those beliefs amount to. The mistake is in supposing there is no translation equating them, or that they say approximately the same thing.

What we can argue is that x has a skewed view of reality by not understanding these relations between P and Q, and in talking about P and Q in general, as higher-order classes, we can talk about them with bivalence and get rid of all the vagaries of p and q. Thus, we might say that x believes that P and believes that Q, but Q, in all relevant respects here, is equivalent to ¬P. We can make this generalization iff we know that no relevant model (interpretation) of P satisfies Q. In that respect, we can then solve the dilemma, I believe, and reintroduce bivalence. Of course, making that move requires a lot of qualification, and recognizing the missing information that causes the mistake. Without it, we have nothing, and one is justified in one's skewed view of reality. Thus, recognizing a mistake leads to correcting the skewness.

Deloceano said...

I like this piece.

this sentence:
There are other sources of cognitive tension. Some groups of propositions are probabilistically contradictory--one asserts what another declares to be exceedingly improbable. If I won the lottery three days in a row, it would strike me as exceedingly improbable that the lottery was really a fair, million to one game.
Reminds me of the first scene in the fantastic Tom Stoppard play Rosencrantz and Guildenstern Are Dead. After flipping ninety-odd heads in a row, Guildenstern (or Rosencrantz?) is starting to have his faith in the laws of probability tested. His companion can't see a problem with it.

A play (and movie) worth seeing.