
31 Jan 08 – Neal Miller, Learning and Logical Learning.

31 January 2008

On this day in 1969, Neal E. Miller’s article entitled, “Learning of Visceral and Glandular Responses,” was published in Science.  This article described instrumental conditioning of autonomic responses, essentially setting the groundwork for biofeedback.  This article was heavily cited in the years subsequent to its publication.

Neal Miller was an American psychologist who trained in psychoanalysis in Vienna.  His close associate, John Dollard, was likewise trained in psychoanalysis.  The two developed a theory of personality that essentially blended psychoanalysis and behaviorism.  Unfortunately, the blending involved removing all choice from individuals and replacing it with environmentally governed determinants.  That is, though these theorists spoke of motivation and mediator variables, these elements of “response” were created through a linear process of prior stimulus-response connections.  For example, I may be motivated to write this blog, but based on Dollard and Miller’s conceptualization, this motivation is determined by my past behaviors, which were in turn determined by past input from the environment.  I really have no choice in the matter: I (and my motivation to write) am entirely the result of past input from the environment. 

As Joseph Rychlak stated in his 1977 article entitled “Logical Learning Theory: Propositions, Corollaries, and Research Evidence,” such theories treat “motivation” as an effect of previous causes.  Rychlak further points out that causation theory can be traced back to Aristotle, who developed four terms that subsume the meanings of all experience: 1. material cause: the substance that makes up things; 2. efficient cause: the impetus that brings events or things together over time (with the past being the most important time factor in this impetus); 3. formal cause: the pattern or form of events or the various shapes that things take on; and 4. final cause: that “for the sake of which” events happen and things occur (e.g., reason or intention).  Dollard and Miller’s conceptualizations fell entirely within the material-cause and efficient-cause framings.  Psychoanalysis, alternatively, took on elements of all of these forms of causation (see the section on personality on my web site for more on this topic).  Dollard and Miller, then, circumscribed their perspective when they “combined” psychoanalysis and behaviorism.  Essentially, they added a concept and conformed it to behavioral theory.  In other words, it was a combination of psychoanalysis and behaviorism in theory only.  Technically, it was a somewhat beautified behaviorism.

This material- and efficient-causal perspective, in turn, underlay Miller’s conception of the “learning of visceral and glandular responses.”  In other words, Miller conceived of the ability of individuals (really rats) to control such responses as the product of instrumental conditioning (clearly presented in behavioral terms).  There is no indication of individual choice in the matter. 

Alternatively, Rychlak has presented a “logical learning theory” that involves personal choice, in which he conceives of individuals as acting “for the sake of” premises, purposes, reasons, goals, etc., that are not the result of past input.  Instead, as Brent D. Slife puts it in his book entitled Time and Psychological Explanation, Rychlak:

           “views the learner’s cognitive organization and the organization of the
            information to be learned as being analogous to syllogistic principles (or parts of
            the whole)…this relation takes place concurrently; the environment is not
            chronologically first…the mind is logically precedent because it formulates the
            intention ‘for the sake of which’ behavior is carried out…Aspects of the
            environment that are relevant…to the person’s internal cognitive organization are
            those that are learned most readily…meanings…related to the person’s goals are
            the most meaningful…Learning is an elaboration of what one already
            knows…the ‘already known’ can be inborn or even cognitively invented in the
            present…[it] is implicit in…the cognitive organization of the learner.”

As can be noted from this, Rychlak took a clearly “Kantian” perspective on “learning,” believing that we have a priori abilities.  Essentially, what I mean by “a priori abilities” is that Rychlak believed we are born with the innate capacity to organize structural information in the environment (following a formal-causal perspective) and to act for the sake of these formulations (following a final-causal perspective).  The past is not primary in this formulation because such ability is innate.  There is no past to precede the initial ability.  This perspective is summed up in Rychlak’s six theoretical propositions related to logical learning theory (from his 1977 paper referenced above):

     1. In place of the efficient-cause construct of stimulus-response, logical learning
         theory employs a final-cause construct of “telosponsivity” to conceptualize behavior.
         A telosponse involves affirming the meaning premise, whether it be a visual image, 
         language term, statement, or judgmental comparison, related to a referent (some goal) 
         that acts as a purpose for the sake of which behavior is intended.
     2. Human thought is dialectical (meaning dual/bipolar – involving both the thought and its
         opposite) as well as demonstrative (meaning singular/unipolar – involving only the thought
         itself), so the person must always “take a position” on life: choose one from among many
         alternative meanings open for framing as initial assumptions, etc. (By the way, this was
         also the grounding for Rychlak’s mentor’s – George A. Kelly’s – personal construct theory).
     3. Meanings encompassed by the premises of telosponsivity are brought forward to endow/
         enrich experience with understanding in a tautological fashion. (A tautology is a relation
          of identity between two thought concepts).
     4. Once a meaning is selected from among the many dialectically possible affirmations open
         to the person, this premising frame acts as a precedent on the basis of which tautological
          extensions of meaning occur sequaciously (i.e., following in a logical sequence that flows
          from the meaning of precedents – without time considerations).
     5. Telosponsivity begins from birth (i.e., from the outset of whatever we take to be the
          beginning of organismic existence). Before they develop language, infants behave for the
           sake of affective assessments, and although later language terms are associated with
          experience and used in framing premises, the unlearned affective side to learning never
          leaves the human being.
     6. Telic considerations of behavior, such as agency, choice, and decision-making, are
          encompassed directly.

While most empirical research follows a sequence of collecting data with a theory left implicit and then interpreting the data based on that implicit theory, Rychlak and his colleagues conducted over 30 years of research with the theory explicitly informing the research.  Rychlak was well aware that we often confound theory with method, assuming a connection that is not perfectly evident, and that alternative theories may also apply to any given set of data.  Given this, Rychlak used logical learning theory to develop eight testable corollaries (theories, by the way, are not generally testable because they are assumptive):

     1. Tasks that are predicated positively, including self-predications of a positive
         nature, should reflect meaning-extensions facilitating the learning of positive
         materials over negative materials.
     2. Tasks that are predicated negatively, including self-predications of a negative
         nature, should reflect meaning-extensions facilitating the learning of negative
         materials over positive ones.
     3. The role of affective assessment in learning cannot be reduced to or accounted
          for by constructs relying on frequency and contiguity. (By the way, Miller himself
          indicated this was true in his Presidential Address to the American Psychological
          Association – published in the American Psychologist as an article entitled
          “Analytical Studies of Drive and Reward.”  In this article, he states
          that “…contiguity alone [is] not sufficient for learning, while contiguity plus reward [is].”
          Following a logical learning theory perspective, we would substitute “goal” for “reward.”)
     4. As it is unlearned and therefore a spontaneously “natural” way in which to order tasks
         meaningfully, affection can be shown to be especially important to those subjects who
         are performing in tasks that either outstrip their capacities or dislodge their personal
     5. Patterns of affective learning style occur between or across tasks as well as within tasks.
     6. Affective assessments are conceptual, occurring instantaneously as patterned organizations
         of meaning.

I would encourage anyone reading this to get a copy of Rychlak’s 1977 paper to see a summary of the works that empirically demonstrate these corollaries of logical learning theory.  I would also recommend picking up a copy of Rychlak’s The Psychology of Rigorous Humanism.  Rychlak had the clarity of mind to analyze learning from an essential alternative perspective, one that allows for human agency to be involved in the process of life, and such clarity is rare in psychology.  If only for this reason, I would recommend these readings.

Back to topic, however: Given the propositions and corollaries of logical learning theory, how would one explain the learning of visceral and glandular responses?  It is relatively clear that Rychlak would see such “learning” as the extension of an innate capacity to develop meaning/structure in our experience.  The meaning/structure we develop, however, is likely to be different (or at least not identical) between individuals.  So, the changes that occur in visceral organs or glandular activities following feedback of such activities would likely be conceived of, in logical learning theory, as the extension of innate structural capacity for the sake of changing currently experienced psychological phenomena.  Kelly’s statement regarding the development of personality dysfunction is informative here: a disorder is “any personal construction which is used repeatedly in spite of consistent invalidation” (this, by the way, was likely the origination of the statement often attributed to Einstein that “insanity is defined as doing the same thing over and over again and expecting different results”).  In essence, biofeedback is the extension of innate capacities (for example, breathing in a certain manner to calm oneself) for the sake of making a change in currently maladaptive states of acting.  The biofeedback, itself, merely awakens knowledge already within oneself to extend those meanings toward the goal the individual has.

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.


30 Jan 08 – Just Say No? No! It Just Doesn’t Work.

30 January 2008


On this day in 1986, the then-president of the American Psychological Association, Robert Perloff, presented the APA Presidential Citation to Nancy Reagan for her efforts in promoting the “Just Say No” campaign against drug abuse.  This is an interesting blip in the history of psychology, given that there was no evidence at the time for or against the campaign.  Furthermore, now that evidence has come in, we have discovered that, just as with all other abstinence movements, it just didn’t work.


What does it mean that it didn’t work?  The anti-drug curricula were developed in the 1980s.  There was a steady drop in drug use from the early 1980s to about 1992.  However, this decline in drug use predated the effective implementation of the anti-drug movement.  By the time it was fully implemented (in the early 1990s), the use of drugs was again on the rise.  In 2002, for example, when the movement should have shown progress, 53% of seniors said they had used illegal drugs, compared to 41% in 1992. 

What accounts for this negative trend?  Why didn’t the anti-drug curricula work?  Certainly, there are a number of potential hypotheses.  However, I would surmise it does not work because the pressure comes from an outside source.  The individuals targeted are not developing an internal motivation to follow through with non-use.   Essentially, the message is “conform to my peer pressure that involves non-use” and “don’t conform to other peer pressure that involves use.”  This is an odd message to begin with and certainly not a useful one to deter someone who has an inclination, however mild, to rebel against mandates from authority figures.


In fact, research on controlled versus autonomous behaviors predicts exactly this sort of outcome.  Controlled behavior, such as behavior performed under an imposed mandate to abstain from drug use (or anything else), involves an external perceived locus of causality (e.g., that something or someone other than the behaving individual determines the behaviors) and is experienced as pressured by demands and contingencies (e.g., to use or not use the drugs).  Autonomous behaviors, on the other hand, have an internal perceived locus of causality (e.g., that the behaving individual determines the presentation of the behaviors) and are experienced as chosen and volitional (e.g., that the individual is agentic).  Perceived autonomous, agentic behavior, as opposed to perceived controlled, determined behavior, is related to enhanced performance and persistence (e.g., continuing to abide by a personal choice).  Based on such findings, the discovery that drug use actually rose after individuals were told to abstain makes sense: they perceived they were under the control of outside forces, both forces toward and away from use, which made it easier for them to “change with the winds” of the forces upon them.  They were, in this sense, much like a sailboat without a captain: at the whim of forces not under their control. 

Alternatively, agency beliefs about effort and ability are the strongest and most critical predictors of actual performance.  In this sense, then, a better tack would have been to engage the individuals in discussion.  This discussion would be non-threatening and non-punitive, perhaps led by a respected peer.  The point of the discussion would be to address myths and misconceptions and provide facts, but not to provide mandates on behavior.  It would also involve an open exploration of the individuals’ motivations for and against drug use (or other concerning behaviors), in order to address ambivalence.  Finally, the discussion would conclude with the individuals, themselves, stating their reasons for and against drug use and making an honest, confidential assessment of their motivation to use or not use.   

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.


29 Jan 08 – Allport and the Importance (or Detriment) of Avoidance.

29 January 2008

On this day in 1954, Gordon Allport’s book entitled, The Nature of Prejudice, was published.  This book focuses on various factors involved in prejudice.  It addresses everything from the normative nature of prejudice to theories of prejudice to religion and prejudice.   Today, however, I would like to spend a little time with Allport’s main contribution (in the area of personality) and his intellectual influence on a figure in psychology who is not so well known but who had a great influence on me: Richard Bednar. 

Allport noted that there are a surprising number of individuals who experience high levels of anxiety related to feelings of inferiority.  In response to these feelings, we can either shrug them off or make adjustments in our goals.  If the sense of inferiority happens repeatedly, however, a tension arises that amounts to a feeling of personal deficiency. 

What is important, then, for normal or abnormal development, from Allport’s perspective, is how each of us responds to our own feelings of inferiority.  If we take the feelings as a challenge, exerting greater effort and practice toward overcoming the challenge, we can make the problem a perceived strength rather than a perceived weakness.  Alternatively, we can choose to develop different goals.  Finally, we can avoid recognizing and facing the problem at all.  For Allport, the act of avoidance or confrontation is what differentiates the “normal” from the “abnormal”:

      “…to confront the world and its problems is intrinsically a wholesome thing to do, 
      because it brings about appropriate adjustment and mastery; to escape from the
      world is intrinsically a dangerous and diseased thing to do. Extreme escape is
      found in the most severe forms of mental disorder, the psychoses…The neurotic
      shows much defense, less coping. In the healthy personality coping ordinarily…”

This perspective on abnormality (avoidance) versus normality (confrontation/coping) was influential in the formation of Richard Bednar’s theory and practice of psychotherapy.  Before I discuss this theory and practice, let me talk a little bit about Bednar.

Bednar, as I mentioned earlier, is a little-known figure in the history of psychology.  He, as one of my main professors and one of my psychotherapy mentors (along with Brent Slife, who is also my intellectual mentor), had a tremendous influence on me.  Bednar was a thoroughly bright, uncompromising person.  He was incredibly authentic and would not falsify his own personality for anyone.  He was driven by his own moral convictions and was incredibly gracious.  In fact, as will be apparent after I present his theory and practice of psychotherapy, he very much lived by his own words.  Dick, as I knew him, was a professor of psychology at Brigham Young University.  Prior to that, he was the director of the Clinical Training Program at the University of Kentucky.  Dick was well published, with a consistent 20-year record of publications that included chapters in well-respected psychology review texts (Annual Review of Psychology and Handbook of Psychotherapy and Behavior Change) and articles in such professional journals as Journal of Consulting and Clinical Psychology (for which he was also a consulting editor), Journal of Counseling Psychology, and Journal of Applied Behavioral Science.  His knowledge and background in psychotherapy, especially relational approaches and group psychotherapy, was immense.  As such, he was highly sought after as a guest speaker and provided countless psychotherapy workshops.  I am speaking of him in the past tense because he, unfortunately, died in a freak snowmobile accident shortly after retiring from Brigham Young University.  Fortunately, he was doing something that he loved with the person he loved, his wife.  Even in his retirement, Dick had continued to provide supervision to clinical psychology students from BYU. 

Dick’s perspective on the theory and practice of psychotherapy is presented in his book (with Scott Peterson) entitled Self Esteem: Paradoxes and Innovations in Clinical Theory and Practice.  I, of course, highly recommend this book (which was in its second edition when Dick died and is quite difficult to find now).  A testimony to its significance is the fact that the book sold over 35,000 copies!  This is amazing given that most professional resources such as this sell about 700 copies.  He could also conduct a workshop on the theory whenever he wanted (and commanded quite an impressive fee for such workshops).  By the way, his wife, Sandra, informed me that all proceeds from his books went to charity…what a great guy!

There are four underlying assumptions of Bednar’s model:

     1. People should expect to receive regular amounts of negative feedback from
         their social environment, most of which is probably valid.
     2. Most people receive and enjoy substantial amounts of authentic favorable
         social feedback, but they tend not to believe it.
     3. Self-evaluations are a reality for most people.
     4. Self-evaluative processes can provide a basis for continuous affective
         feedback from the self about the adequacy of the self.

We can, of course, see the obvious connections between Bednar’s model and Allport’s theory: feedback from the environment is taken in, with the negative feedback being more apparent; the feedback is interpreted; and emotional responses develop based on the interpretations.  From Bednar’s perspective, the interpretation that forms the basis of a person’s perspective on himself or herself (which Dick, I think, poorly called “self esteem,” a label I had conversations with him about) is a dynamic attribute.  Furthermore, psychological threat is unavoidable.  The interpretations, however, modify the psychological threat.  Given that the interpretation is now part of the client’s worldview, the psychotherapist must assume that the therapeutic relationship is prototypical of other relationships. 

The psychotherapist’s job is to help the client identify, describe, and experience the self-evaluative intensity of their avoidance patterns (or what Dick referred to as “image management”: the tendency to pretend to be what we think we are not in order to gain favorable social feedback, in the process of which we avoid who we really are).  Finally, the psychotherapist must “catch” the client being authentic (not image managing) and focus on the client’s self-evaluative processes. 

There are four basic steps in dealing with the negative interpretations the client exhibits:

     1. Identifying and clearly labeling the dominant avoidance patterns used in
         anxiety-arousing conflict situations.
     2. Identifying and clearly labeling the self-evaluative thoughts and feelings
         associated with these dominant avoidance patterns.
     3. Learning to realistically face negative self-evaluations and avoidance patterns.
     4. Gradually learning how to cope with personal conflicts.

Essentially, the theory is that people present fake selves because they don’t like who they think they are.  In the process, they live a lie and don’t like themselves for doing so.  So, they are now caught in a place where they don’t like who they think other people think they are, based on their own interpretations of themselves, and they don’t like the person they are being because it is not who they are.  Therefore, the paradox is that they are image managing for the sake of creating more fulfilling relationships but, in the end, have poor relationships because they are not truly involved in them (this “fake self” is).  Therapy, then, is aimed at developing authenticity in relationships: confronting anxiety-provoking interpersonal situations and dynamics in order to develop a more realistic perspective on the self in relationships, as well as to truly address, and not avoid, the shortcomings that we do have in relationships.  In other words, it is focused on taking responsibility for ourselves and our actions!  What a concept!

It is interesting to note that Dick readily admitted how simple this concept is, and yet how difficult it is in practice precisely because of its simplicity.  It is strange that people have such difficulty with just being “real.”  All we really have to do, as Allport noted, is be aware of our limitations and address them…we are, after all, human and fallible.  All that is really required of us is to be ourselves.  It’s kind of interesting, given Allport’s theory of personality, that the one prejudice he didn’t write about in The Nature of Prejudice was our prejudice against ourselves!

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.


28 Jan 08 – Baldwin, The Baldwin Effect, and Human (Non-Deistic) Teleological Evolution.

28 January 2008

On this day in 1902, Andrew Carnegie endowed the Carnegie Institution. This institution was created in order to support scientific research, including psychological research. A committee, then, was formed to recommend worthy psychological research projects. This committee was headed by James Mark Baldwin. I would like to use this blog to talk a little bit about Baldwin, especially the “Baldwin Effect,” and to talk about an issue that is currently a “hot topic”: evolution.

James Mark Baldwin was an American philosopher and psychologist, trained in philosophy under the tutelage of James McCosh at Princeton University. Similar to the philosopher Thomas Reid, McCosh felt that our beliefs were the direct result of sensation and, thus, not open to question (this belief was central to Gibson’s account of perception in yesterday’s blog post).  McCosh also felt that evolution glorifies the divine designer:

                 “All that science has demonstrated, all that theism has argued, of the order, of the
                 final cause and benevolent purpose in the world is true, and can not be set aside.
                 Every natural law — mechanical, chemical, and vital — is good. Every organ of the
                 body, when free from disease, is good. There is certainly the most exquisite
                 adaptation in the eye, however we may account for its formation, and for the
                 numerous diseases which seize upon it. Agassiz has shown, by an induction of
                 facts reaching over the whole history of the animal kingdom, that there is plan in
                 the succession of organic life.”

We can see from this statement of McCosh’s that he believed that there was a final cause (or teleology) in evolution. That is, from McCosh’s perspective, evolution proceeded in a purposive way. For McCosh, this purpose was determined by God – it was a deistic teleology:

                 “Development implies an original matter with high endowments. Whence the original matter?
                 It is acknowledged, by its most eminent expounder, that evolution can not account for the
                 first appearance of life. Greatly to the disappointment of some of his followers, Darwin is
                 obliged to postulate three or four germs of life created by God. To explain the continuance
                 of life, he is obliged to call in a pangenesis, or universal life, which is just a vague phrase for
                 that inexplicable thing life, and life is just a mode of God’s action.”

Inclined, like his mentor McCosh, toward consideration of evolution, Baldwin is probably best known for what has come to be called “the Baldwin Effect.”  In formulating his theories of both development and evolution, he drew heavily on his interaction with McCosh.  There is an interesting pseudo-teleology in Baldwin’s theory, evidencing a mix of McCosh’s teleological take on Darwin (while appearing to want to maintain a more deterministic perspective) and a pseudo-Lamarckian perspective (wherein acquired characteristics are inherited).  The Baldwin Effect essentially states that the sustained behaviour of a species can shape the evolution of the species.  For example, if learning to create a shelter quickly makes it more difficult for the weather to kill individuals in the species, individuals who learn to do this quickly have an advantage.  As time passes, the ability to acquire that skill will be genetically selected for, and at some point it will be an instinct.

The “pseudo-teleology” in this is that there appears to be a purpose to the behaviour and to the genetic selection of the behaviour.  That is, the purpose is that individuals do not want to get killed.  So, they engage in and develop this behaviour.  It is a trial-and-error sort of mechanism.  Unfortunately, it is the mechanistic portion, driven by genetic variability, which underlies the Baldwin Effect: essentially, it is not driven by human choice.  Instead, it is driven by the random chance of the genetic variation that creates this ability.  This is where the Darwinian (and “Spencerian,” after Herbert Spencer, who used the phrase “survival of the fittest” to describe natural selection) “natural selection” enters into Baldwin’s theory: everything is “selected” naturally (with no human interference) from the random variations in the genetic code.

Alternatively, the modern counter-argument is that there is some “intelligent design.”  Intelligent design can mean many things.  One of these many perspectives is that God designed the world with a clear purpose, and it is this design that is followed through evolution.  Every change in species was predetermined.  What we are now and what we will be are similarly determined by something (or someone) outside of human choice.

Though this is more appealing to people who believe in and accept divine intervention, such intelligent design formulations have the same limitation that evolutionary theory does: they remove responsibility from the individual (and society).  How, for example, can we be responsible for our actions if they were either the result of genetic forces outside of our control or of Godly forces outside of our control?  We had no choice in the matter and, hence, we cannot be responsible.  Given this lack of responsibility, we could therefore not be held accountable for those behaviors, at least not with any credibility.  It would be akin to saying “bad dish” to a dish that fell out of a cupboard: the dish had no choice in the matter; it was the result of factors outside its control.  The only difference between such intelligent design formulations and Darwinian formulations is that the intelligent design formulations are less random and chaotic.  (It’s interesting that scientists who are so concerned with prediction and control accept a formulation that serves as a deterministic basis for much of their scientific work and yet is itself fundamentally unpredictable and uncontrollable…it’s almost as if they rule out the real ability to predict and control when they accept such a formulation.)

Alternatively, a human-teleology perspective on evolution would allow for personal choice and responsibility.  Under a human-teleology formulation, we develop skills for a purpose.  That is, we realize that having a shelter (something over our heads) keeps us dry from the rain.  For the sake of keeping the rain off of us, we make a shelter.  This making of shelters is recognized by others and becomes a social phenomenon.  Eventually, it becomes ingrained into the fabric of life.  Such a perspective rules out neither the place of genetics nor the place of God.  For example, those who do not accept that they should make shelters for the sake of keeping the rain off of them would likely be shunned by those who do.  As a result, they would likely mate with those who had similar perspectives.  Those with the same perspective, mating with like-minded individuals, would then continue to present their views to their offspring, thus creating a cultural mindset where the origination of the idea or change was lost to time but the behaviour continued.  Eventually, the individuals who chose the perspective that was more adaptive to the environment would continue to thrive in that particular region, and those who did not choose that perspective would either migrate to another region or cease to exist.

Similarly, there is nothing to rule out an involvement of God in this process. God could still be conceived as permitting the agency of those humans in either pursuit/perspective. God would still be the creator, much like any other parent, but the actions of the children (or humans) do not necessarily follow the desires (or dictates) of the parent (or God).

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.

January 28, 2008 | Posted in Psychology

27 Jan 08 – Gibson, Merleau-Ponty – Perceptual Action/Embodied Agency.

27 January 2008

On this day in 1904, James J. Gibson was born. Gibson is known for his research on perception, which has been interpreted as demonstrating that perceptual qualities are not built from simple sensory inputs. Instead, they are directly sensed from the environment. Essentially, this means that we perceive through experience of the world; we are in direct interaction with the world as it is.

This should not be confused with a mediational perspective on our interaction with the environment, such as that offered by cognitive psychology. That is, Gibson was not saying that we are involved in a one-way process wherein we process value-free information from the surrounding environment, organize it, and then act on our organization (the mediation occurring in the mind). This cognitive mediation perspective assumes a relatively passive mechanism of sensation adapted to random, chance events in the environment. It is a dualistic perspective that separates the subjective mind from the objective environment, the products of the subjective mind being mediated by the mechanisms of sensation (the mechanical registration of bits of sensory information from the environment). This is sometimes referred to as a “representational” view of sensation, wherein humans are conceived of as representing the external world through a step-by-step process, essentially in the physical nervous system.

Gibson, instead, conceived of perception as an attribute of the human and the environment together, in holistic fashion. Perception, in this framing, is not an indirect process carried out within the individual. Instead, it is a direct interaction carried out between the individual and his/her environmental context (this is why it is often referred to as “direct realism” – a term borrowed from the Scottish philosopher Thomas Reid – or “ecological psychology” – attending to the contextual situatedness of the perceptual experience). Perception, then, is not passive; it is active and exploratory – in tune with the living meanings already pregnant in the contextual environment itself (Gibson referred to these living meanings as “affordances” – possible ways that the contextual environment makes itself known to the particular individual). The living meaning (or affordance) depends on the interaction that the individual is having with the contextual environment, which explains why a piece of chalk can be afforded the meaning of a writing instrument or, as one of my professors in undergrad vividly demonstrated, a foodstuff (should we wittingly choose to take a chomp out of it). These affordances are not inside the mind. Instead, they are living possibilities and properties of the contextual environment itself when that environment is perceived in a way that is not dualistic, artificially separating a subjective mind from an objective reality. From Gibson’s perspective, perception is an active interchange between the active intentions of the individual and the living, meaningful possibilities (“affordances”) of the contextual environment.


The French existential philosopher Merleau-Ponty actually preceded Gibson in developing a very similar formulation of perception. A wonderful, albeit quite difficult, book presenting Merleau-Ponty’s philosophy of perception is entitled Phenomenology of Perception. In this book, Merleau-Ponty suggests that, unlike Husserl’s perspective that “consciousness is always consciousness of something,” consciousness is perceptual consciousness; perception is of primary importance to being conscious at all. From this philosophical grounding, Merleau-Ponty explores perception as the active engagement between the contextual environment and the individual, which occurs through the body. That is, the “lived body” adjusts and acts in response to the active solicitations of the contextual environment. This is conceived of almost like an active conversation between the body and the context. Much like Gibson’s later formulation of the concept of affordances, Merleau-Ponty conceived of the things that we interact with, such as a mountain, as correlative with our bodily capacities and acquired skills, so that, for example, a mountain affords climbing. This affordance is not merely a cross-cultural phenomenon based solely on body structure, nor a body structure plus a skill all normal human beings acquire. It is an affordance that comes from experience with mountains and the acquisition of mountain-climbing skills.

Embodied Agency 

Though both positions are quite difficult to summarize in the little space I have provided here, I think the best way to conceive of them is as akin to the concept of “embodied agency.” According to the concept of embodied agency, what have traditionally been assumed to be two separate and distinct entities – mind (possessing agentic qualities) and body (possessing deterministic qualities) – are viewed as parts of a larger system wherein the mind and body mutually constitute one another. Accordingly, the nature of the mind constitutes the nature of the body, and vice versa. This would account for the numerous empirical studies that indicate how agentic factors (e.g., choices) contribute to neurobiological change. As but one example, investigations using positron-emission tomography (PET) to measure the neurological effects of certain therapeutic processes have indicated that conscious withholding of obsessive-compulsive behaviors had the same eventual effect on changes in neural activity as the recommended drug for obsessive-compulsive disorder. In other words, agency and biology interact, wherein agency is associated with changes in biology. Similarly, according to the assumption of embodied agency, biology has effects on agency. For example, the constraints of my current bodily make-up – including my body type and current cardiovascular endurance – prevent me from successfully engaging in certain actions, such as running a 6-minute mile. No amount of agency on my part will change my ability to accomplish this task right now. Furthermore, the constraints of my current biology disallow me from seriously considering that option, thereby constraining my available options and, thus, my agency. Alternatively, many athletes, with appropriate body types and well-developed cardiovascular endurance, certainly do have the ability to run a 6-minute mile.
As such, their agency is widened, at least with respect to the choices related to this task, by their biology.

The perspective offered by embodied agency, then, means that individuals and their actions are not explainable or understandable without reference to both their biology and their agency: amounting to a truly holistic perspective. As neuroscientist Elliot Valenstein put it, “…it is impossible to understand [biological phenomena such as] consciousness and thought without considering the psychosocial context that not only shapes the content of thought, but also the physical structure of the brain.” Alternatively, as Slife & Hopkins note, an agentic act such as “…a good deed requires a relatively sound body. Good deeds simply cannot be performed without the biological properties of a relatively healthy body.”

Hence, based on the concepts offered by Gibson, Merleau-Ponty, and the perspective of embodied agency, perception cannot be reduced to either the products of the mind or the body, alone. There is no clean distinction between subject and object. Instead, perception is considered, based on these perspectives, as a holistic, active, interactive process, involving possibilities afforded by both the living environment and the lived experience of the individual.

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.

January 27, 2008 | Posted in Psychology

26 Jan 08 – Selye…What is Stress?

26 January 2008

On this day in 1907, Hans Selye was born. Selye is best known for his work in the area of stress and the development of the concept of the “general adaptation syndrome.” The general adaptation syndrome is Selye’s term for a three-stage process that is carried out in response to prolonged states of stress. First, a state of alarm occurs; next, resistance (attempts to cope) occurs; and, finally, exhaustion occurs. According to Selye, stress is a nonspecific response of the body to any demand.

Cannon Bard Theory

Selye was heavily indebted to Walter Cannon. Cannon was a physiologist who developed both the idea of “fight or flight” (the belief that animals respond to threat either by attacking or running away) and “homeostasis” (the belief in a steady-state condition in all open systems). He also developed, with psychologist Philip Bard, the Cannon-Bard theory of emotion. According to the Cannon-Bard theory, the feeling of an emotion and the bodily response to it arise together, with action following upon them. Something like this can be seen in Selye’s general adaptation syndrome, wherein the feeling of emotion is characterized by the alarm state and all of its accompanying physiological responses, and the action related to the emotion occurs in the resistance stage.

James-Lange Theory

An alternative to Cannon’s theory was the James-Lange theory. More accurately, Cannon’s theory was an alternative to the James-Lange theory; in fact, William James was Cannon’s professor. Though Cannon had a great deal of respect for James, he disagreed with this theory. The James-Lange theory is named after William James and Carl Lange (a Danish physician and psychologist), who independently developed it. According to the James-Lange theory, emotions are the result of changing physiological conditions within the body; they are an effect, not a cause.

Two Factor Theory

Another alternative is Schachter and Singer’s (both psychologists) Two Factor Theory. According to the Two Factor Theory, emotion has two factors (hence the name): physiological arousal and cognition. Cognitions are used to interpret the meaning of the physiological reactions to outside events.

My alternative

I would like to propose the possibility that stress is simply associated with ignorance, properly construed. That is, being ignorant (not knowing) about various factors (whether they be natural or fantastical) leads one to develop a state of existential anxiety. In other words, faced with the uncertainty of the context, the individual is left with a fear of not being: the fear of death. This visceral fear immediately potentiates into the various bodily reactions that characterize the general adaptation syndrome. The existential anxiety state is so primordial that it is almost “built in,” much like Kantian pro forma concepts. We all possess this tendency, which is why the general adaptation syndrome is pretty much a universal finding. In fact, we could conceive of the person as fundamentally inseparable from, and continually in interaction with, the context in which he or she exists. The response, then, is nonreflective (immediate and not requiring reflection) because of the inseparable character of the interaction. The continuation of alarm and failed resistance, characteristic of “disorders” such as post-traumatic stress disorder, however, results from our interpretations of the phenomena of which we are ignorant when we are separated from the immediacy of the context in such a way that we continually avoid ever facing the reality of it (similar to the two-factor theory). This interpretation is built on our own “abstracted” (e.g., disconnected, separated) construction of the world/reality (or worldview) and functions to direct our future actions (for the sake of avoiding the feared phenomena). Unlike the simplistic perspective of the fight-or-flight response that Cannon developed, humans also have the capacity for confrontation, especially with other people, which does not always mean fighting. Instead, we can dialogue and address problems as they arise, which mitigates the existential anxiety that is the basis of stress.

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.

January 26, 2008 | Posted in Psychology

25 Jan 08 – E. G. Boring – Myth Making in Psychological History?

25 January 2008

On this day in 1933, Edwin G. Boring’s book entitled Physical Dimensions of Consciousness was published. As the title implies, Boring was inclined to conceive of psychological phenomena in biological terms. Given this, it is no wonder that Boring’s history of psychology (entitled A History of Experimental Psychology) was revisionist – putting a revised slant on the history that made biological theorizing central to the foundation of psychology. Three examples stand out of how Boring went about revising history: the selection of 1879 as the official date of psychology’s beginning, the selection of the “father of experimental psychology,” and the characterization of this “father” as a physiological reductionist.


Boring solidified 1879 as the beginning of experimental psychology. His selection of this date has had such a profound impact that it is now rare to sit through a history of psychology course, especially in undergraduate psychology, and not be required to select this date on an examination. In fact, I think this showed up on my licensing exam (I know it was in the study materials).

How did Boring arrive at this date? Honestly, I am not quite sure. Based on other research (e.g., Robert Watson’s historical research), it is clear that even the suggested location of “the very first formal psychological laboratory in the world” was not actually formally “founded” until about 1894. I think we can be sure that Boring had some justification for choosing this year. Unfortunately, it is lost to “history.”

Father of Experimental Psychology?

What we do know is that Boring had a clear preference for whom he associated with this date and this formal psychological laboratory: Wilhelm Wundt. Based on what we learn in the history of psychology, this is the patently obvious selection as the father of experimental psychology. Wundt, unfortunately, was not the obvious choice. Two other psychologists, at minimum, preceded Wundt in their experimental efforts in psychology: William James and Gustav Fechner.

William James, for example, began teaching at Harvard University in 1875, four years before the chosen date of 1879. There, he brought together threads of psychological experimentation, physiological medicine, and forms of explanation derived from the theory of evolution. He was involved in active psychological experimentation, and his seminal work on the topic, The Principles of Psychology, which presents elements of all his early interests, was published in 1890 – four years before the formal founding of Wundt’s laboratory.

Alternatively, Gustav Fechner was conducting research even earlier. With a background in physics (during his career he held the chair of the physics department at Leipzig), Fechner was probably the first “psychologist” to develop experimental methods for the study of psychological phenomena. His work was a conjunction of experimental methods for the study of perception and philosophical debate regarding the fundamental nature of psychological phenomena. Fechner’s views on both, in the area now called “psychophysics,” were summed up in his book Elements of Psychophysics (the English translation of the German title), published in 1860 – a full 19 years prior to the chosen date for the beginning of experimental psychology!

Why Wundt, then?

Given the foregoing, we are certainly left with the question of why Wundt was chosen to hold the exalted title of “Father of experimental psychology.” Well, there are somewhat apparent responses to this question, both founded in Boring’s perspective on psychology and in his writing on the history of psychology.

Boring (born in 1886), as a professor at Harvard University (where he developed and chaired the Department of Psychology in 1934), would certainly have been familiar with James (not to mention he would have known of him by notoriety alone). Yet, by the time that Boring interacted with Jamesian thought, James had moved from experimentation to the investigation of philosophical concerns. James’ perspective on philosophical concerns and the role of religion in psychological life (presented in his text The Varieties of Religious Experience: A Study in Human Nature, published in 1902) had a decidedly dualistic tone. That is, he conceived of humans as having both a corporeal body and a metaphysical/spiritual mind (his framing of which sometimes made it difficult to determine whether he believed humans were determined or had agency – “free will”). Boring, alternatively, as the title of the book that opened this post suggests, was more inclined to view humans as the product of their biology alone.

Similarly, Fechner, while steeped in the physics of his time, also conceived of humans in a different manner than did Boring. In fact, Fechner saw body and mind as two ways of conceiving of the same thing. Fechner’s perspective, then, was a holistic conception, not unlike the Copenhagen interpretation of quantum physics, which conceived of physical objects as having a complementary composition that is both “corporeal” and, for lack of a better word, “inferred.” From both perspectives, it is essential to understand both sides of the whole in order to understand the phenomena under analysis. Boring was quite aware of this aspect of Fechner’s position and chose to see it as prompted by the fact that Fechner suffered through a period of physical and mental illness: rather than considering this perspective as possibly legitimate, Boring (based on his own biases) considered it the ramblings of a troubled man. It didn’t hurt that it clearly did not fit with Boring’s own conceptions.

However, there was one person he knew he could at least make conform with his own perspectives, through a tricky kind of sleight of hand: Wundt. This sleight of hand was the fact that Wundt was portrayed by his student Titchener as a physical reductionist (e.g., someone who saw humans as essentially the product of their physical composition).

Physiological Reductionism?

Unfortunately, Wundt’s own writing does not even conform to this physiological reductionism. Wundt, not unlike James, was a dualist. He saw conceiving of humans as only the products of their physiology as something that would take away all meaning from the human condition. Still, in his experimentation, he focused on the “objective” elements of the dualism (e.g., the physical), which Titchener took with him to the United States (and to Boring). Wundt, however, continued to present his philosophical analyses outside of the laboratory, where he clearly indicated this dualistic mindset.

In the end, then, Boring revised the history of psychology in a way that fit with his own perspectives. He chose Wundt (and a date that “sufficed”) because, via Titchener, Wundt could be painted to fit Boring’s conception of what psychology (and, specifically, experimental psychology) was all about.

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.

January 25, 2008 | Posted in Psychology

24 Jan 08 – Alcoholism – Where’s the Humanity?

24 January 2008

On this day in 1930, Charles R. Schuster was born.  Schuster conducted behavioral pharmacology studies, which changed the view of drug abuse from a disorder of the will to a behavior maintained and altered by basic mechanisms of operant and classical conditioning. 

As a result of the studies conducted by Schuster, it is not uncommon to read statements such as the following (made by Thomas H. Kelly in an article published in Experimental and Clinical Psychopharmacology):

“Drugs of abuse are unconditioned reinforcers whose functional effects are mediated through neuropharmacological mechanisms.”

I’m actually quite amazed that scientists are willing to make such emphatic statements regarding the causes of psychological phenomena, such as drug abuse. I’m amazed – and am likely to continually be amazed – because the majority of us hold Doctorates of Philosophy but do not appear to understand the philosophical implications of such statements. The foregoing statement is based on the assumption that research data have “proven” that drug abuse develops through operant and classical conditioning and is demonstrated in its effects on the brain. Unfortunately, the philosophy of science teaches us that we cannot, in principle, prove anything! In fact, the closest we can come is falsifying something (showing that it is false). Even this is incredibly difficult, as those with a particular perspective on something very often come up with arguments that allow them to maintain their perspective, even in the face of disconfirming results. What is required is a crucial test or, simply, a change in regime – new people entering the field who are willing to see things in different ways than those who came before them.

Still, even Victor Stenger – a very thoughtful, intelligent physicist who has made many efforts to combat “pseudoscience” and, especially, “post-modernism” – concedes this point. However, he says that we know the truth well enough from scientific research that we would bet our lives on its findings. I certainly am willing to accept that we have the requisite knowledge for someone to conduct surgery on me and get it right, for example (even though there are plenty of times, even with the requisite knowledge, that they get it wrong). Or, to use an example that Stenger gives, the law of gravity has been tested by enough experiments to conclude that it is “real” and will help us predict, with confidence, that after jumping off a tall building we will fall to our precipitous death.

Still, even this example is marred by the “reality” of the philosophy of science. This reality is that our data do not tell us the truth. Instead, they serve as an indicator. As David Hume, the noted British empiricist, admitted, we must rule out all other possible explanations. The problem is that we rarely do so. Even Stenger’s wonderful example regarding the proven construct of gravity has counterarguments. One of the counterarguments comes from physics itself – in fact, from Einstein. Einstein conceived not of gravity but of the curvature of space as the reason why we fall to the ground. Gravity is a metaphysical construct (a theory regarding something that we cannot see) that Newton created even in his attempt to describe things in as physical terms as possible. Einstein felt that this was unnecessary and developed an alternative.

Whether the curvature of space (or any other alternative to gravity) is correct or not is not my current concern.  Instead, I use this example to indicate that it is not our data that tell us anything.  It is our interpretations, the theories and philosophies that we apply, that we use to explain and give meaning to the data.  Still, as the example of gravity demonstrates, there are many ways to interpret the same phenomenon. 

Like the perspective on gravity, the operant and classical conditioning (with physiological mediation) account of drug abuse is just one possible interpretation. From my perspective, while the entire behavioral (e.g., conditioning) perspective is designed to be parsimonious (simplifying description and removing superfluous constructs), I don’t even think that it is the simplest explanation of drug abuse. The reason is that it does not explain all components of drug abuse. For example, it does not explain spontaneous remission, people in identical environments not succumbing to drug abuse, the variant effects of drug abuse, etc. In other words, I believe parsimony requires sufficiency of explanation, and the conditioning approach does not give us this.

I think the reason for this is that it is restricted to one (or two) aspect(s) of the issue of drug abuse. That is, it is focused on the environmental (and the biological) aspects of drug abuse. But it does not take into consideration the personal choice that is involved. Even if we are pushed/determined by environmental contingencies and biological mechanisms to drink alcohol, for example, there is always the opportunity to choose not to do so (see Libet’s physiological experiments on free will – even if we do not possess a true choice in doing something, it appears that we do have a choice to not do things… I won’t get into the potential problems of Libet’s experiments negating the original choice right now, however). In other words, what is missing is the exploration of choice. The reason it is missing is that behaviorists deny that choice exists; for them it is a human fabrication (or anthropomorphizing… falsely attributing human characteristics to humans!).

Following this lack of belief in choice, “meaning” is not explored in this research.  That is, what it means for people to engage in drug abuse is not explored.  The reason for this is that meaning requires an individual agent acting; there is no meaning to actions if they are determined.  For example, when a rock rolls down a hill, we don’t say, “Why did you do that Mr. Rock?”  There is no significance to the action: it just happens.  Likewise, when we conceive of humans as having no choice, there is also no meaning (no significance) – and no responsibility – in the action.

Still, choice and meaning would explain some of the findings that clearly speak against the conditioning models and disease models of, for example, alcohol abuse. The most prevalent contemporary theories of alcohol abuse clearly rely on conditioning and disease-model formulations. An outgrowth of such a conceptualization is that people with alcohol problems should abstain completely from alcohol use, because any use of alcohol will re-engage the disease process, which lies dormant until the individual starts to drink again. However, the findings can certainly be interpreted as not conforming to this model. For example, a number of studies indicate that the drinking of chronic “alcoholics” is not characterized by loss of control. Instead, it is frequently goal directed – they do it because they want something out of it (e.g., it is a choice). Furthermore, alcohol-related problems are quite diverse and themselves fluctuate over time, even within the same individual. This means that different contexts have different meanings at different times. There also appears to be a social element, with a tendency toward binge drinking in conservative Protestant sects (especially those from dry regions – not deserts, but places that prohibit alcohol consumption) and in Irish Americans in comparison to those from Mediterranean backgrounds. This would indicate not only a drinking culture (perhaps in Irish Americans) but also a somewhat retaliatory drinking culture (in response to proscriptions – the cultural requirement to abstain). The latter indication is quite remarkable given the mandate for “alcoholics” to abstain. It is even more remarkable given the finding that those in abstinence programs are even more likely than those in “controlled-drinking” programs to “relapse.” That is, abstinence does not work (especially forced abstinence, as the Prohibition movement in the United States demonstrated).
Finally, socialization toward moderation in alcohol use appears to be more productive as a treatment.  Changing the meaning and the choices that people make regarding use appears to be more helpful in preventing alcohol problems than requiring them to abstain.

A human with choice and meaning and…responsibility?  This must be fantastical!  It certainly is easier to give that all away to environmental and biological determination.  Still, it would lack sufficient explanatory power…and it is utterly ruinous for society (see my blog on emotional disorders and expulsion a couple days ago).

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.

January 24, 2008 | Posted in Psychology

23 Jan 08 – Cognitive Psychology – Revolutionary?

23 January 2008

On this day, in 1970, the journal Cognitive Psychology was first published by Academic Press.

With the advent of cognitive psychology, behavioral theory was, at least theoretically, supplanted.  With a growing belief in psychology about the existence of intervening variables or mediator variables, cognitive psychology appeared to permit a true involvement on the part of human beings in determining their behaviors.  That is, unlike behaviorism that conceived of humans as determined by the environment or past reinforcements, cognitive psychology appeared to present a conceptualization of human beings as agents in their own behaviors.   

Was the “cognitive revolution,” however, really a revolution? The long and short of the answer is, “probably not.” Cognitive explanations have come to dominate not only the basic aspects of the discipline, such as learning and language, but also the more applied aspects, such as school psychology and clinical psychology/psychotherapy. This domination of the discipline means that explanations are now based less on observable behaviors and more on inclusion of the mind.

Although mainstream psychologists have been more willing to theorize about non-observables, they have maintained their Newtonian heritage and their models have been instrumental in preserving the Newtonian paradigm for modern use.   

During Newton’s time, the universe was conceived of as operating like a great clock – the famous “clockwork universe” metaphor – flowing continuously and objectively along the line of time. Today, the analogy used by cognitive psychology is the modern digital computer – but the concept has remained fundamentally unchanged. While the new computer analogy places more emphasis on the software, it has not changed the characteristics of the machinery itself.

Time – one parsimonious way of summing this up is to point to their common temporal metaphysic. A Newtonian, linear approach to time, as Slife points out, remains a primary assumption in cognitive psychology. Events are conceived as taking place across time, and cognitive processing of these events is itself subject to temporal constraints. Objective, linear time relations, wherein the past is the determining factor in present events, still organize input from the environment.

Cognitive psychology does, at first blush, appear to represent more rationalistic theorizing than did behaviorism, given the representation of mind in its theorizing. However, when analyzed more closely, the cognitive turn is, in fact, just as empirical in its perspective as behaviorism was. Past input governs all cognitive systems. Even cognitive explanations that seem to emphasize present constructive or reconstructive aspects of cognition are often reducible to conventional linear, Newtonian theorizing, just as behaviorism was before it.

While cognitive psychology has liberalized behavioral method and theory to include the “software” of the mind, it still relies exclusively on mechanistic metaphors and preserves every characteristic of Newton’s temporal framework.  Thus, the cognitive revolution was not, in fact, very revolutionary at all. 

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.

January 23, 2008 | In Psychology

22 Jan 08 – Mental Illness Commitment, Deinstitutionalization – Public Crisis?

22 January 2008

On this day in the history of psychology, a number of events related to the development of mental institutions took place.  For example, in 1821, the Ohio legislature authorized construction of the state’s first mental hospital. In 1825, the Virginia legislature authorized Western State Hospital in Staunton, the nation’s fifth public mental hospital.  In 1836, the cornerstone of the Pennsylvania Hospital for the Insane was laid at the hospital’s site in West Philadelphia.

Today, I want to discuss not the opening of mental hospitals but their closing.  This closing has been referred to as “deinstitutionalization,” and it has both supporters and detractors.  The arguments against deinstitutionalization range from poorly constructed to reasonably fair.  I want to combat some of the more poorly framed ones, briefly discuss the well-framed ones, and offer an alternative to both involuntary hospitalization and deinstitutionalization.

Deinstitutionalization officially began sometime in the 1950s.  While some have suggested that it was the result of the availability of psychiatric medications, the truth is that talk of reducing the number of people hospitalized for “mental illness” had begun many years earlier, driven by the overcrowding of hospitals with such individuals.  It has been estimated that, from 1955 to 1989, the number of hospitalized individuals decreased from 500,000 to about 150,000.  Essentially, the hospitals’ doors were opened and then closed behind the discharged patients… it was “discharge and be damned.” 

Many of these individuals were maintained in somewhat institutionalized, state-financed community arrangements.  Sometimes that phrasing was a mere euphemism for jail, prison, nursing homes, halfway houses, and the like.  Others were simply left homeless or destitute, without the requisite skills to hold jobs and maintain productive relationships.

The fact that so many were left without treatment has led some to say that deinstitutionalization has been bad because of its effects on society.  In and of itself, this is perhaps not a bad argument.  However, the argument is maintained by an appeal to emotion, which is itself a logical fallacy.  Furthermore, it is an argument that the data do not support.

The argument is that untreated mentally ill individuals were released to the streets.  Because they were not forced into treatment through such involuntary means as commitment, and because their psychopathology led them to believe they were fine and did not require treatment, these unmedicated people acted on violent urges and committed murder.  Put simply, the argument states that such mentally ill individuals are more likely to commit murder, leading to a public crisis.

It is interesting, however, to read such statements.  Invariably, the supporters of this notion note that such individuals commit 1,000 murders per year nationally (across the United States).  Certainly, this is too many.  However, consider the fact that, according to Bureau of Justice Statistics data from 1976 to 2005, somewhere between 15,000 and 35,000 murders are committed per year.  This means that such individuals would account for only roughly 3% to 7% of murders, depending on the year.  Furthermore, for the year this figure is drawn from (1994), the Department of Justice report indicated that there was some history of mental illness in 4.3% of those homicides, but it did not indicate the severity of the mental illness (which could be as simple as depression or anxiety).  (By the way, 4.3% in 1994 amounted to about 727 homicides, not 1,000, and, by 1999, this figure was about 645.)

In fact, using the same data, it is conceivable to make a counter-argument: given that the estimated prevalence of diagnosable mental illness is about 22% (at most) of the United States population, even if we attributed all of these homicides to this entire population, 22% of the population would be responsible for 4.3% of homicides – and the other 78% of the population for 95.7% of homicides.  Hence, to address homicides, it makes more sense to look at the rest of the data related to homicides.  Again, according to Bureau of Justice Statistics data from 1976 to 2005, blacks were 7 times more likely than whites to commit homicide (and 6 times more likely to be victims of homicide), and males were 90% of homicide offenders (and 77% of victims).  Half of the offenders were under the age of 25, and rates peaked between the ages of 18 and 24.  Hence, borrowing the logic of those arguing the ridiculous position that we are unsafe due to the mentally ill being on the streets and their potential dangerousness to others, we should take this data and say that only white females over the age of 25 should be allowed to be free!  This is insanity!  (and a downright poor argument)
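For readers who want to verify the back-of-the-envelope arithmetic, here is a quick sketch.  The figures are the rough ones quoted in this post (the 1,000-murders claim, the 15,000–35,000 annual homicide range, the 4.3% and 22% percentages), not precise Department of Justice tabulations:

```python
# Rough check of the homicide arithmetic discussed above.
# All inputs are the approximate figures cited in this post.

annual_homicides_low = 15_000    # low end of annual U.S. homicides, 1976-2005
annual_homicides_high = 35_000   # high end
claimed_by_mentally_ill = 1_000  # figure often cited by critics of deinstitutionalization

# Share of all homicides that the 1,000 figure would represent:
share_low = claimed_by_mentally_ill / annual_homicides_high   # about 2.9%
share_high = claimed_by_mentally_ill / annual_homicides_low   # about 6.7%
print(f"{share_low:.1%} to {share_high:.1%} of annual homicides")

# The counter-argument: 4.3% of 1994 homicides involved some history of
# mental illness, while roughly 22% of the population has a diagnosable
# mental illness.  The other 78% of the population therefore accounts
# for the remaining share of homicides.
share_mental_illness = 0.043
prevalence = 0.22
print(f"{prevalence:.0%} of the population linked to "
      f"{share_mental_illness:.1%} of homicides; "
      f"{1 - prevalence:.0%} linked to {1 - share_mental_illness:.1%}")
```

Whatever one makes of the underlying estimates, the proportions themselves are easy to confirm: even the critics’ own numbers put this population’s share of homicides in the single digits.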

The other part of the argument is that, by deinstitutionalizing, these people are not getting the treatment (specifically, psychiatric medications) they need to prevent them from committing crimes.  This ignores a tremendous amount of literature: the at least somewhat increased anxiety and aggression seen on psychotropic medications, the increased comparative violence of inmates who are on medications (compared even to themselves when not on medication), the crimes committed by people who have been on medication, and the fact that the only good longitudinal study we have (published in 1978 in the American Psychologist) indicates that treatment really is not helpful in preventing crimes – in fact, there is some indication that treatment itself is related to recidivism (committing further crimes).  Given the foregoing, such arguments are basically groundless. 

Another argument, however, has some merit – though tenuously.  This argument is that deinstitutionalization is just as coercive as involuntary hospitalization and commitment to insane asylums.  It follows the libertarian perspective that humans have a right to choose their treatment.  I do not necessarily disagree with this.  I also think that the “discharge and be damned” mindset certainly followed this perspective.  Furthermore, I believe that there is likely a social (or contextual) element involved in the process (e.g., there is a fair amount of “social construction,” or theoretical haggling, so to speak, in the diagnostic process). 

Still, I think this position misses the point that there are frequently problems in these people’s relationships, both with themselves and with others, that do need to be addressed.  That is, saying that these people should simply be given the implicit (not necessarily explicit) option of treatment misses the point that such individuals (and even individuals who are not “mentally ill”) do a great job of denying that they have problems.

As such, I would recommend a two-fold change, one that probably could not happen in this society. 

First, I would recommend that we all stop treating the word “responsibility” like it is a four-letter word.  We, as a community, have a responsibility to help these people.  These people have a responsibility to help themselves.  We also have a responsibility to hold them accountable for the (poor) choices they make.  Yes, there are values involved in this – but that is the basis of our legal system (and, while values are implicit to our medical system, the medical system is about treating illnesses, not about holding people accountable for their value infringements). 

Second, I recommend a relational rehabilitation program, which involves the development of a sort of “therapeutic community.”  This would be a community like any other, but designed for the purpose of addressing problematic relationships, both with others and with the self.  It would be based on principles of promoting authentic relating, accepting responsibility, and taking responsibility.  It would function, to use a phrase Yalom made popular, as a “corrective recapitulation of the primary family group.”  Instead of marginalizing these individuals by placing them in hospitals, jails, prisons, and so on, we would give them the opportunity to become part of something greater than themselves and to create meaning in their lives that they otherwise lack. (The Alldredge Academy is a good analogue for what I am talking about here.)

Perhaps this is simplistic (in my mind, it is actually quite a complex process – which is why I question its feasibility).  Anyway…

Fuel for thought, I guess… head to my website for more fuel for thought regarding psychology.

January 22, 2008 | In Psychology