Psych Your Mind

Sunday, December 4, 2011

Psi Your Mind?

Earlier this year, Daryl Bem, a professor at Cornell University, published a paper on psi phenomena (also known as psychic phenomena). Bem's paper appeared in the premier journal of social-personality psychology, the Journal of Personality and Social Psychology (JPSP). In the paper, Bem presents results from eight experiments in which he finds evidence for precognition (conscious cognitive awareness of future events) and premonition (affective apprehension about future negative events). The results have shocked our field!

The experiments themselves are among the most clever I've read and, at least in my mind, are evidence of real creativity. The experiments take standard psychological paradigms (e.g., giving participants a list of words to memorize and then testing their memory for the words) and run them in reverse order. Thus, in a memory study, participants' memory for the words is tested before they learn the words. In another experiment, participants choose between pairs of neutral images, with the expectation that they will avoid the image that is about to be followed by a subliminal negative stimulus. Once again, though, the prime is presented only after participants have made their choice.

What did Bem find? Well, he found that people were better at remembering words that they were about to learn than words they would not go on to learn (precognition). He also found evidence that people tended to avoid the neutral pictures that would later be followed by a subliminal negative prime (premonition). There you have it: the first experimental studies of psi phenomena!
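To make the logic of these tests concrete, here is one simple way a result like the premonition finding could be quantified: is the rate of avoiding the to-be-primed picture reliably above the 50% chance level? The counts below are invented for illustration; they are not Bem's data, and this is only a sketch of the general approach, not his exact analysis.

```python
# Hypothetical example (invented counts, not Bem's data): testing whether the
# to-be-primed picture is avoided more often than the 50% chance rate.
from scipy.stats import binomtest

avoided = 848   # hypothetical number of trials where the primed picture was avoided
total = 1600    # hypothetical total number of choice trials

result = binomtest(avoided, total, p=0.5, alternative="greater")
print(f"avoidance rate = {avoided / total:.1%}, one-sided p = {result.pvalue:.4f}")
# A 53% rate over 1600 trials gives a one-sided p below .01; small deviations
# from chance become detectable once the trial count is large.
```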

This is really where the story seems to take off. When members of the field of social-personality psychology learned that this set of studies would be published in JPSP, our premier journal, many bristled. I noticed three reactions:

(1) The "Balderdash!" Reaction: There are several reasons to believe that psi is impossible (e.g., prevalence of scam artists claiming psi abilities, no way to explain how psi works). Psychologists are skeptical about plausible data by nature, and as a result, the wild psi findings were seen as an impossible fluke set of findings that would surely not stand up to the rigors of the scientific method. More specifically, researchers believed the findings would not replicate in an independent laboratory. For example, one enterprising group of social psychologists went ahead and attempted to replicate these psi findings and failed (their account is summarized here and here). I'm not aware of peer-reviewed replications, but if they exist feel free to write about them in the comments!

"The ice is gonna break!" source
(2) The "Not in Our Flagship Journal!" Reaction: Other researchers were more receptive to the idea that psi could potentially exist, and that it is something that researchers can and should study, as part of the human psychological experience. They did however, have several reservations about this set of studies ending up in JPSP, the premier journal of social-personality psychology. Usually, JPSP papers must have a psychological explanation for why something occurs. As psychologists, the "why?" is our primary question in research: Why do people respond aggressively following rejection? Why do people need appreciation in relationships? Why does compassion improve well-being? In the paper itself, Bem admits that he has no real explanation for why psi might occur (and who would?). This, some would argue, is reason enough to have the psi paper rejected at JPSP.

(3) The "Our Statistics are Questionable!" Reaction:  The final reaction is probably the one that stirs the most concern among psychologists. This reaction operates under the assumption that psi does not exist and suggests that the publication of psi research represents a fundamental problem with our field. Specifically, researchers in social-personality psychology (and perhaps other fields) engage in hypothesis testing techniques that increase the likelihood of results that won't replicate. Many of the specific practices are summarized here. Some examples include not reporting one's failures to replicate findings, churning out studies until the data confirms one's hypotheses, and creating hypotheses after the results are known (HARKing). The New York Times recently wrote an article suggesting that these and other practices (including faking one's data) have damaged the field of social-personality psychology beyond repair (read the depressing piece here).

After reading the psi paper, I'd actually go with a fourth reaction. I trust Bem as a researcher and believe that he wouldn't publish findings he doesn't expect to replicate. In the psi paper itself, Bem includes an entire section that both encourages researchers to replicate his work and warns that even the most robust findings don't always replicate. I also think it is neat that JPSP decided to publish this paper. Though Bem doesn't know how psi works psychologically, he does report that many traits one would expect to correlate with psi ability (e.g., openness to experience, meditation training) don't actually seem to predict it. That's good to know when trying to avoid all the psi scams out there. Miss Cleo won't fool you anymore!

I actually don't believe that the psi findings damage our field either, because other fields are full of unexplained effects. Take quantum entanglement in physics, for instance (forgive my public high school physics education): physicists readily admit that particles that have interacted and then separated remain correlated in ways that can't be explained by traditional understandings of time and space. Despite this lack of explanation, physics seems to be doing just fine as a science. Like physicists, maybe psychologists should come to terms with the fact that we won't always know why something happens. I'm interested to hear what you think about psi!


Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407-425. PMID: 21280961

11 comments:

  1. Wow. The "psychology works in mysterious ways" response is the worst I've heard yet. Social psych doesn't just have a problem with statistics and publishing bias, apparently there's a critical thinking deficit as well.

  2. Your attempt at flattery won't work on me, anonymous... thanks for reading, though! ;)

  3. ‘Like physicists, maybe psychologists should come to terms with the fact that we won't always know why something happens.’

    True words for any vocation.

  4. I agree, Alan. Sometimes we (researchers included) are overconfident in our ability to explain why things happen.

  5. I'm a skeptic, but just this week I was thinking that I need to re-research the issue and refresh my memory (and then you go and write this post, omgprecognition!) Hehe. In all seriousness, your conclusion that we should accept/recognise that we might not always be able to explain something is a fair one, as long as no one sees that as a reason to "give up" trying of course :).

    Nice post!

  6. I'm a skeptic too, but trying to remain cautiously optimistic!

    Thanks Rebecca!

  7. How did you know I was going to be assigned this article and critiques of it this very week in my methods class?! Freaky...

    Great write-up (as always)!

  8. Reilly!!!!

    How did I know? I think the answer is clear: omgprecognition! (thanks Rebecca for the new word!)

    By the way, HAT is almost finished collecting data (maybe 10 more dyads).

    Finally, when are you going to write a guest post for Psych Your Mind on challenge/threat states or oxytocin or TBD?

  9. It seems that at least some of the objections to methods and statistics could be fairly easily overcome. Reviewers are concerned with the variety of different tests, so announce the project before you start, choose what seems to be the most promising test, and replicate it exactly with 10 or 100 times as many subjects or runs. Then release the results, no matter what they are. Some of the tests don't look that difficult to run, but replicating the original test exactly makes this more challenging.
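    As a rough illustration of the commenter's point about sample size (the effect size of d = 0.2 below is an illustrative assumption, not a value taken from Bem's paper), a much larger replication is needed before a small effect can be detected, or ruled out, with real confidence:

    ```python
    # Rough sketch: normal-approximation sample size for a one-sample test to
    # detect a small effect; d = 0.2 is an illustrative assumption.
    from scipy.stats import norm

    def n_needed(d, alpha=0.05, power=0.80):
        """Approximate N for a two-sided one-sample test at the given power."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return int(round(((z_alpha + z_beta) / d) ** 2))

    for pw in (0.80, 0.95):
        print(f"power {pw:.0%}: roughly {n_needed(0.2, power=pw)} participants")
    # About 196 participants for 80% power and about 325 for 95%, which is why a
    # preregistered replication with many times the original sample is
    # informative whichever way it turns out.
    ```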

  10. I agree that a lot of what could solve the problem with data skepticism has to do with plain transparency in the process. This blog post by Oregon Professor Sanjay Srivastava has some nice suggestions: http://hardsci.wordpress.com/2011/05/10/how-should-journals-handle-replication-studies/

    Thanks for your comment, anonymous II!

  11. There have already been failed attempts at replication for this one.

    http://blogs.nature.com/soapboxscience/2011/12/19/the-rise-of-anomalistic-psychology-–-and-the-fall-of-parapsychology

    And, if you do a web search, you should be able to find that there are serious problems with the methodology used in the Bem study.
