
Cosmic Habituation

Tuesday, May 03, 2011 - 07:20 PM

Magnifying glass (Jazzyblue TR/flickr)

In this short, Jonathan Schooler tells us about a discovery that launched his career and led to a puzzle that has haunted him ever since.

In the late 1980s, when Jonathan Schooler was a graduate student in psychology, he did a little study that became a big deal. Schooler asked a group of people to watch a video of a man robbing a bank. After watching the video, he had half of them jot down a description of the robber. And, wait for it ... turns out that the people who took notes were significantly LESS likely to recognize the robber later.

Not surprisingly, this weird effect -- called "verbal overshadowing" -- got a lot of attention. (In fact, we talked with Malcolm Gladwell about verbal overshadowing in our show about Choice). But just as Schooler's big discovery was making it into newspapers and Psych 101 textbooks…something started happening to his data. Each time he repeated the study, the exact same study, his once attention-grabbing effect got smaller and smaller.

In this podcast, Schooler tells Jad and Robert about his journey to figure out what had happened to him, and why it was happening to other scientists too. After considering all the reasonable explanations (statistical quirks and procedural stumbles), Schooler found himself thinking that maybe, just maybe, the laws of nature are less solid than they seem.


Correction: An earlier version of this short incorrectly stated that Jonathan Schooler saw the effect size of his study fall by 30% on two different occasions. In fact, he saw it fall by that amount the first time he repeated the study and saw a general downward trend thereafter. The audio has been adjusted to reflect this fact.

Correction: An earlier version of this short incorrectly attributed a statement to Jonathan Schooler’s advisor. The statement was actually made by his colleague. The audio has been adjusted to reflect this fact.


Jonah Lehrer and Dr. Jonathan Schooler



Comments [106]


Maybe I'm not looking at the full picture, but wouldn't it be more reasonable to attribute the decline effect in the cited studies to epistemic feedback, shifting cultural attitudes, and observer-expectancy effects rather than to a mysterious collective consciousness and/or quantum woo? Living subjects are moving targets, after all, and experimenters are hardly objective robots. Pop psychology and the infamous memory wars might affect the way participants approach memory tasks; an experimenter's awareness of the decline effect might change the way they handle the studied birds or interpret their behavior in ways that reinforce such an effect; increased distrust in the pharmaceutical industry might contribute to a suppressive effect or lead participants to make more conservative self-reports.

Aug. 29 2015 02:15 PM
stan klein

Two simple comments/questions.

1. When will the decline effect decline?

2. Why is it that I (as well as many others) am able to replicate my studies (and even run them continuously as class demos for 20+ years) without decline? Is cosmic consciousness avoiding me and others?

If the decline effect made any coherent point (and it is an underthought example of what passes for "thought" in psychological "science", of which I am a member), then every time you turn on a car engine a number of small experiments are being re-enacted. So is our cars' functioning falling short (declining) as a consequence of N years of everyday experiments in cause and effect? Or does Cosmic Consciousness discriminate against folk intent on exploring nature and leave other forms of experimentation in action alone?

This is so reprehensibly irresponsible of you folk to air. Unless, of course, your goal is to show that psychology favors training over education (an effect I could agree takes place with disturbing frequency).

Jan. 24 2015 09:59 AM

I must say, I'm pretty disappointed in your reporting.

The phenomenon was observed using statistics, yet you never once actually talk to a real statistician about the effect you observed. You talk to all kinds of psychologists (who might use statistics as a tool, but that doesn't mean they really know anything about statistics), but have you sought out the interpretation of someone who actually understands the tools and the inference we derive from them? Maybe you should try it...

Nov. 22 2013 07:54 PM

Five words: publication bias and selective reporting.

Jun. 05 2013 07:13 AM

I just heard this podcast the other week, and wanted to let you know that you have completely ruined the Radiolab experience for me.

Up till then, I was always awestruck with every word you said. Now, my brain continually goes back to this show every time you bring up the results of some fascinating new experiment, usually in neuroscience, and I think to myself... "Decline effect. What these guys are telling me won't hold up if tested more. It is a strongly biased, first-order, human-science study."

This happened to me just last night as I listened to the Obama effect show. No way do I think that is accurate now! And instead of getting that customary "awe, wow, I want to go back to school and learn everything about neuroscience" feeling that I usually get from Radiolab, I got a "mehhh, don't buy it."

Feeling very down, having lost the magic of your whole show, by listening to ONE of your shows. You shouldn't have done it, guys!!!

Mar. 13 2013 11:13 AM
Jesse Haley from Camden, TN

Could someone from the scientific community contact me? I don't know who to contact and have been attempting to reach anyone who'll listen. I've solved the mystery behind the infamous double-slit experiment! There's a reason, one that makes perfect logical sense, why the particles will pop in and out of existence! There's a portion of all your equations that SHOULD NOT BE THERE, because it is a man-made invention/variable. Once you remove it, the experiment will explain itself logically! Please contact me, and if you'll discuss this with me I'll gladly share credit with whomever for the discovery of this new fact! It's not theory; it can easily be tested time and time again as fact just by doing the double-slit experiment, which has been redone millions of times! So it's already been repeated enough to be proven factual, if you know the information I know! One tiny element needs removing or adjusting from the equation, because it's man-made, and in the world of particles and space it doesn't truly exist! Contact me!

Mar. 03 2013 12:30 PM
T.J.Short from Joliet, IL

I'm wondering if the growth of the Internet and video watching overall could have gradually changed this study. It seems that people wouldn't have seen videos like that of a robbery much before the Internet. But now many of us see home videos and/or security camera videos all the time. Also, we've gradually learned to inspect what we're watching now. Our brains may be adapting to these videos and be able to remember what we're seeing much better. That was my first thought as soon as I heard this.

Oct. 27 2012 12:07 AM
kk from bed still

So, what do you think, Mr. Robert: could 'feiwwwwww' explain the preaching of Jesus?

Oct. 09 2012 06:48 PM
igr from Claremont, CA

great blog, no need to think we can influence nature's laws by studying them...

Jul. 11 2012 04:02 AM

First of all, the comments here are the most intelligent, cerebral, advanced-vocab-using nerd-fest comments I've ever seen collected together, which is awwwweeesooomme.

Second, I don't think the effect of an experiment showing less and less astounding results over time is weird at all. I think the simple explanation is that when an experiment randomly gets interesting, weird results in favor of a new and exciting theory, the scientists want to prove it and go on to do many more experiments. But if the original results were normal, average, expected and boring, those scientists are not going to get another grant to replicate it. That's why there are several examples of interesting results getting less interesting, but none of not-very-interesting ones becoming more interesting.
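This selection story can be made concrete with a toy simulation: many labs measure the same modest effect with noise, only the most striking first result gets "published", and then that same result is replicated honestly. All numbers here (effect size, noise, lab counts) are invented for illustration; this is a sketch of the mechanism, not of any study from the show.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.3   # assumed modest true effect (arbitrary units)
NOISE = 0.5         # per-study sampling noise (std dev of one estimate)

def run_study():
    """One noisy estimate of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE)

# Twenty labs run the study; only the most striking result gets published.
first_wave = [run_study() for _ in range(20)]
published = max(first_wave)

# Honest replications of the now-famous finding.
replications = [run_study() for _ in range(10)]

print(f"published first result: {published:.2f}")
print(f"mean of replications:   {statistics.mean(replications):.2f}")
print(f"true effect:            {TRUE_EFFECT:.2f}")
```

The published number is the maximum of twenty noisy draws, so it almost always overstates the truth, and straight replication "declines" back toward the true value with nothing cosmic involved.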

Of course in RadioLab fashion, anything they don't understand becomes a huge mystery and a key to the universe and the conversation becomes very melodramatic, awe-struck and philosophical, which is very entertaining and thought-provoking, but maybe a little over the top.

Best regards,

May. 08 2012 01:05 AM
Greg from Georgia

I reckon regression to the mean can be applied to our feelings toward people, places and things. New and exciting eventually settles to average.

Apr. 29 2012 07:20 AM
Sloppy Boggins from Toronto

I have noticed that, being a more visual person, if I'm looking for something and picturing it in my head, I have a hard time finding it. I would then later find it in a place I had already looked many times, but only when not looking for it.
I don't believe that it isn't there when I'm looking for it, but that my mind's eye and my real vision overlap, and thus it does not appear to be where it is.
I think this happens in the search for results as well. Most studies that provide something truly interesting are revealing something that was not anticipated.
Just a thought


Mar. 29 2012 03:33 PM
Linda from Florida

I think Tom Campbell explains it best in his book My Big Toe - reality is not objective and deterministic - it is subjective and probabilistic.

Feb. 04 2012 01:40 PM

A lot of people have left what sound to me to be very plausible explanations for this "phenomenon". A good meta analysis of randomly selected topics would show whether this "decline" is to be expected statistically or is a signal in its own right. After all, this podcast really focused on anecdote and was highly biased in looking for similar examples.

Still, I actually wouldn't be surprised if the effect was real. My theory, which is only slightly different from one stated several times in the comments, starts off with the publication of a strong result, because weak results with small N aren't significant and won't be published. Then, while a few others may seek to disprove the results, most (at least in my experience in neuroscience) will seek to confirm the results in order to explore how they happened, or to expound upon the theory in some way. Since they are looking for confirmation, if they get a weak result relative to the original publication, they may figure they did something wrong, and so they go back and try again. They either just don't publish the weak data/result (possibly doing a new experiment with some small methodological changes) or increase their N until the result is stronger. A few might publish results that are unbiased. However, the results published in the first wave following the original publication will, on the whole, be more modest. In the second wave, it will not seem as unexpected to get an even more modest result, leading to more data/results being published that are closer to the true mean than before. This would continue until the mean has been reached.

So ultimately what we'd have would be regression to the mean, but filtered through the process of human implementation of the scientific method that produces an *apparent* decline.

The nice thing about this is, even though it takes a few years, the truth of the matter is ultimately found through the process. One could possibly confirm this via meta-analysis by choosing some random selection of new phenomena, getting in touch with any lab that might follow up on the experiment, and then somehow (and this would be the tricky part) getting them to donate all data (including data that isn't published for whatever reason). If this trend were not happening, then you'd expect the data that gets thrown out to be both high and low relative to the mean. If the trend were happening, you'd see a decline in the difference between the means of the discarded data and the reported data, and you'd see a pure regression to the mean in all of the data provided for the meta-analysis. Any sociologists or philosophers of science want to take this on?

Jan. 30 2012 11:26 AM

With regards to some of the comments about quantum mechanics (specifically the double-slit experiment), and in case this comment hasn't already been made, it's not necessarily about whether you observe a particular particle/wave, but whether you know enough about the *system* to know if it took one path or another. If you had a photon (or electron or other particle) that had a partner, and if determining the path of the partner allowed you to determine the path of the original, then you could force (or not) the original to take a single path by measuring the path of the partner. This makes the issue of observation a bit more general, and makes me wonder about whether there is some coupling effect between systems that is responsible for "collapse". Is it possible that we only notice collapse because we are now joined as a system, and that any other system NOT coupled to ours would somehow see a different result or collapse of the system? Now I'm speculating heavily :)

Jan. 30 2012 10:44 AM
ryan from NYC

I am late to this episode, but here's what I can't stop thinking about... If someone did a series of meta-analyses of studies showing regression to the mean, would they gradually show less regression??? gah!

Dec. 08 2011 10:37 AM
Elizabeth Qian from singapore

Just heard this episode on my way to work this morning! It kinda gives a good reason for my work, as I am in the scientific research field. Imagine my boss's reaction if I tell him this theory! Haha. The fact that we can never replicate anyone's work and get exactly the same results is interestingly baffling.

Now that you guys have explained it with this theory, I would have to agree that there is such a thing as collective consciousness. However, this is beyond human comprehension, and therefore it is not easy to explain with what few English words we have. The cosmic universe is way too vast for our minds. I have another theory as well, to add on to your collective consciousness theory.

In Maya, everything that happens, or is, tends to happen or be in paradoxes. So I believe, in addition to this theory, that the larger picture would be: if everyone in the world ever gets sampled, the results will probably just give you an equal positive and negative reaction.

Nov. 30 2011 07:56 PM
dan oshea

The observer effect is real, and has an impact in the macro as well as the subatomic realm; it works now as well as retroactively. @kuze, I'm no expert, but my understanding is that the bullet is in all possible states, including passing right through the plate, until the observer makes the observation, which collapses the wave function, resulting in the most statistically probable result. At the subatomic level, anomalous results are more prevalent; at the macro scale, due to its greater stability as a system, more predictable results follow. This primacy of consciousness has consequences relevant to topics in the show. It would seem that the more times an event occurs, and is observed, the greater the likelihood it will occur again in the same way for future observations. All of our scientific understanding is the most functional model we currently have; none of it is written in stone. The discoverers of quantum physics were conscious co-creators of those physical facts. Our conscious intent has an incredible impact on the results of our experiments. There is some very interesting speculation on reality being consciousness interpreting data in a digital simulation.

Nov. 08 2011 03:55 PM

This episode disappointed me. Any relationship drawn between this "decline effect" and the collapse of a wavefunction (by observation) implies a profound misunderstanding of basic quantum mechanics.

Anyone else cringe when they were talking about the decline effect with birds and the sample size was like 13? The arrogance of this psychologist, to advocate such a substantial idea without deeper study or thought, makes me extremely skeptical about his research as a whole.

I was initially fascinated with the memory experiment, since I could potentially apply it to the massive amount of material I must memorize in medical school. I expected more from Radiolab. Cosmic habituation sounds like something I would talk about with a friend after a few rips from the gravity bong, not something a serious psychologist would mention to compensate for his failure to account for confounding variables.

Nov. 02 2011 11:36 AM

This has already been seen in the world of quantum mechanics with the double-slit experiment. When no one observes a single photon being fired through the slits, it goes through both slits at the same time and interferes with itself, causing an interference pattern on the wall behind the slits. When you observe the slits, however, the photon 'chooses' one slit to go through and only creates a single point of light on the wall.

Oct. 24 2011 01:54 PM
John Stotler from Tralfamadore

I was reading Vonnegut's "Breakfast of Champions" last night and stumbled onto this, which made me think of the broadcast:

I wrote again on my tabletop, scrawled the symbols for the interrelationship between matter and energy as it was understood in my day:

E = Mc²

It was a flawed equation, as far as I was concerned. There should have been an "A" in there somewhere for Awareness—without which the "E" and the "M" and the "c," which was a mathematical constant, could not exist.

Sep. 12 2011 10:07 AM
GordFurth from BC, Canada

This pattern of apparent diminishing treatment effect is an interesting observation. However, I'm certain that the explanation is much more mundane and wouldn't make for good radio (or attract research money). The problem could be as dull as a fundamental flaw in the way we do statistics. Let's face it; there is nothing sexy about statistics.

Sep. 07 2011 12:06 AM

A very interesting show that deserves a follow-up. Two other comments... strict fundamentalist expressions of "science" have their place, but are hardly a universal template for discernment. I am not advocating pseudo- or junk science, but rather the need to embrace the noetic variety. There are perfectly valid and useful explorations that the noetic sciences offer, and perfectly obvious limitations in rabid allegiance to the scientific method.

Second, the definition of "science" could be an entire show... not to discredit the value we have received, but to prepare for a more aware, observed, and media/data intensive future. How can things like distributed processing be harnessed for greater good? Is science scalable? What value is there in things on the edge or outside science?

There are effects that are real and even intended that science as rigidly defined can do nothing with. The very resistance to believing things could be a certain way is not only a dangerous bias, but is routinely exploited for nefarious ends. Even things as sensible as the precautionary principle would be hamstrung by purists. Context is everything.

Jul. 20 2011 12:19 PM
Ben from Chicago

Sheldrake's morphogenetic fields, the idea that at the sub-atomic level everything living in the universe communicates (including emotions, intentions, and awareness) and that localities can in fact become saturated with subatomic awareness, would explain the cosmic habituation effect happening across many different scientific platforms. It explains how networks of micro-organisms, cellular structures, and colonies of organisms fall into sync, and it explains gradual shifts in everything anthropological. Personally, I believe in the collective unconscious, and think it's irrational to assume that we are all impenetrable microcosms; we're part of something bigger. The idea gets dismissed as "nonsense" because it appears anti-scientific, since it questions the assumption that scientifically established truths are unchanging.

Jul. 19 2011 04:56 PM
Kuze hideo from Sydney, Australia

A very interesting podcast, thank you for giving me an excellent subject to ruminate on. I have two points to make, firstly to a number of commentators who have slammed you for 'pseudoscience'. I often come across this kind of fanaticism in scientists or wannabe scientists, and personally I find it incredibly arrogant and egotistical. A healthy dose of "radical conjecture" is an important sign of an open mind, and open minds are what breakthroughs in 'pure' science eventually come from. Taking such offence to a hippie-style explanation not only suggests an unconscious insecurity about one's own belief in the ways in which reality is documented, but also is tantamount to accusations of heresy, and has much in common with witch-hunting.

The second point I would make is to echo just one commentator, who observed that the effect under investigation is related to consciousness. I doubt very much that the behaviour of a steel plate impacted by a projectile such as a bullet would vary significantly no matter how often it was observed. The steel would not soften; the bullet would not slow. To be sure, manufacturing issues relating to the quality of the steel or precision of the bullet's design may show a slight change over time, but essentially, I question whether macro-scale physics would succumb to a similar effect.

There's just my opinion. All in all, an excellent podcast. Thank you for your time.

Jul. 07 2011 06:59 AM

Came here to post the Millikan results, but Adam C. Synder beat me to it. You guys should really do a followup.

There are many possible mechanisms for the slow reversion to the mean. One possibility is that the researchers who try to replicate the results, and fail, simply don't publish their results, fearing that they have made some blunder. Only people who are within a reasonable distance of the first result feel confident about publishing -- or they may adjust their experimental method until they get a similar result!

If this is true, it may be that the defining factor is the prestige of the initial results. I suggest that investigators should look into whether such results cause slower declines.

Jun. 28 2011 12:49 AM

If a truth slowly becomes a lie, then a lie, slowly becomes the truth. Back and forth.

When you point out someone's habit, you make them self-conscious about the habit, and they change the habit. You killed their flow. Unless they wanted to keep the habit.

When he became conscious of what was happening, and people read about these tests, the public became conscious of this phenomenon. So later, when he re-did the test, what if the people he brought in had heard of it? What if they did write down the crook's image, but still kept the image in mind afterwards, knowing there was a chance of losing what they saw by writing? He may have given them the power to overcome the effect.

Same with telling someone the pill that they are taking is a placebo. You kill the mind trick. They don't know what to believe.

The study with the female birds looking at symmetrical feathers to determine a mate: it could be that the female birds knew the feathers would provide their offspring with better survival techniques. So that's what they were looking for at the time. Also, it could have just been their latest fashion.

Same with humans. Girls are attracted to a certain type of man at the time. Then seasons change, taste changes, and they want something different.

When we start to become conscious of whats happening, we kill the magic.

Jun. 22 2011 03:49 PM
Fero Kollar from Sydney, Australia

Surely there are 'experiments' occurring repeatedly in nature, without the interference of humans, that virtually always return the same results. Of course, now and again there are other results, which can be explained by different test conditions, etc. It seems 'interpretation' and the fallible human mind are the likely cause of the decline effect; otherwise, why wouldn't the decline effect also occur in regimes where humans are not present? Or have I missed the whole point here?

Jun. 21 2011 11:14 PM
Jason Quick from Bellingham, WA

In Biostatistical Analysis, as an undergrad, I learned about how many researchers instill bias into their studies through their own excitement to have significant results. The gradual decline effect may be more about a decline in enthusiasm and therefore a gradual shift toward a more accurate description of the population under consideration.

I don't do research anymore, but I suspect that if our graduate schools were under less pressure to produce profitable or provocative results we might see a gradual increase in the relevance of research.

Jason Quick, M.A.
One-armed Juggler

Jun. 19 2011 03:38 AM
Mark Hansen from Wauwatosa, WI USA

I believe this is simply a cognitive bias in sampling. The sample is no longer a random sample from all the various scientific studies that occur...but only a sample of studies that show this same effect.

Simply said, the researcher noticed a strange result...and then only started noticing other experiments that have a similar strange result...while ignoring the ones that don't.

It was disappointing to have this podcast venture into non-science.

How about a show on how biases like this work and affect our ability to do good science? Look up "List of cognitive biases" on Wikipedia.

Jun. 13 2011 02:37 PM

Come on Radiolab, this is pseudoscience.

What's the more likely explanation for psychology studies losing force over time: the first conclusion wasn't true, or that the universe is conspiring against the psychologists (despite no physical evidence in 300 years of hard science) to ruin their results?

The psychologists in question probably used the wrong statistical method, or designed the experiment imperfectly.

Please don't continue down this path, Radiolab. Real science needs you.

Jun. 12 2011 06:59 PM
Larry Curcio from Pittsburgh PA

So you attempt an intricate and complicated experiment the first time. You can't believe the number of little mistakes you make. You probably get a broad array of results, but you always find something you did wrong, and so you try again. At some point you get better and you get good results that seem solid. You publish, but you're still new to this experiment. You have still made mistakes here and there. They aren't as obvious as the first ones.

You try repeating the experiment. Every time you do so, you get a little better at controlling things and doing it right. You get better slowly, and your results change slowly.

Some people do this and actually get better results. They're happy. The ones who get worse results are disappointed and puzzled.

Meanwhile, other people are trying to replicate the original results in slightly different experiments. At the end of all this, the effect better be pretty strong or no one will know what's happening.

This goes on all the time. Leave the universe out of it.

Jun. 12 2011 01:04 PM
john from Santa Cruz

This was an interesting show on an interesting phenomenon in scientific publication. It was great to think about various factors that could give rise to diminishing effects. One big problem with the story is that you made it seem like 'most' research projects were showing this effect, rather than just a very few select ones among the millions of research studies. This might be an example of the effect itself. If we look more closely at this effect (such as looking at what percent of studies show it), we'll probably see that it doesn't happen that much.
Also, to start talking about the laws of nature changing because we observe them: that earns my most heartfelt OH BROTHER. I'm sad to see you guys doing the new-age quantum mechanics thing. This is an interesting subject, so I was sad to see you 'punt' with the silly quantum thing.

Jun. 08 2011 02:48 AM
runcyclexcski from Virginia

As a scientist myself, I find it easier to believe that the original result was an artifact than to believe that the laws of reality change when we make observations. I think it's scientifically more useful to accept a faulty experiment/interpretation than to try to come up with a metaphysical explanation of why the experiment was not faulty but the laws of nature changed instead. Especially in experimental psychology, which is arguably the field of science with the highest number of uncertain variables.

Jun. 06 2011 07:09 PM
Paul Harder

Benjamin E. makes a similar point, but at least the cases mentioned in the show all involve the operation of 'minds' of one sort or another - whether bird minds or human minds. These sorts of studies observe at the level of behavior, but not at the level of the mechanisms that produce it. My impulse on hearing about Schooler's first experiment is to ask what is happening in the brain that gives rise to the "verbal overshadowing" effect. "Cosmic habituation" sounds a lot less mysterious (and a lot less far-reaching) when it's taken to describe only the operation of minds, which are notorious for forming habits that lead to radically diverse kinds of behavior.

Jun. 06 2011 09:17 AM

Well, I came all this way (google search and everything)--

So, even though previous commenters have covered it: this thing is all about the HUMAN side of scientists, i.e. confirmation bias, publication bias, etc.

I don't believe it has anything to do with larger patterns, in fact I have a hard time believing it's even sampling error. I think the guy just believed his own BS.

Schooler deserves credit for trying to reproduce (debunk?) his deservedly incredible results. I think that the publicity of his false/dishonest results got away from him.

Sadly, he then came up with some BS theory to cover his tracks amongst the cocktail party crowd. Even more sadly, he got people to believe his bull AGAIN!!!

I would encourage him to give up past hypotheses as quickly as data allows, and I would encourage radiolab to push the skeptical side of science as the most consistently mind-blowing.

Er, I don't mean that. The science that is based on the actual natural world is the most mind-blowing.

Jun. 05 2011 09:50 PM

As some others have mentioned, aspects of this remind me of Sheldrake's morphic resonance: the idea that once something has been learned, discovered or understood, it is easier for another, independent individual to learn or understand it afterwards. It's an idea from the 'fringe', but still interesting to contemplate.

Jun. 04 2011 07:28 AM

A mention of the infamous double slit experiment is conspicuously absent here.

Jun. 03 2011 06:42 PM
Steve from Boston

This is unfortunate.

There are many reasons for the so-called decline effect, most notably regression to the mean, differing biases of experimenters under replication, changes in effect size when replicating with additional factors that test boundary conditions of the effect, and publication bias (e.g., toward experiments with greater effect sizes).

Any combination of these (and others not cited above) can fully account for this so-called decline effect.

No need to talk about cosmic habituation and other such nonsense.

Jun. 02 2011 01:03 PM
Haikook from Colorado

I must agree with nerdpocalypse and Ted from Columbus, and I am mystified why the experts cited in this piece did not bring up entropy. I see that the behavioral sciences have not changed much since my college days (don't ask). Personally, I don't see why any attempt to pigeonhole what are essentially software-based aspects of life on earth would be consistent over time. To the experts: get thee to an fMRI machine. Yay life!

May. 30 2011 12:02 PM
Benjamin Evans from Nicaragua

That was quite an interesting and mind-boggling podcast. I do have one question that I feel may shed light on the nature of the decline... Could these observations eventually make a rebound, or fluctuate? It seems these effects are found mostly with living organisms (as extrapolated from the mention of this NOT being noticed in physics). Has anyone studied those changes long enough to observe a regression toward the original data? For instance, has anyone considered that maybe theories that fell into dismissal may have come back? And is it possible that this happens because we only take notice of results when they are on the upswing, and that some theories were abandoned because they were simply

May. 28 2011 11:35 AM
Eileen from Austin

Fascinating story, one of so many. I kept waiting for the punch line I was sure was coming, but it never did. You started the story by telling how the students who studied the bank robber by writing a description after seeing him were 30% poorer in their identification. When Schooler (sp?) continued to study this effect, it declined by 30%. Doesn't the students' experience look like a corollary to what happened to Schooler?

Secondly, you said that we don't see this effect in physics, but you've had Brian Greene on your show several times, and he has described a great experiment where light beams behave differently when they are being observed.

May. 27 2011 10:29 PM
JerL from

As soon as you're talking about published studies, the way studies are selected for publication can explain this easily. At first, novel findings on a new topic are preferentially published over non-novel findings; but once the novel findings become established, it becomes more novel to publish studies contradicting them, and on and on.

Out of the thousands and millions of effects that people studied, we would expect that we would observe something that looked like a decline effect for some of them by chance. Show me that the examples of the decline effect aren't either due to the way science is published or simply from cherry picked examples and I might start getting interested in this but right now this is rampant speculation.

Also, the nature of some research might cause something that looks like the decline effect. In epidemiology, a lot of topics start out being studied with poor data, where confounders may not be properly controlled for. As the quality of the data increases and confounding decreases, the effect can decrease slowly (if the initial confounding was causing an overestimation of the effect) until someone does an RCT and discovers the effect was small to begin with.
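To make the selection mechanism concrete, here is a toy simulation (every number is invented: a fixed true effect of 0.2, study noise of 0.3, and a "newsworthy" cutoff of 0.5 for a first publication):

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2  # the real, unchanging effect size (made up)
NOISE = 0.3        # sampling noise of each small study (made up)

def run_study():
    """One noisy estimate of the true effect."""
    return random.gauss(TRUE_EFFECT, NOISE)

# A first result only gets published (and later replicated) if it is striking.
published_firsts, replications = [], []
for _ in range(100_000):
    first = run_study()
    if first > 0.5:                        # the novelty filter
        published_firsts.append(first)
        replications.append(run_study())   # replications face no filter

print(round(statistics.mean(published_firsts), 2))  # inflated headline estimate
print(round(statistics.mean(replications), 2))      # drifts back toward 0.2
```

Nothing about the effect changes between the first study and the replications; the apparent decline is manufactured entirely by the filter on which first results get published.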

Point is, I think we have a long way to go before 'collective consciousness' becomes the most likely explanation. A long, long way. Long way.

May. 27 2011 01:08 PM
Curt from Minneapolis, MN, USA

As Carl Sagan paraphrased Hume and LaPlace, "Extraordinary claims require extraordinary evidence." Schooler's leap to 'collective consciousness' from a proposed decline effect requires way more analysis and stronger evidence than he has provided. This might have been a good piece to include in a full program, with other views in other segments, but as a stand-alone piece, it's somewhat embarrassingly inadequate.

Some folks (Jad?) like the mystery for its own sake and are unwilling to give it up; some of us like the mystery for the exercise of solving it.

May. 26 2011 04:13 PM
Steve from Chicago

Craig from Vancouver...

Not "omnimedia" but "on the media". See:

May. 26 2011 12:26 AM
Amy from Dayton, OH

This podcast really missed an opportunity to explain probability theory and how it applies to studies that get published, as well as the importance of replication in the scientific process. I really admire RadioLab and the work the show does to make difficult ideas easier to understand and to change the way we think about the world around us. However, I think this podcast short did the show and the listeners a disservice by ending the show by explaining away Schooler's conflicting results with what amounts to "and then magic happened" instead of addressing some of the other (more plausible) statistical explanations. I was also disturbed that the unethical advice given by Schooler's advisor was so casually added into the story.

May. 25 2011 02:03 PM

Simple confirmation bias. Any controlled study is vulnerable.

That's one reason science is moving towards data intensive research. See "The Fourth Paradigm"

Compare a study that tracks a few hundred patients observed by the people running the experiment with one that mines vast amounts of data collected with no particular survey in mind.

Which is more likely to yield surprising insights?

May. 23 2011 12:49 PM

I need to second Bazmati's reminder of Rupert Sheldrake's "morphic resonance" concept. In what I recall assuming were very well-performed studies, he observed that different rats would run an identical maze in progressively shorter times, as well as other unintuitive phenomena very similar to the stories in this podcast. I'm surprised that Sheldrake's name didn't come up during the production of this piece.

May. 22 2011 05:51 PM

When I heard this, I instantly related it to the double-slit experiment of physics. But after some thought, I realized that would be quite a Pandora's box. By definition, the very act of observing renders the phenomenon moot; that is to say, there would be nothing left to view. So to say the reduction existed at all is, by itself, contradictory to physics. So I am wary of that association.

May. 20 2011 12:52 PM
Hank from NYC

Might there not be an alternate theory of Progression to the Mean? Take a drug trial. Researchers experiment with a new drug and find it is 90% more effective than the old one. The success of attempts to replicate the experiment gradually declines: 80%, 70%, finally settling around 50%. Still a good result, just not as dramatic as initially thought.

Suppose the initial experiment disclosed only a 10% improvement. Would subsequent experiments gradually have crept up to 50%? A progression to the mean. We would probably never know because 10% might have been considered a failure and there would have been no follow-up experiments. Negative results tend to discourage further experiments. We tend to only follow our apparent successes.
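This asymmetry is easy to simulate (numbers invented: a true improvement of 50%, noisy trials, and a rule that only first results of 70% or better get follow-up trials):

```python
import random
import statistics

random.seed(3)

TRUE_IMPROVEMENT = 50.0  # the drug's real improvement, in percent (invented)
NOISE = 15.0             # trial-to-trial measurement noise (invented)

def trial():
    """One noisy measurement of the drug's improvement."""
    return random.gauss(TRUE_IMPROVEMENT, NOISE)

followed_up = declined = abandoned = 0
for _ in range(50_000):
    first = trial()
    if first >= 70.0:      # dramatic first result: fund follow-up trials
        followed_up += 1
        if statistics.mean([trial() for _ in range(3)]) < first:
            declined += 1  # follow-ups settle back toward 50
    else:
        abandoned += 1     # modest first result: never studied again

# Almost every drug we bother to re-test shows a "decline"; the would-be
# "incline" cases were abandoned before anyone could observe them.
print(followed_up, declined, abandoned)
```

The declines are visible and the inclines never get measured, so the literature fills up with one-way trajectories.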

May. 18 2011 09:42 AM

No, I've seen this one and you have it all wrong.
It is the xerox effect. (entropy in action)
If you keep xeroxing a paper you regress toward a non-descript grey blur. There are several reasons for this in reproducing experiments that do not require magic.
1) The first time we do experiments, it is the most highly trained people in the lab who do them, with incredibly close attention paid. The next time, it is lower-ranking people with less attention paid to doing a decent job. Finally, we contracted it out to some zookeepers (literally).
2) Back in medical school, the depressed people were so bad off they could not live outside of a hospital due to suicidality and psychosis. Now, someone has a bad day, they get put on antidepressants. If you start far from the mean, you will regress toward it more than if you start close to it.
3) Bracket creep. We used to include only very severely affected people in the definition of autism. Now cerebral palsy, tuberous sclerosis, and traumatic brain injury are included in the definition. Heteroskedasticity: look it up; it will mean more to you if you look it up yourself.

May. 18 2011 08:34 AM

This exact thing just happened to me!

I was at the laundromat, listening to the podcast, and I became fascinated by the idea of a universe shifting according to our exploration of it. The universe itself was trying to pull answers away, keep us in the dark.

As I hung my pants, I thought about all of the world's religions that claim that it is impossible for us, as mere mortals, to understand god's plan.

As I folded my underwear, I thought about humanity's attempts to answer questions, and I reflected upon the idea that answers inevitably lead to more questions.

As I tucked my socks I remembered my quantum physics lessons, and the assertion that observation of a phenomenon changes the phenomenon. There is a quantifiable amount of uncertainty inherent to the cosmos.

Then I drove home, fired up and excited about this new "Law of Cosmic Habituation." I was certain that it existed; my belief in the effect hit an early peak.

Then I stewed about it for a while, and I began to consider extenuating circumstances. I got online, and found these comments. While reading them, I began to believe in the decline effect a little less.

Then I read the Nature article, and my suspicions began to be confirmed. This effect may not exist, or it may be very minor, attributed to humanity or life, but not the laws of chemistry or physics. I believed even less.

Then I read the New Yorker piece, and I thought about the fallibilities in the act of publishing, and the inherent weaknesses in the scientific method that I was already aware of, and the effect of the decline effect declined even further.

In other words, my belief in the decline effect began high and declined over time. As it turns out, learning more, considering more, and seeking further answers likely led to a greater understanding of the truth, even though they also led to a "declining effect."

By the time I finished eating my grilled cheese sandwich, I had concluded that there may be a significant change in the way the universe functions due to observation, but there are also significant effects due to many other factors including human error, social liabilities, pressures to publish, pressures to not publish, plus the sheer excitement that comes with learning something new and fascinating. Only by eliminating these variables can we truly determine if the universe is messing with us.

Of course, any further experimentation may just be the universe messing with us more.

I quit...

May. 17 2011 02:20 PM
Amanda from Los Angeles

Can someone please explain to me whether the RL shorts are ALWAYS taken from a longer episode or are some of them exclusive content? I can't find the info anywhere, and I get so confused hearing things I've already heard. I don't want to miss anything, but the recycling drives me a little batty.

May. 16 2011 04:11 PM

Some legitimate scientists have offered explanations for this "effect." For some reason Radiolab chose to ignore these.

May. 15 2011 04:52 PM

It would be interesting to study the "regression to the mean" effect by doing the same test with different "observers" each time and see if the results stay the same or if the regression continues even though the "observers" aren't getting "better" or "worse."

May. 14 2011 05:42 PM
Lindsey from IL

maybe he's just slightly less interested in the experiment each time he does it...

May. 13 2011 02:15 PM

Very interesting comments. I particularly agree with Paul from MI, Benjo and Alexandre Van de Sande, whose comment suggests the obvious: scientists do not spend time replicating experiments without being convinced that they might be able to disprove past results. That is the routine of the true empiricist -- disbelieve and disprove -- as opposed to the theoretician, who proceeds using analytical methods. Of course, the majority of scientists are empiricists. As the empiricist's career progresses, it becomes increasingly difficult to produce significantly new knowledge. You have to be continually looking for experimental results that can be debunked. For most, it is not until much later (if ever) that you get an idea for a test that no one has really done (or published) before. So scientists build their careers by debunking previous results (their own as well as those of others), always looking to refine methodology so as to explain away (refine) effects and isolate cause-effect relationships.
Another view would be that a lot depends on what you think your sponsor wants or needs to hear. So, at first, a researcher might demonstrate a simplistic version of an effect to get funded, so as to be able to study the more complex relationships that underlie or mitigate this type of outcome. Consider: what percent of those involved in the Manhattan Project were professionally motivated by being involved in a project that produced the most destructive bomb ever made? They were nuclear physicists working on advancing the state of knowledge in their field. Would they have considered themselves failures if the A-bomb had never materialized? I don't think so. After all, we did get knowledge like the effect of fluoride on dental health out of those efforts.

May. 12 2011 03:55 PM
mark from bay village, ohio

this is the most mindblowing yet understated show radiolab has ever done! also, to all you people out there listening on the podcast - it feels so much better to listen to jad's intro about donations when you're a recurring donator yourself!

May. 12 2011 12:28 PM
C McLoughlin

I was reminded of this:

There is a theory which states that if ever anybody discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened.
Douglas Adams

May. 12 2011 11:49 AM

so how long will it take to see the decline effect changing the data for the decline effect?

or is it sort of like the "universally, all truth is relative" thing?

May. 12 2011 11:19 AM
me from Athens

It was a wonderful show...

May. 11 2011 09:54 AM

Before anyone else is tempted to comment that the described decline effects are due to obvious statistical reasons, I strongly recommend the link to the New Yorker article referenced in the show notes above.

This is not as simple as it sounds.

May. 11 2011 06:32 AM
Alexandre Van de Sande from rio de janeiro

Great show, but I find this claim to be incredibly sloppy. Before blaming a new quantum law of the universe, we could assume an easier explanation: that Dr. Schooler was a sloppy scientist in the '80s and got more rigorous with his methods over time.

Consider this: experiments that do not show unexpected results are less likely to ever be replicated (after all, why replicate an experiment if it didn't show anything new?). When replicating an experiment, scientists might be more rigorous than the first time (maybe the reason the first experiment gave such results is that it was run by an inexperienced student not expecting to find anything). If the effect disappears, it's considered a fluke and probably not tried again. If the effect persists, it goes on to advance our knowledge. But if the effect diminishes ever so slightly, we get baffled scientists, as in this case, preferring to blame the universe rather than an error of their youth...

May. 10 2011 10:17 PM

At first I agreed with those suggesting that Jonah, Jad, and Robert cherry-picked instances where re-tests exhibited steady declines. But if we assume that each experiment was done properly (as they noted, Schooler seems to be regarded as fundamentally sound in his methods), and if both the participants and research assistants varied randomly across re-tests (ruling out unconscious bias or habituation on either of their parts), then it would seem surprising to see this even once, let alone several times in different domains.

Secondly, test-retest is already a kind of meta-analysis, so you're essentially saying that we need a meta-analysis of these meta-analyses, at which point the true mean and the true random distribution around it will reveal itself. But why not "zoom out" a level further and do a meta-analysis of the meta-analyses of the meta-analyses? I'm not trying to be cute; only pointing out what I perceive to be a flaw in the logic. At which point would we expect to no longer see the decline effect, or be satisfied that we happened upon it by chance and that it's not indicative of something real?

May. 10 2011 06:04 PM
Adam C. Snyder from The City University of New York

As a previous poster commented, "...there is fraud and bias, but most scientists try very hard to be objective and true to reality." I would just like to point out that biases are not necessarily intentional, and can run contrary to a researcher's best intentions. This is why the "double-blind" procedure is used. For example, I reviewed the methods in Prof. Schooler's verbal overshadowing articles, and I found that it was not explicitly stated that a double-blind procedure was used. If such a procedure was not used, then subtle non-verbal cues from the well-intentioned experimenter related to their knowledge of the participant's group assignment could influence that participant's performance. Additionally, there are emergent biases that arise from pressure from funding sources, publication editors and reviewers, department administrators, and the like that may not be wholly attributable to a single individual. This is why it is very important to be aware of the robustness of unintentional bias.

May. 10 2011 03:03 PM
Larry Smith from Silver Spring, MD

Very interesting. Yes there is fraud and bias, but most scientists try very hard to be objective and true to reality. I think there is a misunderstanding of what "regression to the mean" refers to. It was originally described in the 19th century in the context that very tall parents have offspring that tend to be less tall, and the height in successive generations tends back towards the mean. When you are looking for anything exceptional in a population, I think you may very well find something that is dynamic like this. In that case, successive generations would naturally regress to a previously unobserved "mean". Although phenomena determined by genes can change only gradually from one generation to the next, individuals can exhibit behavior changes resulting from experience and social influences that may be too subtle for anyone to perceive or observe.
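The original 19th-century height observation can be reproduced in a few lines (all numbers here are invented: a population mean of 170 cm, SD of 7 cm, and an assumed parent-child height correlation of 0.5):

```python
import random
import statistics

random.seed(0)

POP_MEAN, POP_SD = 170.0, 7.0  # hypothetical height distribution (cm)
CORR = 0.5                     # assumed parent-offspring correlation

def child_height(parent):
    # A child's expected height is pulled partway back toward the mean.
    expected = POP_MEAN + CORR * (parent - POP_MEAN)
    return random.gauss(expected, POP_SD * (1 - CORR**2) ** 0.5)

# Select exceptionally tall parents and measure their children.
parents = [random.gauss(POP_MEAN, POP_SD) for _ in range(100_000)]
tall = [h for h in parents if h > POP_MEAN + POP_SD]
children = [child_height(p) for p in tall]

print(round(statistics.mean(tall), 1))      # well above 170
print(round(statistics.mean(children), 1))  # closer to 170
```

Selecting any exceptional group guarantees that the next measurement of that group sits closer to the mean, with no change in the underlying population.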

May. 10 2011 10:37 AM
Adam C. Snyder from The City University of New York

This show was interesting, but I am surprised by two things. Firstly, I was surprised to hear it asserted that an example of the decline effect has not been reported in physics, when it is well known that the results of Millikan's oil drop experiment for the charge of the electron were subject to a gradual "settling" of this sort upon replication (although I suppose this could be classified as an "incline" effect). This is even more surprising since Jonah Lehrer, the presumptive instigator of this episode, posted a blog entry at the New Yorker about this very topic four months ago. Secondly, I was even more astonished that Radiolab did not even acknowledge that bias in scientific reporting could be a potential source of the effect. A psychologist such as Professor Schooler should be particularly aware of the robustness of (un)intentional biases in scientific reporting. Moreover, this is the most parsimonious explanation for the oil drop results, since it is unlikely that the charge of the electron has actually been changing over the last hundred years. I would say it is also more parsimonious for the other examples (e.g., sexual preference for symmetry, etc.) than some sort of "cosmic habituation." I hope Radiolab will follow up on this by acknowledging that bias could be a factor.
Adam C. Snyder
Graduate Student in Cognitive Neuroscience
The City University of New York

May. 10 2011 10:25 AM

Bazmati is absolutely right: what Terry Oas is describing is not at all the same thing. Regression to the mean is such a simple thing, and yet so ill understood by many. Btw, Schooler's article in Nature is a more balanced account of what might be going on.

May. 10 2011 05:16 AM

@Terry Oas, you've shown how the running average regresses to the mean. However I believe that the effect described in the programme was a gradual decline effect per experiment, that is without taking the previous results into account.

May. 09 2011 09:33 PM
Terry Oas

This comments system doesn't allow graphics input, so I can't show a simple demonstration of how "regression to the mean" works. However, if you go to:

you will see a simple demonstration of how "regression to the mean" works, and how gradual convergence to a correct estimate is the usual result of a series of experiments. This is in contrast to the implication of the podcast that it is unexpected. The demonstration on the above web page is for a series of die-rolling experiments. A six-sided die is rolled 10 times and the average score is taken as a "result." This experiment is repeated 50 times and the running mean of the 50 results is computed. As expected, the average converges to the expected average of the six possible outcomes (3.5), but surprisingly slowly. Also, the result of the first experiment can deviate as much as 30% from the expected result (as described in the podcast), and when it does, it can take tens of experiments to converge on the correct answer.

Thus, what seems like a mysterious effect is merely a hallmark of the so-called "statistics of small numbers," which says that unless you're very careful and collect a lot of data, you can easily be fooled into believing you see an effect (like the efficacy of a drug) when there really isn't one.
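For anyone who doesn't want to chase the link, the same die-rolling demonstration fits in a few lines of standard-library Python:

```python
import random
import statistics

random.seed(7)

def one_result(rolls=10):
    """Average of ten rolls of a fair six-sided die: one 'experiment'."""
    return statistics.mean(random.randint(1, 6) for _ in range(rolls))

# Fifty experiments, plus the running mean after each one.
results = [one_result() for _ in range(50)]
running_mean = [statistics.mean(results[:i + 1]) for i in range(50)]

# A single ten-roll experiment can land far from the true mean of 3.5,
# but the running mean slowly converges toward it.
print(round(results[0], 2))
print(round(running_mean[-1], 2))
```

Plot `running_mean` and you get exactly the slow, wobbling convergence described above.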

May. 09 2011 09:17 PM
Pete Costello from Matawan, NJ

One way to test the universal mind or similar explanations:

Conduct a similar experiment, and BEFORE you analyze the results and form a conclusion, repeat the experiment one or more times.

Then check the results of each of the experiments, either in reverse order or in a random, blind ordering.

Would the results be consistent and independent of order, or would the first one analyzed always show the largest difference no matter which ordering is chosen?

May. 09 2011 05:10 PM
Graham Frye from Boise, Idaho, USA

As many have already astutely observed, your treatment of this alleged effect appears to be biased. In researching the 'decline effect' each meta-analysis represents a single data point. Thus, you latched onto a number of data points that exhibited the effect you were looking for a priori. What about all of the thousands of other meta-analyses or potential meta-analyses that do NOT exhibit this effect? To address this research question properly one needs to analyze a large random sample of meta-analyses. It does not appear that this was done.

Nonetheless, you guys are fantastic and Radiolab is one of a kind....

May. 09 2011 02:19 PM
Michael Lichten from Takoma Park, Maryland

I, too, am very disappointed in you, lads. The subject of this show is actually a very real concern, and I kept waiting for a serious treatment of spurious correlation that would help your listeners to more critically evaluate the next miracle cure (or miracle association) that they encounter in the press. You had a chance to do some real education here, and instead you went all woo-woo on us.

May. 09 2011 09:00 AM

I usually love the stuff on Radiolab. But, oops… this episode is plain silliness. Just like the measured effect in the first study may easily have been an outlier due to chance, chances are that if you do enough studies you will find replications where the measured effect declines. If thousands and thousands of researchers are running experiments, chances are that you can find many examples of studies with results that disappear over time.
Just do the following thought experiment. We ask 10,000 people to flip a fair coin 20 times for 5 days in a row. Almost surely, some people will observe 18 tails on Monday, 16 on Tuesday, 15 on Wednesday, and so on. There really is nothing to it except for chance. Really.
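That thought experiment takes only a few lines to check (standard-library Python; 10,000 people, 20 flips a day, five days, all coins fair):

```python
import random

random.seed(42)

PEOPLE, FLIPS, DAYS = 10_000, 20, 5

def tails_today():
    """Number of tails in 20 flips of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(FLIPS))

# Each person records a daily tail count for five days.
histories = [[tails_today() for _ in range(DAYS)] for _ in range(PEOPLE)]

# How many people show a strict day-over-day decline by pure chance?
decliners = [h for h in histories if all(a > b for a, b in zip(h, h[1:]))]
print(len(decliners))  # spurious "decline effects" from fair coins
```

Every one of those trajectories looks like a decline effect, and every one of them is nothing but chance.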

May. 09 2011 06:14 AM
Michael Kilfoy from St. Louis

Among the various things I do, I am a professional photographer. And every so often, I remember that as I am taking a photo of a person, I am capturing the image of someone in the presence of an observer. It is good to consider how this might affect a subject and how it might influence an outcome, whether in art or science.

I'm not sure if all studies cited have a cultural aspect to them, but as I write this while my wife watches a 'reality' television show, I am keenly aware that we are under observation more often than ever before. We are being observed in ways we couldn't have previously imagined. Security cameras. Youtube. iPhones. Facebook. Data is being collected at an increasing rate.

This changes us.

When Sergei Eisenstein made 'The Battleship Potemkin', the audience didn't understand what a close-up was. They demanded to be shown people in their entirety. Though we see that as unsophisticated, it just shows how far we've come.

Maybe we are more comfortable with an observer. In the case of the bank robbers, it may mean that we are gradually becoming more driven by images and less by words. In whatever case, something is different. The act of observation itself may change us.

As always, great show. You always leave me thinking.

May. 08 2011 10:34 PM
Dave from SC

The decline effect may draw attention, but what about an incline effect? If there was evidence to support both a decline effect and an incline effect would that not lend itself to the regression to the mean theory: all of these observations are going to have an average. Although very interesting, I suspect we might be too focused on looking at our study of the decline effect.

May. 08 2011 03:35 PM
Craig from Vancouver

I couldn't quite catch which podcast you guys were pimping at the end. Was it omnimedia or something? Could anyone point me in the right direction?

May. 08 2011 02:48 PM
Alan from California

A possible explanation can be found in John Ioannidis' article "Why most discovered true associations are inflated" in the journal Epidemiology volume 19 (Sept. 2008) pages 640-648.
DOI: 10.1097/EDE.0b013e31818131e7

May. 06 2011 08:36 PM
Ted Pavlic from Columbus, OH

Not every repeated study has this property, right? It is a statistical certainty that /some/ will have it. How can you be sure you are not just hand-picking those that happened to go this way (with the gradual decline) by chance?

If there is a systematic gradual decline, wouldn't that follow from the second law of thermodynamics? As the entropy of the universe increases, the separation between control and treatment will decrease. In this alternate narrative, the observation is not affecting the outcome. It also helps to resolve the barber-shop-like-paradox of what the result of repeated studies of diminishing effects would be.

May. 06 2011 01:23 PM

I don't want to be a crank, but I have to agree with Anonymous that "radical conjecture" = new age pseudoscience. Nature is interesting enough without dreaming up supernatural explanations. And when statistics and the sociology of science are involved, this decline effect needs no other explanation.

May. 06 2011 09:52 AM
Todd Ferreira from MA

Double-slit experiment is all i could think of. Please expand this short into a full 3-parter. I'd really appreciate the research you do as I always do!

May. 06 2011 09:15 AM

"radical conjecture" = new age pseudoscience

May. 06 2011 12:38 AM

A simpler explanation is publication bias toward positive findings. After a 'finding' is published, more and more researchers run more and more studies, and the regression toward the mean shows up. Publications can then no longer ignore the original biases of the published 'findings'.

May. 05 2011 07:04 PM
Sean W. from Lancaster, PA

Oh. Man. This is definitely worth an entire show. And quickly! Now that this show is out there, the evidence is disappearing!

May. 05 2011 04:35 PM
Michael Forbush from Hollister, CA

Before you start looking for a supernatural explanation, you should consider the subconscious. Obviously Jonathan, or anyone working in his lab, would prefer a positive effect to a negative one. Body language and cues could lead the subjects in his case. Similarly, ratings of animals' attraction to one another could be exaggerated observations biased toward the positive outcome. As time goes on, the researchers no longer have a new finding to look forward to, and gradually relax into non-interested-observer mode.

May. 05 2011 01:53 PM

I can't believe you guys didn't contact/interview the group over at Princeton who've been running the "Global Consciousness Project" for the past decade. Listening to your podcast, I thought of "collective consciousness" about a minute before your guest suggested it in the interview. You could do a whole show on this groundbreaking work:

May. 05 2011 01:26 PM

The last time I had to do any statistics, any number crunching to get at a value with a standard deviation, I was counting decay spirals in images from a bubble chamber for an undergrad physics lab; and I still remember remarking to my prof how statistics is a fine art, and him solemnly agreeing. So when I think of the degrees of freedom associated with biological/sociological phenomena, it's like cats and quarks.

May. 05 2011 11:59 AM
Batturi from Yola, Nigeria

"lies, damned lies, and statistics!" as Benjamin Disraeli once said. I love statistics, but this quote helps to keep things in perspective. Examples like this are great for students. Often students see stats in fear and awe. This shows them the importance of good stats, the need for good studies, and that we always need to be skeptical of our results. Great show, hankering for a full length one!

May. 05 2011 11:48 AM
Chris from Oklahoma

Is there a regression phenomenon for the regression phenomenon?

May. 05 2011 10:53 AM
ColinAlcarz from nh

I am loving all the explanations for why studies have declining effects. Regardless of what ultimately explains how continuing to study an effect makes it appear to have a gradual decline, I shall never trust studies the same again. I had no idea there were so many ways that scientists could be getting it wrong. Or is the media getting it wrong in reporting it?

May. 05 2011 10:23 AM
Bengo from Toronto

Hey Guys,

I know that Radio Lab often tries to go for the wonder factor and the spooky factor, but I was incredibly disappointed that not a single mention was made of the role the sociology of science plays in proposed explanations for the decline effect, even in Jonah Lehrer's own article about it! The fact that scientists are not just detached, child-minded agents looking for "truth," but professionals trying to publish more than their peers and advance themselves and their careers (as a visit to any scientific conference will demonstrate), is a huge part of the picture. People don't like to publish results that will unseat orthodoxy unless the results are BIG, so they will run the experiment until they get those results, and people who want to jump on the gravy train will immediately publish studies that follow suit. Once those ideas become accepted, it becomes fashionable to try to unseat the new orthodoxy.

Anyone who's done a scientific experiment, and knows how much of science is playing with data, fiddling with instruments, and deciding which results to accept and which to ascribe to some sort of error, knows how susceptible results can be to this form of conscious or unconscious bias. So you have regression to the mean mixed with the biases of researchers, who are, after all, social, professional, human creatures. One excellent article about this sort of effect, especially as it relates to medical science, is about a very interesting researcher named John Ioannidis, one of the most cited researchers in the field, who got famous telling people about their biases:

At the end of the day, I love Radio Lab for the back and forth between Jad (the romantic), and Robert (The skeptic), and on an issue like this that cuts right down to the heart of what it is to do science in the last 30 years, I think you guys did a great disservice to listeners by not including this point of view.



May. 05 2011 08:25 AM
Nate from Greeley, Colorado

When asked to write down the description, the subjects were using their brain's left hemisphere, whereas the gestalt view of the robber was primarily a right-brain activity. When asked to recognize the robber again, they had to switch back to the right brain. Apparently, in going from "holistic" viewing to "discrete" recording and back to "holistic" viewing, the holistic viewing is impaired.

May. 05 2011 01:23 AM
paul from MI

I think we may be more likely to see a "decline" effect than an "increase" effect, because we are more likely to repeat studies that began with a notable finding. Thus, in a world where results naturally vary, "declines" are more likely to be observed than "increases."
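The selection effect paul describes is easy to check with a quick simulation. The numbers below (true effect size, measurement noise, the "notable" threshold) are invented purely for illustration; the point is that if only striking first results get repeated, the repeats will almost always look smaller, even though nothing about the underlying effect changes.

```python
import random

random.seed(1)

# Hypothetical numbers: a modest true effect measured with noise.
TRUE_EFFECT = 0.3
NOISE = 0.2
THRESHOLD = 0.5  # only "notable" first results attract replications

def study():
    """One noisy measurement of the effect."""
    return random.gauss(TRUE_EFFECT, NOISE)

followed_up = 0
declines = 0
for _ in range(100_000):
    first = study()
    if first > THRESHOLD:      # we only bother repeating striking findings
        followed_up += 1
        if study() < first:    # the replication comes out smaller
            declines += 1

print(f"{declines / followed_up:.0%} of replications declined")
```

With these made-up numbers, the vast majority of followed-up results decline, which is exactly the asymmetry paul predicts.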

May. 04 2011 11:43 PM
Isaac Vallejo Jr

Could this be related to Prof. Anton Zeilinger's "Double Slit" experiment, where an experiment's result changes as it is observed?
here @ 7:17
and continued here...

May. 04 2011 10:13 AM
Isaac Vallejo Jr

Could this be related to Prof. Anton Zeilinger's "Double Slit" experiment, referenced on the BBC's What Is Reality?, an experiment that can be accurately replicated until you try to observe it...
start@ 7:17
and continue on

May. 04 2011 10:08 AM
Bazmati from Queensland, Australia

This effect is strongly reminiscent of Rupert Sheldrake's "morphic resonance" concept, in which the nature of reality is seen as a kind of habituation. Indeed, I was kind of expecting you to move on to that as the next obvious link.

His book "The Presence of the Past" includes intriguing examples of this kind of effect. One of the most well-known is the unexplained global increase in scores on standardised IQ tests.

I'd love to see Radiolab explore this intriguing area further.

May. 04 2011 09:59 AM
Elan Dubrofsky from Ottawa, Canada

I should start by saying this is one of my very favorite podcasts. Radiolab is really amazing.

That being said, this episode frustrated me quite a bit. As a couple of commenters have already suggested, there are plenty of explanations for the decline effect that make perfect sense in the context of what we know about science. If you want to read the best analysis (in my opinion), check out Steven Novella's blog:

I wish you would have had Dr. Novella on the show instead of the guy saying that observing effects makes them change.

May. 04 2011 09:39 AM
Stephanie Trigg-Smith from Amarillo Tx

It's regression to the mean. Every single one of those studies was too small to be reliable. Even if you controlled the room, the grad student, etc., the students themselves have changed. I have had an entire conversation with a student who was texting someone else the whole time, and she was able to tell me everything I had said. Honestly, she wasn't even the sharpest crayon in the box.
The Prozac study is another example of bad project design. If you give SSRIs only to those with serious depression or OCD, you WILL get great results. But over time, as these drugs were prescribed to more people with milder symptoms, well, statistics will out. If you want to discuss the effects of receptor remodeling, the addiction to self-medicating, or stopping medication after "getting well"? You should probably ask a psych.

Don't even get me started on the BIRDS...

May. 04 2011 08:22 AM
Beau Davenport from San Francisco

As with most things "spooky," might we not explain this away as a problem of scope and perspective?
If you dream someone dies, and you wake up the next morning to find they really did die during the night, it sure does seem spooky. But take a step back and consider all the instances in the history of the world when someone dreamt that someone else died who did not, in fact, turn out to be dead come morning: statistically, some very small percentage of those dreams are going to coincide with the actual deaths of those people. Regarded from the distance that includes all the uncorrelated dreams, it stops seeming so spooky.
So back to the gradual-decline phenomenon. What if we consider all instances of "regression to the mean" in the history of everything that has ever been studied? While most regressions to the mean will show the typical pattern of starting at an outlier and then proceeding to "totter around the actual value," shouldn't we also expect some outliers in the regression-to-the-mean process itself (in this case, the "gradual decline" regressions)? Taken from this more distanced vantage point, it doesn't seem so spooky to me.
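Beau's closing point can also be checked directly: among many ordinary regression-to-the-mean series that begin at an outlier, a small but real fraction will, by chance alone, decline at every single step and look like a "gradual decline." A minimal sketch, with hypothetical effect-size and noise numbers chosen only for illustration:

```python
import random

random.seed(2)

# Hypothetical numbers: a modest true effect measured with noise.
TRUE_EFFECT = 0.3
NOISE = 0.2

def replication_series(n=4):
    """A striking first result (> 0.6) followed by n ordinary replications."""
    first = random.gauss(TRUE_EFFECT, NOISE)
    while first <= 0.6:  # mimic the fact that only outliers get a series
        first = random.gauss(TRUE_EFFECT, NOISE)
    return [first] + [random.gauss(TRUE_EFFECT, NOISE) for _ in range(n)]

TRIALS = 50_000
gradual_declines = 0
for _ in range(TRIALS):
    s = replication_series()
    if all(a > b for a, b in zip(s, s[1:])):  # smaller at every step
        gradual_declines += 1

print(f"{gradual_declines / TRIALS:.1%} of series declined at every step")
```

A few percent of perfectly ordinary series come out monotonically declining, so with enough studies being repeated worldwide, some step-by-step "declines" are statistically guaranteed.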

May. 04 2011 05:15 AM
Chris Ocean from MI

I've always assumed our bodies gradually acclimate to changes in a sort of fast evolution. Examples would be senses, like his example of feeling your hand on your leg, or the afterimage in your eyes. As for the "verbal overshadowing" experiment, I'd look at subjectivity: possibly the scientists/data takers are becoming acclimated to the experiment and thus erring toward noticing less and less of a change. The people measuring or interpreting the symmetry data on the birds (unless a computer is simply reporting "symmetrical" or "not symmetrical") might be erring in their measurements toward seeing less symmetry. I see human error as the most mystifying source of error: of course the micrometer only measured to 5 degrees of uncertainty, but how accurately did you read the tick marks?

May. 04 2011 05:07 AM
abelyssahbeo from Chesapeake Bay

I WANTED this podcast to go on, but I guess (lol :p), in light of the program, its life will expand naturally...

May. 04 2011 04:16 AM
Chris from St. Louis

I wonder if Dr. Schooler considered that the population itself might be to blame. Several studies indicate that the modern attention span is decreasing at a steady rate. What used to be called the Microwave Mentality (give it to me in 5 minutes or less) has been compressed even further in the age of status posts and tweets. Our news is fed to us in baby-sized soundbites because the news corporations know we won't (can't?) sit still for longer than a few seconds.

It would not surprise me at all to find that subjects in the '80s, with their CBS Nightly News and TiVo still two decades away, had an easier time paying attention long enough to remember details. As technology *gradually* advanced and attention spans *gradually* shrank, the results also gradually got worse.

Or not. Regardless, this was a nice short... thanks!

May. 03 2011 10:50 PM

Come ON, guys!! I SOOO counted on a reliable regression to the mean! Who is to be my god now?? Barbie?! ; )

May. 03 2011 09:43 PM
Rev. David Huber

You, Jad and Robert, probably read this already (and Schooler talks about it in the radio clip), but here is a New Yorker article about it from last December that your listeners might enjoy reading.

May. 03 2011 08:49 PM
