
More challenges to modern "science" published



jonathan barnett
December 12th, 2010, 03:15 PM
Here are a couple of newly published reports taking science in general, and medical research in particular, to task.

It's astounding, if not telling, that neither article specifically mentions AIDS, despite the topic and observations made in them. A discerning dissident will have no trouble making the connections to "AIDS" research.

The first, entitled "Why Scientific Studies Are So Often Wrong: The Streetlight Effect" (http://discovermagazine.com/2010/jul-aug/29-why-scientific-studies-often-wrong-streetlight-effect/article_view?b_start:int=0&-C=) is available free online at Discover, and is full of gems:


The fundamental error here is summed up in an old joke scientists love to tell. Late at night, a police officer finds a drunk man crawling around on his hands and knees under a streetlight. The drunk man tells the officer he’s looking for his wallet. When the officer asks if he’s sure this is where he dropped the wallet, the man replies that he thinks he more likely dropped it across the street. Then why are you looking over here? the befuddled officer asks. Because the light’s better here, explains the drunk man.

Another paragraph could well be discussing the use of markers to measure "health" in people who are Gallo positive:


There is just one little problem. While these surrogate measurements yield clean numbers, they frequently throw off the results, sometimes dramatically so. This “streetlight effect,” as I call it in my new book, Wrong (Little, Brown), turns up in every field of science, filling research journals with experiments and studies that directly contradict previously published work.

Finally, there is this priceless closer:


How are we supposed to cope with all this wrongness? Well, a good start would be to remain skeptical about the great majority of what you find in research journals and pretty much all of the fascinating, news-making findings you read about in the mainstream media, which tends to magnify the problems. (Except you can trust DISCOVER, naturally. And believe me, there is no way this article is wrong, either. After all, everything in it is backed by scientific studies.)
Maybe we should just keep in mind what that Einstein fellow—you know, the one who messed up that electron experiment—had to say on the subject: “If we knew what we were doing, it wouldn’t be called research, would it?”
The second article, "The Truth Wears Off. Is there something wrong with the scientific method?" (http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer), requires a paid subscription to The New Yorker.

Referring to psychotropics, author Jonah Lehrer writes in the free abstract:


A recent study showed an effect that was less than half of that documented in the first trials, in the early nineties. Before the effectiveness of a drug can be confirmed, it must be tested again and again. The test of replicability, as it’s known, is the foundation of modern research. It’s a safeguard for the creep of subjectivity. But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts are losing their truth. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.

(NOTE: I splurged for a subscription to the New Yorker article and according to the terms of service, the entire article is available for viewing on up to six computers. If others want to read the entire article online, PM me for the details.)

Gos
December 12th, 2010, 04:09 PM
This "streetlight effect" is very similar to an effect that I have observed with the West Nile outbreak in North America.

In the WNV epidemic, scientists have always found WNV outbreaks exactly where they expected to see them, and nowhere else -- mostly because they looked exactly where they expected to see WNV outbreaks, and didn't look anywhere else. (We saw a similar phenomenon with AIDS in the 1980s, when gay men were diagnosed who later turned out to be HIV-negative, and people with AIDS-indicator diseases -- including my own sister -- were told that they couldn't possibly have AIDS because they weren't in risk groups.)

In the first few years of the epidemic, WNV behaved more or less exactly as we would expect a mosquito-borne virus to behave, traveling in a straight line from New York to the Great Lakes region, through the territories that were most heavily infested with mosquitoes.

However, as the epidemic was hyped into a full-blown panic, we began to see more and more MDs and patients (in other words, people who know dick about mosquitoes) making the determination of where we looked for WNV infection, and as a result, the WNV epidemic continued in a straight line right through Arizona, New Mexico, and Colorado on its way to California, rather than turning north at Nebraska as it should have if it were following the path of least resistance through the most mosquito-infested states. Check out the maps on this page (http://en.wikipedia.org/wiki/Progress_of_the_West_Nile_virus_in_the_United_States) to see what I mean.

A few fun facts:

In 2003, nearly 30% of all WNV cases reported to the CDC were in the state of Colorado, with approximately 60% of the total US cases distributed among Colorado, Nebraska, and South Dakota -- half of these in Colorado alone.
In 2004, things began to get even more bizarre, as the very first WNV cases of the 2004 season were reported in Arizona and New Mexico, prior to the start of the mosquito season. That year, the largest numbers of WNV cases reported to the CDC were in California, Arizona, and Colorado, with 57.5% of all reported US cases distributed among those three states, and by the end of the 2004 season, Arizona and New Mexico had racked up more documented WNV diagnoses between them than all of the states of the east coast and the Gulf coast combined.
In 2005, California (with a moderate mosquito infestation at best) led the US in West Nile cases by a wide margin, with more than three times as many cases as the #2 state, Illinois (880 vs. 252). In that year, California recorded more human WNV infections than all of the states east of the Mississippi River combined.
The following mosquito-infested states have never produced more than a tiny fraction of the total US cases of WNV: Mississippi, South Carolina, North Carolina, Tennessee, Virginia, West Virginia, Kentucky, Pennsylvania, and Washington State. In addition, none of Texas, Louisiana, or Georgia has reported more than 10% of all US cases in any given year.
What happened here is that right around 2002 or so, people in the Desert Southwest looked at the map and projected a line from New York through the Great Lakes region and said, "Dear God, it's headed right for us!" and went to their doctors for WNV tests. (Also, I'm sure that many MDs in the region thought the same thing and began testing their patients.) As a result, there were more people looking for WNV in this region than elsewhere, and thus more cases were found (which, according to Bayes' Theorem (http://en.wikipedia.org/wiki/Bayes_Theorem), is exactly what we should expect even if there were no actual WNV epidemic to begin with).
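
To put rough numbers on the Bayes point, here is a minimal Python sketch. Every figure in it is invented for illustration -- the regions, testing volumes, and the 1% false-positive rate are all assumptions, not real CDC surveillance data. The point it demonstrates: with an imperfect test and zero true infections, the reported "case" count in each region simply tracks how many people get tested there.

# Hypothetical illustration: with an imperfect test, reported "cases"
# scale with testing volume even if nobody is actually infected.
# All numbers are invented for the sketch, not real surveillance data.

FALSE_POSITIVE_RATE = 0.01  # assumed: 1% of uninfected people test positive

# Assumed tests administered per region (driven by local alarm, not mosquitoes)
tests_by_region = {
    "Colorado": 50_000,      # panicked region: heavy testing
    "California": 80_000,
    "Mississippi": 2_000,    # mosquito-heavy, but few tests ordered
}

true_infections = 0  # suppose there is no actual epidemic at all

for region, n_tests in tests_by_region.items():
    expected_positives = true_infections + n_tests * FALSE_POSITIVE_RATE
    print(f"{region:12s}: {n_tests:6d} tests -> ~{expected_positives:.0f} reported 'cases'")

# The region that tests the most "finds" the biggest outbreak: the map of
# cases is really a map of testing effort.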

----

Gos
December 12th, 2010, 04:20 PM
I suppose I should point out that a key difference between the "WNV effect" and the "streetlight effect" is that scientists weren't "looking where the light was best", but rather where their expectations led them to believe the outbreak should exist.

----

Worker Bee
December 13th, 2010, 12:44 AM
This may have been discussed elsewhere on this board, but it seems appropriate to this thread. Henry Bauer (http://hivskeptic.wordpress.com/2010/10/26/lies-damned-lies-and-medical-science) recently posted a brief notice about an article entitled Lies, Damned Lies, and Medical Science (http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/), by David H. Freedman, published in The Atlantic:


Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science.

From a recent mailing list, discussing the Atlantic article:


The flaws, errors, mistakes and lies of HIV/AIDS are a bit trickier and harder for many people to grasp. It may be perhaps because, unlike some of the other issues or debates he mentioned, HIV/AIDS (putatively) involves sex, blood, and death.

Or as Root-Bernstein noted nearly 20 years ago, "The best kept secrets about AIDS are the suggestions unanswered, the puzzles unsolved, the contradictions unrecognized and the paradoxes unformulated."

The original article by Ioannidis, Contradicted and Initially Stronger Effects in Highly Cited Clinical Research, published in JAMA in 2005, is available on Roberto Giraldo's reference page (http://robertogiraldo.com/reference/index.html).

Gos
December 13th, 2010, 01:05 AM
Ioannidis is essential reading, IMO.

----

Gos
December 13th, 2010, 04:23 AM
You know, it's too bad that you don't have rights to re-post that New Yorker article ("The Truth Wears Off") here in its entirety. It would be an invaluable discussion tool.

In particular, I'm thinking of the three circumcision studies (http://forums.questioningaids.com/showpost.php?p=45163&postcount=9) that CBE posted here a few weeks ago. If the effect described in this article is legitimate (and it seems at the very least to be consistent and explicable), then we should naturally expect the "protective" effect of circumcision to be observed to wane over time as well.

There is also the very serious question of the MTCT trials of nevirapine, since replication of these trials will not be attempted due to ethical considerations. If, as Lehrer suggests, initial results are often skewed and would likely be proved invalid by long-term attempts at replication, then there are very serious ethical implications to implementing prevention strategies based on results that might have been invalidated had the scientific principle of reproducibility been practiced rather than discarded for putative ethical reasons.

Current, "exciting" research in pre-exposure prophylaxis (PrEP) comes to mind as well...

----

Gos
December 13th, 2010, 04:34 AM
Interesting quote from the article:


One of the classic examples of selective reporting concerns the testing of acupuncture in different countries. While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the UK, and only fifty-six per cent of these studies found any therapeutic benefits. As Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don't want to see. Our beliefs are a form of blindness.
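
For what it's worth, the figures in that passage can be checked with back-of-the-envelope arithmetic. This is my own calculation, not from the article, and it assumes the trials are independent draws -- a simplification, since real trials share methods and populations. Still: if every acupuncture trial really had the roughly 56% chance of a positive result seen in the Western studies, a run of 47 positive trials in a row would be about a one-in-a-trillion event.

# Back-of-the-envelope check on the acupuncture figures quoted above.
# Assumes each trial is an independent draw with a 56% chance of a
# positive result -- a simplification, since real trials are not independent.

p_positive = 0.56   # positive rate observed in the US/Sweden/UK trials
n_trials = 47       # trials in China, Taiwan, and Japan, all positive

p_all_positive = p_positive ** n_trials
print(f"P(47 of 47 positive) = {p_all_positive:.2e}")  # ~ 1.5e-12

# Chance alone almost certainly did not produce a 47-for-47 run; something
# like selective reporting shaped which results saw print.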

Gos
December 13th, 2010, 04:47 AM
Another excellent quote:


Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these studies seem true. Because they make sense. Because we can't bear to let them go. And this is why the decline effect is so troubling. Not because it reveals the human fallibility of science, in which data are tweaked and beliefs shape perceptions. (Such shortcomings aren't surprising, at least for scientists.) And not because it reveals that many of our most exciting theories are fleeting fads and will soon be rejected. (That idea has been around since Thomas Kuhn.) The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that's often not the case. Just because an idea is true doesn't mean it can be proved. And just because an idea can be proved doesn't mean it's true. When the experiments are done, we still have to choose what to believe.
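
One mechanism consistent with both excerpts -- selective reporting plus ordinary sampling noise -- is enough to produce a "decline effect" in simulation. The following Python sketch uses invented numbers (the true effect size, sample size, and publication threshold are all my assumptions, not anything from Lehrer's article): early studies get "published" only when they look impressive, which skims off the lucky overestimates, and unfiltered replications then regress toward the true, smaller effect.

# Minimal simulation of a "decline effect" driven purely by selective
# publication: the true effect never changes, yet published estimates shrink.
# Effect size, sample size, and threshold are invented for illustration.
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # the real (modest) effect, in standard-deviation units
N_PER_STUDY = 30    # subjects per study
N_STUDIES = 10_000

def run_study():
    """One study's estimated effect: the true effect plus sampling noise."""
    return TRUE_EFFECT + random.gauss(0, 1 / N_PER_STUDY ** 0.5)

estimates = [run_study() for _ in range(N_STUDIES)]

# Early literature: journals only print "impressive" results.
published = [e for e in estimates if e > 0.3]

# Later replications: everything gets reported, impressive or not.
print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean published estimate:   {statistics.mean(published):.2f}")
print(f"mean replication estimate: {statistics.mean(estimates):.2f}")

# The published mean overshoots the truth, and replications "decline"
# toward it even though the underlying phenomenon never changed.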

Gos
December 13th, 2010, 02:27 PM
Ya know, all this talk about the limitations of scientific observation reminds me of an old joke I heard when I was a kid.

The following is excerpted from the abstract of a published paper in the biology journal Vivisection:


METHODS: We selected 100 frogs of roughly equal age and size, and tested their startle reflex. As a control, we slapped a ruler on the table next to each frog, and measured the length of the frog's jump. Subsequent tests were performed with one, two, three, or all four of the frogs' legs removed.

RESULTS: With all four legs intact, the frogs jumped an average of 18 centimeters. With the front left leg removed, the frogs jumped an average of 15 cm. With both front legs removed, the frogs jumped an average of 12 cm. With both front legs plus the left rear leg removed, the frogs jumped an average of 3 cm. With all four legs removed, the frogs did not jump.

CONCLUSION: A frog's ears are in its legs, as evidenced by the progressive hearing loss observed when a frog's legs are removed, and by the fact that when all four legs are removed, the frog goes completely deaf.

cdm
December 13th, 2010, 11:03 PM
The whole World is an illusion disguised as Reality.
A true science is like a blind man fumbling to find his way.