View Full Version : The Threshold of Statistical Significance Where Incidence is Small



Gos
November 5th, 2010, 10:23 AM
CBE,

Rather than risk derailing the circumcision discussion with a discussion of statistical significance, I have decided to start a new thread.

I know that a study with 5,000 subjects encompassing 10,000 person-years sounds like a pretty big deal, and certain to establish statistically significant results.

However, one has to look not only at the size of the study, but also at the incidence of what's being measured. If the incidence of the phenomenon being measured is small, then it takes a considerably larger and/or longer study to achieve statistical significance.

Take, for example, the odds of being dealt a royal flush in a single hand of 5-card stud poker. (1:650,000.)

One way of testing this would be to deal 650,000 hands from randomized decks to a group of people for 650,000 "person/hands". For example, you could deal 100 hands each to a group of 6,500 people.

After dealing 650,000 hands, the law of averages says that you should have one royal flush. However, in reality, it might be two or three, or there might be none.
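To put numbers on that, here's a quick Python sketch (treating the count of royal flushes in 650,000 independent deals as approximately Poisson-distributed, which is the standard approximation for rare events):

```python
from math import comb, exp, factorial

# Exact odds of a royal flush in a single 5-card deal: 4 suits out of C(52,5) hands.
p_rf = 4 / comb(52, 5)        # = 1 in 649,740, i.e. the "1:650,000" figure

# Over n = 650,000 independent deals, the number of royal flushes is
# approximately Poisson with mean lam = n * p_rf (about 1).
n = 650_000
lam = n * p_rf

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

for k in range(4):
    print(f"P(exactly {k} royal flushes) = {poisson_pmf(k, lam):.3f}")
print(f"P(anything but 1)            = {1 - poisson_pmf(1, lam):.3f}")
```

With a mean of about 1, the chance of seeing exactly one RF is only about 37%; seeing none at all is almost equally likely, and two or more comes in around 26%.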

If the actual number of RFs is anything but 1, then our study is off by a minimum of 100% of the actual incidence. That makes the outcome essentially worthless: we are at least as likely to see an outcome that's off by a large enough margin to hide 100% of the actual incidence (perhaps several times over) as we are to see an accurate result.

One way of getting around this is to expand the scope of the study. For example, rather than dealing 100 hands, let's say we dealt 400 hands to each of 6,500 people, for a total of 2.6 million P/H.

At that point, we could reasonably expect to see four RFs dealt. The actual number might be five or six or it might be two or three, but chances are it'll at least be close to four, and there's a better-than-even chance that it'll be 4 +/- 25%.
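That "better-than-even chance of 4 +/- 25%" figure checks out under the same Poisson approximation; a quick sketch:

```python
from math import exp, factorial

# Expected royal flushes in 2.6 million deals at 1:650,000 odds.
lam = 2_600_000 / 650_000          # = 4.0

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

# Probability the observed count lands within 4 +/- 25%, i.e. at 3, 4, or 5:
p_within = sum(poisson_pmf(k, lam) for k in (3, 4, 5))
print(f"P(3 <= count <= 5) = {p_within:.3f}")   # just over one half
```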

That's not bad, but have we achieved statistical significance?

No, for two reasons: 1) there's still too much probability that our outcome could be off by 100% or more, and 2) while we've achieved some statistical significance for questions about the raw incidence itself, we haven't achieved a fraction of the significance required to break our figures down and explore questions about things with even smaller incidence within the sample.

For example, even if you had exactly four RFs, there's absolutely no reason to expect that you will have exactly one RF of each suit. It is far more likely that you will have (for example) one RF of spades, one of diamonds, two RFs of hearts, and none of clubs. The odds are actually against all four suits being represented if there are exactly four RFs in the sample.
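That claim is easy to verify by brute force. Assuming each royal flush is equally likely to fall in any of the four suits, independently, the chance that exactly four RFs cover all four suits is only 4!/4^4:

```python
from fractions import Fraction
from itertools import product

# Enumerate every way four royal flushes can be assigned to the four suits:
# 4^4 = 256 equally likely outcomes.
outcomes = list(product(range(4), repeat=4))
all_four_suits = sum(1 for o in outcomes if len(set(o)) == 4)

p_all_suits = Fraction(all_four_suits, len(outcomes))
print(p_all_suits, "=", float(p_all_suits))    # 3/32, i.e. under 10%
```

So with exactly four RFs in the sample, more than 90% of the time at least one suit will be missing and another doubled up.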

In the above example, it would appear that our study "proved" that red suits are 3X luckier than black suits, and that hearts are twice as lucky as any other suit, if we were to judge by the number of RFs which were dealt in each suit.

Thus, when we start breaking the numbers down into subgroups, we still haven't achieved enough statistical significance to get any sort of accurate analysis of what's going on at the subgroup level.

The only way to get past this is (you guessed it) to expand the scope of the study even further. This time, instead of dealing 400 hands to each of our 6,500 participants, we'll deal them each 4000, for a total of 26 million P/H.

At 26 million P/H, our study is astronomically huge, but we've finally achieved enough statistical significance to begin to break the RFs down by suit and do analysis of the results. By the law of averages, we can reasonably expect the total number of RFs to be somewhere pretty close to 40. They still probably won't break down absolutely evenly by suit, but you'll probably have somewhere in the neighborhood of 10 RFs of each suit. Maybe, for example, you've got a total of 39 RFs: say, 12 RFs of clubs and nine of each other suit.

What we have at this point is just enough statistical significance to begin to ascertain what the average incidence of RFs of any one suit might be, but still not enough to compare the incidence of, say, clubs vs. hearts. The evidence: clubs appear to be about 33% luckier than any other suit, and it took no large anomaly in the statistical noise floor for that outcome to occur.
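The reason 12-vs-9 proves nothing is the sheer size of the Poisson noise at this scale; a one-line sketch:

```python
from math import sqrt

# At 26 million deals, each suit expects about 10 royal flushes.
# For a Poisson count, the standard deviation is the square root of the mean,
# so the per-suit relative noise is:
lam = 10.0
rel_sd = sqrt(lam) / lam
print(f"per-suit noise: ~{rel_sd:.0%} of the mean")   # about 32%
```

A suit that looks "33% luckier" is therefore only about one standard deviation from expectation, which is thoroughly unremarkable.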

So how do we get around this? You guessed it, we have to expand the scope of our study. So this time we deal 8,000 hands apiece to 32,500 people for a whopping 260 million P/H.

This gives us enough samples to achieve sufficient statistical significance to begin to analyze the incidence of RFs by suit. For each suit, there will be approximately 100 RFs, and while the statistical noise floor is still too high to be considered negligible, the study is large enough to rise far enough above the noise floor that we can begin to get a glimpse of the bigger statistical picture.

However, it is important to note that we haven't achieved high statistical significance at this point. There's still enough of a statistical noise floor that it's entirely possible that the actual study ends up with, say, 105 RFs of spades and 91 RFs of clubs, creating the illusion that spades are 15% luckier than clubs.

Thus, even at 260 million P/H, we still haven't achieved a high enough statistical significance to compare how lucky the various suits are and be in any way confident that our study was returning sufficiently accurate results.
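How plausible is a 105-vs-91 split like that? Under the same Poisson assumption, very plausible; a sketch:

```python
from math import sqrt

lam = 100.0                  # expected RFs per suit at 260 million deals
sd_diff = sqrt(lam + lam)    # sd of the difference of two suit counts, ~14.1

gap = 105 - 91               # the spades-vs-clubs gap in the example
print(f"a gap of {gap} is {gap / sd_diff:.1f} standard deviations")
```

A 14-RF gap is about one standard deviation of the difference between two suit counts, so the "15% luckier" illusion requires no anomaly at all.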

Ya know how you get around that? You guessed it: We have to expand the scope of our study another 10X, to 2.6 billion P/H. At this point, we're dealing 32,500 hands of poker apiece to 80,000 people.

BUT -- we can be reasonably confident that during the course of this study, there will be approximately 4,000 RFs dealt, and that there will be roughly 1,000 of each suit, and that the difference between the "luckiest" and "unluckiest" suit will be negligible enough to be easily dismissed as statistical noise, and that the difference is unlikely to be large enough to be mistaken for "proof" that one suit is substantially luckier than another.

At an incidence of 1:650,000, we had to expand the size of the study to 4,000 times the reciprocal of the incidence in order to be confident that we were getting results which would not create an illusion that some suits are substantially luckier than others.
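The whole progression boils down to one rule of thumb: the relative noise in a Poisson count of mean m is 1/sqrt(m). A sketch of the per-suit noise at each stage of the hypothetical study:

```python
from math import sqrt

RECIP = 650_000    # reciprocal of the royal-flush incidence (1:650,000)

# Relative sd of a Poisson count with mean m is 1/sqrt(m).
for deals in (2_600_000, 26_000_000, 260_000_000, 2_600_000_000):
    per_suit = deals / RECIP / 4          # expected royal flushes per suit
    print(f"{deals:>13,} deals: {per_suit:>6.0f} RFs/suit, "
          f"noise ~{100 / sqrt(per_suit):.0f}% of the mean")
```

Per-suit noise falls from 100% of the mean at 2.6 million deals to roughly 3% at 2.6 billion, which is why only the largest version of the study can compare suits credibly.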

And yet, at an average incidence of 1 seroconversion per 100 P/Y in this circumcision study, I am expected to believe that a measly 10,000 P/Y (100 times the reciprocal of the incidence) is a large enough sample to determine whether it's "luckier" to be circumcised or uncircumcised?


----

CBE
November 5th, 2010, 11:09 AM
Gos,

There is a wikipedia article which outlines the concept of statistical significance (http://en.wikipedia.org/wiki/Statistical_significance). I suggest you read it. See also the wiki article on P value (http://en.wikipedia.org/wiki/P-value).

One of the QA moderators is, I believe, a mathematician. He might be interested in tutoring you on the subject.

I'm not, for reasons I outlined elsewhere.

Gos
November 5th, 2010, 09:48 PM
CBE,

I'm already quite familiar with the concept of P-value. In fact, the article you linked states more or less exactly what I'm illustrating in this thread (http://en.wikipedia.org/wiki/P-value#Frequent_misunderstandings):




The data obtained by comparing the p-value to a significance level will yield one of two results: either the null hypothesis is rejected, or the null hypothesis cannot be rejected at that significance level (which however does not imply that the null hypothesis is true). A small p-value that indicates statistical significance does not indicate that an alternative hypothesis is ipso facto (http://en.wikipedia.org/wiki/Ipso_facto) correct; there are additional tests which may be performed in order to make a more definitive statement about the validity of the null hypothesis, such as some "goodness of fit (http://en.wikipedia.org/wiki/Goodness_of_fit)" tests.
Despite the ubiquity of p-value tests, this particular test for statistical significance has come under heavy criticism due both to its inherent shortcomings and the potential for misinterpretation.
There are several common misunderstandings about p-values.[4][5]

The p-value is not the probability that the null hypothesis is true. (This false conclusion is used to justify the "rule" of considering a result to be significant if its p-value is very small (near zero).)
In fact, frequentist statistics (http://en.wikipedia.org/wiki/Frequentism) does not, and cannot, attach probabilities to hypotheses. Comparison of Bayesian (http://en.wikipedia.org/wiki/Bayesian_probability) and classical approaches shows that a p-value can be very close to zero while the posterior probability (http://en.wikipedia.org/wiki/Posterior_probability) of the null is very close to unity. This is the Jeffreys–Lindley paradox (http://en.wikipedia.org/wiki/Jeffreys%E2%80%93Lindley_paradox).
The p-value is not the probability that a finding is "merely a fluke." (Again, this conclusion arises from the "rule" that small p-values indicate significant differences.)
As the calculation of a p-value is based on the assumption that a finding is the product of chance alone, it patently cannot also be used to gauge the probability of that assumption being true. This is subtly different from the real meaning which is that the p-value is the chance that null hypothesis explains the result: the result might not be "merely a fluke," and be explicable by the null hypothesis with confidence equal to the p-value.
The p-value is not the probability of falsely rejecting the null hypothesis. This error is a version of the so-called prosecutor's fallacy (http://en.wikipedia.org/wiki/Prosecutor%27s_fallacy).
The p-value is not the probability that a replicating experiment would not yield the same conclusion.
1 − (p-value) is not the probability of the alternative hypothesis being true (see (1)).
The significance level of the test is not determined by the p-value.
The significance level of a test is a value that should be decided upon by the agent interpreting the data before the data are viewed, and is compared against the p-value or any other statistic calculated after the test has been performed.
The p-value does not indicate the size or importance of the observed effect (compare with effect size (http://en.wikipedia.org/wiki/Effect_size)).
As noted above, the p-value p is the main result of statistical significance testing. Fisher proposed p as an informal measure of evidence against the null hypothesis. He called researchers to combine p in the mind with other types of evidence for and against that hypothesis, such as the a priori plausibility of the hypothesis and the relative strengths of results from previous studies. Many misunderstandings concerning p arise because statistics classes and instructional materials ignore or at least do not emphasize the role of prior evidence in interpreting p. A renewed emphasis on prior evidence could encourage researchers to place p in the proper context, evaluating a hypothesis by weighing p together with all the other evidence about the hypothesis.[6]



So again, if you claim that the statistical analyses in any of these articles refutes my assertion of insufficient statistical significance, it is up to you to explain how.

Gos
November 6th, 2010, 07:34 AM
See also the wiki article on P value (http://en.wikipedia.org/wiki/P-value).


CBE,

You'll find a fascinating discussion on P-values and their significance in the predictive value of a study in this article (http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124).

Don't let the title shock you. The point that Ioannidis is making with the title is historically and currently correct. Most published research findings are false, and historically this has always been so.

There are two types of people when it comes to science: Those who think that science is about being right, and those who know that science is really about being a little less wrong than you were yesterday.

Man's journey of scientific discovery has been a process of stumbling blindfolded through a maze of nothing but dead-ends, looking for an occasional nugget of truth.

And as much as modern scientific cultists protest that this has changed in the modern age, the simple fact is that there are still more published research findings that eventually turn out to be false than there are that prove to be correct.

Indeed, there are vast null fields in science (like the dark matter hypothesis), dedicated to concepts which have zero objective validity, and the researchers in these null fields publish papers like nobody's business. In fact, they've even published 3-D maps of the dark matter in our universe. (These are, in reality, maps of all of the areas in which Newtonian mathematics is not sufficient to accurately predict gravitational pull. "Dark matter" is, in other words, nothing more than a mathematical artefact produced by minor errors in conventional Newtonian mathematics. It does not exist in our universe except on paper.)

There are those who view the published literature the same way that fundamentalist Christians view the Bible. I've got some bad news for you: The published literature isn't the Literal Word of God either.

In fact, much like the Bible, most of what's in there is patently false.

Which is not to badmouth science. The simple fact is that if scientists had not explored so many dead ends and wrong answers, we wouldn't know even what little we truly know today. Being wrong is a normal part of the scientific process.

In fact, it's the most important part.

-----

Here's an interesting quote (http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124#s7) from the above-linked article. I think you'll find it quite interesting to consider in the context of studies designed to find the medical benefits of a worthless intervention like circumcision.



As shown, the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings. Let us suppose that in a research field there are no true findings at all to be discovered. History of science teaches us that scientific endeavor has often in the past wasted effort in fields with absolutely no yield of true scientific information, at least based on our current understanding. In such a “null field,” one would ideally expect all observed effect sizes to vary by chance around the null in the absence of bias. The extent that observed findings deviate from what is expected by chance alone would be simply a pure measure of the prevailing bias.

For example, let us suppose that no nutrients or dietary patterns are actually important determinants for the risk of developing a specific tumor. Let us also suppose that the scientific literature has examined 60 nutrients and claims all of them to be related to the risk of developing this tumor with relative risks in the range of 1.2 to 1.4 for the comparison of the upper to lower intake tertiles. Then the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between “null fields,” the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases.

For fields with very low PPV, the few true relationships would not distort this overall picture much. Even if a few relationships are true, the shape of the distribution of the observed effects would still yield a clear measure of the biases involved in the field. This concept totally reverses the way we view scientific results. Traditionally, investigators have viewed large and highly significant effects with excitement, as signs of important discoveries. Too large and too highly significant effects may actually be more likely to be signs of large bias in most fields of modern research. They should lead investigators to careful critical thinking about what might have gone wrong with their data, analyses, and results.

Of course, investigators working in any field are likely to resist accepting that the whole field in which they have spent their careers is a “null field.” However, other lines of evidence, or advances in technology and experimentation, may lead eventually to the dismantling of a scientific field. Obtaining measures of the net bias in one field may also be useful for obtaining insight into what might be the range of bias operating in other fields where similar analytical methods, technologies, and conflicts may be operating.


Now, is there evidence of bias towards circumcision having certain medical benefits? According to the history books, the evidence is overwhelming that such bias exists, and that this bias has a long and well-documented history of tainting scientific research.

So you go on believing, if you want, that this study is most likely valid in the presence of a history of prevailing bias, a small effect size, great financial interest, and an extremely low pre-study probability that circumcision makes a difference one way or the other, even though most published research findings are false even in the absence of such confounding factors. The smart money says it's toilet paper.

----

positivenegative
November 6th, 2010, 08:10 AM
There are those who view the published literature the same way that fundamentalist Christians view the Bible. I've got some bad news for you: The published literature isn't the Literal Word of God either.

-----

It's ironic that I have to deal with being "positive" in the face of the AIDS Industry and belief-filled people because I also have to deal with having been raised in a fundamentalist Christian upbringing, which makes my life as paradoxical as it can get.

I deal with my so-called Christian family, who unquestioningly have the truth straight from the bible. The bible is everything and there is no telling them different. I've tried everything from being kindly sweet to practically hitting them over the head. No matter how much persuasion or proof I can show, it makes no difference whatsoever.

Now that I'm in the AIDS conversation it looks pretty much the same but with different packaging. Whether it be AIDS or God providing proof either way is a head banger. I don't think it's an accident that AIDS and the Christian right came out of the closet at the same time.

Gos
November 6th, 2010, 08:17 AM
I don't think it's an accident that AIDS and the Christian right came out of the closet at the same time.


Did I sleep through a newscast? When did the Christian right ever come out of the closet?

Jerry Falwell, Pat Robertson, Jimmy Swaggart, Ted Haggard, George Rekers, et al are the pillow-bitingest cocksmokers you're ever going to meet, and they'll NEVER come out of the closet.

(Yes, I know what you meant. Just making a joke. :D)

----

Gos
November 6th, 2010, 08:37 AM
It's ironic that I have to deal with being "positive" in the face of the AIDS Industry and belief-filled people because I also have to deal with having been raised in a fundamentalist Christian upbringing, which makes my life as paradoxical as it can get.

I deal with my so-called Christian family, who unquestioningly have the truth straight from the bible. The bible is everything and there is no telling them different. I've tried everything from being kindly sweet to practically hitting them over the head. No matter how much persuasion or proof I can show, it makes no difference whatsoever.

Now that I'm in the AIDS conversation it looks pretty much the same but with different packaging. Whether it be AIDS or God providing proof either way is a head banger.


You and I are in the same boat. Out of the frying pan....

Science is rapidly replacing Christianity as the predominant Western religion.

However, in order to replace Christianity, Science first must become Christianity.



In order to become Christianity, Science must:

Convince the lay masses that Science, and only Science, has the Truth with a capital T.
Convince the lay masses that Science, and only Science, knows the secret of Man's origins. And as with Christian Creationism, all other proposed hypotheses are to be treated as the darkest sort of heresy, and the full force of law must be used, if necessary, to keep these "heresies" out of the classroom.
Keep the masses in fear. And especially, the masses must be kept fearful of an Apocalyptic future in which there will be massive plagues (AIDS, ebola, SARS, swine flu, etc., Revelation 16:2), the Sun will be given the power to scorch men with fire (Global Warming, Revelation 16:8-9), meteors may fall from the sky and threaten us with extinction (Revelation 8), and most importantly, like Christianity before it, Science must convince the masses that Science, and ONLY Science, can save Mankind from this coming Apocalypse.
Convince the masses to live in fear that sex will kill them (AIDS, Romans 6:23.)
So far, Science is doing a pretty good job of repainting and peddling the same old Judeo-Christian values and ideas to the masses.

Scientists are doing such a good job, in fact, that they're even finding "scientific" justification for the Biblical covenant between Abraham and Jehovah, and people like CBE (who would probably "know better" if it were a Christian minister rather than a guy in a white labcoat who was claiming that circumcision prevents AIDS) absolutely eat it up and beg for a second helping.

----

positivenegative
November 6th, 2010, 08:37 AM
(Yes, I know what you meant. Just making a joke. :D)

----

:eek: you scared me for moment. :D LOL!

John Bleau
November 6th, 2010, 04:29 PM
The following quote from the Ioannidis abstract resonates:


Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

The three studies Gos and CBE argued about earlier do have what might conventionally be called "statistical significance," but many points, such as those given in Ioannidis' abstract, cloud the issue. I don't know how many studies were started and aborted, how airtight the protocols were, whether CBE was selective in referring to them, etc. As Gos rightly mentioned earlier, if the studies are three among many that had been launched, that raises the bar of statistical significance.

Richard Feynman once described how bad most experiments really were, and gave an example of a good experiment and how hard it was to achieve. That adds a high-level voice to Ioannidis' observations. And most of the members here know that researchers have a professional and personal price to pay if their work threatens certain wallets. Therein lies the bias.

I put very little value in those studies. I certainly wouldn't get myself snipped just because CBE pulled them out of the massive sinkhole of AIDS research.

Gos
November 6th, 2010, 04:50 PM
The three studies Gos and CBE argued about earlier do have what might conventionally be called "statistical significance,"...


Of course they might, but as I illustrated with my stud poker example, the threshold of sufficient statistical significance rises by orders of magnitude when we start breaking the numbers down into subgroups and comparing those subgroups against each other.

In the poker example, 26 million hands were enough to establish sufficient statistical significance to examine the raw instance of royal flushes and be confident that the results reflected the statistical odds with a reasonable degree of accuracy. But when we started comparing how lucky one suit was compared to the others, it became necessary to expand the study to 2.6 billion hands.

----

positivenegative
November 6th, 2010, 04:57 PM
So far, Science is doing a pretty good job of repainting and peddling the same old Judeo-Christian values and ideas to the masses.

----

Good points. It's so eerie that so much symbolism exists similarly in AIDS and Christianity.

So where does that put us as a historical equivalent?

Is "treatment" really about hanging ourselves on the tree of life brought to us by GlaxoSmithKline or are we rebels against God, I mean AIDS? Are we Satanists? :cool:

Gos
November 6th, 2010, 05:27 PM
So where does that put us as a historical equivalent?

As near as I can figure, it puts us exactly where we've been for thousands of years: Far more ignorant as a species than we are capable of comprehending, and always in danger of being led into the next Dark Age by a small group of megalomaniacs who convince the masses that they have all the answers.

As to these megalomaniacs, I am rather fond of the following quote from Dr. Kary Mullis:


You need to know what they are up to. It's every man for himself as usual, and you are on your own. Thank your lucky stars that they didn't bother to change their clothes or their habits. They still wear priestly white robes and they don't do heavy labor. It makes them easier to spot.

*(Quote excerpted from Chapter 11, "What Happened to the Scientific Method?")

positivenegative
November 6th, 2010, 05:47 PM
You and I are in the same boat. Out of the frying pan....

----

What I meant to ask is where does that put us individually as historical equivalents? St. Joan of Arc? Sufis? Hildegard of Bingen? :rolleyes:

A few weeks ago I found a garage sale at the end of the day, when everything left over was free, and I looked down and found Kary Mullis's Dancing Naked in the Mind Field. :)

jonathan barnett
November 6th, 2010, 06:23 PM
A few weeks ago I found a garage sale at the end of the day, when everything left over was free, and I looked down and found Kary Mullis's Dancing Naked in the Mind Field. :)

Wow! Now that sounds like a religious experience I might have to believe in. :D

Gos
November 6th, 2010, 06:30 PM
What I meant to ask is where does that put us individually as historical equivalents? St. Joan of Arc? Sufis? Hildegard of Bingen?


I am convinced that Dr. Duesberg will go down in history as our generation's equivalent of Galileo.

Now, before the Perthians string me up by my nuts, my comparison is not between Duesberg and the Galileo of legend, who was 100% correct while the Church was 100% wrong. That guy never existed.

The Galileo I'm using for comparison is a guy who was not 100% correct (I don't think any scientist in history has ever truly been 100% correct in the larger sense), but instead was merely less wrong than his contemporaries.

Galileo believed that the Sun was the center of the Universe, fer Chrissake. He believed that the planets had circular orbits (on this point, the official position of the Church was actually more correct than Galileo, since Ptolemaic astronomy postulated elliptical planetary orbits.)

But on balance, Galileo was less wrong than the Church.

This harks back to a point I was making earlier in this thread to CBE. Science is not about being right; it's about being less wrong than you were yesterday. No scientist in history has ever been 100% correct about anything, but a good scientist endeavors always to be less wrong than he was the day before.

Duesberg is already legendary as a cancer researcher (a fact oh-so-grudgingly admitted by detractors from his stance on AIDS.) Back in the '60s, he "discovered" the retroviral oncogene, and was nominated for a Nobel Prize, only to later recant and eventually propose the aneuploidy hypothesis of cancer -- a brilliant concept which has turned the entire field of oncology on its ear.

Is Duesberg "right" about aneuploidy? Only time will tell whether he's even barking up the right tree, but I'd be willing to bet my left nut that he's far less wrong than he was when he believed in silly things like retroviral oncogenes.

And like Galileo before him, I think that Duesberg is less wrong about AIDS than his contemporaries, and he's paying the heretic's price for it.

----

positivenegative
November 6th, 2010, 06:46 PM
Wow! Now that sounds like a religious experience I might have to believe in. :D
Amen brother! :p

CBE
November 7th, 2010, 12:08 AM
Gos declared:
I am convinced that Dr. Duesberg will go down in history as our generation's equivalent of Galileo.

But Gos, Galileo has been proven wrong. I read it on the internet. (http://galileowaswrong.com/galileowaswrong/) ;)

Aion
November 7th, 2010, 10:59 AM
Good points. It's so eerie that so much symbolism exists similarly in AIDS and Christianity.


That's because "AIDS" has always been 9 parts propaganda/advertising to one part science.

Gos
November 17th, 2010, 04:23 PM
But Gos, Galileo has been proven wrong. I read it on the internet. (http://galileowaswrong.com/galileowaswrong/) ;)


I'm sure you're being sarcastic, but Galileo WAS wrong -- that's my whole point. He was not RIGHT, he was merely less wrong overall than Ptolemy (and in fact Ptolemy was correct on some points where Galileo was wrong. For example, Ptolemy postulated elliptical planetary orbits, where Galileo insisted that planetary orbits were circular.)

This is how real science works -- it's not about being right, it's about being less wrong than you were yesterday.

For an example of this, read Stephen Hawking's The Theory of Everything. Hawking is one of the brightest scientific minds on the planet right now, but TTOE is basically an account of all the wrong answers he came up with in his failed quest to formulate a unified field theory.

I'll take it a step further: There are various web essays dedicated to defining pseudoscience, which list litanies of telltale signs (observed phenomena near the threshold of detectability, unfalsifiable hypotheses, magical thinking, etc.) I can save you a whole lot of time with a single, surefire way to tell the difference between a scientist and a pseudoscientist: Has he ever publicly admitted being wrong? If he hasn't, then he's a pseudoscientist.

----

Gos
November 18th, 2010, 02:47 PM
That's because "AIDS" has always been 9 parts propaganda/advertising to one part science.

True, but you can't ignore the ingredient of religion in the mix.

If for no other reason, the most effective propaganda is that which is supported by the target audience's prejudices, and those prejudices are shaped in large part by religion.

Christianity is the single most influential religion in human history, followed by Islam. Between them, the two religions boast slightly more than half of the total world population.

Both religions associate sex with sin, and both teach that the worst sexual sin is that of homosexuality.

Here's the formula:

WHERE
sex = sin

AND
"the wages of sin" = "death" (Romans 6:23)

THEN
sex = death

Thus, AIDS propaganda that sex=AIDS=death falls upon the ears of an audience already willing to accept without question the premise that sex=death.

And so the masses accept AIDS propaganda as factually correct, because it confirms their religion-induced prejudices.


-----