
Friday, July 30, 2010

Miracle Drug (?)

The popular media has been all aflutter about the recent AIDS drug trial: in a study of 889 South African women, a microbicide vaginal gel was found to reduce HIV infection by 39%. The data were first reported at the International AIDS Conference in Vienna, with a concurrent publication in the journal Science. The World Health Organization (WHO) and the United Nations publicly praised the study as a “landmark proof of concept study,” one that will “open new possibilities for HIV prevention.”

Of course, as is wont to happen when the popular media reports on a scientific discovery, the press coverage has been exaggerated to make the story sexier. More people will read a story titled “‘Groundbreaking’ gel halves HIV infection rates” than one called “AIDS gel study is an important proof of concept.” So one story is read by the masses and another by scientists, and that dichotomy shouldn’t exist.

So, what’s the real deal? I was not at the AIDS conference, but I can access the Science publication, and I can summarize it in one line: the AIDS gel study is an important proof of concept. It is also an important teaching tool for how the media doesn’t always translate scientific findings very well, and for how statistics are used in reporting scientific data.

First, the basics of the study: it was a double-blind randomized trial of 889 South African women between the ages of 18 and 40. What does this mean? Double-blind means neither the women nor the doctors treating them knew whether the gel being used was a placebo (that is, did not contain the microbicide drug tenofovir). Randomized means the volunteers were randomly assigned to the placebo or tenofovir gel group, so there would be no bias based on age, residence, number of sex acts, etc. The women reported their sex acts and condom usage and returned used applicators, so the researchers could calculate adherence to the instructions. Women were tested for HIV infection and pregnancy (since the safety of the gel was not known for pregnant women) every 30 days over the course of the 30-month study. If a woman was found to be HIV-positive, she was removed from the study and referred to an AIDS treatment clinic immediately. Basically, this all means that the only variable in the study was the presence of the drug in the gel; as the researchers put it: “[The] protective effect is evident irrespective of sexual behavior, condom use, herpes simplex type 2 virus infection, or urban/rural differences.” In the end, the researchers report an overall reduction in infection of 39%, rising to 54% among the women who adhered most stringently to the drug regimen.

These are the conclusions trumpeted by the media. Responsible science journalists would also inform the public that at no point do the researchers claim that this is a cure for AIDS, that it will be available tomorrow, or that it will solve all the world’s problems with just two applications a day. On the contrary, the researchers, as well as the experts at the WHO and UN, caution that this is a small trial, one that needs to be repeated, and that many questions are still unanswered – something, thankfully, that the NY Times did report.

And what about the statistics I mentioned before? Well, let’s take a look at where that 39% value comes from: the infection rate was 5.6 per 100 woman-years in the drug group compared to 9.1 in the placebo group, and 5.6 is about 39% lower than 9.1. But that’s not a lot of infections, is it? However, by accounting for the probability of error based on the sample size, the statistics show that the probability that the difference between the two groups (placebo and drug) is due to chance rather than the drug is very low. This means that even though the absolute numbers do not look so different, the difference is “statistically significant.”
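As a back-of-the-envelope check, here is the arithmetic behind that 39% figure (a sketch using the two published rates; the paper's exact 39% comes from the full incidence-rate-ratio calculation on person-year counts, so the rounded rates land at roughly 38.5%):

```python
# Incidence rates reported in the trial (infections per 100 woman-years)
tenofovir_rate = 5.6  # drug group
placebo_rate = 9.1    # placebo group

# Relative reduction = (placebo rate - drug rate) / placebo rate
reduction = (placebo_rate - tenofovir_rate) / placebo_rate
print(f"Relative reduction: {reduction:.1%}")  # prints "Relative reduction: 38.5%"
```

Note that this is a *relative* reduction: the absolute difference between the groups is only 3.5 infections per 100 woman-years, which is exactly why the significance testing discussed below matters.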

Some people scoff at this phrase. And while it’s true that statistics can be used or ignored to make data seem more impressive, the statistics can’t change the data. It’s important to know the limitations of statistics and to relate them to the risks involved. For example, you all are inundated with poll numbers before elections: so-and-so is leading the polls 52% to 48%. Warning! This number is an estimate from a sample, which means there is a margin of error. If this margin is 3 percentage points, the difference means absolutely nothing. However, a reporter isn’t going to make the news by saying the “poll is not statistically meaningful.” This highlights the catch-22 of scientific reporting: statistics can make significance out of very small differences, and make very large differences seem meaningless.
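To see how a margin of error can swallow a 52–48 gap, here is a quick sketch using the standard 95% margin-of-error formula for a simple random sample (the sample size of 1,000 respondents is an assumption for illustration, not from any real poll):

```python
import math

# Hypothetical poll: candidate A at 52%, sample of 1,000 respondents
p = 0.52   # candidate A's share
n = 1000   # number of people polled (assumed)

# 95% margin of error via the normal approximation: 1.96 * sqrt(p(1-p)/n)
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"52% +/- {moe:.1%}")  # about +/- 3 percentage points
```

With a margin of about ±3 points, candidate A's true support could be anywhere from 49% to 55%, and candidate B's from 45% to 51% – the two ranges overlap, so the poll can't actually tell the candidates apart.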

So, what’s the take-home here?

Don’t believe the numbers the media tells you from scientific articles – remember in the end they are trying to sell you something: in this case, a cure for AIDS. If they’re not giving you the full picture, complete with statistics, a red flag should go up.

Statistics are powerful but know their limitations: 39%, while significant, is not a cure for AIDS. On the flip side, just because the raw numbers aren’t impressive doesn’t mean they are meaningless.

Go to the source: the researchers clearly outline the limitations of their study, as do the UN and WHO. Read between the lines of the media press releases – obviously they will be filled with words like “groundbreaking” and “monumental,” but the words “potential” and “promise” usually signal that this isn’t the end of the research.

And, always remember that science is never conducted in a bubble. The implications of this, and any, clinical trial reach far beyond the difference between placebo and drug. In this case, the most significant impact may be the power of a preventative treatment controlled completely by women. In a country where approximately 3.2 million women are currently living with HIV (2007 estimate), 39% prevention is a pretty good place to start.

Saturday, July 3, 2010

Programming Note

My latest real-life research has led me to the conclusion that finding the time to start a new research-intensive blog is not possible when also working on getting two manuscripts out.

That said, I've been compiling a list of things I want to write about. Looking at that list every day reminds me that I actually DO have things to say, and finding the time to compose something should be easier in the near future as the manuscripts are thisclose to being sent to journals.