Are placebos getting more effective over time? What does it mean?

The placebo is the fake treatment at the heart of every clinical trial. It’s the sugar pill, the sham operation, the baseline against which the real treatment must be compared. The placebo story, as it’s usually told, begins on a World War II battlefield, where a physician named Henry Beecher witnessed something extraordinary. A wounded soldier was in great pain, but Beecher and his colleagues had run out of morphine. A nurse gave the soldier an injection of salt water, telling him it contained the painkiller, and the man responded as though it really did. When Beecher returned to civilian life, he pointed out that this phenomenon, which he called the placebo effect, could be causing big problems in studies of new drugs. A new drug might seem effective because patients treated with it appeared to improve; but in truth it might be no more effective than, say, a sugar pill. Patients’ expectations might be responsible for a large part of the benefits that everyone had been ascribing to drugs. And if we can get more or less the same effect from a sugar pill, it doesn’t make sense to mess around with a medication that is bound to come with its own set of problems. That’s why in today’s clinical trials, we compare new treatments either to placebos or to the existing standard of care (previous trials having shown the latter to be more effective than placebo).

Here’s something interesting, though. It turns out that placebo treatments appear to have been getting more effective over time, at least for some conditions. Take antidepressants: from 1980 to 2005, the improvement reported in the placebo groups of antidepressant trials doubled. By contrast, there is no evidence that the effectiveness of the placebo has increased for disorders such as epilepsy, so it’s not clear how general the trend is. But why would the placebo effect be getting stronger for conditions like depression?

One possibility is conditioning and expectation on the part of the patient. Perhaps, based on past positive medical experiences, patients have come to associate contact with a medical professional with feeling better. In other words, because people believe that being involved in trials will improve their health, it does. Maybe current patients simply have higher expectations than past patients did. But it’s also possible that the change hasn’t occurred in the patients at all. In the study of antidepressant trials mentioned above, the researchers found that the increase was evident only in expert ratings of depression; in patient self-reports, the placebo effect showed no increase over time. So what does that mean? Possibly that observer ratings are not very reliable. Maybe over the years physicians have grown increasingly confident about their results, and this confidence is reflected in their assessments of patients. The authors argued that it is unlikely the placebo effect has actually doubled over the last two decades.

In the popular press, some writers have viewed the increasing effectiveness of placebo treatments as an impediment to getting new drugs approved. After all, the more effective a placebo treatment is, the more effective a new drug has to be to prove itself. The idea is that patients are missing out on potentially effective meds because the placebo is more effective than it ought to be. In truth, researchers have found that the more effective a placebo, the more effective the comparison drug tends to be. So drugs that aren’t more effective than placebo may just not be that effective over and above the expectation/conditioning effect. The placebo may just be a convenient scapegoat for unsuccessful trials.

Whatever the reasons for the increase in the placebo effect over time, it makes sense to harness placebo power. One group that has figured out how to capitalize on the placebo effect is the pharmaceutical industry. As has been pointed out, drug companies understand how important it is to set up specific expectations in the minds of their consumers. They carefully manage advertisements, medication names, and the way pills look in order to create the impression that a given medicine will have a certain result. And it seems to work!

How could a placebo be used effectively by the typical doctor, though? The big hurdle here is ethics. Most people feel it isn’t right for a doctor to give a patient a sugar pill and pass it off as a “real” treatment. Of course this makes sense. But it’s also true that a physician doling out a sugar pill could truthfully say, in many cases, that the treatment being prescribed has been shown to significantly reduce pain for a given condition. For example, in one unique experiment, patients with irritable bowel syndrome were randomized to one of several groups. Some were put on a study waitlist (because just signing up for a trial can result in improvement), some were given a placebo by a curt medical practitioner, and some were given a placebo by a warm practitioner who expressed optimism about their condition. Each of these assignments was associated with increasingly good responses: roughly 30% of those on the waitlist reported adequate pain relief, about 40% of patients with the curt practitioner said their pain was under control, and over 60% of those assigned to the warm practitioner felt their pain was manageable. The same trend was seen for quality of life. Clearly the ritual of seeking medical help and receiving compassionate care is an important part of getting better.
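For the curious, here’s a minimal sketch of how gaps like those could be checked against chance. The arm sizes are hypothetical (I’ve assumed 90 patients per group, which is not from the paper), and the responder counts are simply back-calculated from the rates quoted above:

```python
# Compare three response rates with a chi-square test of independence.
# All counts here are illustrative, NOT the trial's actual data.
from scipy.stats import chi2_contingency

# (responders, total) per arm -- hypothetical arm sizes of 90,
# responder counts back-calculated from ~30%, ~40%, and ~60%.
arms = {
    "waitlist": (27, 90),
    "curt placebo": (36, 90),
    "warm placebo": (54, 90),
}

# Build a 2x3 table: row 0 = responders, row 1 = non-responders.
table = [
    [r for r, n in arms.values()],
    [n - r for r, n in arms.values()],
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

Under those assumed group sizes, the p-value comes out far below 0.05; the paper’s real analysis is more involved, but the sketch gives a feel for just how large the gaps between the arms are.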

Even if the ethical problems can be overcome, however, logistical problems remain. In this day and age, most of us google the medications our doctors prescribe prior to filling them. How could a placebo fit into our system? What would the doctor say he was prescribing? What would your prescription say? Obviously your physician must be honest, and a placebo won’t work if you know it’s a placebo. The placebo works well in clinical trials, where secrecy is a design feature, but how would it translate into the real world?

As placebo treatments are increasingly considered worthy of study in their own right, it will be interesting to see if and how medical treatment changes to accommodate what we’ve learned.

P.S. Thanks to Lotus Eater for pointing out that changes in the placebo effect over time have been documented!

Does chelation therapy really help heart disease patients?

This March, the results of the Trial to Assess Chelation Therapy (TACT), which looked at the effects of chelation therapy on heart disease, came out in JAMA.

What is chelation therapy, you ask? It works like this: a patient is given a substance that binds heavy metals, helping him or her excrete these toxic elements. So let’s say you find your child chewing on lead-containing paint chips. Your first stop should be the hospital, to start chelation therapy. Chelation therapy has also drawn a huge following in the alternative medicine community, though, where many ailments are thought to result from heavy metal toxicity. Some parents have used chelation therapy on their autistic children, in the belief that autism is caused by exposure to mercury via vaccines or other sources; in 2005, a young boy actually died while receiving the treatment. Other patients have been using chelation therapy to treat conditions like heart disease.

The rationale for treating heart disease with chelation therapy is a little confusing. I don’t think anyone believes that a buildup of toxic metals is a primary cause of heart disease. But some proponents appear to believe that EDTA, the chelating agent typically used, removes calcium from the plaques inside blood vessels, causing them to shrink, or that metal ions inside the body produce free-radical damage to blood vessels. In the 1990s, trials found no evidence that chelation therapy worked for this purpose. But apparently some alternative health providers have been marketing it as an alternative to invasive surgeries anyway, telling patients it will clear their arteries (I borrowed an example of this type of false advertising from Quackwatch).

TACT has been controversial since the moment it began. The trial cost over 30 million dollars and took over a decade to complete. How did a trial this mammoth get started when the previous literature provided little reason to think chelation therapy would be successful? Well, a proposal for a chelation trial was submitted to the National Heart, Lung, and Blood Institute in 2000. Not surprisingly, it was rejected. However, the American College for Advancement in Medicine (ACAM, a pro-chelation organization) and U.S. Representative Dan Burton (a proponent of the theory that vaccines cause autism) kept agitating for a chelation trial. And, as it happens, the National Center for Complementary and Alternative Medicine at the NIH issued a very specific call for applications in response: it asked for proposals to investigate the EDTA chelation treatment protocol recommended by ACAM. Not surprisingly, a proposal to do just that was approved, and the TACT study was born.

In TACT, roughly 1,700 patients were treated with 40 three-hour infusions of the chelator disodium EDTA over the course of a year. Not a trivial undertaking for the patients involved! More than half of the sites at which therapy was provided were alternative medicine centers that had been offering chelation for years; two of them were suspended during the trial because of violations. In the middle of the trial, the man who owned the pharmacy that supplied the EDTA was indicted for Medicaid fraud. Of the health providers administering the therapy, several were convicted felons, several more had been disciplined by their state medical boards, and still others had been involved in insurance fraud. In addition, one prominent pro-chelation author who admitted he had falsified data was nonetheless cited a number of times in the TACT protocols. Enrollment was put on hold in 2008 for an investigation into complaints that, among other things, centers were failing to obtain informed consent properly. And as if all that weren’t enough, the NIH centers sponsoring the trial weren’t kept blind throughout, as is typical; instead, they analyzed data periodically as the study progressed. So even before the study was complete, it had its fair share of critics.

What did the study’s authors report in the end? Heart attack patients over 50 who got chelation therapy had almost 20% fewer cardiovascular events (death, another heart attack, stroke, coronary revascularization, or hospitalization for angina) than those given a placebo. Now, 20% sounds pretty substantial, but the result only just reached statistical significance, with a p-value of 0.035. Critics of the study pointed out that, in addition to the concerns listed above about how the study was carried out, it was odd that significantly more patients who received the placebo dropped out of the trial. This is the opposite of what you’d expect: since a treatment usually comes with more side effects than a placebo, patients in the treatment arm typically drop out at a higher rate. If just a few patients had been treated differently (not hospitalized for angina, say, or not subjected to coronary revascularization), the statistical significance could have disappeared. So if the doctors responsible for these patients were not blinded to their treatment arm, and their knowledge affected the care they provided, this could have shifted the study’s results.
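To make that fragility point concrete, here’s a toy sketch of the logic. The 2x2 counts below are invented for illustration (TACT’s p-value of 0.035 came from a log-rank test on time-to-event data, so this is emphatically not a re-analysis of the trial): we reclassify treatment-arm patients from “no event” to “event” one at a time and watch a just-barely-significant result evaporate.

```python
# Toy "fragility index": how many reclassified patients does it take
# to push a Fisher exact p-value from just under 0.05 to above it?
# All counts are invented for illustration, not TACT's actual data.
from scipy.stats import fisher_exact

treat_events, treat_n = 30, 200      # hypothetical treatment arm
control_events, control_n = 48, 200  # hypothetical control arm

extra = 0
while True:
    e = treat_events + extra
    table = [
        [e, treat_n - e],                              # treatment arm
        [control_events, control_n - control_events],  # control arm
    ]
    _, p = fisher_exact(table)
    print(f"{extra} patients reclassified: p = {p:.3f}")
    if p >= 0.05:
        break
    extra += 1
```

With counts like these, only a couple of reclassified patients separate “significant” from “not significant,” which is exactly why unblinded sponsors and outcome-influencing knowledge worry the critics.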

So what should we take away from this 30-million-dollar, 10-year study? Even though many cardiologists don’t believe chelation therapy produced a real improvement in heart disease patients, and suspect the positive result was spurious, some seem relieved that, given the popularity of the treatment, chelation at least doesn’t appear to be dangerous. I guess that is encouraging! Overall, though, this seems like a great example of why scientific proposals should be vetted by scientists, rather than pushed through the NIH by lobbyists and congresspeople. It’s not unusual for big, expensive clinical trials to end up with ambiguous results. But in this case, a ton of money and effort was lavished on a study with a very skimpy rationale and major methodological issues, one that couldn’t have gotten past the peer-review process without substantial help from outside. The recent meddling of congresspeople like Lamar Smith in the grant-funding process doesn’t bode well for science.