Does smoking weed really result in brain abnormalities? Maybe not.

A study purporting to find that marijuana use (even casual marijuana use) may be associated with brain abnormalities has been getting a lot of press lately. You can check out some of the coverage at CNN Health, the Huffington Post, and Fox News. And you can check out the original paper in the Journal of Neuroscience, Cannabis use is quantitatively associated with nucleus accumbens and amygdala abnormalities in young adult recreational users by Gilman et al., here.

Shortly after the study came out, Lior Pachter posted an analysis of some major problems with the study on his blog. I’m posting a link to his post because I think it’s a great example of something science bloggers do very well: they share important information about the quality of recent studies in real time. This is essential stuff you just don’t typically see in media coverage.

I’d also like to note that the statistical issues he points out are very basic ones. Adjusting p-values for multiple testing is something I think most researchers understand they have to do after taking even an introductory stats class. So I’m having a difficult time understanding how this manuscript sailed through peer review in its present form. The Journal of Neuroscience is not some fly-by-night journal! I hope that journal editors will see what happened here and realize that if a manuscript contains statistics, it’s probably a good idea to choose at least one reviewer with knowledge of statistics. Failure to control for multiple testing appropriately is something I see over and over again in the articles I review. There is definitely a need for the statistics police in the peer review process.
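(For readers who haven’t run into multiple testing before, here’s a minimal Python sketch of what the correction involves. The p-values below are invented for illustration–they have nothing to do with the actual Gilman et al. data–but they show how quickly naive “significant” results can evaporate once you correct.)

```python
# Minimal sketch of multiple-testing correction. The p-values are made up
# for illustration and are NOT taken from the Gilman et al. study.

def bonferroni(pvals, alpha=0.05):
    """Reject H0 where p < alpha / m; controls the family-wise error rate."""
    m = len(pvals)
    return [p < alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure; controls the false discovery rate.

    Sort the p-values, find the largest rank k with p_(k) <= (k/m) * alpha,
    and reject the hypotheses belonging to the k smallest p-values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k_max
    return reject

# Pretend we ran 20 brain-structure comparisons and got these p-values.
pvals = [0.001, 0.012, 0.019, 0.040, 0.045] + [0.06 + 0.04 * i for i in range(15)]

print(sum(p < 0.05 for p in pvals), "comparisons look 'significant' at p < 0.05")
print(sum(bonferroni(pvals)), "survive Bonferroni correction")
print(sum(benjamini_hochberg(pvals)), "survive Benjamini-Hochberg correction")
```

Running this prints 5, 1, and 1: without correction, five of the twenty made-up comparisons look significant; with either standard correction, only one does.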

Tired of submitting to the same old journals? JournalGuide wants to help.

I just got back from the big Science Online conference in Raleigh, NC, and while I was there I ran into someone who is working on the JournalGuide project. What is it, you ask? It’s a website where you can input your manuscript’s title and abstract/keywords and get back a list of journals that publish similar content, with info on cost, impact factor, open access status, etc.

Who will this service help, you may ask? After all, most of us researchers are already pretty familiar with the journals in our field. I think there may still be a market for this, though.

Some of us work in small fields, with relatively few journals. I publish in one field in which there is one go-to journal for solid-but-not-paradigm-shifting manuscripts. If your article happens to get rejected by that journal, the choices get unattractive pretty quickly. In terms of readership, the audience for your findings drops precipitously. Also, since there is only that one journal everyone wants to publish in, a lot of us end up submitting there over and over. It gets boring! Lately, for these reasons, we have been trying to come up with some new potential journals. I think JournalGuide could help in a situation like this.

A feature I’m more interested in, though, and one that is not functional yet, is the journal rating system. You can create an account and anonymously rate your submission experience (the site is supposed to keep track of postings to weed out trolls). At some point, JournalGuide is supposed to aggregate the results and make them available. I’m not sure when this will happen–the site says late 2013, and I’m writing this in March 2014–but I’m looking forward to it!

Right now, most of us depend on word of mouth to figure out which submission processes are so onerous that it’s best to just avoid a journal altogether. You know what I’m talking about. The two-week review process that somehow turns into four months. The editor who seems to regularly lose track of submissions. The journal that wants nothing to do with negative results, whatever its stated policies. Then again, there are also the journals that end up shocking you with just how smooth their submission process actually is. It would be nice to have a more systematic process for collecting and making available information about how efficient and fair a journal is. I’m curious about whether this system will catch on, since its success really depends on the number of users who buy in.

Anybody else used this site or planning on using it?

The real Sasquatch genome scandal

The claim that a group of researchers, led by Melba Ketchum, had sequenced the Sasquatch genome was one of the wackier science stories of 2013.

Unable to find a home for their findings in any established journals, the authors started their own journal–DeNovo–and published their article on the Sasquatch genome there. (So far, this is the only article that has been published in the journal.) They held a press conference in Dallas on October 1st, and the media had a lot of fun covering the story.

To their credit, Ketchum and co-authors have made their article and the supporting data public. You can check out the article here and the supporting data here. They even made available some leaked peer reviews from their earlier submissions to Nature and the Journal of Advanced Zoological Exploration in Zoology (I’m not familiar with the latter journal). They also seem to have been very open in discussing their work with other scientists, which is great.

John Timmer provided a great summary of the considerable problems with the Sasquatch study over at Ars Technica. Here’s the short version: the sequence produced did not belong to bigfoot. It was an artifact resulting from sample contamination, degradation, and subpar assembly methods.

So OK, a wacky bit of research comes out, it’s rejected during the peer review process, and the post-publication process seems to have worked well too. The media seems to have accepted that this is not solid science, although it made for some great headlines. The system worked, and no real harm was done. Right?

That’s what I thought initially, but in reading about this story something started nagging me. Ketchum is a forensic scientist. She is director of a company called DNA Diagnostics that appears to specialize in animal forensics. Forensics is all about quality control, correct? If elementary quality control measures were not taken in this bigfoot study (and don’t even seem to have been understood), what does that say about the forensic work of this team–and in particular the work of the lead author?

This study was quickly debunked by scientists because it was published, albeit in a vanity journal, to great media fanfare. But forensic evidence is usually presented in court absent any real peer-review process. As the ongoing Annie Dookhan saga illustrates, when the integrity of forensic evidence is compromised, it can have real consequences. People go to prison for crimes they didn’t commit. Other people don’t go to prison because the evidence that should have put them behind bars is thrown out. Murders go unsolved. This is serious stuff.

According to her CV, Ketchum has presented forensic evidence in criminal cases. In death penalty cases, even. Death penalty cases in Texas, which has executed more prisoners since the death penalty was reinstated in 1976 than any other state. If basic quality control procedures were neglected in the presentation of this criminal evidence, as they were in the bigfoot study, what does it mean for the outcomes of the cases involved? (Incidentally, the problems with DNA Diagnostics–the lab Ketchum runs–seem not to be limited to bigfoot research.) What does this say about the quality of scientific evidence being presented in life-or-death cases in general? It reminds me of the incredible problems with expert scientific testimony revealed in shaken-baby syndrome and arson cases. I found the whole thing very, very unsettling.

Do people get yaws from monkeys and apes? A potential roadblock for eradication.

Recently, a letter I co-authored called Treponemal infection in nonhuman primates as possible reservoir for human yaws was published in Emerging Infectious Diseases. It’s free if you want to check it out!

Most people I know have never heard of yaws, but at one time it was very, very common in tropical regions across Africa, Asia, and the Americas. It’s a chronic, debilitating infection that is usually contracted during childhood, and it is caused by a bacterium closely related to the one responsible for syphilis. Luckily, it’s easily treated. You can cure it in its early stages with a single shot of penicillin, and recently we have learned that a single course of oral antibiotics appears to work just as well. In short, there is really no reason for anybody to have to suffer from this horrible disease.

Many other people feel the same way. In fact, a huge yaws eradication campaign took place in the mid-20th century. After World War II, this was one of the first big public health campaigns planned by the brand-new World Health Organization. More than 40 million people were treated, and the number of new cases fell by as much as 95%. Not bad! Ultimately, though, the campaign failed: it never achieved its goal of wiping this disease from the face of the earth.

There are multiple reasons why the first campaign failed. One big reason is that it simply didn’t have the resources to keep on top of things. After a while, the WHO turned over the responsibility for yaws surveillance and treatment to local governments. Unfortunately, the whole reason the campaign was necessary in the first place was that local governments weren’t capable of carrying out these kinds of tasks without support. Not surprisingly, yaws resurged in a number of countries and is still around today.

There is another important reason that the eradication campaign may have run into trouble: a potential animal reservoir. One of the most important criteria for an eradicable disease is that there is no animal reservoir. Otherwise, you can totally eliminate the infection from a population, only to have it re-enter via an infected animal; a single person infected that way can then spread it throughout a newly susceptible population, and all of your hard work is for naught. In this situation, eradication is not an acceptable goal–though control certainly is. In our EID article, we outline all the evidence that supports the hypothesis (around since the 1960s) that (1) African monkeys and apes are infected with yaws and (2) they may be capable of spreading the infection to humans. Infection via animals could help explain the mysterious cases encountered during the first campaign, when infected individuals would turn up in a previously treated population, having had no contact with any infected people as far as anyone could tell.
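(A quick back-of-the-envelope illustration–mine, not from our letter, and with a made-up spillover rate: if animal-to-human transmission occurs with even a small probability each year, reintroduction after elimination becomes a question of when, not if.)

```python
# Toy calculation (my own illustration, not from the EID letter): how long,
# on average, does a "yaws-free" population stay that way if an animal
# reservoir reintroduces infection with some small annual probability?
import random

def years_until_spillover(annual_prob=0.02, trials=10_000):
    """Monte Carlo estimate of the mean wait until the first spillover."""
    total = 0
    for _ in range(trials):
        years = 1
        while random.random() > annual_prob:  # no spillover this year
            years += 1
        total += years
    return total / trials

# With a hypothetical 2% yearly chance of a single animal-to-human
# transmission event, elimination lasts about 50 years on average
# (the mean of a geometric distribution: 1 / 0.02).
print(years_until_spillover())
```

The exact numbers don’t matter–the 2% figure is pulled out of thin air–but the structure of the problem does: unless the reservoir risk is zero, “eradication” buys you a delay, not permanence.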

The WHO announced a second yaws eradication campaign recently, but it doesn’t seem as though much thought has been given to the problem of an animal reservoir. People involved in the first eradication campaign were calling for further research into the potential problem of simian yaws as early as the 1960s, but this history seems to have been largely forgotten. That’s unfortunate.

Eradication campaigns are incredibly expensive. In the end, the cost of finding and treating cases skyrockets, because it entails going to remote and dangerous places to treat the very last hidden cases of an infection on earth. The polio eradication campaign has been going on for years and years longer than was originally planned, and we have spent much, much more than was originally budgeted because of these difficulties. Eradication campaigns also put a tremendous financial burden on the countries involved, as well as on sponsor organizations such as the WHO. Money spent on yaws eradication (vs. simple yaws control) is money that low-income countries cannot spend on other important health problems, like HIV, tuberculosis, and the childhood infections that represent huge sources of mortality. There is a huge opportunity cost involved. (Side note: a great book on the drawbacks of the polio eradication campaign, relevant to eradication campaigns in general, is William Muraskin’s Polio Eradication and its Discontents.)

If we decide to launch a new eradication campaign, we need to make sure that we can actually carry it out, so that the resources we expend will have been well spent.

Our argument in this letter in a nutshell: before throwing a massive amount of resources behind another eradication campaign, it makes sense to do our due diligence and make sure that an animal reservoir is not going to torpedo yaws eradication for a second time.

Why are parents refusing the Vitamin K shot for their babies?

Between February and August of this year, four babies in Nashville developed brain hemorrhages or gastrointestinal tract bleeding. Luckily, all of them survived. Not all babies have been so lucky. In Australia in 2011, a baby whose parents had refused the vitamin K shot died.

Nashville-area physicians report that an increasing number of parents are refusing vitamin K shots for their babies. Although the percentage of parents refusing the shot is only about 3% at local hospitals, almost 30% of parents refused at birthing centers. And this isn’t just a Nashville thing. Over 20% of parents at a St. Louis-area birthing center refused the shot as well, and I’m sure the stats for hospitals/birthing centers in other places are similar.

Why would parents decline the vitamin K shot? Maybe because of misinformation like that present on Joseph Mercola’s website. Mercola warns of three risks.

1. Inflicting pain on the newborn (in the form of a shot). He warns that the momentary prick of the shot may have long-term effects on the baby’s wellbeing and may jeopardize the success of breastfeeding. I’ll let you judge for yourself whether you think this sounds reasonable. I don’t, and there is certainly no good evidence to support it.

2. The amount of vitamin K injected is 20,000 times the needed dose and contains toxic preservatives. Wow, 20,000 times the necessary dose? Toxic preservatives? What is his source for this dramatic claim? A peer-reviewed journal article? Nope, I’m afraid not. It’s a website called Giving Birth Naturally. This website, in turn, gives no sources at all. Solid stuff, Dr. Mercola!

3. Babies run the risk of acquiring an infection at the injection site. This is true of any injection, but the chances of infection are infinitesimally small–so small that I can’t even find reliable numbers on how often it happens. Even a hypochondriac like me thinks this is a pretty minimal risk. For what it’s worth, I haven’t been able to find a single reported case of a baby developing an infection at the site of a vitamin K injection.

Now even Mercola acknowledges that the vitamin K shot doesn’t cause cancer. Unfortunately, not everybody has gotten that memo. Check out this website: the Healthy Home Economist. Although the author DOES eventually point out that the vitamin K-leukemia link has been debunked, she buries this acknowledgement in the comments, where no one will read it. Nice. The same uber-outdated information is also found in Mothering Magazine’s Natural Family Living Guide to Parenting. If you’d like to take a look at some of the articles debunking this association, you can check out this one in the New England Journal of Medicine (from way back in 1993!) or this more recent one, from the British Journal of Cancer.

Many of these anti-vitamin K shot websites give suggestions for what parents can do in lieu of the shot. Unfortunately, they are not well thought out.

1. Why not just request an oral dose of vitamin K for your baby? Because it doesn’t prevent hemorrhaging, that’s why. While it sounds totally reasonable, single oral doses just don’t do the trick. Comparisons of “failure rates,” i.e. the rates of hemorrhaging, in countries that use different methods to administer vitamin K demonstrate that a few big oral doses just don’t work as well as the shot. Daily, low doses may be as effective as the shot–but to the best of my knowledge, those aren’t an option in the US.

2. Eat a lot of vitamin K-rich foods and breastfeed your baby. Again, not a great strategy. Very little vitamin K makes it into breastmilk, even when a mother eats a lot of it. Very little can cross the placenta beforehand either, even if the mom has a great diet. That’s why the shot is necessary.

You would never realize it from the scare-mongering articles out there on the internet, but in reality the risks associated with the vitamin K shot are negligible compared to its potential benefits. It’s true that the chances of any one baby developing vitamin K deficiency-related bleeding are small–but when such a great way to avoid this risk exists, why not use it? A vitamin K shot may not be natural (meaning it didn’t exist tens of thousands of years ago). But neither are vaccines. Or car seats. And these inventions save lives. For any given child, the risk of dying from a hemorrhage or measles or a car accident may be small. But at the population level, these easy fixes make a difference–they save lives.

On reproducibility: the risks of the replication drive

An article called Reproducibility: the risks of the replication drive just came out in Nature. In it, Mina Bissell makes some great points.

The main idea: replicating studies is hard. It’s easy to tweak something (without even knowing that you did it) and end up with different results. Because of this, it’s important not to cast doubt on the results of someone else’s experiment too quickly. Communicating with the lab that did the original study is important if you find yourself running into problems. Failure to replicate can have serious consequences: good scientists can lose credibility, promising lines of research may not be pursued, etc. Thus, attempts at replication should be taken seriously, and everyone should try to remain civil during the process.

OK. I think everyone probably agrees with that!

But there were parts of the article that made me a little uncomfortable. Bissell gives compelling examples of how tiny changes–using the same cell line, but from different laboratories, for example–can torpedo replication attempts. I too believe that this happens frequently, so no arguments there. But unlike Bissell, I see this as a major problem. If you can’t replicate a study using virtually (but not totally) identical conditions, how generalizable are the original results likely to be? How useful is an experiment that yields such shaky findings? If we can’t replicate findings in the lab, what are the odds that they will describe what’s happening out in the messy real world?

Bissell describes a comforting example in which exploring a failure to replicate under slightly different conditions yielded valuable scientific data. I’m sure there are serendipitous situations like that one, but I also suspect that they are few and far between. My suspicion is that, in most cases, when other labs fail to replicate an experiment after credible attempts to do so, there is probably a real problem with the original study. Either (1) because the original results were faulty in some way or (2) because the original results, though valid, are not at all robust. Either way, the science community needs to know. So in my eyes, the drive for replication remains vital and the risks are well worth it.

A List of Things that Patients Should Question

I just learned about the Choosing Wisely campaign. It’s pretty amazing. The goal is to get each major medical specialty society in the US to make a list of five commonly used tests and procedures that doctors and patients should question.

Being a patient is hard. People tell you that you are in large part responsible for your own care–but you are not a medical expert. When do you trust your doctor’s opinion? When do you need to do your own research? How many people are actually capable of doing this kind of research? The whole thing can be nerve-racking, especially if you are dealing with a serious health condition.

The Choosing Wisely lists are still being written, but many are up on the website already. I wish they were being publicized more. I’d never heard of them until I happened on an article about the campaign in JAMA, and I doubt many other non-MDs are familiar with them either. I even asked a couple of MD acquaintances, and they hadn’t heard of these lists.

If you want to see them, you can check them out here. Unfortunately, they are not especially easy for patients to sort through. They may be helpful, though, especially if you know what you’re after. Take the American Academy of Pediatrics recommendations. Some are pretty well-known (no cough/cold meds for small children, no antibiotics for viral respiratory infections), but others might be less widely known (the rest all deal with common situations in which CT scans are unnecessary–stuff I didn’t know).

Here’s a more user-friendly facet of the website: if you have questions about a particular condition/procedure, you may be able to find a fact sheet on it (look at the column on the left). Do you need a Pap smear? Maybe not! But you may have to convince your gynecologist of that. Are you thinking about scheduling an early delivery for your baby? Maybe not a great idea. But again, you might end up arguing with your OB about it. Need help controlling your migraines? Might want to avoid certain drugs. These fact sheets are being developed by Consumer Reports, so hopefully they will get disseminated widely. It’s a really nice idea!

One thing I like about this campaign is that it might give patients stronger footing when they decide to question a medical recommendation. Lots of times physicians recommend treatments that aren’t backed up by evidence, but patients are hesitant to speak up. Maybe this will give them (us) a little more confidence. It also may help people separate the credible research from all the crazy stuff you find when you do a health-related internet search.