Wednesday, January 23, 2013

Wait, There's How Many Independently Funded Studies on GMOs?

I just came across this list of 126 independently funded, peer-reviewed articles on GMOs this morning, and I'm really surprised I hadn't seen it long ago. Clicking through some of the studies, they run the gamut: genomic comparisons of conventional and genetically modified crops (particularly unintended alterations of untargeted genes), the potential for transferring antibiotic resistance, the risk of allergenicity, analysis of tissue and metabolites in rats, amounts of pesticide use, and the effects of Bt corn and GM soya on mouse testes. In all but a handful of studies, the investigators found no evidence that GM poses any additional risk over conventional farming. Again, these are the independently funded studies so many critics of the technology have demanded for years. There's simply no excuse for ignoring them.

I didn't see any studies about cassette tape genes. I'd just steer clear for now. (Source)
I've written before about systematic reviews and meta-analyses, and how, if you don't look at the full picture, it's extremely easy to cherry-pick the results you're looking for. Without looking too hard, you'll find some studies that contradict the consensus this list represents, as you would expect in just about any field. For instance, one study may claim that eating eggs carries nearly the same cardiovascular risk as smoking, while a meta-analysis of all prospective cohort studies on eggs and cardiovascular health, including that one, representing data from over 260,000 people, shows no additional risk. Which conclusion do you think holds more weight? I can't stress enough that science isn't a push and pull of individual studies floating in a vacuum; it is a systematic way of looking at an entire pool of evidence. It takes work to train yourself to do this. It's just not how we're wired to think, and even people who have been exposed to it still struggle with it, as I see every day working in evidence-based medicine. People naturally have their preferences, but if there's a way to minimize this effect, it's inconceivable to me not to use it to guide our decisions, from adopting a new technology, to abandoning existing ones that don't work as well as we hoped, to the way we determine our public policies.
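To make that concrete, here's a minimal sketch of how a fixed-effect meta-analysis pools results. The numbers are entirely made up (they are not the real egg data); the point is that each study is weighted by the inverse of its variance, so large, precise cohorts dominate and a small contrarian study barely nudges the pooled estimate.

```python
import math

# (log relative risk, standard error) -- hypothetical values, NOT the
# actual egg-study data
studies = [
    (0.02, 0.05),   # large cohort, essentially no effect
    (-0.01, 0.04),  # large cohort, essentially no effect
    (0.03, 0.06),   # mid-sized cohort, essentially no effect
    (0.60, 0.30),   # small contrarian study claiming a big risk
]

# Fixed-effect, inverse-variance pooling: weight = 1 / SE^2
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
# -> roughly RR 1.01 (0.96-1.07): the outlier's weight (~11) is swamped by
#    the big cohorts' weights (400-625), so the pool stays near no effect
```

That's the whole trick: the scary little study is still in the pool, it just can't outvote a quarter-million people's worth of data.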

In my many, many conversations on GMOs, I've found that confirmation bias isn't the only barrier; there's also a significant amount of conflation between what's specific to GMOs and what is just poor agricultural practice. For example, it's quite clear that herbicide-resistant weeds (aka superweeds) are popping up on farms around the U.S. Certainly, growing Roundup Ready corn would logically facilitate the overuse of a single herbicide on a single area, but resistance will occur anywhere there's over-reliance on a single chemical, and it's up to the particular farmer to rotate crops and herbicides to avoid this. Just because many have failed to do so doesn't mean that GMOs are the primary problem. On the spectrum of possibilities for genetic modification, herbicide tolerance sits securely at the bottom in my view, but let me put it this way: if we simply banned them, totally removing herbicide-tolerant crops from all farms in the U.S., would we solve the problem of superweeds for all time? Obviously not. For every potential risk you've heard about GMOs, ask yourself the same question. You'll almost always find that the problem comes down to general issues of large-scale agriculture.

Thursday, January 17, 2013

Colony Collapse Disorder, Neonics, and The Precautionary Principle


Over the past decade, the strange mystery of declining bee populations and colony collapse disorder (CCD) has justifiably received a ton of media attention. There are plenty of resources out there if you need more background on what researchers are thinking and why it matters. I'd suggest starting with this excellent post by Hannah Nordhaus for Boing Boing, written shortly after a batch of studies were published identifying a specific group of insecticides called neonicotinoids (aka neonics) as a potential primary cause of CCD. I don't really have much to add to the discussion beyond that article, but I'm particularly drawn to this issue because the precautionary principle is now front and center, and the debate has thus far largely avoided the type of hyperbole and fear-mongering that only serves to distract from evidence-based policy. This post is less an attempt to be informative than a hope that a discussion pops up in the comments.

Earlier this week, the European Food Safety Authority (EFSA) concluded that the evidence suggests use of neonics constitutes an unacceptable risk to honeybees, essentially laying the groundwork for an EU-wide ban. As outlined in the Nordhaus post, the study that most clearly linked neonics with CCD has been harshly criticized, certainly by Bayer, the largest manufacturer of neonics, but also by some independent scientists particularly troubled by what they see as unrealistically high doses given to the bees. Glancing at the study, the doses didn't appear completely without merit, but it's pretty apparent that the EFSA is operating on the premise that the potential risks of using neonics outweigh any possible benefit to farmers in the EU, as opposed to acting on documented risks supported by a large body of systematically reviewed evidence.

Normally, I don't find the precautionary principle very compelling, and clearly the U.S. government doesn't either. I think potential risks are often over-hyped, while real benefits are not fully considered or are outright dismissed. The principle could be used to halt or delay literally any technological advance, and yeah...slippery slope. However, in this case I find myself more sympathetic to it than usual. I really don't know what to think. Pointing the finger at synthetic chemicals designed to kill insects obviously seems intuitive, but it's equally obvious that our intuition sometimes leads us astray by oversimplifying a complex phenomenon. The UK's Department for Environment, Food and Rural Affairs recently took the skeptical view reflected by Nordhaus. The department commissioned another long-term study of the direct sub-lethal effects of neonics on bees, asked researchers in the UK to prioritize the issue, and will take another look at some point this year. It's not like they said there's nothing to look at here. I'm tempted to think that this is perfectly acceptable.

Banning neonics isn't going to force farmers in the EU to just abandon insecticides. The alternative chemicals do in fact have much stronger evidence of genuine, tangible, and imminent environmental risks than neonics do. What do you think? How much uncertainty should we tolerate? On what issues do you find the precautionary principle to be appropriate?

Friday, January 11, 2013

Alltrials.net

Hi everyone, I'm really awed at the overwhelmingly positive response I received from my last blog post. Never in my wildest dreams did I figure that 7 posts into this thing I'd actually kinda maybe impact the discussion around an issue I wrote about. Hopefully many of you who visited recently will check back in occasionally. I do this in my spare time, so I can't be hugely prolific, but I'll try to keep it interesting and engaging.

As someone who works in evidence-based medicine, Ben Goldacre's Bad Pharma has been on my mind a lot recently. I couldn't possibly give you a sense of the many issues in the book with the eloquence and expertise that he does, so I encourage you to take a look and see if it interests you. The U.S. version is due on February 5.

Briefly, he covers the many reasons, across all the various stakeholders in medicine, why the entire system is seriously flawed. That's hardly an overstatement. Trials go missing, data are manipulated, regulators don't mount up, etc., and it paints a pretty horrifying picture of doctors making treatment decisions on biased and incomplete evidence, exposing patients to unnecessary and entirely preventable harm. In this day and age of open-access journals and cheap, easy access to information, there really is no excuse for this state of affairs. I don't imagine one moderately well-read blog post gives me any real influence, but I didn't want to just read this book and do absolutely nothing.

One step in the right direction is alltrials.net, an initiative to register all trials worldwide along with information on the methods used and the results. Take a look at the site, get a sense of why it exists if you don't already have one, and hopefully you'll sign the petition. We deserve real evidence-based medicine, and I don't think this is your average petition. There are some good names behind it, and a charismatic, likable advocate who I really believe has a chance to get somewhere with this.

I'll get back to blogging about my usual topics soon. In the meantime, I'll always be open to suggestions. A couple of weeks ago I tried to set it up so my latest tweets would show up somewhere on my blog, but it didn't work with this design. Sort of a missed opportunity to suddenly have tens of thousands of page views without my username anywhere. Find me at @scottfirestone, and send me a link or say hi if you like.

Thursday, January 3, 2013

The Link Between Leaded Gasoline and Crime


Kevin Drum from Mother Jones has a fascinating new article detailing the hypothesis that exposure to lead, particularly tetraethyl lead (TEL), explains the rise of violent crime rates from the 1960s through the early 1990s and their fall after the compound was phased out of gasoline worldwide. It's a better bit of journalism on public health than much of what you see, but I'd like to provide a little epidemiology background to the article, because there are so many studies listed that it makes a really good intro to the types of study designs you'll see in public health. It also illustrates the concept of confirmation bias, and why regulatory agencies seem to drag their feet when we read such compelling stories as this one.

Drum correctly notes that simply looking at the correlation shown in the graph to the right is insufficient to draw any conclusions regarding causality. The investigator, Rick Nevin, was simply looking at associations, and saw that the curves were heavily correlated, as you can quite clearly see. When you look at data involving large populations, such as violent crime rates, and compare them with an indirect measure of exposure to some environmental risk factor, such as levels of TEL in gasoline over the same period, the best you can say is that your alternative hypothesis of there being an association (the null hypothesis always being no association) deserves more investigation. This type of design is called a cross-sectional study, and it's well documented that associations observed at the population level do not always hold for the individuals within it. This is the ecological fallacy, and it's a serious limitation in these types of studies. Finding a causal link between an environmental risk factor and a complex behavior like violent crime, as opposed to something like a specific disease, is exceptionally difficult, and the burden of proof is very high. We need several additional tests of our hypothesis using different study designs to really turn this into a viable theory.
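To see how little a strong population-level correlation proves on its own, here's a toy sketch in Python. The numbers are invented for illustration, not Nevin's data: any two curves that rise and fall together will produce an impressive correlation coefficient, whether or not one causes the other.

```python
# Toy illustration with invented numbers -- NOT Nevin's actual data
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-capita lead exposure, paired with violent crime rates
# about two decades later (the sort of lag Nevin used)
lead  = [1.0, 1.4, 1.9, 2.3, 2.1, 1.6, 1.0, 0.5, 0.2]
crime = [310, 420, 580, 690, 640, 500, 330, 180, 90]

print(f"r = {pearson_r(lead, crime):.3f}")
# r comes out near 1.0, yet this says nothing about causation, and nothing
# about individuals -- the association could weaken or even reverse at the
# individual level (the ecological fallacy)
```

An r near 1.0 looks compelling, but it's exactly the kind of result the ecological fallacy warns us about. As Drum notes: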

During the '70s and '80s, the introduction of the catalytic converter, combined with increasingly stringent Environmental Protection Agency rules, steadily reduced the amount of leaded gasoline used in America, but Reyes discovered that this reduction wasn't uniform. In fact, use of leaded gasoline varied widely among states, and this gave Reyes the opening she needed. If childhood lead exposure really did produce criminal behavior in adults, you'd expect that in states where consumption of leaded gasoline declined slowly, crime would decline slowly too. Conversely, in states where it declined quickly, crime would decline quickly. And that's exactly what she found.

Well, that's interesting, so I looked a bit further at Reyes's study. In it, she estimates prenatal and early childhood exposure to TEL based on population-wide figures, and accounts for potential migration from state to state, as well as other potential causes of violent crime, to get a stronger estimate of the effect of TEL alone. After all of this, she found that the fall in TEL levels by state accounts for a very significant 56% of the reduction in violent crime. Again, though, this is essentially a measure of association on population-level statistics, estimated at the individual level. It's well thought out and heavily controlled for other factors, but we still need more than this. Drum goes on to describe significant associations found at the city level in New Orleans. This is pretty good stuff too, but we really need a different type of study: specifically, one that measures many individuals' exposure to lead and follows them over a long period of time to find out what happens to them. This type of design is called a prospective cohort study. Props again to Drum for directly addressing all of this.
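To give a flavor of what "controlled for other factors" means mechanically, here's a bare-bones sketch with synthetic data. It's nowhere near the sophistication of Reyes's actual model; it just shows the idea of regressing crime on lead exposure alongside a confounder, so the lead coefficient reflects only the variation left over after the control is accounted for.

```python
# Synthetic sketch of regression with a control variable -- not Reyes's model
import numpy as np

rng = np.random.default_rng(0)
n_states = 50
lead_exposure = rng.uniform(0, 1, n_states)    # estimated early-life exposure
unemployment = rng.uniform(3, 12, n_states)    # a stand-in confounder

# Build fake crime data where lead's "true" effect is 2.0 by construction
crime = 2.0 * lead_exposure + 0.5 * unemployment + rng.normal(0, 0.5, n_states)

# Design matrix: intercept, lead, control; ordinary least squares fit
X = np.column_stack([np.ones(n_states), lead_exposure, unemployment])
coefs, *_ = np.linalg.lstsq(X, crime, rcond=None)
print(f"lead coefficient ~ {coefs[1]:.2f}")
# Recovers ~2.0 here only because we built the data that way; with real
# data, an unmeasured confounder can still bias the estimate
```

The catch, of course, is that you can only control for the confounders you thought to measure, which is why even a heavily controlled association isn't the end of the story.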

The graph title pretty much says it all (Source)
The article continues by discussing a cohort study done by researchers at the University of Cincinnati, in which 376 children were recruited at birth between 1979 and 1984 to have the lead levels in their blood tested over time, and to measure their risk of being arrested in general and for violent crime specifically. Some participants dropped out along the way, leaving 250 in the final analysis. The researchers found that each increase of 5 micrograms of lead per deciliter of blood came with a higher risk of being arrested for a violent crime, but a further look at the numbers shows a more mixed picture than they let on. For prenatal blood lead, the effect was not significant. If an additional 5 µg/dl conferred no extra risk over the median exposure level, the ratio would be 1.0; they found a risk ratio of 1.34 for their cohort. However, the sample was small enough that the confidence interval ran from 0.88 (which would paradoxically mean an additional 5 µg/dl during this period of development is protective) up to 2.03. This is not very helpful data for the hypothesis. For early childhood exposure, the risk ratio is 1.30, but the sample was larger, giving a tighter confidence interval of 1.03-1.64. It's possible that the real effect is as little as a 3% increase in violent crime arrests, but it is still statistically significant. For 6-year-olds, it's a much more significant 1.48 (95% CI 1.15-1.89). It seems unusual to me that lead would have a more profound effect the older the child gets, but I need to look into it further. For a quick review of the concept of a CI, see my previous post on it. It really matters.
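For the curious, the relationship between those intervals and significance is easy to check yourself. The standard errors below are back-calculated from the published bounds using the usual normal approximation on the log scale; this is my own arithmetic, not a figure from the paper.

```python
# 95% CI on the log scale is log(RR) +/- 1.96*SE, so the reported bounds
# imply the standard error, and a CI straddling 1.0 means p >= 0.05
import math

reported = {
    "prenatal":        (1.34, 0.88, 2.03),
    "early childhood": (1.30, 1.03, 1.64),
    "age six":         (1.48, 1.15, 1.89),
}

for label, (rr, lo, hi) in reported.items():
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # implied SE of log(RR)
    sig = "significant" if lo > 1.0 else "not significant"
    print(f"{label:>15}: RR {rr} (95% CI {lo}-{hi}), "
          f"implied SE ~ {se:.2f} -> {sig}")
```

Running this makes the pattern plain: the prenatal interval crosses 1.0, so that result can't distinguish harm from no effect, while the two childhood intervals sit entirely above it.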

Obviously, we can't take this a step further into experimental data to strengthen the hypothesis; we can't deliberately expose some children to lead and not others to see the direct effects. This is the best we can do, and it's possibly quite meaningful, but perhaps not. There's no way to say with much authority one way or the other at this point, and not just because of the smallish sample size and the mixed results on significance. Despite being an improvement over cross-sectional designs, a cohort study is still measuring correlations, and we need more than one significant result. More cohort studies just like this, or perhaps quicker retrospective studies using previously collected blood samples, are absolutely necessary to draw any conclusion on causality. Right now, this all still amounts to a hypothesis without a clear mechanism of action, though one that definitely deserves more investigation. There are a number of other studies mentioned in the article showing other negative cognitive and neurological effects that could certainly have an indirect impact on violent crime, such as ADHD, aggressiveness, and low IQ, but that's not going to cut it either. By all means, we should try to make a stronger case for government to minimize children's exposure to lead more actively than we currently do, but we really, really should avoid statements like this:


Needless to say, not every child exposed to lead is destined for a life of crime. Everyone over the age of 40 was probably exposed to too much lead during childhood, and most of us suffered nothing more than a few points of IQ loss. But there were plenty of kids already on the margin, and millions of those kids were pushed over the edge from being merely slow or disruptive to becoming part of a nationwide epidemic of violent crime. Once you understand that, it all becomes blindingly obvious (emphasis mine). Of course massive lead exposure among children of the postwar era led to larger numbers of violent criminals in the '60s and beyond. And of course when that lead was removed in the '70s and '80s, the children of that generation lost those artificially heightened violent tendencies.

Whoa. That's, um, a bit overconfident. Still, it's beyond debate that lead can have terrible effects on people, and although there is no real scientific basis for declaring this violent crime link settled in such strong language, it's a mostly benign case of confirmation bias, complete with blaming inaction on powerful interest groups. His motive is clearly to argue that we can safely add violent crime reduction to the cost-benefit analysis of lead abatement programs paid for by the government. I'd love to, but we just can't do that yet.



The $60B figure seems pretty contrived, but it is a generally accepted way to quantify the benefit of removing neurotoxins in wonk world. The $150B is almost completely contrived, and its very inclusion on the infographic is suspect. I certainly believe that spending $10B on cleaning up lead would be well worth it regardless, and I even question the value of a cost-benefit analysis in situations like this, but that doesn't mean I'm willing to more or less pick numbers out of a hat. That's essentially what you're doing if you only have one study that aims to address the ecological fallacy.
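To spell out why that inclusion matters so much, here's the trivial arithmetic using the article's round numbers:

```python
# How much the headline depends on the contested figure
cost = 10e9            # proposed lead-abatement spending
iq_benefit = 60e9      # the generally accepted neurotoxin/IQ figure
crime_benefit = 150e9  # the almost completely contrived crime figure

print(f"without crime benefit: {iq_benefit / cost:.0f}x return")
print(f"with crime benefit: {(iq_benefit + crime_benefit) / cost:.0f}x return")
# 6x vs. 21x -- the ratio more than triples on the strength of one cohort study
```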

The big criticism of appealing to evidence is obviously that it moves at a snail's pace, and there's a possibility we could be hemming and hawing over, and delaying action on, what really is a dire public health threat. Even if that were the case, though, public policy often works at a snail's pace too. If you're going to go after it, you've got to have more than one cohort study and a bunch of cross-sectional associations. Hopefully this gives you a bit more insight into how regulatory agencies like the EPA look at these issues. If this went before them right now, I can guarantee you they would not act on the solutions Drum presents based on this evidence, and instead of throwing your hands up, I figure it's better to understand why that would be the case. It's a bit more calming, at least.

Update: I reworded the discussion on the proper hypothesis of a cross-sectional study to make it more clear. Your initial hypothesis in any cross-sectional study should be that the exposure has no association to the outcome.

Update 2: An edited version of this post now appears on Discover's Crux blog. I'm amazed to see the response this entry got today, and I can't say enough about how refreshing it is to see Kevin Drum respond and refine his position a little. In my mind, this went exactly how the relationship between journalism and science should work. Perhaps journalists should have some latitude to draw overly strong conclusions if the goal is really to get scientists to take a serious look.

Update 3: This just keeps going and going! There's some good criticism from commenters at Tyler Cowen's blog, as well as from Andrew Gelman, regarding whether I'm fishing for insignificant values. You can find my responses in their comment sections. Perhaps I did focus on the lower bounds of the CIs inappropriately, but I think the context makes it clear I'm not claiming there's no evidence, just that I'd like to see replication. In that light, I think it's arguably pretty fair.

Update 4!!! This thing is almost a year old now! I've thought a lot about everything since, and wanted to revisit. Read if ya wanna.