You may have read one of my previous blog posts in which I had a closer look at a paper on radiation dose and the risk of congenital anomalies (if not, you should, and here is the link). In it, the author (the notorious Chris Busby) argued he could calculate a better risk coefficient for low-dose radiation than the one currently used, because the latter is based on external radiation exposure and not on local, internal exposure to specific radioactive particles. Moreover, Busby argued that the dose-response should be bi-phasic rather than the commonly used linear-no-threshold (LNT, for those in the know). As I discussed in that post, there is nothing wrong in principle with proposing an alternative theory, but as with so many (if not all) of Busby's papers, this one too was riddled with errors in the data used, the epidemiological inferences were mostly wrong, and the conclusion was completely overblown. So no surprises there.

Anyhow, to no great surprise, things got a bit heated in the comments section. As with many of these discussions, the actual errors in the paper and the problems with the inferences were never acknowledged; instead it was, apparently, important to look at the 'bigger picture'. So, as is common in these kinds of discussions, the goal posts were moved once again. An appropriate analogy would be that of a building where the roof is supported by crappy walls about to collapse in on themselves (and three of the four already have). Nonetheless, this bigger picture seemed to involve yet another paper, which was the 'real one', and "I should look at that paper instead".

My previous blog post about this work was going to be my last one. I thought I had looked into this enough; it was time for more interesting pursuits. …but then curiosity got the better of me. If this was the 'real paper', the one that was going to robustly prove the theory, maybe I should have a look at it after all? Maybe this one didn't have glaring errors in it; it was, after all, written with other investigators? …so ok then, for the very last time (I promise, pinky swear!).

The paper in question, 'Genetic radiation risks: a neglected topic in the low dose debate', is written by Schmitz-Feuerhake, Busby and Pflugbeil, and is published in the open-access journal Environmental Health and Toxicology (link to article). It is, more or less, a review supplemented with, let's call it, 'political commentary', and it is not that bad (especially compared to the other papers by Busby). As I mentioned before, I really do not have a problem with the theory as such. I remain doubtful that the available data on which the inferences are made are sufficient for any sort of robust conclusion, and I am pretty sure that the conclusions are a gross over-estimation of the true risks, but other than that: some researchers have a certain theory and aim to investigate their hypothesis. Fine.

There is a lot of stuff in that paper, so if you want to know it all you will have to read it yourself, but for the sake of continuity of this blog I will focus on Figure 3 which, similar to the previous paper on congenital abnormalities, again aims to show a bi-phasic dose-response curve (or 'hogs-back', as illustrated theoretically in Figure 2). I have copied both figures below for convenience. These figures refer to a completely different outcome (infant leukemia) than the initial paper, but the principle is the same; for convenience, here is that figure again as well.

So, to continue with Figure 3.
Apparently, these data come from reference 84, which is this paper (link). However, as it turns out when you actually follow those references, they are not the original source at all. Eventually, though, you get to the original source, which is the Committee Examining Radiation Risks of Internal Emitters (CERRIE) report (which can be downloaded, for example, here: link). More specifically, the data come from Table 4A.1 on page 82 of that report.

I reconstructed Figure 3 using the data from the CERRIE report, guessing a bit about the dose (because I do not have those data, nor are they provided in the paper); a rough sketch of how such a reconstruction can be put together is included below. I don't think I did a very bad job with this. I changed the layout a little to make it clearer which data point belongs to which estimate, but this is pretty much an exact copy and shows that these data were used to create the graph (noting that the CERRIE report provides Relative Risks rather than Excess Relative Risks, but in this case the latter is just the former minus 1).

However, all is not as it seems. You may notice that some of the points in the cluster around 0.05 mSv look different. I double-checked this and indeed, Busby has made some errors in copying the numbers across and labelling them correctly. Not a good start, but true to self. In this case it doesn't matter too much for the interpretation of the data; it just does not give you great confidence in the scientific rigor of it all.

I had to look at the following twice as well: the CERRIE report has more data points?! In principle I can kind of see why the USA was not included, because that one was not split into different dose levels, but interestingly 'UK intermediate' and 'Germany intermediate' were also not included (while 'Greece intermediate' is). There is one plausible a priori explanation for this, and that is that whereas the 'Greece intermediate' point serves the purpose of confirming the authors' hypothesis, the other two probably don't. So I added them (using a reasonable guess with respect to dose).

Well, that is interesting. The 'UK intermediate' estimate is higher than the 'UK low' and 'UK high' estimates, which probably suggests random variation around the true risk as a result of [a] basically similar doses for all three dose categories, and [b] the fact that the high estimate is based on 1 case only. Another explanation could be that these radiation doses may not have much to do with the leukemia risk at all. More interesting, though, is the omission of the 'Germany intermediate' risk. This actually shows a protective effect at pretty much the same dose at which the 'Greece high' estimate suggests a 3-fold increased risk (in fact it shows the exact opposite: a 3-fold reduction in risk). Again, this pretty much suggests very large variation as the result of low numbers, and indeed these are 1 and 4 cases for Germany and Greece, respectively. However, just omitting that data point is, how best and politely to describe this, 'dodgy' ('scientific misconduct' could be used as well, but I cannot exclude an honest mistake either).

Something else bothered me a bit when looking at the original figure in the paper: how precise are these points? I cannot say much about the accuracy because, for example, I don't know how the doses are calculated, but surely the studies from which these risk estimates were taken had some sort of estimate of the precision in them?
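As an aside, here is a minimal sketch (not my actual code) of how such a reconstruction could be done in Python. The country labels, dose guesses and relative risks below are illustrative placeholders only, not the values from Table 4A.1 of the CERRIE report; the only 'real' step is the conversion from relative risk to excess relative risk (ERR = RR - 1).

```python
# Minimal reconstruction sketch; the rows are illustrative placeholders,
# NOT the numbers from the CERRIE report.
import matplotlib.pyplot as plt

# (label, guessed dose in mSv, relative risk) -- hypothetical example rows
rows = [
    ("Country A, low",          0.02, 1.1),
    ("Country A, intermediate", 0.05, 1.4),
    ("Country A, high",         0.20, 0.9),
    ("Country B, low",          0.03, 1.2),
]

for label, dose, rr in rows:
    err = rr - 1.0                       # excess relative risk = RR - 1
    plt.scatter(dose, err, label=label)

plt.axhline(0, color="grey", lw=0.8)     # ERR = 0, i.e. no excess risk
plt.xlabel("Estimated dose (mSv)")
plt.ylabel("Excess relative risk (RR - 1)")
plt.legend(fontsize=8)
plt.show()
```

With the actual table values (and my dose guesses) substituted in, this gives essentially the figure shown above.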
And indeed, the table in the CERRIE report provides a P-value or a 95% confidence interval for all but one of these estimates, as well as the number of cases behind each risk estimate. So why were these not included? Luckily, we can do this ourselves: P-values can be calculated from confidence intervals, and the other way around, using these equations (link 1 and link 2; a small sketch of these conversions is included at the end of this post). When we add the 95% confidence intervals to all the estimates in the figure, we get the following:

Ah, right! Well, this makes the evidence for the biphasic dose-response curve a lot less convincing. The confidence intervals of all the estimates overlap, implying there is no real statistical evidence that they differ from one another. Indeed, when you draw a linear regression line through the points (with its 95% confidence interval), it is quite clear there is not much evidence of a dose-response at all, let alone of a biphasic one: the confidence intervals of all the estimates cross the confidence band of the regression line. This should not have come as a big surprise, really, because all the evidence for their hypothesis (at least where this figure is concerned) is based on two estimates from a single study (Greece high and Greece intermediate) with only 4 and 7 cases, respectively. Is this really what counts as robust evidence to support a hypothesis?

The answer to the above question, by the way, is 'no'.

So, I promised that this would be my last blog post about the topic, and unless something new and exciting happens, it probably will be. I have now looked at the various wild claims thrown around by Busby, and have found only errors in the data, problems with the statistical and epidemiological analyses, problems with the inferences made, unsupported wild claims about enormous risks ignored by "the establishment", and now also actual clear evidence of scientific misconduct. Does this mean there is definitely no biphasic dose-response with a high excess risk at low doses? Not really; that is still a valid hypothesis that can be researched. But based on everything laid out before us so far, there is no evidence to support it.

So for now, let's bury this. Bury this and move on… there are so many more things to investigate. Anybody looking into fracking yet? Or this climate change thing?
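As promised above, here is a minimal sketch of the P-value to 95% confidence interval conversion (and back) for ratio measures such as relative risks. I am assuming the two links in the post point to the usual approximate formulas from the BMJ statistics notes; treat the functions below as an illustration of that approach, not as the exact calculation used for the figure, and note that the numbers in the example are made up.

```python
# Approximate conversions between a two-sided P value and a 95% CI for a
# relative risk, working on the log scale (assumed BMJ-style approximations).
from math import exp, log, sqrt

def ci_from_p(rr, p):
    """Approximate 95% CI for a relative risk given its two-sided P value."""
    est = log(rr)                                  # estimate on the log scale
    z = -0.862 + sqrt(0.743 - 2.404 * log(p))      # z statistic from the P value
    se = abs(est) / z                              # standard error of log(RR)
    return exp(est - 1.96 * se), exp(est + 1.96 * se)

def p_from_ci(rr, lower, upper):
    """Approximate two-sided P value for a relative risk given its 95% CI."""
    est = log(rr)
    se = (log(upper) - log(lower)) / (2 * 1.96)    # standard error of log(RR)
    z = abs(est) / se                              # test statistic
    return exp(-0.717 * z - 0.416 * z ** 2)

# Made-up example: a relative risk of 3 with P = 0.04
print(ci_from_p(3.0, 0.04))       # roughly (1.05, 8.6)
print(p_from_ci(3.0, 1.05, 8.6))  # roughly 0.04
```

With the intervals in hand, adding error bars to the reconstructed plot and fitting an ordinary linear regression with its confidence band (for example with statsmodels) is enough to reproduce the overlap check described above.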
Probably time to bury it....