A Spy Novel by a modern-day Franz Anton Mesmer (and a bit about low-level radiation)…
A couple of months ago I attended a meeting about whether childhood leukemia incidence in the vicinity of nuclear power stations could be related to radiation exposure or, well, not. Unfortunately, the meeting got somewhat hijacked by a well-known (relatively speaking, in certain circles) campaigner. Little did I know that the reason for his aggression, both verbal and in writing, was that he was battling most (if not all!) Western governments, making daring escapes from hitmen, and thwarting secret missions to silence him and hide “the truth”. I was completely oblivious to this until I read a recent first-hand account of these adventures in Counterpunch, in which the protagonist also describes the events that led up to the meeting I attended. It is very exciting, and you can read it here [link].

The British government (presumably, but I am afraid we will never know) was hot on his tail. And not without reason; the real truth was going to come out if they did not silence our hero first. First they made sure his plane was broken, in an attempt to make him miss the meeting, and when that failed they sent a spook to tail him. Luckily, he managed to circumvent this obvious danger, which also involved an intervention by Swedish airport security, and to shake off the spy (who disappeared after realising he had been caught out). He made it onto the flight to London and into his hotel room, the door barricaded with several chairs (I mean, who would blame him). Anyway, no wonder his behaviour during the meeting was, let’s just say, less than optimal.

Having said that, just because you are paranoid does not mean they are not out to get you. And indeed, on his way to the meeting he was under surveillance by the evildoers. Or, to rephrase that a bit: I was in the same tube carriage as him.
It all looked very relaxed, so either our protagonist may have exaggerated a tiny bit, or he is just that experienced in counter-espionage. Anyway, his recollection of the meeting is somewhat different to mine, but then again I would say that, wouldn’t I, being one of the bad guys in this spy novel. The fact that someone forgot to print my name badge is interpreted as further evidence of how devious the establishment are, bringing in at the last moment: “…that guy from Bristol who attacked my Trawsfynydd study on the statistical power, but then found out he was wrong and backed off: Frank de Vocht. Quickest way to get on COMARE (assuming anyone wants to) is to attack Busby….”. The discussion of the Trawsfynydd study referred to above can be found here [link], and as you can see it contains no discussion resulting in any ‘backing off’. He may be referring to the discussion of his El Diablo study, which was of similarly low quality and which did have a discussion afterwards [link]. That discussion fizzled out after it was pointed out to him that doing the same thing wrong over and over again is not the same as updating Bayesian priors.

Why, you may ask, do you keep pointing out the mistakes in the research of the same person? Aren’t there other people who deserve attention too? Fair question. I obviously have an interest in radiation epidemiology, so I have read those papers anyway. Moreover, because his papers are generally published in “his own journal” rather than proper peer-reviewed ones (despite having the appearance of one), it is not possible to have these discussions in the scientific environment. But in addition, like everyone nowadays I am quite busy, and it is difficult keeping up a blog on top of normal work and family life, so I sometimes take shortcuts and comment on the low-hanging fruit.
And finally, it really is a shame that people with genuine concerns about living near nuclear power stations, instead of being able to discuss these with the various committees, politicians and scientists, get hoodwinked by Mesmerizing “science”, “paradigm shifts”, “fights against the establishment” and references to supposed apocalypses. A quote from the article even refers to “…has been the cause of tens of millions of deaths”.

So with that in mind, I’d hate to disappoint. Busby referred in the meeting to new seminal work that would appear in the near future. I had a look, and since there is only one paper it would have to be the one titled “Radiochemical Genotoxicity Risk and Absorbed Dose”, published September 14, 2017, in the journal Research Reports on Toxicology. It is open access, so read it here <link>. I am sure Busby, when asked, will refer to this as being peer-reviewed. However, you may notice the paper was received on August 22, accepted September 7, and published September 14 in volume 1, issue 1 of a new online-only journal of which Busby is the editor-in-chief. In fact, the journal only has one issue, containing four articles, all by its editors. So I think we can just delete any reference to peer review.

That does not of course make it a bad study by default, so let us have a look at the seminal paper itself. Like all good works of bad science, there is quite a lot of truth in it to make it look plausible to the casual reader (with or without scientific training): although biological effects from radiation will often arise from local doses, in epidemiological studies exposure-response associations are generally based on external dose, which provides an estimate of the average energy transfer to all tissue. The most important effects could come from local, internal situations where, say, a radioactive particle exerts a much higher ionization density on a small area than the average external dose would suggest.
The two could, but need not, be correlated. This is a well-known and recognized phenomenon, but because it is virtually impossible to obtain such specific information accurately for the large numbers of people needed for epidemiological research, external doses, which can be estimated relatively easily, are used instead. In other words, it is like saying: “To look at effects from air pollution, do not link health effects to measures of air pollution, but instead only look at effects on molecules in the body.” A grain of truth, but by and large not very useful. As said, this is well recognized, but it is the skill of the true charlatan to find and exploit such grey areas of science; especially when linked to another grey area of epidemiological science: exposures barely above background levels and/or very protracted low-level exposures.

Busby then wants to calculate a better risk coefficient than the one based on the atomic bomb survivors, and attempts to do this by looking at heritable effects and the Chernobyl accident. That is fair enough in principle; such associations have been made, so it is worth looking at these studies in more detail. Unfortunately, Busby does this by just throwing all outcomes onto one big pile. This is acknowledged in the paper (“the different studies often examined different congenital malformations, and therefore it is not strictly accurate to employ all of these results to establish a “risk factor” for “heritable disease””), but then conveniently brushed over. In fact, this is quite important: one can compare ‘all malformations’ across different studies, but to compare legal abortuses in Belarus with neural tube defects in Turkey, for example, to try and derive one risk coefficient is just plain bonkers.
The bi-phasic model (basically, a distribution of risk with a peak at low and a peak at high doses) advocated in this paper could, aside from many other reasons, also be the result of including a multitude of different dose-effect associations, one for each of the different outcomes so carelessly put on one pile. It then turns out that different time periods are also compared without any discussion, so in some cases this included malformation cases conceived well before the Chernobyl incident, while in others it included cases conceived well after the event. These would, of course, involve completely different mechanisms by which the malformations would occur (if an effect exists at all). Moreover, in some cases this is stretched to such an absurd length after Chernobyl that it is highly unlikely, and probably impossible, for the two to be related (this is unequivocally pointed out in reference 41 of the paper, see below). Yet all are included nonetheless.

All studies combined this way are shown in a large table (Table 2) in the paper, with the calculated Excess Relative Risks per mSv (the dose) shown graphically in Figure 8, which I copied below.

As you can see, there is a massive peak at a dose of 0.5 mSv. However, looking a bit closer, the whole theory turns out to be based on 7 data points. We will get to those, but first let’s have a look at how these doses are calculated. First, they are based on external exposures, but apparently this is sufficient because “the activity levels of internal exposures are second order”. Remember the rationale for the paper I outlined above, the external/internal radiation issue… hmmmm. So yes, it seems Busby ran into the same issue all epidemiologists have run into. It gets worse though.
According to the footnote below Table 2, “Doses were either taken from the paper or estimated on the basis of UN reports or calculated from the reported levels of area contamination by Cs-137 using FGR12 (Part 2) and the computer program “Microshield”.” Looking at the doses used in the table for the only 7 data points on which Busby’s theory is based… I am sure you guessed it… they are all calculated by himself. In fact, they are all exactly the same value (i.e. 0.5). That is quite important, because the risk estimates are divided by this dose to get to the new and improved risk coefficient. Said differently, the 7 relevant data points are multiplied by 2, while the other data points are divided by a larger number. I cannot prove that this is incorrect, because I do not know what was done, but exposure estimates all resulting in exactly the same number, with no variation or uncertainty, and exactly for the doses relevant to your theory… it is all a bit unlikely and suspect.

Thus, so far there is something clearly wrong with the outcomes, which are just willy-nilly combined, and the exposure assessment is a bit dodgy. That is not a good start. So let’s have a closer look at the 7 data points (from 6 studies) that are relevant to the theory. Of course, these are all used without taking into account any of the uncertainties surrounding the risk estimates, so no use of confidence intervals; this is what one does to prove a point instead of using the scientific method (you see it a lot in the world of ‘dirty electricity’ as well).

Unfortunately, I cannot access the studies in Paediatr Perinat Epidemiol, but the relevant estimates are mostly provided in the abstracts. So here we go:

Akar et al [39]: I cannot access this, but the results are also mentioned in Akar (Egypt J Med Hum Gen 2015; 16: 299-300). Based on these data, the correct ERRs for neural tube defects and anencephalus are 2.2 and 4.2, respectively, and not 5.4 and 5.
Caglayan et al [40]: The ERR for neural tube defects is indeed 4.7.

Guvenc et al [41]: As clearly stated in the abstract, “[t]he increases observed occurred mainly in infants conceived well over a year after the Chernobyl disaster, suggesting that other factors may be responsible.”

Mocan et al [42]: The ERRs in the paper are wrongly calculated (the wrong numbers are combined), and should in fact be 2.1 (not 3.4) for NTD and 1.9 for anencephaly (not included by Busby).

West Berlin Government [47]: this is a reference that is not available.

Lotz et al [48]: I know just enough German to understand the abstract, and the ERR that should have been used is 1.8 (as stated: “Compared with the reference period, malformation risk increased by a factor of 1.8 (RR = 1.8, CI = 0.86-3.78)”) and not the 4.1 used in the paper. Also note that the confidence interval includes 1, so there is only very limited evidence of any increased risk at all.

So, in summary: out of the seven data points on which this “seminal” theory is based, 1 is correctly inferred, 3 are wrongly inferred, 1 is clearly not associated with the Chernobyl incident, and 1 is unavailable.

Even if we were to assume that the exposure assessment is correct (which I doubt), I have just halved the “new and improved risk coefficient” simply by checking Busby’s methodology for clear errors. In fact, I have not even bothered really checking the exposure assessment, or obtaining the full papers to see whether the associations with the Chernobyl disaster that Busby infers may be correct.

All of this should have been picked up by a proper and thorough peer-review process, but in the absence of one, this is the result. None of this of course means that the association between radiation and heritable effects is non-existent; it shows that the study by Busby provides no scientific evidence either way (again).
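To make the arithmetic of that dose division concrete, here is a minimal sketch in Python. The helper `err_per_msv` is my own illustration of the construction behind Table 2 and Figure 8 (ERR divided by assumed dose), not code from the paper, and the 2 mSv comparison value is a hypothetical of mine, not a figure from any of the studies.

```python
# Illustrative sketch: turning a study's excess relative risk (ERR)
# into an "ERR per mSv" coefficient by dividing by the assumed dose.

def err_per_msv(err: float, dose_msv: float) -> float:
    """Risk coefficient: excess relative risk divided by assumed dose (mSv)."""
    return err / dose_msv

# Take the ERR of 4.7 quoted above (Caglayan et al) as an example.
# With the assumed dose of 0.5 mSv, dividing by 0.5 doubles the estimate:
print(err_per_msv(4.7, 0.5))  # 9.4 per mSv

# Had the assumed dose been, say, 2 mSv (hypothetical), the same ERR
# would yield a coefficient four times smaller:
print(err_per_msv(4.7, 2.0))  # 2.35 per mSv
```

This is why assigning exactly 0.5 mSv to the seven critical data points matters so much: the same risk estimates placed at any larger dose would flatten the peak in Figure 8.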
So here we find ourselves again. A new paper by the same author: sloppy science, incorrect inferences, and completely overblown conclusions. You may remember another individual in history who was good at this: Franz Anton Mesmer.