Experts: Can't live with them, can't live without them

I generally aim for this blog not to be about my own research, but in this particular case I am going to make an exception. We recently published the results of what may well be the first qualitative study investigating exposure assessment methodology (OK, I did not actually check this, but if it is not the only one, it is one of very few...). And since new things are interesting, I thought this would be worth your time.

First things first: our paper, entitled "Wishful Thinking? Inside the Black Box of Exposure Assessment", is available as open access from the Annals of Occupational Hygiene <link>, so if you are interested in this kind of stuff there is nothing holding you back from printing it out and having a read-through.

Now that you have the paper, or will download it in the near future, what is it all about? In a nutshell, we were interested in what goes on in the heads of experts in occupational exposure assessment and occupational hygiene, and we used the most novel tool we have at our disposal for getting that information: we asked them...

The reason we wanted to know more about this is that having experts estimate (or, and we will get to this, guess) exposure is one of the main tools employed in studies investigating the health effects of exposure to chemicals (mostly, but not exclusively) that people may encounter while doing their jobs. Despite this method being regularly used, we don't actually know that much about it, especially not in the context of occupational hygiene.

Anyway, to study the effect a specific exposure may have on the health of workers, we need to know what kind of chemicals someone has been exposed to, how much, and how this changed over time.
For many diseases we would need to know this over quite a long time period, because it takes a long time of being exposed before a disease develops (for example, and not from the occupational environment: generally speaking, one won't die from smoking one cigarette, but a lifetime of smoking tobacco is not a great idea). Getting accurate estimates of this exposure is quite difficult, but it can be done, for example by personal or area measurements of the chemical in the air of the workplace, or by measuring some biomarker in the urine of workers. This is generally considered the best method, and it can be sufficient, but sometimes it is not possible to measure exposure: measurements cannot be done in every factory, employers or employees do not allow measurements to be taken, or, most notably, measurements cannot be taken retrospectively, that is, in the past. One of the (very reasonable) solutions we have is to ask an expert what the exposure likely was in the past and use this in our risk prediction models instead of the "real exposure". This expert can be anyone we think knows more about the situation than a random person picked off the street: for example an employee, a manager, a toxicologist, an occupational physician, and of course an exposure assessor or occupational hygienist. The latter groups are generally considered the best choice since, in theory, exposure assessors and occupational hygienists are the people with specific training in how to characterize, measure, model, assess and estimate exposure and to recognize hazardous situations.

The above sounds like a good idea, and is sometimes the only option available to us, but does it work? The answer to that question is pretty useless, since... sometimes it does, sometimes it doesn't.
Studies have been published looking at this (for example 1,2), and we have looked at this as well <link>, and the results are variable: on average, it seems that experts are a little bit better at estimating exposures than other people. In practice, I would say that implies that some experts are better than others, and that this differs between situations. Not a novel insight, but important to remember...

So now that we know all this, the results from the qualitative study are really interesting. Theoretically (and this is what we tell ourselves), all of us exposure assessors and occupational hygienists have been taught to think very systematically about how human exposure could happen: we are taught about sources of exposure, release mechanisms, the physico-chemical characteristics of the exposure, how exposures react with each other, and the various determinants of exposure, including work characteristics and the impact of local exhaust ventilation or other exposure reduction and removal tools; while of course everyone adheres to the occupational hygiene strategy. And indeed, sometimes this happens; for example (quoted from the paper):

'...much of the process is wet and so minimal dust exposure, potential for exposure early on, opening bags of flour, scooping into mixers, dusting'

'...potential for exposure was intermittent to constant'

Unfortunately, our results also showed that even the experts use general ideas and 'rules of thumb' for exposure assessment (or, more scientifically, they resorted to the use of various heuristics (explanation here)), and maybe a little more than we are comfortable with. For example:

'I assume that things were better in the 1970s compared to the 1950s and 60s.'

'...sometimes you're rationalising things and sometimes you're just doing it at a more sort of gut level.'

This is, of course, something most people would have come up with without being specifically trained in exposure assessment.
Heuristics are pretty useful, and indeed we all use them regularly, so you could argue that, as a result of experience and training, experts make fewer errors when they use them. And you would probably be right. But how do you know the "error rate" of experts? And how do you know you have an expert with a low, or at least acceptable, error rate?

Since in science we'd really prefer a better rule for this than "he/she seems like a nice person", in the paper we describe some suggestions for improving expert assessment in occupational health and epidemiology: for example, the use of multiple experts so that errors get averaged out, benchmarking of experts, and the use of graphical material to show the situation to the expert (ideally, site visits should be conducted to improve this even further, but of course this cannot be done retrospectively).

However, I would like to point to something else a bit more bluntly: when we are interpreting results from an occupational epidemiological study that incorporated expert assessment, and the assessments are not validated by exposure measurements and are done by only one expert (or very few), should we put any value on the outcome? A quick scan of the literature reveals that quite a number of such studies have been published (sorry, but I am not going to name and shame...), so it is quite a serious issue. I previously wrote about the impact of measurement error <here>, which definitely plays into this as well, but here there is also the possibility of biased results, which is even worse.
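The "errors get averaged out" idea can be illustrated with a small simulation. This is just a minimal sketch, not anything from the paper: it assumes hypothetical experts whose estimates scatter independently around the true (say, log-transformed) exposure, and all function names and parameter values are made up for illustration.

```python
import random
import statistics

def panel_estimate(true_exposure, n_experts, error_sd=0.3, seed=0):
    """Simulate one panel: each hypothetical expert gives the true
    value plus independent random error; the panel reports the mean."""
    rng = random.Random(seed)
    estimates = [true_exposure + rng.gauss(0, error_sd)
                 for _ in range(n_experts)]
    return statistics.mean(estimates)

def mean_abs_error(true_exposure, n_experts, n_trials=2000):
    """Average absolute error of the panel mean over many simulated panels."""
    errors = [abs(panel_estimate(true_exposure, n_experts, seed=t) - true_exposure)
              for t in range(n_trials)]
    return statistics.mean(errors)

# A five-expert panel lands closer to the truth, on average,
# than a single expert (error shrinks roughly with 1/sqrt(n)):
single = mean_abs_error(1.0, n_experts=1)
panel = mean_abs_error(1.0, n_experts=5)
assert panel < single
```

Note what this sketch does not fix: averaging only removes random error. If all the experts share the same wrong rule of thumb, the panel is biased no matter how large it is, which is exactly why benchmarking and validation against measurements still matter.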
Of course, this problem is not unique to occupational exposure assessment and occupational hygiene; it extends to other fields of epidemiology and public health, as well as to, for example, 'food experts' <link> and, importantly (and resulting in feelings of smugness regardless of anything else), wine experts <link 1,2,3>.

To, in conclusion, answer my own question: in my opinion, studies including a sentence similar to "the list of occupations was assessed by an expert..." should not be considered for publication until they are complemented by some form of validation of the assessment.

Actually, now that I have finished writing this, I realize that colleagues and I have come up with a potential way of evaluating this. It can be found here as open access <link>. Excellent!