Questions so far

October 4, 2006

1. What requirements should be placed on medical/scientific journalists to be educated in their field? Compare British to US scientific journalism.

2. How should these – and other – requirements be put in place: voluntary regulation, or legislation?

3. Should ethical training be required for journalists?

4. Should regulation be increased in other areas – e.g. declaration of interests, objectivity/fairness, separation of news and comment? Again, should regulation be self-imposed or legislated? What evidence is there of the effectiveness or otherwise of self-imposed (PCC) regulation, and what are the risks of legislation – might it draw the teeth of the media?

5. What medical/scientific knowledge and understanding is it reasonable for a journalist to assume of his/her readership? Where is the line between helpful exposition and condescension? How much should it change between publications? Is it true that The Sun is deliberately written for a reading age of nine, and if so should it assume a similar level of expertise in its readers?

6. Do journalists have a duty to their employers to sell papers? To what extent could sensationalism be justified by that? Or, rather – since it sounds tautological to say that “sensationalism” is a bad thing – when does justifiable volubility become sensationalism? Presumably, if there is such a thing as a duty to their employers, it will require “upselling” stories somewhat, or the duty becomes meaningless.

7. Perhaps the chief question: what is the line between reporting in the public interest and scaremongering?

8. When should the scientific consensus view be taken at face value, and when should it be questioned? Scientifically untrained journalists should be wary of seeking out controversial views for “balance”, but I believe there are cases when the “establishment” view has turned out to be dangerously wrong – BSE? Asbestos? Were these views supported by the scientific community? A decent-sized section on Popperian scientific philosophy – what is certainty? Can it be achieved? Falsifiability, inductive reasoning, etc. – would be interesting and informative.

9. What responsibilities do scientists and doctors have in reporting their research to the media? Ben Goldacre, the Newton’s Apple Thinktank launch essays, badscience.net, 16th Oct 2006:

Scientists and doctors, for example, can take care to be clear about the status and significance of their work when talking to journalists. Are the results preliminary? Have they been replicated? Have they been published? Do they differ from previous studies? Can you generalise, say, from your sample population to the general population, or from your animal model to humans? Are there other valid interpretations of your results? Have you been clear on what the data actually show, as opposed to your own speculation and interpretation? And so on.

It is naive to imagine that such basic guidelines will be heeded by the irresponsible characters on the fringes who produce so much media coverage. However, they do represent best practice, and so they are always worth reiterating: they deserve to be incorporated into codes of practice from professional bodies and research funding bodies.

Scientists and doctors would also be well advised to take some even simpler steps: to think through the possible implications of their work, inform interested parties before publication, and seek advice from colleagues and press officers. This advice and more is all covered in the Royal Society’s excellent Guidelines on Science and Health Communication, published in 2001 [3].

Journals, too, can take a lead, since they often produce the promotional material for research. Risk communication is a key area here, and although it is tempting to present risk increases, and indeed benefits, using the largest single number available (the “relative risk increase”), it is also useful to give the “natural frequency”. This figure has context built-in and is more intuitively understandable: it is the difference between ibuprofen causing “a 24 per cent increase in heart attacks” (the relative risk increase) and “one extra heart attack in every 1,005 people taking it”.
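Worth pinning down the arithmetic behind those two framings for my own benefit. A quick sketch in Python (the baseline rate here is back-calculated from the two numbers Goldacre quotes, so it is illustrative rather than a figure from the underlying study):

    # Back-of-envelope arithmetic linking the two framings quoted above.
    # The baseline rate is inferred from the quoted numbers (a 24% relative
    # increase equalling one extra case per 1,005 people); it is an
    # illustrative assumption, not a figure from the study itself.

    relative_risk_increase = 0.24        # "a 24 per cent increase in heart attacks"
    absolute_risk_increase = 1 / 1005    # "one extra heart attack in every 1,005 people"

    # absolute increase = baseline rate x relative increase, so:
    baseline_rate = absolute_risk_increase / relative_risk_increase

    print(f"Implied baseline rate: {baseline_rate * 1000:.1f} heart attacks per 1,000 people")
    print(f"Rate on ibuprofen:     {baseline_rate * (1 + relative_risk_increase) * 1000:.1f} per 1,000")
    print(f"Natural frequency:     1 extra heart attack per {1 / absolute_risk_increase:.0f} people")

Same data either way; the choice of framing does most of the rhetorical work.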

10. Does freely available, peer-reviewed scientific literature – which should in theory remove concerns about conflicts of interest on the part of the writers – confuse journalists more used to hunting out hidden agendas? A common theme appears to be “this researcher has in the past received funding from Drug Company A; therefore we should distrust his research purporting to show the efficacy of Drug Company A’s new product”. Is this unfair or is there reason to doubt it? Goldacre: http://www.badscience.net/?p=251 “…over the past few years there have been numerous systematic reviews showing that studies funded by the pharmaceutical industry are several times more likely to show favourable results than studies funded by independent sources”. Sinister?

11. Further to the drug company thing – how widespread is the practice of using charities and friendly media outlets to sidestep advertising standards rules? Who has what responsibility where in that scenario?

12. How should I decide which articles to use? Presumably won’t be able to use all of them. Is there a way of randomly selecting them? Should I deliberately choose the most distorted pieces or am I then guilty of distortion myself? Anyway. Will need to establish a selection process. Could simply be “the ones that interest me”, I suppose – they don’t have to be representative of a paper’s editorial stance, since the existence of an unsound piece is A Bad Thing all on its own. Hmm. Seek advice.
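If random selection turns out to be the answer, it needn't be anything grander than this sort of thing (a sketch only; the pool of articles and the sample size are placeholders, not decisions):

    import random

    # One possible selection process: a fixed-size random sample from the full
    # pool, so the chosen pieces aren't simply the most lurid ones. Everything
    # here is a placeholder; "articles" would really be the list of clippings.

    articles = [f"article_{n:03d}" for n in range(1, 201)]  # hypothetical pool of 200 pieces

    random.seed(2006)                        # fixed seed so the draw can be reproduced
    selection = random.sample(articles, 20)  # 20 pieces, drawn without replacement

    for piece in selection:
        print(piece)

Whether a random draw or a deliberately chosen "most distorted" set is the right call is exactly the question above; this only shows the mechanics are trivial once the pool is defined.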

13. From Dad: [There are things like] glue sniffing which, by consensus, just don’t get a mention in the press, in an attempt to prevent young people giving it a try. This made us wonder if you should have a chapter on examples of formal and informal agreements like this and their effectiveness.