Tuesday, October 30, 2012

Expert accountability

The ruling last week by an Italian court that seven scientists were guilty of manslaughter for failing to warn citizens about the threat of a major earthquake in the city of L'Aquila raises some difficult questions about accountability and ethics in relation to the provision of 'expert' opinion and scientific advice to governments and the wider public. The scientific community and the global media have reacted with outrage, comparing the verdict and the threat of lengthy jail sentences for the scientists involved with the persecution of Galileo and the burning of witches.

I agree in principle and in spirit with this sense of outrage. The ruling appears to betray a lack of understanding of the inherent uncertainty in any scientific endeavour, which is particularly pronounced when it comes to forecasting future events. Furthermore, putting science on trial in this way threatens to curtail the progress of scientific inquiry generally, and more specifically--in the nearer term--to make the position of those charged with civil protection and disaster prevention, in Italy at least, almost untenable. However, there remains an important point relating to accountability and ethics that appears to have been overlooked amidst all the righteous indignation.

In an excellent article in Nature, Stephen S. Hall outlines the sequence of events that led up to the tragedy. At an extraordinary meeting of the National Commission for the Forecast and Prevention of Major Risks, the seven scientists--all members of the Commission--reached the conclusion that an earthquake was "unlikely", though not impossible. This in turn was interpreted by a government official, speaking at a press conference, as meaning that the situation in L'Aquila was "certainly normal" and posed "no danger". The same official further added that the sequence of minor quakes and tremors that had been occurring in the region was in fact "favourable ... because of the continuous discharge of energy".

This interpretation is apparently contrary to the scientific evidence. The same article, by Hall, quotes Thomas Jordan, director of the Southern California Earthquake Center at the University of Southern California in Los Angeles, and chair of the International Commission on Earthquake Forecasting (ICEF), as suggesting that in the aftermath of a medium-sized shock in a seismic swarm (a sequence of tremors), the risk of a major quake can increase anywhere from 100-fold to nearly 1,000-fold in the short term, although the overall probability of a major quake remains relatively low--at around 2%, according to a study of other earthquake-prone zones in Italy (G. Grandori et al. Bull. Seismol. Soc. Am. 78, 1538–1549; 1988), also quoted by Hall.
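The arithmetic behind these figures is worth making concrete. The sketch below uses a purely hypothetical baseline probability--not a number from Hall's article or the Grandori study--to show how even a 1,000-fold increase in a very small background risk can still leave the overall probability of a major quake at around 2%, which also means an evacuation would have been unnecessary roughly 98% of the time.

```python
# Illustrative arithmetic only. The baseline below is a hypothetical
# placeholder for the background chance of a major quake in a given short
# window, absent any seismic swarm; it is not taken from the ICEF or the
# Grandori et al. study.
baseline = 0.00002  # assumed background probability (0.002%)

for fold in (100, 1000):
    elevated = baseline * fold
    print(f"{fold:>5}-fold increase: "
          f"major quake {elevated:.1%}, no major quake {1 - elevated:.1%}")
```

Even at the top of the quoted range, the "elevated" risk remains a small absolute number--which is precisely why "unlikely" was defensible as a summary, yet badly incomplete as public guidance.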

A 1,000-fold (or even 100-fold) increase in the probability of a major quake would have been a very different message for public consumption than the "anaesthetizing" reassurances given to the media at the press conference. Clearly, the most egregious error committed here was the over-interpretation, or indeed misinterpretation, of the scientific evidence by the government official who spoke to the press. The scientists were--apparently--correct in their assessment that a major quake remained "unlikely", although not impossible. But was this the best way of characterizing the risks for the general public?

One of the people affected by the L'Aquila earthquake, quoted in Hall's article, admits that he feels "betrayed by science"--"Either they didn't know certain things, which is a problem, or they didn't know how to communicate what they did know, which is also a problem."

It may be that the error here was not in mis-stating the risk, but in not being specific enough about it. There is an understandable reluctance to use statistics, probabilities and scientific terminology in the public communication of scientific evidence. But at times we take this too far. The public is not stupid, and--problems with common misunderstandings in relation to probability notwithstanding--would be better served by the scientific community and public officials avoiding condescending reassurance in favour of the clear presentation of facts.

Is woolly language such as "unlikely" really any more useful or informative than simply saying "we don't know"? Might the committee have done better to present the available statistical evidence--including details about the changes in probabilities--while acknowledging that the precise timing and location of a major quake is essentially unpredictable?

The advice could have stopped short of ordering a full-scale evacuation--which, on the basis of the best available evidence, would have been unnecessary 98% of the time--and instead, simply presented that evidence, enabling people to make their own informed decisions about what level of risk they were willing to accept.

Returning to the issue of accountability and ethics, to what extent should scientific or other experts be held accountable for the advice they give to governments or to the wider public? In the case of the L'Aquila tragedy, a more relevant question might be: should researchers be held accountable for the way in which their findings are interpreted by policy-makers and, in turn, by the media? Should researchers generally consider how their findings are likely to be interpreted before making them public? This almost certainly places an unreasonable burden on researchers.

And yet--while researchers can't be expected to control how others interpret their findings, a greater effort needs to be made in communicating the science--its achievements and its limitations--directly to the public. With the recent proliferation of sources for news, opinion and analysis, the authority of traditional media outlets and the role of journalists and editors as the gatekeepers of public information are increasingly being challenged. This presents both a challenge and an opportunity for scientific engagement with a wider audience. It is increasingly difficult for members of the public to distinguish the signal of scientific or expert analysis from (a) noise and (b) intentionally biased or deceitful opinion emanating from thinly disguised lobbyists portraying themselves as independent 'experts'. On the other hand, the internet enables researchers to disseminate their findings, methods and data without intermediation by journalists or politicians.

The 'science on trial' headlines may sound melodramatic, but scientists from both the social and hard sciences are right to feel they are being challenged to justify their art as at no other time in living memory. Public confidence in "science"--in its broadest sense--has been undermined by episodes such as the 'climategate' controversy. The discipline of economics, similarly, has been widely criticised for not predicting the financial crisis--and more fundamentally, for persisting with models that appear unable to explain 'real world' phenomena. This critique certainly has some merit, and economics as a discipline is evolving to take account of the lessons from related disciplines, notably psychology, biology and epidemiology. However, it has to be recognised--both by researchers and those who would criticise their efforts--that models, by their very nature, are imperfect simplifications of the world they are trying to explain. One clear responsibility of any researcher is to think carefully about the domain of validity of the models that they use, to define their limitations, and to communicate this in an unambiguous and honest way, along with any findings from the research.

Reflecting on the trial of the seismologists, the former president of Italy's National Institute of Geophysics and Volcanology concludes that "scientists have to shut up". On the contrary, the lesson for scientists from this tragedy and the subsequent trial is to be more proactive in our engagement with the public.

A good starting point might be the establishment of a voluntary code of ethics for researchers. This would include, for example, a commitment to publish annually a list of all sources of funding for one's research. Furthermore, the code might also contain a commitment to make public not only our research findings but also the data and methodology used (including relevant context, limitations and assumptions). Signing up to this code could be a prerequisite for any government advisers, and could similarly become a useful tool for the media in screening 'expert' commentators.

More generally, this code would be based on three fundamental guiding principles: honesty, transparency and humility. Going back to Hall's Nature article, he quotes a man who lost his wife and daughter in the earthquake, lamenting the fact that "the science, on this occasion, was dramatically superficial, and it betrayed the culture of prudence and good sense that our parents taught us on the basis of experience and of the wisdom of the previous generations." Perhaps the greatest lesson from this tragedy is the need for a greater degree of humility when it comes to the predictive powers of even the most sophisticated scientific models.