Over the past few weeks, a big topic of debate has been the relationship between civil servants, politicians, and scientists. The repeated refrain from government on policy around Covid-19 has been that they have followed the scientific advice in coming to their decisions; this may be taken as a laudable strategy, as an attempt to avoid blame if things go wrong, or as a bit of both. In that context, I thought it was worth considering the issues around scientific advice, to see whether things could be done better.
I started by trying to list all the ways things can go wrong in the science/politics interaction. Firstly, of course, scientists are human beings and their knowledge, experience, and perspective are limited – just like those of any other human being. They come to the table with their own personality and their own agenda. A scientist who has advanced to the top of her (or, more often in a still male-dominated academic world, his) discipline is dependent for her success on the growth and development of that discipline, and specifically on her ability to get funding for her particular area of research. Giving advice to politicians is an opportunity to highlight and emphasise what that discipline, and her own research, has to offer – and that might be best achieved by putting it, rather than other work, at the top of the pile when providing advice. I should emphasise that having that motive doesn't mean every scientist will act on it. However, because the motive exists, a robust system must account for the fact that it could affect what scientists say.
A bias towards one's own field or one's own research is not the only potential issue with scientific advice – a specialist might genuinely be unaware of other work that would be more pertinent, accurate, or useful, or might be aware of it but not fully understand it because it comes from another discipline. However, the outcome is the same – one approach within a particular discipline is highlighted and others are marginalised or overlooked. And this problem isn't necessarily solved by asking lots of competing scientists from different disciplines for their view – although this might be part of the solution, there is still the chance that the most dominant or persuasive voice (rather than the best science) will bend the ear of the civil servant or politician.
A further issue is that scientists each have their own philosophical position and worldview. Being an expert in one's field does not mean that this perspective is uncontroversial or even well developed. In fact, someone who spends their life embroiled in the detail of a narrow field might well have a rather under-developed and unbalanced worldview when it comes to issues or choices outside their speciality.
So much for the potential problems with the scientists. On the other side of the relationship there is also clearly potential for issues to arise from the motives of civil servants and politicians.
In the literature about researchers working with stakeholders, especially in settings that they are unfamiliar with, there is evidence that actors pursuing their own agendas can manipulate research findings – and researchers themselves. That might be a corporation trying to get off the hook for a pollution event, or – our focus here – it might be politicians trying to follow a particular course while avoiding responsibility for it. It might also be external groups pursuing an agenda not supported by the scientific evidence.
There is no guarantee that a scientist will spot, or be able to effectively counter, this manipulation of themselves or of their science by those they are advising – after all, there is no reason someone with expertise in a given scientific discipline should have training in, or a natural ability for, dealing with such situations. On the other hand, top advisers and politicians are likely to be adept at negotiating and at using evidence to fit their needs. Again, morality might prevent civil servants and politicians from acting in this way – but a robust system must take account of the temptation.
Before we think about how to solve these problems, it’s worth stating clearly why they matter. Firstly, there is the issue of accountability – if scientists give biased or partial advice, politicians should not be blamed; if politicians use or manipulate the science to make the choices they wanted to all along, the scientists should not be blamed. And if a mistake arises from a genuine ‘unknown unknown’ that neither scientist nor politician could have been expected to anticipate then neither should be blamed.
Secondly, research and the development of knowledge are vital to improving our lives, reducing our impacts on the planet, and providing us with a better understanding of the universe we live in. If science becomes vilified and distrusted through a failure in its relationship with government, there may be serious consequences for all of these elements that underpin our society.
So, it seems like it’s worth finding solutions – systems that can avoid the pitfalls discussed above and support high quality decision-making and correctly apportioned accountability.
The first step towards a robust system for the provision of scientific advice to government must be transparency. If all the evidence provided by scientists is published, then the public, other experts and opposition parties can assess:
- the extent to which the decisions taken flowed logically from the advice given, deviated from it, or went beyond it; and
- the robustness, completeness and focus of the advice itself – was anything misinterpreted, ignored or given insufficient weight?
However, publishing the scientific evidence is not enough – the second step is to publish the questions that the scientists were responding to when presenting that evidence. This element is vital for revealing whether any lack of completeness, bias or omission came from advisers or politicians, rather than from the scientists. For example, the answer to the question 'how can we best minimise disease spread?' is likely to differ from the answer to 'how can we best reduce disease spread within a budget of X?'. Transparency about what scientists are asked is also essential for understanding government motives – and therefore for accountability.
The third step is to define clearly, in a protocol, the responsibilities of scientists and government in the advisory process, and the limits to those responsibilities. It's not too hard to sketch out some key aspects. Scientists should only be asked for advice on their area of expertise, and not beyond it. For example, a disease epidemiologist could advise on the most likely spread of a virus in a given scenario, but it would be beyond their remit to speculate about how the human behavioural response to this threat should be managed.
Similar to the guidelines for carrying out things like environmental assessments, there should be an official format for providing scientific advice to government, so that specific elements are included in a pre-agreed order and style to avoid misunderstanding. This might include the specific question being responded to, the limitations of the evidence (its uncertainty, scope, and quality, and how well established the underlying research is), the range of different explanations for what was found, and the 'known unknowns' relating to the data. If there is time before a decision is taken, there should be an opportunity for other members of the scientific community to comment on the whole document – if not, to meet the need for transparency, it should be published soon afterwards.
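To make this concrete, here is a minimal sketch of how such a pre-agreed format might be captured as a structured record. The field names and structure are my own illustrative assumptions, not any existing official standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScientificAdviceRecord:
    """Illustrative sketch of a pre-agreed advice format (all fields assumed)."""
    # The exact question posed by government, published alongside the answer.
    question: str
    # The advice given in response.
    evidence_summary: str
    # Limitations of the evidence, as the protocol would require.
    uncertainty: str          # e.g. error margins, confidence in the estimates
    scope: str                # populations, settings, and timescales covered
    quality: str              # e.g. peer-reviewed, preliminary, model-based
    research_maturity: str    # how well established the underlying research is
    # Competing explanations and acknowledged gaps.
    alternative_explanations: List[str] = field(default_factory=list)
    known_unknowns: List[str] = field(default_factory=list)
    # Comments from the wider scientific community, pre- or post-decision.
    peer_comments: List[str] = field(default_factory=list)
```

Publishing completed records like this, question included, would let the public, other experts, and opposition parties check both what was asked and what was answered.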
With such a system in place, allocating responsibility might be easily achieved (a sketch of this allocation logic follows the lists below). Politicians, and not the scientists, should be accountable for the decisions made if:
- the evidence and related information met the requirements of the pre-agreed format
- the evidence was proved incorrect by subsequent experience or research (but not if data were presented or calculated wrongly in ways that the scientists should have known about)
- they chose between conflicting evidence which met the standards above
- they made assumptions or judgements beyond the information given
- they acted on evidence that went beyond what was included in the record (i.e., only politicians could be held accountable for decisions made on the basis of off-the-record discussions)
- they suppressed or misinterpreted evidence that met the first two standards above
- they acted without understanding the limits to the evidence they used, so long as those limits were correctly presented according to the protocol.
The scientists advising the government should be held accountable if:
- they provided evidence that did not adhere to the protocol defined above
- they provided false evidence or hid some aspect of it – but not if the evidence only proved incorrect in the light of new information, so long as they had communicated all the known limitations of the research; science is not foolproof and should not be treated as such.
Finally, the system itself should be held accountable (i.e., be reviewed) if the evidence proved incorrect due to 'unknown unknowns' – things that neither the government nor the scientists involved could have been expected to anticipate.
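To show that these rules partition responsibility cleanly, here is a minimal sketch of the allocation logic. It deliberately collapses the full condition lists into three flags, and the flag names are illustrative assumptions about how each condition might be recorded, not part of any real system.

```python
from dataclasses import dataclass

@dataclass
class CaseRecord:
    """Simplified record of how a piece of advice fared (all flags assumed)."""
    # Did the advice follow the pre-agreed format? (the scientists' first condition)
    advice_met_protocol: bool
    # Was evidence falsified or hidden? (the scientists' second condition)
    evidence_falsified_or_hidden: bool
    # Did the failure stem from something neither side could have anticipated?
    failure_was_unknown_unknown: bool

def allocate_accountability(case: CaseRecord) -> str:
    """Return who is accountable, following the rules sketched in the text."""
    if case.failure_was_unknown_unknown:
        return "system review"   # review the process itself, not the people
    if not case.advice_met_protocol or case.evidence_falsified_or_hidden:
        return "scientists"      # the advice fell short of the agreed standards
    # If the advice met the standards, responsibility for choosing between,
    # interpreting, suppressing, or going beyond it rests with government.
    return "politicians"
```

The point of the ordering is that the 'unknown unknowns' check comes first: only once genuine surprises are ruled out does it make sense to ask whether the scientists or the politicians fell short.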
My point with all this is not to say that the solution I’ve presented is the only or the best one, but rather to highlight that it is perfectly possible to devise a system that avoids the problems of accountability and understanding that haunt government engagements with scientific advice. Tackling these issues is obviously in the public interest. Whether politicians and their political advisers would see such a system as being beneficial to them is another matter.