We often speak about evidence-based drug policies – but presenting decision makers with the evidence rarely works in the way it is supposed to.
The EU Drug Strategy aims “to take a balanced, integrated and evidence-based approach to the drugs phenomenon.” We often use the mantra of “evidence-based” or “evidence-informed” policy making without clearly defining what we mean. Most of us think of an ideal system where decision makers consult experts with multidisciplinary professional backgrounds, review and assess the available scientific evidence, and create strategic responses to fill gaps and break barriers, to be monitored and evaluated thoroughly.
“Evidence-based policy making” signifies an optimal relationship, a symbiosis between policy makers and experts, where both sides know their roles and cooperate in harmony. Researchers collect, analyse, publish, and review scientific data – decision makers respond to the data and trends revealed by researchers.
In practice, this optimal system would look like this: experts collect data about trends of drug use, evaluate programs and show what works and what does not work in the field. Policy makers read their reports, map the gaps and barriers, prioritise some interventions over others, create a strategy and action plan, implement, monitor, and evaluate it. So if we see increasing HIV rates among drug users, for example, decision makers will scale up harm reduction programs. If we witness growing problems with mass incarceration, decision makers will reform criminal laws. Present decision makers with the “facts”, and they will make good decisions!
Unfortunately or not, this harmonious cooperation almost never exists in the real world – and not only because policy makers are cunning little bastards who care more about their approval ratings than about scientific evidence. Some decision makers really do care about evidence, but they also have to meet the demands of the people who elected them, in an often embattled public arena where they compete for limited resources and public attention, and where opposition parties constantly try to undermine public trust in their decisions. This is called democracy – the worst form of government except for all the others, as Churchill famously said.
In addition to competition and opposition, politics is often driven by events and trends perceived as important by the public rather than by those perceived as important by researchers. Highly publicised events and stories that shake and worry public opinion – new drugs or new forms of drug use, conflicts in local communities, abuse by the authorities, or celebrity overdoses – often influence the public discourse on drugs and drug policies far more than reports do.
I think we have to honestly acknowledge that this political and communicational reality is also part of the game – a reality in its own right.
Still, you rarely find any reference to these public events in official drug reports and studies on drug policies. Do we read about highly publicised drug scandals in official drug reports? Do we read assessments about how they shape policy making? Almost never. It seems both decision makers and experts tend to pretend that these events are not important. They both pretend to work according to the ideal system described above, where decision making is mechanically fed by evidence. And I think this is where the real hypocrisy comes in.
Both experts and political decision makers pretend not to see the elephant in the room, be it overdose cases, murders on the streets, political debates about drug law reform, or false, sensationalist media reports about new drugs. Most professionals who work on creating the evidence base for policy making, whether at academic institutions or government agencies, stick to their general research findings and avoid discussing anything concrete beyond that – anything that could be controversial or divide public opinion. They often don’t want to risk losing their “neutral” and “apolitical” expert status by taking a position in polarising debates. Decision makers are thus left free to ignore inconvenient data and cherry-pick the data that seems to support the decisions that will make their voters feel secure and happy.
What I would really like to see is not only evidence-informed but also reality-based drug policy – one where there is no gap between the professional/scientific discourse on drugs and the public discourse on drugs, where they mutually influence, reinforce, and respond to each other. While we know that an overdose death at a youth festival is not in itself comparable to the results of epidemiological surveys, and the story of a single drug user failing drug treatment does not have the same value as the findings of randomised controlled studies, these are still powerful realities that can strongly shape how people perceive drugs and drug users. Instead of ignoring concrete emotional stories, often sensationally presented by tabloid newspapers, we have to put them into context and use them as tools to communicate our own messages.
We have to do much more to study how drug policy is really made and shaped – how the process is affected by external factors that have nothing to do with scientific evidence. How media stories can distract and distort the public discourse on drugs. How decisions are often influenced by these stories. How decisions are affected by competition among different stakeholders within the government administration. What messages and arguments work in changing public opinion, and how to present them most effectively. And how experts and decision makers can cooperate to respond to new realities in a rapid but responsible way, showing leadership and guidance in public debates about drugs.
Peter Sarosi