Finding what works

How do we connect decision makers to evidence that can help achieve SDG 2?


How do we achieve coordinated, evidence-based action to meet the goal of Zero Hunger? How do we provide policy and decision makers with evidence about innovative solutions from research that exists now? How can we infuse policy with science so decision makers can quickly allocate resources to address urgent problems—and in a way that does not cherry-pick results or minimize the complexity of science?

We ask these questions against a complicated backdrop: there is more information available than we can possibly manage; science is increasingly specialized; and access to scientific research has become so restricted that using it is difficult. These barriers are high for scientists, and higher still for non-research audiences.

We also have to consider the social dimension of evidence in decision making: Why is some high-quality evidence useful to decision makers but not all? How do we include as many perspectives as possible—and especially those of colleagues who may not frequently publish in widely-read journals but have important contextual and experiential knowledge to share? How do we engage with donor stakeholders and others to ensure we are examining relevant questions?

We know science can help us respond to wicked problems like food security, which span economic, social, and policy sectors. To us, the question is not whether we use science, but how: how do we deliver evidence-based information that can empower governments and donors to make informed decisions about how to allocate limited resources?

We approach building the evidence in three steps.

STEP 1

In order to deliver evidence-based information, we first have to find it, connect it, and classify it. The research we need is scattered across disparate silos with no connections between them. We’re using machine learning and natural language processing to unify this knowledge so that we can explore large volumes of text quickly and create the connections between research and policy.

STEP 2

We work with a diverse and global community to create and publish an evidence-based review. We need to assess the quality of the research in a way that emphasizes inclusivity, rigor, and transparency. We will invite global experts to review eight interventions that target some of the key elements of SDG 2, namely the impact of small-scale food producer interventions on household food security, rural economic livelihoods, and environmental sustainability. We are working with Nature Research to publish these materials in 2020, subject to the highest standards of peer review.

STEP 3

We use a well-documented approach to review the evidence for this program: a mixed-methods systematic review. Systematic reviews have a comparative advantage over expert-based narrative reviews because the research process is documented in a protocol that is published before the review begins. Meeting the United Nations’ Sustainable Development Goals requires “better” evidence—that is, evidence using criteria-based selection which appraises all relevant studies, assesses the quality of primary research, synthesizes results in a reliable way, and concludes with recommendations.

STEP 1

Ideally, research systems should have a way to help a user find “what works” instead of only “show me everything.” Unfortunately, researchers are given little choice but to rely on keyword searching and to visit multiple systems to find information. Even Google doesn’t capture it all.

And “all” means a lot of research. According to estimates published in Science in 2013, a new peer-reviewed article is published every seven seconds. In agricultural research, that means more than four million papers published between 1860 and 2018, with nearly half published in the past decade. More than 60 major agencies in agriculture are providers of grey literature—academic research that lies outside the world of peer-reviewed journals.

Given the volumes of information available to us now, relying only on keyword searching doesn’t work. If we want to understand the fullness of human knowledge, we need to incorporate new methods of discovery that account for the way we describe similar things in different ways.

Over the past decade, there have been enormous advances in artificial intelligence that enable computers to analyze the way we use language. This involves training a computer program to recognize relationships between words, so that it can capture the different ways people describe similar things.
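Here is a minimal sketch of that idea using the open-source gensim library; the toy corpus and parameters are illustrative assumptions, not our production pipeline.

```python
# Minimal sketch: train word vectors on tokenized abstracts so that words
# used in similar contexts (e.g., "intervention" and "programme") end up
# with similar vectors. Toy corpus for illustration only.
from gensim.models import Word2Vec

corpus = [
    ["the", "intervention", "improved", "household", "food", "security"],
    ["the", "programme", "improved", "household", "food", "security"],
    ["this", "policy", "targets", "smallholder", "farmers"],
    # ...in practice, tens of thousands of tokenized abstracts...
]

model = Word2Vec(sentences=corpus, vector_size=50, window=5, min_count=1)

# Words that appear in contexts similar to "intervention".
print(model.wv.most_similar("intervention", topn=3))
```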

We used machine learning and natural language processing (NLP) to create and analyze a preliminary dataset of ~50,000 articles and reports (2008-2018) about smallholder farmers from science journals and research and development organizations. We used a variety of search terms, such as small-scale food producers, rural farmers, and subsistence and contract farmers.

In order to increase coverage of materials published in low and middle income countries, we included the full table of contents from the African Journal of Biotechnology, African Journal of Agricultural Research, African Journal of Food, Agriculture, Nutrition and Development, African Crop Science Journal, Indian Journal of Agronomy, and the Indian Journal of Agricultural Economics.

Lastly, we curated a dataset of systematic reviews and meta-analyses relevant to SDG 2. This dataset is not exhaustive, but it provided us with a starting place to begin our analysis.

EXAMPLE 1: SYNONYMS FOR INTERVENTION

In agricultural research, the word intervention isn’t used consistently as a way of describing “what works” to address a particular problem, even though many interventions designed to tackle agricultural and food security problems have, in reality, been researched.

Semantic associations, or synonyms, are a key building block for us. They enrich our ability to discover what we are looking for and what we hope to find. For example, when we conduct a keyword search using only “interventions” and “greenhouse gas emissions” in our dataset, we find only about 10 percent of the relevant materials. But when we create associations between interventions and possible synonyms, this increases to 61 percent of the dataset.
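A toy illustration of why expansion matters (the documents and synonym list below are invented for the example, not drawn from our dataset):

```python
# Toy example: a literal keyword search misses papers that describe an
# intervention using other words; expanding the query with synonyms
# recovers them.
docs = [
    "an intervention to reduce greenhouse gas emissions",
    "a policy for cutting greenhouse gas emissions",
    "a capacity building programme on greenhouse gas emissions",
]

def search(documents, terms):
    """Return documents that contain any of the query terms."""
    return [d for d in documents if any(t in d for t in terms)]

print(len(search(docs, ["intervention"])))          # 1 of 3 found
synonyms = ["intervention", "policy", "programme", "capacity building"]
print(len(search(docs, synonyms)))                  # 3 of 3 found
```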

Word                       Articles using the word (out of 49,000)
Intervention               2561
Policy                     7234
Strategy                   5752
Measure                    2822
Program                    2785
Project                    2609
Programme                  1961
Outcome                    1773
Recommendation             1180
Initiative                 1085
Targeting                  674
Capacity building          428
Participatory approach     393
Programming                323
Social protection          263
Entry point                166
Policy option              138
Nutrition education        62
Multi-sectoral approach    14

Now that we knew how interventions were described in agricultural research, we could set about analyzing our sample of articles to find and classify specific interventions. 

We found synonyms by looking at hypernyms and hyponyms, semantic relationships between broader and narrower terms (a lemon, for instance, is a hyponym of fruit). We then classified them into four broad categories (technical, socioeconomic, ecosystem, unclassified) and, more specifically, into 995 narrow intervention concepts.
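The mechanics of these lookups can be sketched with WordNet, a general-purpose lexical database accessible through NLTK; our own classification drew on agricultural vocabularies, so this is an analogy rather than our actual pipeline.

```python
# Sketch of hypernym/hyponym lookups in WordNet via NLTK.
import nltk
nltk.download("wordnet", quiet=True)  # one-time download of the database
from nltk.corpus import wordnet as wn

lemon = wn.synset("lemon.n.01")
print(lemon.hypernyms())        # broader concepts, e.g. citrus

fruit = wn.synset("fruit.n.01")
print([s.name() for s in fruit.hyponyms()][:5])  # narrower concepts
```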

This is a much more targeted approach to uncovering important research. It gives us a wholly new way to classify and organize science so that it is accessible to an audience interested in policy-relevant research. See our Ceres2030 dashboard page for more details.

EXAMPLE 2: TOPIC MODELING AND GAPS IN EVIDENCE

Natural language processing also enabled us to unify and explore data, despite the fact that the data came from many different places. We used topic modeling, a way of exploring text to see what it has in common with other text in the same corpus, to establish a baseline from which we could map the evolution of research from 2008 to 2018.

We can see the topics where there was a high level of research (in the accompanying visualization, darker blue indicates a greater density of research papers) and where evidence and research were limited or missing. We can also create comprehensive research baselines to see the volume of research by topic, by funder, and by country, and to gauge the potential relevance of the research.
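As a minimal sketch of the technique, here is topic modeling with scikit-learn’s LDA implementation; the toy abstracts and the two-topic setting are illustrative, whereas we worked with the ~50,000-article dataset and tuned the number of topics.

```python
# Sketch: discover latent topics in a set of abstracts with LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "irrigation improves maize yields for smallholder farmers",
    "microcredit programmes support rural household livelihoods",
    "drip irrigation and water management on small farms",
    "rural credit access and household income",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words that characterize each topic.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    print(f"topic {i}:", [terms[j] for j in topic.argsort()[-4:]])
```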

Having created a way of finding and classifying interventions in agriculture, we built a system that can automatically ingest new research, bringing us closer to the possibility of real-time analysis of research for policy relevance. We used an open-source tool, Elastic Stack, to query and visualize the results, and it is easy to add new sources of information.
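A sketch of what such a query can look like with the official Elasticsearch Python client; the index name and field names are illustrative assumptions, not the project’s actual schema.

```python
# Sketch: search an Elasticsearch index for articles that mention a topic
# together with any of our intervention synonyms.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # local cluster assumed

query = {
    "bool": {
        "must": [{"match": {"abstract": "greenhouse gas emissions"}}],
        "should": [{"match": {"abstract": w}}
                   for w in ["intervention", "policy", "programme"]],
        "minimum_should_match": 1,
    }
}

resp = es.search(index="ceres2030", query=query)  # "ceres2030" is hypothetical
for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("title"))
```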

THE CERES2030 DASHBOARD

We used an open-source dashboard to make all this information accessible, visualizable, and shareable. We see this as a way forward to help non-research audiences make better use of scientific information to aid decision-making for the Sustainable Development Goals. To see a step-by-step example of how we can search for policy-relevant interventions, click below.

THE CERES2030 EVIDENCE TOOLKIT

We’ve curated a global collection of more than 200 systematic reviews and meta-analyses about small-scale food producers and environmental sustainability. You can filter and organize the data and select a card for more information about a resource, including a URL (where available) that links to the full text. You can also download a .csv file of the entire dataset, and suggest additions.
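If you download the .csv, a few lines of pandas are enough to filter it locally; the filename and column names below are assumptions for illustration, so check the actual file for its schema.

```python
# Sketch: filter the downloaded toolkit dataset with pandas.
import pandas as pd

df = pd.read_csv("ceres2030_evidence_toolkit.csv")  # hypothetical filename

# e.g., keep resources whose titles mention irrigation.
hits = df[df["title"].str.contains("irrigation", case=False, na=False)]
print(hits[["title", "year", "url"]])  # assumed column names
```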

STEP 2

We are inviting global experts to participate in assessing the evidence for eight interventions through a global evidence review. We are working with Nature Research to publish these materials in 2020, subject to the highest standards of peer review.

In order to select eight interventions, we are exploring our sample dataset to develop a shortlist of interventions based on the volume of research. Of course, volume doesn’t necessarily mean that the research is of high quality or shows that these interventions work, so we will frame these as research questions, such as “What are the most effective risk management policy interventions for small-scale producers?”
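A sketch of what that exploration can look like in pandas; the file and column names are illustrative assumptions.

```python
# Sketch: rank intervention concepts by the volume of research.
import pandas as pd

articles = pd.read_csv("articles_with_interventions.csv")  # hypothetical

counts = (articles.groupby("intervention_concept")  # assumed column name
                  .size()
                  .sort_values(ascending=False))
print(counts.head(8))  # candidate shortlist of eight interventions
```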

Our goal is to find out if these research questions and interventions, which seem relevant in both the peer-reviewed and grey literature, are also relevant to experts on food security. This is an important criterion for our evidence reviews—after all, we don’t want to spend time reviewing interventions that lack relevance. But we also can’t review interventions where there isn’t a sufficient amount of data to support answering the questions.

It is critical that our data-analytic approach is balanced by this expert insight. These eight intervention questions are presently being reviewed. We will publish the questions, and the author teams chosen to review them, on our website in early 2019.

STEP 3

Evidence reviews, like scoping and systematic reviews, bring all the studies on a particular issue or intervention together to evaluate what they mean. It’s a process with specific steps designed to minimize bias and to ensure rigor and transparency, so that someone else could replicate the process and reach the same conclusion. Each of the eight intervention review teams will be supported by research synthesis experts who will guide them through the process.

The first critical step in this will be for the authors to create a protocol for each review. This is the roadmap setting out how the review is going to be done, how the reviewers will decide what studies or data to include or exclude in the review, and how those studies and data will be reviewed.

This is the opportunity to have open, critical, and constructive discussion among the invited reviewers, synthesis research experts, and global stakeholders, before any actual review of the evidence takes place. Our goal is maximum engagement and maximum integrity.

Evidence synthesis

1. Formulate a research question

2. Search for similar systematic reviews

3. Identify all relevant evidence bases

4. Develop and test search strategies

5. Write inclusion and exclusion criteria

6. Publish protocol

7. Execute searching and screen results

8. Conduct quality of evidence assessment

9. Review and synthesize results

One particular issue facing agricultural research is that it has fewer randomized controlled trials than, say, medicine, and needs to be inclusive of many different kinds of evidence and data. This makes scientific appraisal more difficult.

For this reason, we are taking a mixed-methods approach, which combines quantitative and qualitative evidence on complex and pressing questions, and which has been successful in previous agricultural systematic reviews.

An important component of systematic reviews is describing the anticipated methods for assessing the risk of bias in individual studies. We turn to colleagues such as the Campbell Collaboration for guidance on how to conduct a risk-of-bias assessment.

Once there is consensus on the protocol, it is published—and it cannot be changed. Publishing ahead of doing the review protects its replicability and transparency. It also gives us a chance to share what we are doing and to allow for scientific dissent as part of the process.