09. Design MEAL Plan

Summary

Latest update: 09/2025

This chapter provides a quick introduction to key concepts in monitoring, evaluation, accountability & learning (MEAL) for anticipatory action, followed by an explanation of what MEAL activities are required for DREF-funded anticipatory action protocols and a review of methodologies and resources to consider and explore when designing MEAL activities for your anticipatory action efforts. Its focus is on post-activation monitoring and evaluation; as such, it does not cover general monitoring and evaluation for program or system set-up. Additional resources on Results-based Management and other approaches to more general program management can be found in the toolbox.

Step 5.2 of this chapter provides brief descriptions of several methodologies that may be used to demonstrate impact. The intent is not for each National Society to know or implement all of them, but to provide an overview of possible methodologies to help your National Society assess options and link to additional resources. Note that the steps outlined in this chapter are intended as a reference. Many of these processes may occur in parallel or need to be adapted to a specific programme or institutional context, and therefore they do not necessarily follow a chronological order.

Introduction

While commonly discussed together as MEAL—and grouped under Results-based Management (RBM) and Planning, Monitoring, Evaluation and Reporting (PMER), among others—each component of these acronyms refers to a distinct phase of or process in the programme management cycle. The table below defines each of these terms broadly and outlines their connection to processes of developing or implementing anticipatory action programmes and protocols and how (if at all) they are covered in this guide.

The steps and resources that follow provide an overview of what is required as part of your s/EAP and links to guidance related to required elements. For National Societies or other actors exploring more in-depth evaluations, it also provides an overview of approaches and links to resources for a variety of other methods that have been used or could be adapted to evaluate anticipatory action programmes.

Term Definition Specifics for anticipatory action What you will find in this chapter
System planning / design The planning or design phase is when you decide what will be done, which results you want to achieve, how to monitor and evaluate them, and which indicators you will use for this. Ideally, this will incorporate learning from any previous and relevant programmes. In anticipatory action, planning and design will occur when setting up a broad anticipatory action programme and/or when specific actions and protocols are being established for specific hazards and contexts. Planning and design will also be important when incorporating lessons and revalidating your s/EAP. How to research and plan your s/EAP is covered in chapters 1-8 and 10-12 of the manual.
Monitoring Monitoring tracks the performance of the implementing actors vis-a-vis their agreed and planned roles and responsibilities. In anticipatory action this may refer to monitoring of the project or programme to set up the anticipatory action system in your country, monitoring of readiness and prepositioning activities, or monitoring of the activation after a trigger is reached. This guide focuses on monitoring of s/EAP implementation, which begins when the EAP is triggered and extends after the activation until data collection and analysis are complete. Monitoring data will help you to reconstruct events and eventually identify and evaluate any shortcomings, constraints, or bottlenecks that need to be addressed before the next activation.
Evaluation Evaluation refers to attempts to capture lessons and impacts (see the info box further below for definitions) resulting from programme setup or implementation. Evaluations of anticipatory action may seek to evaluate system capacity building, activation processes, or outcomes and impacts (see Table below). This guide includes resources for evaluating the activation process as well as outcomes and impacts.
Accountability IFRC defines accountability as “the obligation to demonstrate to stakeholders to what extent results have been achieved according to established plans. This definition guides [IFRC’s] accountability principles as set out in Strategy 2020: explicit standard setting; open monitoring and reporting; transparent information sharing; meaningful beneficiary participation; effective and efficient use of resources; systems for learning and responding to concerns and complaints“ (IFRC M&E guidance 2021). Humanitarians must be accountable to the populations they work with (at-risk populations), partners, and donors. Given the added layer of uncertainty associated with anticipatory action, it is important to verify that early actions are reaching the right people, meeting their needs, and therefore constitute a wise investment of resources. Post-activation evaluations can help to verify accountability to affected populations, but as suggested by the definition, accountability to stakeholders should be integrated throughout the design and implementation process. This guide links to existing IFRC guidance on CEA.
Learning Learning refers to the acquisition and integration of knowledge and skills gained through experience. Gathering information and lessons from the previous steps without applying or acting upon that information should not be considered learning. Learning only takes place to the extent that lessons or knowledge from previous design, monitoring, evaluation and accountability activities are incorporated into new plans and protocols. In anticipatory action, revalidation of s/EAPs is a critical moment for ensuring that lessons identified during monitoring, evaluation, and accountability processes are actually learned by adjusting new versions of the s/EAP to overcome identified weaknesses and build upon identified strengths. How to incorporate learning and resubmit your s/EAP is covered in chapter 13 of the manual.
Step 1: Set up your MEAL team

At the very least, your MEAL team should include people from the disaster risk management department, staff dedicated to anticipatory action, and one representative from your National Society’s monitoring and evaluation department. Ideally, you should involve your National Society’s MEAL or PMER department to some extent in the entire EAP development process. This will help them to better understand the concept of anticipatory action, what the anticipatory actions are trying to achieve, and how MEAL for AA may differ from MEAL for other programs. With this understanding, they will be able to better support you in designing and implementing appropriate methodologies and tools. If longer-term engagement is not possible, the MEAL department should at a minimum be involved in designing and writing the MEAL section of the EAP to ensure that you have the expertise and resources necessary to execute your plans. It is also an opportunity for PMER staff to learn about the s/EAP and how MEAL for anticipatory action may differ from MEAL for response.

Step 2: Understand MEAL requirements in the s/EAP template and assessment criteria

Everyone involved in monitoring and evaluating your s/EAP should understand the minimum requirements for the s/EAP and be involved in deciding how to meet them. Requirements for the sEAP and full EAP are slightly different. Monitoring, evaluation, accountability, and learning are referenced in two areas of a full EAP: section seven requires you to outline your plans for monitoring, evaluating, and learning from EAP activations, and the IFRC Operational tables require you to plan activities and processes for Community Engagement and Accountability. Quality criteria for the full EAP require you to have a plan for the following:

  1. assess the impact of the early actions and the extreme event after each activation;
  2. identify if all activities were carried out as planned and document how early actions were implemented;
  3. learn from the process to improve the system in the future.

Despite requesting a plan for evaluating impacts and the implementation process, at present the DREF only requires that National Societies conduct a (mandatory) lessons learned workshop. The lessons learned workshop can provide insights into implementation (objective two) and can be used to learn from the process (objective three), but it cannot be used to assess impacts. Instead, the lessons learned workshop gathers the actors involved in the activation to share experiences, identify and discuss achievements and challenges, and develop recommendations on how to improve subsequent versions of the s/EAP. When well designed, the workshop is an opportunity to engage external stakeholders and foster collaboration, buy-in and/or team building among relevant actors. IFRC guidance on how to conduct such a workshop and what it should cover is available in the toolbox. The activities and agenda are designed to be accessible to National Societies, should not require outside expertise or facilitation (unless desired), and can be planned after the activation has been completed. Examples of high-quality lessons learned workshop reports can be found in the toolbox.

Time to complete DREF-funded MEAL activities 

If DREF funds will be used to pay for an s/EAP lessons learned workshop or other activities, these must be completed within the operational timeframe. The operational timeframe is generally the lead time plus three months; for fast-onset events, the lead time is included within the three months, as indicated below. National Societies may ask for a no-cost extension of up to three months if they are responding to the disaster that prompted the EAP activation or to another hazard event. This allows them additional time to pay the expenses but does not increase the budget. National Societies may still complete the lessons learned workshop or other activities after the end of the operational timeframe, but any associated costs cannot be paid by the IFRC-DREF.

Hazard Lead Time | Operational Timeframe
3-7 days | 3 months
3 months | 3 months + 3 months = 6 months
6 months | 6 months + 3 months = 9 months

Requirements for the sEAP are somewhat lighter. Like the full EAP, an sEAP should include plans for monitoring, evaluation and community engagement and accountability in the operational matrix and budget. The only quality criterion explicitly required and considered in the monitoring and evaluation section of the sEAP is that the protocol “must include a reasonable budget for a lessons learned workshop with partners and IFRC.” The objectives and guidance for the workshop are the same as for the full EAP (see above).

Step 3: Plan your PDM & CEA

The lessons learned workshop must include a presentation and discussion of the post-distribution monitoring (PDM) completed following the early action activities. Surveys are the most common way of collecting PDM data, and while your National Society is likely familiar with the basics of PDM surveys from response operations, anticipatory action PDMs should be adjusted to include sections not found in traditional PDMs. A well-conceived and executed post-distribution survey will include questions relevant to the design and implementation of your early warnings and early actions, including the following:

  • when and how people received any warnings or aid, and whether they understood and were able to act upon the warnings
  • how and when targeted people used any cash or other materials received
  • any challenges or problems recipients encountered during distributions
  • their general satisfaction with the assistance.

Examples of anticipatory action PDM surveys for various hazards that include standard questions regarding early warning, early action/preparation, when and how cash was used, and feedback/satisfaction can be found in the toolbox.
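To make these additions concrete, the sketch below (Python, purely illustrative) lists anticipatory-action-specific modules and example questions that could be added to a standard PDM questionnaire. The module names, question wording, and answer types are assumptions for illustration, not prescribed wording from the toolbox templates.

```python
# Illustrative sketch of anticipatory-action-specific PDM modules.
# Module names, question wording, and answer types are hypothetical examples,
# not prescribed wording from the toolbox templates.

AA_PDM_MODULES = [
    ("early_warning", [
        ("received_warning", "Did you receive an early warning before the event?", "yes/no"),
        ("warning_channel", "How did you receive it (radio, SMS, volunteer, other)?", "select_multiple"),
        ("warning_understood", "Did you understand what the warning asked you to do?", "yes/no"),
        ("acted_on_warning", "Were you able to act on the warning? If not, why not?", "text"),
    ]),
    ("early_action_use", [
        ("aid_received_when", "When did you receive the cash or items?", "date"),
        ("aid_use", "How and when did you use the cash or items received?", "text"),
    ]),
    ("distribution_feedback", [
        ("distribution_problems", "Did you face any challenges during the distribution?", "text"),
        ("satisfaction", "Overall, how satisfied are you with the assistance?", "scale_1_to_5"),
    ]),
]

# These modules would sit alongside the standard PDM sections (demographics,
# household characteristics, etc.) in whatever survey platform you use.
for name, questions in AA_PDM_MODULES:
    print(f"{name}: {len(questions)} questions")
```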

Appropriate timing is also essential to conducting a valuable PDM. Because it is difficult for people to recall specific details related to a disaster, it is important to conduct the PDM promptly following the distribution. Optimal timing will depend upon the hazard and the interventions in question, but you should aim to collect data when people can be expected to remember when and how they used the items in question. Three months after the activation, for example, is too long to expect people to remember. General guidance on post-distribution monitoring can be found in the toolbox.

Community engagement and accountability systems provide the opportunity for recipients to provide feedback (praise, complaints, or suggestions) regarding the process or content of aid following an activation. Such systems contribute to learning and improvement and serve as a check against misconduct by community leaders or National Society staff and volunteers. CEA mechanisms for anticipatory action need not differ greatly from mechanisms for longer-term preparedness or response. IFRC guidance on setting up community engagement and accountability systems can be found in the toolbox as well.

Step 4 (optional): Plan to go beyond the minimum MEAL requirements

MEAL activities play a vital role in any programme cycle, but they are particularly important for anticipatory action, as it is a relatively new approach to humanitarian assistance. Although evidence of the effectiveness of anticipatory action is growing, evaluation remains critical to learning from and sharing experience, improving the effectiveness of early actions, and demonstrating the value and impact of anticipatory approaches. Impact evaluations from many organizations are available in the Anticipation Hub’s evidence database, and RCRC evaluations and reports are available in IFRC’s Evaluations and Research repository, linked in the toolbox.

In addition to the required post-distribution monitoring and lessons learned workshop, your National Society may wish to conduct more in-depth evaluation of the process or outcomes of an activation. As indicated in the table at the beginning of the chapter, these could include an evaluation of the process, the outcomes or impacts of your protocol, and/or the trigger(s). If you decide to conduct trigger, process, or outcome/impact evaluations, steps 5.1-5.3 of this chapter introduce potential methodologies and link to resources on how to implement these elements of an evaluation should you decide to include them in your plan.

What is being monitored or evaluated Objective
Set-up of an anticipatory action system To ensure that project timelines and goals are being met and to document lessons for other actors or other contexts/hazards.
Evaluation of capacity building or enabling environment To determine how, if at all, the setup and/or implementation of anticipatory action programs contribute to broader disaster risk management capacity within your organization or within your country’s disaster risk management system. While not covered in this chapter, additional guidance can be found in WFP’s guidance on planning and monitoring country capacity strengthening for anticipatory action (in the toolbox). Some National Societies have also used IFRC’s Preparedness for Effective Response (PER) as a baseline for National Society capacity and to identify areas for further development to effectively implement AA. A high-level explanation of the relationship of anticipatory action to PER and guidance and examples of how to use PER to inform or monitor anticipatory action capacity building can be found in the toolbox.
Process evaluation of activation To determine whether activities were implemented as planned, what went well, what could be improved, and how.
Outcome and / or impact evaluation To determine whether early actions achieved their goals (often as compared to aid delivered (or not) at other times or in different combinations).
Evaluation of trigger thresholds To assess whether the forecast threshold and timing were appropriate so that it may be adjusted as needed (and feasible).
Step 5.1: Evaluation of the activation process

Process evaluations help you to understand whether the activation went as intended, to identify successes, challenges, and bottlenecks, and to improve processes in the future. This will include looking at communications, coordination, financial flows, logistics, and community engagement and accountability. Data collection methods and tools for process evaluation include the following:

 

a) Real-time monitoring

One way to track what happens during an activation is to keep a log of what happens so that you can accurately reconstruct events later, when memories may have faded. Information gathered during real-time monitoring can be combined with other sources of data (e.g. interviews, focus groups, workshop reconstructions), all of which can help to interpret or explain the results of any outcome or impact evaluations. Real-time monitoring templates can be found in the toolbox.
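As a minimal sketch of what a real-time activation log might capture, the example below appends one timestamped row per event to a CSV file. The field names and example entries are assumptions for illustration, not the template provided in the toolbox.

```python
# Minimal sketch of a real-time activation log: one timestamped row per event,
# so the activation can be reconstructed accurately later. Field names are illustrative.
import csv
import os
from datetime import datetime

LOG_PATH = "activation_log.csv"
LOG_FIELDS = ["timestamp", "step", "description", "responsible", "issues_or_delays"]

def log_event(step, description, responsible, issues_or_delays=""):
    """Append one timestamped event to the activation log, writing the header on first use."""
    write_header = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now().isoformat(timespec="minutes"),
            "step": step,
            "description": description,
            "responsible": responsible,
            "issues_or_delays": issues_or_delays,
        })

# Example entries made during an activation:
log_event("trigger", "Forecast threshold reached; EAP activated", "AA focal point")
log_event("distribution", "Cash distributed in community A", "branch team",
          issues_or_delays="Start delayed 4 hours by a road closure")
```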

 

b) Reconstructing events after the activation

You can also reconstruct events after the activation by speaking to key stakeholders. Key informant interviews, focus groups discussions, or participatory workshops can be used to reconstruct the timeline and gather feedback on what went well, what did not, and how to improve in the future. The results of these activities should always be cross-checked with the results from real-time monitoring, where available. Example guides for process evaluation key informant interviews and focus group discussion can be found in the toolbox.

Step 5.2: Outcome & Impact Evaluation

Outcome and impact evaluations seek to determine whether early actions helped people to prepare for or cope with hazard impacts during or immediately after a hazard event (see the box on evaluation terminology below for definitions and clarifications on evaluation terminology and anticipatory action). Although there are too many approaches to outcome and impact assessment to list here, the subsections below introduce methodologies that have been used or may hold promise for future evaluations of anticipatory action. Where applicable, we provide lessons from National Societies who have attempted them as well as links to additional resources.

While your National Society may be able to execute some of these approaches using staff and volunteers, all will produce less biased results if implemented by an external team and will require—at the very least—support from your PMER department. Support may also be available from a Partner National Society, the Climate Centre, or another reference center. As the s/EAP monitoring and evaluation budget should not exceed 5% of the total budget, most of these methodologies will require additional funding. If your National Society would like to undertake a more complex evaluation, contact your regional DREF focal point to explore the use of the DREF coordination budget to support this work. If your National Society attempts a methodology not listed here, please contact GRC to have your experience added to the manual.

Evaluation terminology

Evaluators typically refer to three different categories of results. The definitions below use examples from anticipatory action and align to the definitions found in the IFRC M&E guidance (p. 77-82).

1) Outputs: Measuring outputs is generally easy: keep track of trainings held, households reached, or kits delivered. This information should be available through standard record keeping, such as distribution and participant lists. For anticipatory action, output measurement should also include when these items were provided or delivered. Because outputs are easy to measure, output measurement is often referred to (somewhat pejoratively) as “bean counting”: outputs tell you who was reached, with what, and when, but nothing about what was done with the services, information, or materials provided.

2) Outcomes generally refer to specific changes or immediate benefits resulting from program outputs and activities. In the case of anticipatory action, outcomes often manifest as what people do, because of the anticipatory support provided, that they would not otherwise have done. Measuring the outcomes of a shelter program, for example, would entail determining whether households made the effort to strengthen their houses after receiving a shelter strengthening kit and participating in training on how to use it.

Tips for tracking outcomes:

National Societies that have households sign for individual items rather than complete aid packages often have difficulties identifying exactly who received what. This greatly complicates sampling or selecting households for surveys, interviews, focus groups, or other data gathering activities. Where possible, develop distribution tracking systems that capture everything a household receives or allow you to easily cross-reference multiple lists to develop a master database.

3) Impacts are actual changes in livelihoods, well-being or other stated goals because of anticipatory actions. Impacts can only be measured after an intervention is complete. Given the relatively short duration of anticipatory action interventions and of the window of intended benefits, this may be as little as two weeks after an anticipatory action activation. Whereas outcomes measure whether people wash their hands more after a WASH training or how people spend anticipatory cash, impact measurement would determine whether they are less likely to fall ill because they are now washing their hands or whether they have better food security scores because of purchases made with the cash.

Outcomes vs impacts in anticipatory action: as in many discussions of terminology, exact usage and definitions often differ. Impacts as defined by the OECD/DAC are “positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended” (OECD Glossary p. 24). This definition reflects that the impacts of longer-term programs often cannot be measured in the short term. For this reason, some practitioners think of outcomes as short-term results and impacts as long-term results. However, most anticipatory action programmes are not (yet) seeking to effect long-term change. Using the short- vs long-term definitions, outcomes would include both actions or changes facilitated by anticipatory support and any short-term impacts resulting from those changes. Nevertheless, the distinction between actions or other intermediate goals (e.g. using aid to prepare for a hazard based on a forecast by planting or harvesting crops, or reinforcing houses) and the intended impact (e.g. a reduction in food insecurity, less damage to houses) is still important to understanding whether anticipatory action programmes are meeting their goals. For this reason, this guide uses outcomes to describe changes in knowledge, resources, or action resulting from anticipatory assistance and impacts to indicate a reduction in negative (or an increase in positive) results, not based on measurement of longer-term results or sustained impact.

a) Randomized Control Trials

Randomized control trials (RCTs) are a highly robust method widely used in academia and in practice for impact evaluation. In an RCT, vulnerable households are randomly assigned to receive one of the intervention packages to be compared. Comparison groups allow evaluators to establish what academics and evaluators call a counterfactual, or an understanding of what would have happened with a different intervention or the lack of intervention. Relevant counterfactuals for anticipatory action include different packages of anticipatory aid, the same package delivered at different times (including different anticipatory timings and/or response), or (ambitiously) no assistance at all. Outcomes and impacts are then compared across the groups, enabling evaluators to attribute differences to the intervention (assuming the recipient and non-recipient households have similar socio-economic characteristics and are similarly at-risk/affected by the hazard). Note that either the aid package or the timing must be the same for two groups to be compared; otherwise it is impossible to determine whether the package or the timing was responsible for any differences in outcomes and impacts.
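As a purely illustrative sketch of the randomization step, the example below randomly assigns eligible households to two hypothetical study arms, stratified by community so that each community contains a mix of both arms. The arm names and household list are invented; an actual RCT design should be developed with research partners.

```python
# Illustrative sketch of stratified random assignment for an RCT.
# Arm names and the household list are hypothetical examples.
import random
from collections import defaultdict

ARMS = ["anticipatory_cash", "response_timing_cash"]  # the two packages/timings being compared

def assign_arms(households, seed=2024):
    """Randomly assign households to arms, stratified by community."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible and auditable
    by_community = defaultdict(list)
    for hh in households:
        by_community[hh["community"]].append(hh)

    assignment = {}
    for community, members in by_community.items():
        rng.shuffle(members)
        for i, hh in enumerate(members):
            assignment[hh["id"]] = ARMS[i % len(ARMS)]  # alternate arms within each community
    return assignment

households = [
    {"id": "HH-001", "community": "A"},
    {"id": "HH-002", "community": "A"},
    {"id": "HH-003", "community": "B"},
    {"id": "HH-004", "community": "B"},
]
print(assign_arms(households))
```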

While RCTs produce the most rigorous evidence, they come with tradeoffs. Designing and implementing RCTs for anticipatory action requires considerable technical expertise and commensurate financial resources. If your National Society is interested in exploring an RCT, it should seek support from the Federation, external researchers, or consultants with expertise in applying RCTs in humanitarian or development settings. J-PAL, for example, provides resources on RCTs and has considerable expertise in conducting them in humanitarian settings.

Ethical considerations may also play a role in decision-making. Some practitioners express ethical concerns about withholding aid from at-risk households. However, as organizations rarely have sufficient resources to provide anticipatory assistance to all households that might be affected by impending events, randomizing who among the most vulnerable receives assistance need not reduce the number of households reached overall. Using this method to compare anticipatory aid to traditional response aid is another way to reduce such concerns. Nevertheless, randomization is significantly different from the community-based approach to recipient selection National Societies usually take, thereby requiring considerable adjustment of normal protocols.

While no National Society has yet attempted an RCT, other organizations engaging with anticipatory action have. Examples of such studies can be found by searching for RCTs in the Anticipation Hub’s evidence database. A few examples are also available in the toolbox.

 

b) Quasi-experimental design

Quasi-experimental studies are another way to use comparison groups to determine the outcomes and impacts of anticipatory action. As with RCTs, this methodology quantitatively compares recipients of anticipatory action to households who receive nothing, receive response assistance, or receive different packages of anticipatory aid. Rather than randomly assigning households to intervention groups, evaluators identify comparison communities during or after an activation and randomly select samples from the relevant groups or communities for comparison.

When to identify comparison groups

When to identify comparison groups will depend on the time you have during an activation (i.e. whether your hazard is fast or slow onset) as well as the groups you are comparing to. For example, for drought activations WFP regularly identifies households that will receive aid at different times and collects baseline information for both groups, which enables it to compare outcomes over time. This is particularly valuable and feasible when National Societies know they will be providing response aid, as they can approach communities for baseline information without worrying about raising expectations: both the anticipatory action and response groups will eventually receive assistance, only at different times.

Demographic, socio-economic, and vulnerability characteristics are then used to control for differences between the two groups. The two infoboxes below outline the key requirements of this methodology and the steps required for implementation as well as the sequence of data collection. Additional guidance on this methodology can also be found in WFP’s Guidance on Monitoring and evaluation of anticipatory actions for fast and slow-onset hazards that is included in the toolbox.

Requirements and steps to implementing the quasi-experimental design

Before deciding to proceed with a quasi-experimental quantitative impact evaluation, ensure that your National Society will be able to do or provide the following:

  1. Provide the number of people targeted in each community, including what they received and when they received it. The template available in the toolbox may help to compile this information.
  2. A complete list of all the recipient households (i.e. the sampling frame). A corresponding map and/or contact information, while not required, will help you to more easily locate and survey the households and to sample in a way that controls for geographic location, if necessary.
  3. Identify comparison communities that have similar characteristics to the recipient communities: the same livelihoods/socio-economic characteristics, the same exposure and vulnerability to the hazard and that have experienced the same intensity of impacts as the recipient communities. Guidance on selecting comparison communities is available in the toolbox.
  4. A complete list or map of households living in comparison communities (those not receiving aid, receiving a different package, or receiving the aid at a different time). This will serve as your sampling frame for the comparison group(s), allowing you to randomly select comparison households. Again, having a list and a map and/or contact details will be helpful in locating the randomly selected households.
    A note on comparing to non-recipient communities or households: when this methodology was attempted in Honduras and Ecuador, the National Societies found it difficult to obtain a complete list of households in non-recipient communities. No one (including community leaders) had a complete overview of the community's households. Both community leaders and the Red Cross were understandably reluctant to contact or gather people to develop comprehensive lists, as this raises expectations of aid. Unfortunately, the unavailability of lists or maps prevented the study team from getting a random sample, resulting in questionable data. For this reason, National Societies may find it easier to compare outcomes and impacts of aid received at different times.
  5. The ability to collect data while outcomes and impacts can still be measured. The timing of data collection should be based on when you expect the impacts of your intervention to materialize and how long they can be expected to last. Additional guidance can be found in the WFP guidance in the toolbox.

Steps for quasi-experimental data collection

The following are the high-level steps for collecting quasi-experimental survey data. They are meant only to help you understand the process and decide if it is feasible for your National Society. If you are considering this methodology, you should also consult the guidance in the toolbox and seek external support as required.

  • Decide who will collect data (National Society volunteers or external parties)
  • Decide who will analyze data (National Society volunteers or external parties)
  • Develop survey (examples from 4As and GRC library)
  • Obtain lists and/or maps of all recipient and comparison/non-recipient communities (sampling frame).
  • Use sample size calculators to determine how many households must be surveyed; a minimal sample-size calculation is sketched after this list. Most multi-community interventions will require cluster sampling. We recommend you consult with a statistician to ensure you sample correctly and get a large enough sample size.
  • Draw a random sample from the intervention and comparison groups.
  • Create budget (sample budget from other contexts). Note: it is important to budget enough time and resources to reach ALL the households that appear on these random lists. This should be done by visiting them in their homes rather than calling them to a single meeting point, as gathering them in one place increases the likelihood of bias. Failing to speak to even one household introduces possible bias, making it difficult to generalize the results.
  • Train the data collectors.
  • Test the survey in the field and update it as needed. Note: it is important to re-test the tool and make sure it is working correctly after each and every change.
  • Collect data from all recipients before starting data collection for the non-intervention and/or response group. This allows you to calculate a vulnerability score (see the box on vulnerability scores below) and ensure that respondents from non-beneficiary households are comparable.
  • After data collection from the recipients is complete, calculate the vulnerability scores and identify the range of scores that participants from comparison communities should fall within.
  • Collect data from non-beneficiary households using the vulnerability score, again without omitting any households.
  • When possible, collect qualitative data (from key informant interviews and/or focus group discussions) at the same time to help interpret the quantitative results.
  • Analyze the data & write the report
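To give a sense of the sample-size step mentioned above, the sketch below applies the standard formula for estimating a proportion, n0 = z²·p(1−p)/e², inflates it by a design effect for cluster sampling, and applies a finite population correction. The default parameters (95% confidence, 5% margin of error, design effect of 2) are common assumptions, not requirements; confirm the values with a statistician.

```python
# Minimal sample-size sketch for a household survey with cluster sampling.
# Formula: n0 = z^2 * p * (1 - p) / e^2, multiplied by a design effect (deff)
# and adjusted for a finite population. Default parameter values are assumptions.
import math

def sample_size(population, p=0.5, margin_of_error=0.05, z=1.96, deff=2.0):
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)         # simple random sample size
    n_cluster = n0 * deff                                         # inflate for cluster sampling
    n_adjusted = n_cluster / (1 + (n_cluster - 1) / population)   # finite population correction
    return math.ceil(n_adjusted)

# e.g. 3,000 recipient households and 5,000 comparison households in the sampling frames:
print(sample_size(population=3000))   # households to survey in the recipient group
print(sample_size(population=5000))   # households to survey in the comparison group
```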

Two National Societies have successfully produced three quasi-experimental impact studies of anticipatory action: Mongolia and Bangladesh (see toolbox). Other National Societies have attempted this methodology, but encountered challenges, particularly in obtaining random samples.

Using vulnerability scores based on selection criteria to measure the similarity between communities

In quasi-experimental designs, it is necessary to ensure that the households you are comparing have similar socio-economic and demographic characteristics and that they have experienced the hazard in the same way. For example, you should not compare households with large livestock holdings to those who do not have any livestock or communities/households that can access irrigation during a drought to those who must walk several kilometres to get water. Often, the selection criteria for providing anticipatory assistance are an excellent starting point for ensuring that the populations are similar, as they point toward the critical factors contributing to vulnerability.

Based on the selection criteria and indicators of relative poverty or income appropriate for your context (such as livelihood and income sources or types of housing construction), you and your MEAL team will design survey questions relevant to each factor contributing to household vulnerability and assign relative scores to all possible answers. In this way, each household will be assigned a relative vulnerability score based on their circumstances. Sample vulnerability scores can be found in the toolbox.

Having and applying clear selection criteria when selecting recipient households will make it easier to establish the questions and scores for your quasi-experimental impact evaluation.
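As an illustration of how such a score can be computed and used, the sketch below assigns points to a few hypothetical survey answers and checks whether a comparison household falls within the range of scores observed among recipients. The criteria, point values, and cut-offs are invented for the example and must be derived from your own selection criteria and context.

```python
# Illustrative vulnerability score: the scored questions and points per answer are
# hypothetical and must be derived from your own selection criteria and context.
SCORING = {
    "roof_material":        {"thatch": 3, "corrugated_iron": 1, "concrete": 0},
    "income_source":        {"day_labour": 3, "small_farm": 2, "salaried": 0},
    "livestock_owned":      {"none": 3, "1_to_5": 1, "more_than_5": 0},
    "access_to_irrigation": {"no": 2, "yes": 0},
}

def vulnerability_score(answers):
    """Sum the points associated with a household's answers to the scored questions."""
    return sum(SCORING[q][a] for q, a in answers.items() if q in SCORING)

# Scores calculated from the recipient survey define the comparable range:
recipient_scores = [
    vulnerability_score({"roof_material": "thatch", "income_source": "day_labour",
                         "livestock_owned": "none", "access_to_irrigation": "no"}),
    vulnerability_score({"roof_material": "corrugated_iron", "income_source": "small_farm",
                         "livestock_owned": "1_to_5", "access_to_irrigation": "no"}),
]
low, high = min(recipient_scores), max(recipient_scores)

# A comparison household is kept only if its score falls within that range:
comparison_hh = {"roof_material": "thatch", "income_source": "small_farm",
                 "livestock_owned": "1_to_5", "access_to_irrigation": "no"}
score = vulnerability_score(comparison_hh)
print("comparable" if low <= score <= high else "not comparable", score)
```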

c) Value for money studies: cost-efficiency, effectiveness, and benefit studies

Given an increasingly tight funding climate and interest from National Society partners, your National Society may wish to demonstrate the value for money of specific programmes. Early attempts to measure the return on investment were not particularly transparent; however, emerging guidance (see toolbox) on how to conduct transparent, rigorous value for money analyses of anticipatory action provides a good starting point for National Societies looking to conduct such analyses. Examples of costing studies can be found in the toolbox. Additional studies can be found by searching for value for money studies in the research methods field of the Anticipation Hub’s evidence database.

Cost-efficiency, cost-effectiveness, and cost-benefit analyses are three different but related ways to demonstrate the financial benefits of programs. Each of the three methods below will require detailed analysis of programme budgets to determine the staff, overhead, and procurement costs that went into anticipatory action (and any alternative programmes to which it is being compared).

  • Cost-efficiency analysis measures how much it costs to deliver a unit of output (e.g. cost per water purification kit delivered). It is useful for comparing the costs of delivering the same aid package at different times (e.g. anticipatory action vs response). Web-based programmes, such as Dioptra, have developed standardized methodologies and tools to help aid organizations measure and compare the cost efficiency of different interventions.
  • Cost-effectiveness analysis measures how much it costs to generate a relevant outcome (e.g. cost per unit of improved food consumption score). This approach is useful when trying to compare outcomes or impacts that are difficult to monetize, such as loss of education or increases in protection. It can also be used to compare the costs associated with achieving the same outcome using different interventions.
  • Cost-benefit analysis monetizes the value of outcomes or impacts and calculates whether those monetized benefits are greater than the costs. These kinds of analyses can produce a benefit-cost ratio, a net present value, and return on investment calculations. Such studies can be used to justify investments in anticipatory action; a minimal arithmetic sketch follows this list.
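The arithmetic behind these three analyses is simple; the difficulty lies in deciding which costs and benefits to include and how to monetize them. The sketch below uses invented figures purely to show how the three measures relate to the same cost data.

```python
# Illustrative value-for-money arithmetic with invented numbers.
total_cost = 50_000                   # all costs attributed to the activation (staff, overhead, procurement)
kits_delivered = 1_000                # an output
households_with_improved_fcs = 600    # an outcome (e.g. improved food consumption score)
monetized_benefits = 80_000           # e.g. avoided losses, estimated and monetized in an impact study

cost_efficiency = total_cost / kits_delivered                    # cost per kit delivered
cost_effectiveness = total_cost / households_with_improved_fcs   # cost per household with improved outcome
benefit_cost_ratio = monetized_benefits / total_cost             # > 1 means monetized benefits exceed costs

print(f"Cost per kit delivered: {cost_efficiency:.2f}")
print(f"Cost per household with improved food consumption: {cost_effectiveness:.2f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```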

Note/Tip: To perform rigorous cost-effectiveness or cost-benefit analysis, you must first conduct a study to measure the outcomes and/or impacts. These studies are therefore only as rigorous as the outcome or impact studies supporting them.

A word of caution: In practice, the organization or consultant conducting the cost-benefit study will decide which costs to include and exclude (e.g. the value of setting up the programme or only the activation itself) and how to monetize benefits. Because the final cost-benefit or return on investment numbers may be quite sensitive to these decisions or assumptions, being transparent about how and why particular numbers or assumptions were chosen will help to lend credibility to the results.

Additional resources on value for money in disaster risk finance and how to conduct cost-efficiency, -effectiveness, or -benefit analyses can be found in the toolbox.

 

d) Success case method

The success case method has been identified as a promising mixed-methods methodology for National Societies desiring some quantitative measurement of outcomes but without the capacity to implement an RCT or quasi-experimental survey. The approach begins with a survey that includes demographic and vulnerability questions as well as critical outcome and impact indicators. The survey data is then analyzed to identify households who benefitted more (“success cases”) and less (“non-success cases”) from the interventions. The research team then follows up with several households to conduct in-depth interviews aimed at understanding why those households had comparatively better or worse outcomes. If resources allow, it can be expanded to include data collection from non-recipient or other comparison groups. Guidance on how to adapt the success case methodology to anticipatory action can be found in the toolbox.

Note/Tip: If you are considering the success case method, your PDM survey is an excellent opportunity to collect survey data and identify households who benefitted more (“success cases”) and less (“non-success cases”) from your interventions. By adding questions on key impact indicators and requesting contact information and permission to follow up for in-depth interviews, your PDM could fulfill the survey component without requiring a separate survey.
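A minimal sketch of that selection step, assuming your PDM dataset already contains an outcome indicator (here a hypothetical food consumption score) and consent to be contacted again: households are ranked on the indicator and the highest and lowest scorers are flagged for in-depth interviews.

```python
# Illustrative selection of "success" and "non-success" cases from PDM data.
# The outcome indicator (food consumption score) and the records are hypothetical.
pdm_records = [
    {"household": "HH-101", "food_consumption_score": 62, "consented_to_follow_up": True},
    {"household": "HH-102", "food_consumption_score": 25, "consented_to_follow_up": True},
    {"household": "HH-103", "food_consumption_score": 48, "consented_to_follow_up": False},
    {"household": "HH-104", "food_consumption_score": 71, "consented_to_follow_up": True},
    {"household": "HH-105", "food_consumption_score": 19, "consented_to_follow_up": True},
]

eligible = [r for r in pdm_records if r["consented_to_follow_up"]]
ranked = sorted(eligible, key=lambda r: r["food_consumption_score"])

n = 2  # how many cases of each type to interview in depth
non_success_cases = ranked[:n]    # lowest scores: understand why benefits were limited
success_cases = ranked[-n:]       # highest scores: understand what enabled better outcomes

print("Success cases:", [r["household"] for r in success_cases])
print("Non-success cases:", [r["household"] for r in non_success_cases])
```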

Experience from Ethiopia

Ethiopia Red Cross Society was the first to test this methodology in 2025. Tools used by Ethiopian Red Cross can be found in the toolbox.

e) Qualitative Impact Protocol (QuIP)

The Qualitative Impact Protocol is a methodology designed to attribute change (i.e. outcomes and impacts) to an intervention through narratives, stories, and in-depth interviews with intervention recipients. Although it has been successfully used by humanitarian actors, it requires highly trained, independent data collectors, meaning that it may be challenging for National Societies with scarce resources. This method has not yet been used to evaluate anticipatory action programmes. More information on this methodology can be found in the toolbox.

 

f) Participatory Impact Assessment

Participatory impact assessment (PIA) is based upon participatory rural appraisal (PRA), which is similar in approach and philosophy to Red Cross Red Crescent vulnerability and capacity assessments. It has three main objectives:

  • to identify the factors leading to change in people’s lives;
  • to determine which of these factors can be linked to a specific intervention;
  • to determine the importance of each factor.

PIA allows communities to participate in the definition of indicators rather than relying on pre-determined or prescribed targets. Rather than attempting to compare results to non-participants in a programme, it asks participants to describe change over time. Many activities begin with a ranking or scoring activity, followed by an open-ended discussion to understand the results of the ranking. While the activities are often considered qualitative, when repeated in a standardized way, the ranking and scoring activities can yield some numerical data and quantitative insights or “participatory numbers.” This methodology has been used outside of anticipatory action to evaluate livestock interventions, particularly in pastoralist settings. A guide to participatory impact assessment can be found in the toolbox.

 

g) People-first Impact Method (P-FIM)

The people-first impact method (P-FIM) is a community engagement tool that respects the right and ability of communities to take the lead in the design, implementation, and evaluation of aid programs. Ideally, it would be employed from the start (i.e. the feasibility study and EAP design stage) to ensure that anticipatory action programs in general, and specific actions in particular, address genuine community needs and integrate community abilities and solutions in an added-value, cost-effective and sustainable approach. P-FIM builds trust, two-way engagement, and transparency between agencies and communities. It identifies the most critical areas for community-led action without introducing organizational bias or pre-conceived and (possibly) untested assumptions. Through two-way dialogue facilitated by trained local teams who live and work in the context, P-FIM brings to light what people believe are the most critical issues and forces impacting their lives, positive and negative. It is valuable in identifying appropriate interventions and adds value and relevance to existing agency tools by allowing communities to lead in the selection and development of MEAL plans and tools, such as theories of change or post-distribution surveys.

As a community engagement approach, P-FIM fits naturally with MEAL: engaging communities allows them to shape MEAL components, ensuring these are easily understood by communities and by agency frontline staff and volunteers. P-FIM challenges agencies to develop simple, easily understood tools. Like PIA, it helps agency staff and volunteers understand what communities and organizations feel should be monitored and evaluated and how. In the P-FIM approach, the evaluation questions used in focus group discussions, key informant interviews, and other discussions are defined by communities themselves together with the National Society. Similarly, P-FIM engages communities in selecting and designing other (perhaps more academic) methods to be used for evaluation.

Welthungerhilfe uses the method to inform its anticipatory action programs. The toolbox includes a guide explaining the full P-FIM approach and an overview of how P-FIM two-way engagement can be applied to MEAL at all stages of a programme. Within those documents, you will find contact information to request a 2-day, practical P-FIM training for your team.

 

h) INDABA

INDABA is an innovative participatory video process that allows communities to design, collect, edit and develop their own stories through a mobile application that can be used onsite and in remote contexts. IFRC’s Strategic Planning Department, in coordination with National Society counterparts, developed and is supporting this qualitative approach to PMER in long-term and emergency contexts. It has been rolled out in Colombia, Egypt, Eswatini, Gambia, Honduras, Indonesia, Kenya and Namibia. Examples can be found on IFRC’s PMER (Planning Monitoring Evaluation and Reporting) YouTube page.

This qualitative approach relies strongly on community engagement in the process to allow for feedback not only at the community level, but also at the level of disaggregated groups (e.g. the elderly, women, men, and youth). The process is adaptable to context and uses different facilitation methods (photos, story boarding cards, sectorial cards with questions, tags, dice etc.) to engage participants, put them into the context of the project or programme and allow them to share feedback that can inform improvement. The online process has been simplified to allow community members to easily design, tag, edit and create their own videos.

Although not yet explicitly applied to anticipatory action, the INDABA method could be relevant to anticipatory action for the following reasons:

  • It is a community-based approach. Communities are the first to detect early warning signs of crises.
  • The feedback collected through video stories can provide the IFRC network, partners and donors with real time feedback on anticipatory action rooted in local knowledge.
  • Story telling is a strong tradition in many cultures. The production and sharing of video stories could allow people to describe their experience of anticipatory action in their own ways. It could also help promote peer-to-peer knowledge transfer in anticipatory action.

Step 5.3: Trigger review

Both activations and missed activations present a valuable opportunity to learn from and improve the triggering process and threshold. Questions developed by the Red Cross Red Crescent Climate Centre and university partners for activations and missed events are available in the toolbox, along with example reports based on these templates. These questions can be answered during a session of the lessons learned workshop.

Step 6: Setting yourself up for success - lessons from experience

If your National Society wants to measure outcomes or impacts, you are more likely to be successful if the parties involved in designing your MEAL plan have determined the approach and methodology, who will execute it, and when, well before your s/EAP is activated. This means that protocols, surveys and other tools, volunteers, and any necessary materials (e.g. tablets, phones, software licenses) should be pre-determined and ready for rapid training and deployment.

A common mistake National Societies make is waiting until after an activation to think concretely about monitoring and evaluation. While a lessons learned workshop can be organized after activation, your team will get better data if the PDM is developed and planned for before an activation. Because most early actions do not seek to provide long-term impact, timely data collection is essential. For example, to measure whether cash or in-kind items mitigated food insecurity immediately after a flood, you would need to return to the community to collect data on food security indicators before the benefits of your assistance have waned.  The precise timing of data collection will depend upon the methods and indicators you are using as well as the counterfactual scenario you are comparing to (when applicable). The table below provides a few (non-exhaustive) examples of common interventions and considerations for when to collect data.

Hazard Intervention & expected duration (if applicable) Expected impact Notes on when to measure
Flood Water purification kits to last 2 weeks (during and immediately after the flood) Reduction in cholera cases Cholera has an incubation period of 12 hours to 5 days. Data on illnesses experienced during and after the flood should be collected as soon as possible following the two weeks to ensure proper recall.
Drought Animal fodder to last until the next rains produce pasture (e.g. 3 months) Improved animal body condition and reduced excess mortality For body condition, you would want to measure at the end of the lean season but before the rains arrive and potentially allow animals to recover once pasture is more widely available.
Drought Cash to spend on food / basic needs for 3 months Improved food consumption or security scores Most food security indicators ask about food consumption over the preceding 7 days; the Food Insecurity Experience Scale (FIES), for example, asks households about the last 30 days. You should therefore collect this data after you would expect food consumption patterns to have diverged between the groups in question but before the cash to spend on food has been exhausted.

For more information on the timing of data collection see WFP’s M&E guidance that is available in the toolbox. Waiting until after an activation to begin planning an outcome or impact assessment, when the National Society is likely to be focused on implementation and on-going response, is likely to lead to delays that will compromise what you are able to measure.

No matter what approach you decide to take, defining the following will help to set you up for success:

  • Define how and when you will monitor and evaluate the EAP (steps 1-3);
  • Define / procure the equipment and licenses you need (for example, phones, tablets, and/or licenses for digital survey data collection, e.g. Kobo or Open Data Kit);
  • Define who will collect data and who will analyze it;
  • Estimate a budget and identify where it will come from.

If you or someone on your MEAL team is interested in engaging regularly on MEAL for anticipatory action, the Anticipation Hub’s MEAL Practitioner Group meets monthly to share experiences in monitoring and evaluation for anticipatory action. A link is available through the toolbox.

Step 7: Incorporate findings

No matter what approach you take to monitoring and evaluation, the effort is wasted if the findings do not feed back into practice. Lessons and recommendations from the PDM, the lessons learned workshop, and any additional activities should be used to revise and improve the next version of your s/EAP.

Step 8: Share your evidence

If you do conduct an evaluation, share it with the MEAL Practitioner Group and the Hub’s evidence database to help others learn from your findings and experience. All links can be found in the toolbox.

Toolbox

Project Management Tools and Templates

Guidance on Lessons Learned Workshop and PDM

Trigger Evaluation Resources

Guidance on Process Monitoring & Evaluation

Guidance on Assessment and Building of Capacities

Resources around Value for Money Studies

Resources & Examples for RCTs & Quasi-Experimental Designs

Resources for Qualitative Impact Assessment (QuIP)

Resources for the Success Case Methodology

Resources for Participatory Evaluation Methodologies (PIA, P-FIM, INDABA)

Anticipation Hub Resources

General Red Cross Red Crescent Movement MEAL Guidance

Quiz

Chapter 9

1 / 5

Only two things are absolutely required to be part of your EAP evaluation plan. What are they? (Select two)

2 / 5

When is the best time to develop your evaluation plans and tools? (pick the best one)

3 / 5

Why is it important to have a MEAL plan? (Select all that apply)

4 / 5

Which of the following are tools that might be used to assess the outcomes of your early actions? (Select all that apply)

5 / 5

Which of the following steps are part of setting up a MEAL plan for an EAP? (Select all that apply)
