6.2 Dos and Don’ts in IM Practice

A review of field-based IM case studies from across NRC Country Offices and MapAction reveals several consistent patterns. These are not generic recommendations; they are rooted in operational realities, reflecting the successes and setbacks shared in Chapter 6.1.

Below are key dos and don’ts derived directly from those experiences, with references to specific case numbers for clarity.


6.2.1 What to Do

Do co-design IM tools with programme and support teams. Tools designed in isolation tend to break down in real use. In Case #9 (SRO), the Portfolio Monitoring Tool (PMT) was developed collaboratively with inputs from Grants, Finance, and MEL. This joint approach not only ensured adoption but also helped staff see the value of the tool in decision-making.

Do start small and tailor tools to the local context before scaling. In Case #8 (LARO), the regional ProLAC protection monitoring system worked because it built on locally tested tools from Venezuela before scaling to 11 countries. The system was introduced gradually, with country teams leading the adaptation.

Do automate when ready, not before. Case #4 (MapAction–Malawi) and Case #5 (MapAction–Syria) showed that automation, when built on stable workflows, saved significant time. That success, however, hinged on clear inputs, shared data standards, and staff training. Automation worked because the groundwork had been laid first.

Do invest in structured data collection — with unique IDs and standard forms. In Case #3 (Iran), NRC developed its own “Sunshine” and “Bridge” platforms, embedding unique identifiers to reduce duplication and enable long-term follow-up. Similarly, Case #8 (LARO–Venezuela) applied activity tokens to simplify tracking across sectors.
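
The pattern behind this practice is simple and worth illustrating. The sketch below is hypothetical Python, not code from the Sunshine or Bridge platforms (whose implementations are not described in the case studies): a unique ID is generated once at registration and reused for every follow-up, so duplicate records are rejected rather than silently created.

```python
import uuid

# Hypothetical registry: unique_id -> household record.
registry: dict[str, dict] = {}

def register_household(name: str, location: str) -> str:
    """Assign a system-generated unique ID at first registration."""
    unique_id = str(uuid.uuid4())
    registry[unique_id] = {"name": name, "location": location, "visits": []}
    return unique_id

def record_visit(unique_id: str, activity: str) -> None:
    """Log follow-up activity against the same ID for longitudinal tracking."""
    if unique_id not in registry:
        # Forcing lookups through the ID is what prevents duplication:
        # an unknown ID is an error, not a trigger for re-registration.
        raise KeyError(f"Unknown ID {unique_id}")
    registry[unique_id]["visits"].append(activity)

# One ID follows the household across sectors and collection rounds.
hh_id = register_household("Household A", "District 3")
record_visit(hh_id, "cash distribution")
record_visit(hh_id, "protection follow-up")
```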

Do prioritize usability and simplicity over complexity. Case #1 (DRC) used QR codes and mobile forms to enhance registration accuracy — a low-tech but high-impact solution. Case #9 (SRO) relied on Excel, a tool familiar to most staff, but structured it with automation and protections to reduce errors.
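
The QR-code approach in Case #1 needs very little machinery, which is part of why it worked. The sketch below is a hypothetical example using the open-source qrcode package, not the DRC team's actual tooling: the unique registration ID is printed as a QR code so enumerators scan it rather than retype it.

```python
import qrcode  # community package: pip install "qrcode[pil]"

# Illustrative ID format only; the real DRC scheme is not documented here.
registration_id = "NRC-DRC-2023-000184"

# Encode the ID as a QR code for the beneficiary's registration card.
# Scanning at distribution points removes manual transcription errors.
qrcode.make(registration_id).save("registration_card.png")
```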

Do embed IM into real decision-making spaces. In Case #10 (NCA), the Time Scope and Budget dashboard became the central reference point for Programme Review Meetings. It worked not just because of its design, but because it was used — routinely — to inform action.

Do store documentation centrally and maintain shared access. Several cases (SRO, Iran, LARO) emphasized how knowledge continuity was improved when SOPs, user guides, and login details were kept in shared folders — not in inboxes or with individuals.


6.2.2 What to Avoid

Don’t rely on a single IM focal point without backup. Case #1 (DRC) noted that before digitizing, individual IM staff were overwhelmed with tracking and troubleshooting. Without shared ownership or documentation, the work was fragile and inconsistent.

Don’t introduce tools without a training and rollout plan. Case #2 (Lebanon) revealed that launching the WhatsApp chatbot was only successful after addressing digital literacy concerns, creating local support guides, and updating flows based on feedback.

Don’t assume more data means better insight. In Case #3 (Iran), teams initially collected more data than necessary, complicating analysis and slowing processes. Refining the forms to focus only on actionable data improved both efficiency and ethical data practice.

Don’t build dashboards no one asked for. MapAction’s Case #7 (Türkiye/Syria earthquake response) highlighted the need to match outputs to user needs. Dashboards and maps were effective because they were built in coordination with response leads — not just for reporting, but for logistics and planning.

Don’t neglect field testing before rolling out a new tool. In multiple cases (Lebanon, SRO, Venezuela), early piloting helped identify logic errors, interface gaps, or user resistance — all fixable before full deployment.

Don’t treat IM as a purely technical function. In every successful case, IM was seen not just as a data task, but as a programme enabler. In contrast, when IM was sidelined or consulted too late (as noted in the original challenges in Chapter 4.3), tools became underused or misaligned with needs.

These lessons show that effective IM is as much about collaboration and ownership as it is about design and tools. By connecting field-tested strategies to broader IM systems, we can scale what works, avoid what doesn’t, and build solutions that truly serve programmes, partners, and affected communities.
