7.1 IM & Artificial Intelligence
Humanitarian and development organisations are increasingly using artificial intelligence (AI) to strengthen information management (IM), both in crisis situations and in longer-term development contexts, in order to better guide decisions, anticipate needs and improve the impact of their interventions.
"While there is no universally accepted definition of AI, the term is often used to describe a machine or system that performs tasks that would ordinarily require human (or other biological) brainpower to accomplish, such as making sense of spoken language, learning behaviours or solving problems. There is a wide range of such systems, but broadly speaking they consist of computers running algorithms (a set of rules or a βrecipeβ for a computer to follow), often drawing on a range of datasets." (Spencer 2024)
Facing growing needs and limited resources, aid agencies see AI as a way to "achieve more with less," improving efficiency and decision-making (ICRC 2024). At the same time, deploying AI comes with new risks and ethical challenges. This chapter provides an overview of how AI is being applied in humanitarian information management thus far, the potential benefits it offers, and the pitfalls to avoid.
7.1.1 Applications of AI in Humanitarian Information Management
AI is already being used in several areas of humanitarian IM to increase speed, scale, and insight generation. Here are a few illustrative applications; a simplified code sketch for each follows the list:
Generative AI for analysis and reporting: Many organisations have started using large language models (LLMs) such as GPT to analyse qualitative and quantitative data and to draft reports, helping IM teams extract meaningful insights from large volumes of data. For this handbook, for example, a customized GPT was used to support the consolidation and structuring of information.
Chatbots for data collection and/or community engagement: NRC, for example, has developed chatbots with technology-sector partners to gather information for needs assessments in Ukraine and to help youth identify suitable education opportunities worldwide.
Predictive analytics for anticipatory action: UNHCR, for example, is piloting the use of machine learning to predict displacement trends by analysing conflict and climate data, helping agencies plan responses in advance.
Image recognition for damage assessment and the detection of new informal settlements: WFP, for example, is piloting a tool that analyses satellite images with the help of AI to identify destroyed buildings after disasters, enabling rapid needs assessments without waiting for field verification. The Humanitarian OpenStreetMap Team (HOT), on the other hand, uses AI models to identify infrastructure in areas of rapid urban growth that might point towards the emergence of new informal settlements, thereby facilitating intervention planning.
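To make the generative-AI example concrete, the following is a minimal sketch of an LLM-assisted summarisation step. It assumes the OpenAI Python SDK with an API key set in the environment; the model name, prompt wording and helper function are illustrative assumptions, not an organisational standard.

```python
# Minimal sketch: using an LLM API to summarise assessment notes.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; model and prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_notes(notes: list[str]) -> str:
    """Condense raw field notes into a short, reviewable situation summary."""
    prompt = (
        "Summarise the following humanitarian assessment notes into "
        "five bullet points, flagging any data gaps:\n\n" + "\n".join(notes)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # low temperature for more consistent summaries
    )
    return response.choices[0].message.content

# The output is a draft only and still requires human review (see 7.1.3):
# print(summarise_notes(["Water point in sector 3 damaged", "400 new arrivals"]))
```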
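The chatbot example reduces, at its core, to a scripted question-and-answer flow. The console-based skeleton below is a deliberate simplification and not NRC's actual system; production chatbots typically run on messaging platforms and write responses into a case-management backend.

```python
# Minimal sketch of a scripted intake chatbot for needs-assessment data.
# Illustrative skeleton only; the fields and questions are hypothetical.

QUESTIONS = [
    ("household_size", "How many people live in your household?"),
    ("shelter_status", "Is your current shelter damaged? (yes/no)"),
    ("priority_need", "What is your most urgent need right now?"),
]

def run_intake() -> dict:
    """Ask each question in turn and collect the answers into one record."""
    record = {}
    for field, question in QUESTIONS:
        record[field] = input(question + " ").strip()
    return record

if __name__ == "__main__":
    answers = run_intake()
    print("Collected record:", answers)
```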
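For the predictive-analytics example, the sketch below trains a regression model on synthetic conflict and climate indicators. The features, data and model choice are hypothetical and do not reflect UNHCR's actual pipeline; it requires numpy and scikit-learn.

```python
# Minimal sketch: predicting a displacement indicator from conflict and
# climate features. All data here is synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical monthly district-level features:
# [conflict_events, rainfall_anomaly, food_price_index]
X = rng.random((500, 3))
# Synthetic target: newly displaced people (a made-up relationship + noise)
y = 1000 * X[:, 0] + 300 * np.abs(X[:, 1]) + rng.normal(0, 50, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Held-out performance; real pipelines would validate far more rigorously.
print("R^2 on held-out data:", round(model.score(X_test, y_test), 2))
```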
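Finally, damage assessment from satellite imagery typically builds on a pretrained image classifier. The sketch below adapts an ImageNet-pretrained ResNet for a two-class damaged/intact task; it is an illustration, not WFP's or HOT's actual model, and would still need fine-tuning on labelled satellite tiles.

```python
# Minimal sketch: adapting a pretrained CNN to classify satellite image
# tiles as "damaged" vs "intact". Requires torch and torchvision; the
# two-class head is untrained until fine-tuned on labelled tiles.
import torch
from torch import nn
from torchvision import models, transforms

# Load an ImageNet-pretrained backbone and replace its final layer
# with a two-class head (intact / damaged).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_tile(image) -> str:
    """Classify a single PIL image tile (meaningful only after fine-tuning)."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return ["intact", "damaged"][int(logits.argmax())]
```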
These are only a few examples, but they demonstrate the growing role of AI in augmenting key IM functions, from data collection to decision-making. Using these technologies carries considerable ethical risk, which is briefly outlined below.
7.1.2 Risks and Challenges
The integration of AI into humanitarian IM comes with several risks that must be carefully managed:
Algorithmic bias: AI can reflect and reinforce existing biases in training data, potentially leading to inequitable outcomes for marginalized groups.
Lack of transparency: Many AI systems do not make their reasoning processes transparent, making it difficult to understand or explain their outputs.
Data privacy concerns: Feeding sensitive humanitarian data into AI tools, especially third-party platforms, risks breaches or misuse.
Potential marginalisation: Limited access to technology, digital skills or the necessary infrastructure can be a barrier to participation for communities or local actors.
Over-reliance on automation: Excessive trust in AI-generated outputs can erode critical human oversight, leading to errors or inappropriate actions.
Technical and contextual limitations: AI may misinterpret complex local dynamics or operate poorly in data-scarce environments.
Incompatibility with humanitarian principles: AI applications may conflict with the principles of neutrality, impartiality and independence if, for example, they use data from political or commercial sources.
These challenges underscore the need for clear organisational governance and guidance on the use of AI systems and tools in humanitarian operations.
7.1.3 Ethical Considerations for Responsible AI Use
While ethical frameworks for AI in humanitarian contexts are still evolving, four ethical dimensions outlined by the Danish Refugee Council (DRC) are a useful starting point. Although focused on generative AI, these four dimensions offer relevant guidance for broader AI applications:
Caution Against Misuse and Abuse: Generative AI systems lack contextual understanding and may generate misleading analyses or hallucinated conclusions that could impact program decisions. Therefore, it is recommended to implement structured review protocols and systematic human validation of generative AI outputs, particularly for decisions affecting vulnerable populations. It is important to maintain a "Check-Ask-Check" approach and the principle of spending 80% of the time working with generative AI and 20% checking its output quality (a simple review-gate sketch follows this list).
Privacy and Data Protection: Generative AI systems can inadvertently breach privacy by using personal data from one user to inform responses to another, posing significant risks. Therefore, agencies must ensure AI systems comply with data protection laws (such as the GDPR) and avoid sharing sensitive data with AI services. The rights of data subjects need to be upheld, ensuring informed consent and opt-out options to address privacy concerns (see the redaction sketch after this list).
Inclusion and Participation: AI can reflect and reinforce existing biases. Through participation, representation, and capacity-building, these biases can be reduced. Mechanisms like human-in-the-loop feedback and community audits can help mitigate risks of inaccuracy, bias, and discrimination.
Transparency and Explainability: Making generative AI processes transparent and technically explainable ensures accountability and mitigates the risk of bias. Therefore, it is essential to enhance AI literacy and ensure that AI processes are clear to all stakeholders involved.
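As a concrete illustration of the human-validation point under "Caution Against Misuse and Abuse", the sketch below enforces that no generative AI output is released without explicit human sign-off. The names and workflow are hypothetical, not a DRC-prescribed implementation.

```python
# Minimal sketch of a human-review gate for generative AI output, in the
# spirit of the "Check-Ask-Check" approach; structure is illustrative only.
from dataclasses import dataclass

@dataclass
class AIOutput:
    text: str
    reviewed: bool = False
    approved: bool = False
    reviewer_notes: str = ""

def human_review(output: AIOutput, approve: bool, notes: str = "") -> AIOutput:
    """Record a human reviewer's verdict on a draft AI output."""
    output.reviewed = True
    output.approved = approve
    output.reviewer_notes = notes
    return output

def release(output: AIOutput) -> str:
    """Refuse to release anything that has not been human-approved."""
    if not (output.reviewed and output.approved):
        raise ValueError("AI output must be human-approved before release")
    return output.text

draft = AIOutput(text="Summary: 400 households need shelter support.")
human_review(draft, approve=True, notes="Figures match the field report.")
print(release(draft))
```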
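For the privacy dimension, a common first safeguard is to strip obvious personal identifiers before any text reaches a third-party AI service. The regex patterns below catch only simple cases (e-mail addresses and phone numbers) and are an illustrative assumption; real deployments need dedicated PII-detection tooling and a lawful basis for any processing.

```python
# Minimal sketch: redacting obvious personal identifiers before text is
# sent to an external AI service. Patterns are deliberately simplistic.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Amina at amina@example.org or +254 700 123 456."))
```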
For a closer examination of each ethical dimension, and their application in specific case studies, see DRC's position paper.
References & Further Readings
CartONG (2022). Guidance on Artificial Intelligence Applied to NGO Program Data Management.
NetHope (2024). The Guide to Usefulness of Existing AI Solutions in Nonprofit Organizations.
OCHA (2024). Briefing Note on Artificial Intelligence and the Humanitarian Sector.
OECD (2021). OECD AI Principles.
Sphere (2024). How can humanitarian organisations use AI safely?
UN CEB (2022). Principles for the Ethical Use of AI in the United Nations System.
UN STATS (2021). KITE: An Abstraction Framework for Reducing Complexity in AI Governance.