
Responsible AI for disaster risk management: Working Group summary


01. Introduction

This document is intended to help practitioners and project managers working in disaster risk management (DRM) ensure that the deployment of artificial intelligence (AI), and machine learning (ML) in particular, is done in a manner that is both effective and responsible. The content of this report was produced as part of a 6-month interdisciplinary collaboration between experts from intergovernmental organizations, non-profits, academia, and the private sector. While we do not claim to offer the last word on this important topic, we are publishing the results of this collaboration in order to generate further discussion and help advance efforts towards better understanding the role of these technologies in pursuit of a world that is safer, more equitable, and more sustainable. It is our hope that this document, as a product of intensive consultation with the community for whom it is written, will inform and improve the important work carried out by data scientists, risk modellers, and other technical experts working in DRM.

Many members of our community are working to explore the opportunities offered by machine learning technologies and to expand the range of applications for which they are used. While we welcome the potential of these tools, we also need to pay close attention to the significant risks that their ill-considered deployment may create in the extremely complex contexts in which DRM is undertaken, across the whole cycle of preparedness, response, recovery, and mitigation. Important questions are currently being raised by academics, journalists, and the public about the ethics and bias of AI systems across a variety of domains, including facial recognition, weapons systems, search, and criminal justice. Despite the significant potential for negative impacts of these tools and methodologies in disaster risk management, our community has not given these issues as much attention as they have received in other domains.

Some specific risks of improper use of machine learning include:

  • Perpetuating and aggravating societal inequalities through the use of biased data sets

  • Aggravating privacy and security concerns in Fragility, Conflict and Violence (FCV) settings through the combination of previously distinct data sets

  • Limiting opportunities for public participation in disaster risk management due to the increased complexity of data products

  • Reducing the role of expert judgment in data and modelling tasks, in turn increasing the probability of error or misuse

  • Overstating the capacities of these tools and deploying untested approaches in safety-critical scenarios, driven by hype and the availability of private sector funding for artificial intelligence technology in DRM

These risks need to be weighed seriously against the potential benefits before new technologies are introduced into disaster information systems. While relevant guidelines are being produced in adjacent domains, the conversation is still evolving. In some cases, such as facial recognition, experts have begun to recommend against using the technology at all, and it has been banned in a number of jurisdictions. It is too early to know how this debate will play out in the field of disaster risk management, so it is worth proceeding with caution.

As we will discuss, there are many ways, through technical means as well as improved project design and management, in which machine learning projects can be developed and applied more responsibly. However, we will also note that some of these issues go deeper than machine learning and are rooted in how we collect disaster risk and impact data, and how we design DRM projects more broadly. The heightened focus on the social impacts of AI tools offers an opportunity to draw attention to some of these questions. We will return to this issue in Section 7.

To develop this document, a team of disaster data experts from the World Bank Global Facility for Disaster Reduction and Recovery (GFDRR), the University of Toronto, and Deltares convened 7 online meetings between January and March of 2020, in which machine learning experts and other researchers working in the area of disaster data discussed the opportunities offered by machine learning tools in disaster risk management, the potential risks raised by these tools, and opportunities for mitigating those risks. On average, 17 individuals participated in each 90-minute session. These interdisciplinary conversations were shaped by joint readings of relevant research and presentations of detailed case studies. In addition, the project team conducted 14 in-depth interviews with data scientists working on these topics to gather their views.

We present the preliminary findings of this process here for wider review by the community. We have organized the body of the work around 4 key sources of threat: bias, privacy and security risks, lack of transparency and explainability, and hype. For each, we present an overview of the topic, realistic threat models along with hypothetical examples, strategies for managing these risks, and suggested further reading. Wherever possible, we keep the text here short and provide plenty of footnotes and links to more information. Through this process, we sought to produce these recommendations collectively, as members of the community of expert researchers and practitioners working to create and use disaster data effectively and responsibly. We look forward to continued discussions with the DRM community on the contents of this report and to continued exploration of these important issues.