Abstract

Weeks after a disaster, crucial response and recovery decisions require information on the locations and scale of building damage. Geostatistical data integration methods estimate post-disaster damage by calibrating engineering forecasts or remote sensing-derived proxies with limited field measurements. These methods are designed to adapt to the building damage and post-earthquake data sources that vary from one location to another, but their performance across multiple locations has not yet been empirically evaluated.

In this study, we evaluate the generalizability of data integration to various post-earthquake scenarios using damage data produced after four earthquakes: Haiti 2010, New Zealand 2011, Nepal 2015, and Italy 2016. Exhaustive surveys of true damage were eventually collected for these events, allowing us to evaluate the performance of data integration estimates of damage through multiple simulations representing a range of post-earthquake data availability conditions. In all case study locations, we find that integrating forecasts or proxies of damage with field measurements results in a more accurate damage estimate than the current best practice of evaluating these input data separately.

In cases where multiple secondary damage datasets are not available, a map of shaking intensity can serve as the only covariate, though the addition of remote sensing-derived data can improve performance. Even when field measurements are clustered in a small area (a more realistic scenario for reconnaissance teams), damage data integration outperforms alternative damage datasets.

Overall, by evaluating damage data integration across contexts and under multiple conditions, we demonstrate that integration is a reliable approach that leverages all existing damage data sources to better reflect the damage observed on the ground. We close by recommending modeling and field surveying strategies to implement damage data integration in real time after future earthquakes.


1. Introduction

From rapid forecasts to remote sensing-derived maps, novel sources of post-disaster building damage data are needed to make crucial decisions for early recovery. For example, two to four weeks after a disaster, the government of the affected region will often lead a Post-Disaster Needs Assessment (PDNA) to assess metrics such as the number of damaged buildings and the cost to reconstruct. The PDNA memorializes the losses from an event and influences the aid a country receives for its recovery. Damage information also underlies shorter-term response activities such as temporary shelter allocation and longer-term recovery policies such as distribution of reconstruction aid (Bhattacharjee et al., 2021). In many cases, potentially useful data, especially data derived from satellites, are rapidly available. However, these data have often been used only to guide the later collection of more precise damage data or to inform building safety assessments, as they cannot identify the lower damage grades necessary to support the PDNA, which guides major reconstruction decisions (The European Commission, 2017; Sextos et al., 2018; Eguchi et al., 2010; Government of the Republic of Haiti, 2010).

Post-earthquake damage maps come from a wide range of sources, including remote sensing-derived or forecast-based estimates (Loos et al., 2020). We call these sources secondary datasets; they are advantageous because they provide a rapid estimate of damage over a large region in less time than it would take to collect primary field surveys of damage. They are highly uncertain, however, usually because they are produced using methods developed for global use. Remote sensing-derived data are based on imagery from any type of remote sensor, including satellites, planes, and drones. Publicly available remote sensing-derived data include NASA JPL-ARIA’s Damage Proxy Map (DPM), derived from Interferometric Synthetic Aperture Radar (InSAR) data, and the Department of Defense’s xView2 challenge, which called for participants to use computer vision with high-resolution imagery to estimate multi-hazard building damage (Yun et al., 2015; Gupta et al., 2019). Additionally, maps from manual interpretation of remote sensing imagery exist, such as the crowdsourcing efforts carried out after the Haiti 2010 earthquake (Ghosh et al., 2011) or damage grading maps from the Copernicus Emergency Management Service (Dorati et al., 2018). Outside of remote sensing-derived maps, engineering forecasts are produced as soon as a map of shaking intensity becomes available (Erdik et al., 2014; Earle et al., 2009; Trendafiloski et al., 2009; Gunasekera et al., 2018). Engineering forecasts are predictive models of damage that relate the estimated distribution of shaking to consequence metrics, like building collapse, through models of exposure and vulnerability. Alternative machine learning methods that similarly use hazard and building characteristic data to rapidly forecast damage have also been developed (Mangalathu et al., 2020). An example of a publicly available engineering forecast is the United States Geological Survey’s PAGER system, which aggregates forecast results to country-level estimates of economic loss or casualties (Jaiswal and Wald, 2011).
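To make the forecasting step concrete, the following is a minimal sketch of one common formulation: a lognormal fragility function that converts an estimated shaking intensity into a probability of collapse, scaled by an exposure model. This is an illustration under our own assumptions, not the PAGER methodology; the building counts, median capacity, and dispersion values are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import norm

def collapse_probability(pga, median=0.9, beta=0.6):
    """Lognormal fragility curve: P(collapse | PGA) for one hypothetical
    building class. `median` (in g) and `beta` (log-standard deviation)
    are placeholder values, not calibrated parameters."""
    return norm.cdf(np.log(pga / median) / beta)

# Hypothetical exposure model: building counts per grid cell, paired with
# the ShakeMap-style PGA (in g) estimated for each cell.
buildings = np.array([120, 450, 80])
pga = np.array([0.15, 0.42, 0.60])

# Expected collapses per cell: exposure x vulnerability at the given hazard.
expected_collapses = buildings * collapse_probability(pga)
print(expected_collapses.round(1))
```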

While abundant data might seem beneficial, three issues exist. First, rapid damage maps are produced at varying resolutions, with units that do not necessarily align with the needs of post-disaster planners. In some cases, as with the DPM, the information provided is a proxy of damage: each pixel contains a unitless integer that indicates change between pre- and post-earthquake imagery, but whose meaning is inconsistent from one earthquake to the next. Second, many models are developed with data from prior events in other places and therefore still need to be calibrated to the current disaster. Third, because of the fast-moving and haphazard nature of post-disaster decision-making, many response workers or recovery planners use only the data they trust, rather than considering all the available data at once (Liboiron, 2015; Bhattacharjee et al., 2021; Hunt and Specht, 2019).

The Geospatial Data Integration Framework (G-DIF), based on the geostatistical method Regression Kriging, addresses these issues (Loos et al., 2020). G-DIF is a general modeling framework that is agnostic to different types of primary and secondary data, and therefore adapts to different places and new developments in secondary data. The method decomposes the spatial distribution of damage into the trend, or the average gradient in damage over the affected region, and spatially correlated and stochastic residuals around that trend. The estimation of the trend depends on the secondary damage data, while the estimation of the residuals depends on the expected spatial correlation in the residuals from the trend at the field survey locations. Since our initial application of G-DIF to the Nepal 2015 earthquake, others have built upon this idea with alternative models (Sheibani and Ou, 2021; Wilson, 2020).
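As a rough illustration of this decomposition (a sketch under our own simplifying assumptions, not the authors’ implementation), damage Z(s) at location s is modeled as a trend m(s) driven by the secondary data plus a spatially correlated residual. The example below fits a linear trend on secondary covariates at field-surveyed locations and then kriges the residuals from that trend with the pykrige package; the synthetic data, the exponential variogram, and all variable names are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

# Synthetic stand-ins: coordinates and mean damage at n field-surveyed
# locations, plus two secondary covariates (e.g., an engineering forecast
# and a remote sensing proxy) sampled at the same locations.
rng = np.random.default_rng(0)
n = 200
x, y = rng.uniform(0, 50, n), rng.uniform(0, 50, n)
covariates = rng.uniform(0, 1, (n, 2))
damage = covariates @ np.array([2.0, 1.5]) + rng.normal(0, 0.3, n)

# 1) Trend: regress field-surveyed damage on the secondary damage data.
trend = LinearRegression().fit(covariates, damage)
residuals = damage - trend.predict(covariates)

# 2) Residuals: model their spatial correlation with a variogram and krige.
ok = OrdinaryKriging(x, y, residuals, variogram_model="exponential")

# Prediction at an unsurveyed location = trend evaluated on the secondary
# data there + the kriged residual interpolated from nearby field surveys.
x_new, y_new = np.array([10.0, 25.0]), np.array([30.0, 5.0])
cov_new = np.array([[0.2, 0.7], [0.9, 0.1]])
resid_pred, resid_var = ok.execute("points", x_new, y_new)
damage_estimate = trend.predict(cov_new) + resid_pred
```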

Three main assumptions were made about the expected performance of G-DIF and alternative damage data integration methods, which we evaluate and address in this paper. The first is that G-DIF will perform better than any alternative secondary dataset alone. Without better performance, the effort of building a G-DIF model would not be justified. The second is that the secondary data available in the earthquake-affected country are of good enough quality to correlate with the damage seen on the ground. This assumption might not hold after earthquakes in regions with little remote sensing data and few seismic stations to measure shaking intensity. The third assumption is that the field surveys used to calibrate the secondary damage data to the local observations of building damage are collected from a spatially representative sample. Field surveys may not be representative if engineering reconnaissance missions or local survey teams focus on the communities that are easiest to reach immediately following a disaster or the areas where they expect to find damage (resulting in a preferential sample).

In this study, we evaluate these assumptions by applying G-DIF to damage data that became available after four major earthquakes: Haiti 2010, New Zealand February 2011, Nepal 2015, and Italy 2016. We evaluate whether G-DIF’s damage estimate outperforms alternative secondary estimates of damage across various contexts with different patterns of damage and quality of secondary data. Additionally, we examine whether G-DIF is able to produce an accurate damage estimate with different sources of secondary data available or with more realistic field survey locations. To facilitate comparisons across these real-world events, we made several simplifications in developing the inputs and models of G-DIF. While these simplifications might somewhat reduce predictive performance, we still find clear and intuitive trends that allow us to understand the general performance of the method.
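As a schematic of the kind of comparison this involves (our own sketch; the held-out design and the mean-squared-error metric are illustrative assumptions, not necessarily those used in the study), each candidate damage map can be scored against the exhaustive surveys of true damage at the same locations:

```python
import numpy as np

def mse(estimate, truth):
    """Mean squared error of a damage estimate against surveyed damage."""
    return float(np.mean((np.asarray(estimate) - np.asarray(truth)) ** 2))

# Hypothetical true damage grades at held-out survey locations, with the
# integrated (G-DIF) estimate and each secondary dataset at those points.
true_damage = np.array([0.2, 1.4, 3.1, 2.2, 0.8])
candidates = {
    "G-DIF (integrated)":   np.array([0.3, 1.2, 2.9, 2.4, 0.9]),
    "engineering forecast": np.array([0.9, 1.0, 2.0, 3.0, 1.6]),
    "remote sensing proxy": np.array([0.1, 2.3, 2.4, 1.4, 0.2]),
}
for name, estimate in candidates.items():
    print(f"{name}: MSE = {mse(estimate, true_damage):.2f}")
```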

We find that many of our assumptions hold, indicating that G-DIF generalizes well across contexts under different scenarios of primary and secondary data availability. Overall, this study demonstrates that G-DIF is a reliable approach that can leverage all available damage data after an earthquake to better reflect the damage observed on the ground. G-DIF is thus an improvement over the current practice of qualitatively evaluating each input damage data source, whether field surveys or remote sensing-derived maps, on its own. We therefore close with both modeling and field surveying strategies to implement damage data integration in real time after future earthquakes.