
FireCLR-Wildfires

Official code for 🔥 Unsupervised Wildfire Change Detection based on Contrastive Learning 🔥. Work conducted at the FDL USA 2022.

FireCLR is a self-supervised learning model built on the popular contrastive learning architecture SimCLR. The model takes 4-band multispectral imagery (blue, green, red, and near-infrared) and detects changes caused by wildfires using distance maps computed from the model's representation layer. On the downstream tasks at the Mesa fire in Idaho, FireCLR outperforms the tested baselines. In this repository, we release scripts showing how to use the model and reproduce the results. We also release the annotated burn-severity dataset and the Sentinel-2 training and validation datasets for the Mesa fire.

NeurIPS workshop paper · Quick Colab Example


Unsupervised Wildfire Change Detection based on Contrastive Learning

Overall Demonstration

Abstract: The accurate characterization of the severity of a wildfire event strongly contributes to the characterization of the fuel conditions in fire-prone areas and provides valuable information for disaster response. The aim of this study is to develop an autonomous system built on top of high-resolution multispectral satellite imagery, with an advanced deep learning method for detecting burned-area change. This work proposes an initial exploration of using an unsupervised model for feature extraction in wildfire scenarios. It is based on the contrastive learning technique SimCLR, which is trained to minimize the cosine distance between augmentations of images. The distance between encoded images can also be used for change detection. We propose changes to this method that allow it to be used for unsupervised burned area detection and subsequent downstream tasks. We show that our proposed method outperforms the tested baseline approaches.
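As a minimal illustration of the change signal described in the abstract (a sketch, not the repository's code; `z_pre` and `z_post` are hypothetical embedding vectors), the cosine distance between two encoded images can be computed as:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity between two embedding vectors."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

z_pre = np.array([1.0, 0.0])   # hypothetical pre-fire embedding
z_post = np.array([0.0, 1.0])  # hypothetical post-fire embedding
print(cosine_distance(z_pre, z_pre))   # 0.0 (identical, i.e. no change)
print(cosine_distance(z_pre, z_post))  # 1.0 (orthogonal embeddings)
```

A larger distance between the pre- and post-fire embeddings of the same location indicates a stronger change.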

Dataset

Map of the events

Training Dataset

| Event | Sentinel-2 | PlanetScope |
|---|---|---|
| Mesa (2018, Idaho) | 8 images | X |
| East Troublesome (2020, Colorado) | X | 17 images |
| McFarland (2021, California) | X | 26 images |
| Total instances | 941,190 tiles (32,32,4) | 4,382,607 tiles (32,32,4) |

Validation Dataset

| Event | Sentinel-2 | PlanetScope |
|---|---|---|
| Mesa (2018, Idaho) | 2018-07-26 (prefire), 2018-08-15 (postfire) [not included in training] | 2018-07-26 (prefire), 2018-08-15 (postfire) |

Code examples

Install

# prep environment
conda env create -f environment.yml
conda activate myenv

Training

The training process is shown in two notebooks, one for Sentinel-2 and one for PlanetScope. To reproduce it, we provide the Sentinel-2 training dataset, which can be downloaded here. The downloaded scenes then need to be processed into tiles of size (32,32,4) using this notebook.
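For orientation, the SimCLR-style NT-Xent contrastive objective used to train on pairs of augmented tiles can be sketched in plain NumPy (an illustrative re-derivation of the standard SimCLR loss, not the repository's training code; the function name, temperature, and shapes are arbitrary choices):

```python
import numpy as np

def nt_xent_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.5) -> float:
    """NT-Xent loss for a batch of N paired augmented views.
    z1, z2: (N, D) embeddings of two augmentations of the same N tiles."""
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = (z @ z.T) / temperature                     # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z1)
    # The positive for sample i is its other augmented view (i <-> i+n).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return float(loss.mean())
```

Minimizing this loss pulls the two augmentations of each tile together in the latent space while pushing all other tiles in the batch apart.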

Validation (Downstream tasks)

Validation scripts for the downstream task, detecting the changes caused by the Mesa fire in Idaho, are also provided (Sentinel-2, PlanetScope). The Sentinel-2 validation dataset is available here. As with training, the validation dataset is provided as a full scene of the Mesa fire; you will need to preprocess it into tiles using this notebook before feeding them to the trained model.
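The exact tiling (overlap, border handling) is defined in the linked notebook; a minimal non-overlapping sketch of cutting an (H, W, 4) scene into (32, 32, 4) tiles, with partial border tiles dropped, might look like:

```python
import numpy as np

def scene_to_tiles(scene: np.ndarray, tile: int = 32) -> np.ndarray:
    """Cut an (H, W, bands) scene into non-overlapping (tile, tile, bands)
    patches, dropping any partial tiles at the right/bottom borders."""
    h, w, bands = scene.shape
    rows, cols = h // tile, w // tile
    scene = scene[: rows * tile, : cols * tile]
    tiles = scene.reshape(rows, tile, cols, tile, bands).swapaxes(1, 2)
    return tiles.reshape(rows * cols, tile, tile, bands)

scene = np.zeros((100, 70, 4))
print(scene_to_tiles(scene).shape)  # (6, 32, 32, 4): 3 rows x 2 cols of tiles
```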

Pre-trained models

We provide our pre-trained Sentinel-2 and PlanetScope models here, in case you would like to reproduce the validation or transfer them to other downstream tasks.

Inference

To start using our models for inference, it's best to begin with the prepared notebooks (Sentinel-2, PlanetScope), which employ our annotated burn-severity dataset and evaluate the cosine distance maps predicted by the pre-trained Sentinel-2 and PlanetScope models. Shapefiles of the manually annotated labels are available here.
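Assembling per-tile embeddings into a change map can be sketched as follows (a toy illustration with hypothetical, randomly generated embeddings; the real maps come from the notebooks above):

```python
import numpy as np

def distance_map(emb_pre: np.ndarray, emb_post: np.ndarray,
                 rows: int, cols: int) -> np.ndarray:
    """Cosine distance between matching pre-/post-fire tile embeddings,
    reshaped into a (rows, cols) change map (one value per tile)."""
    a = emb_pre / np.linalg.norm(emb_pre, axis=1, keepdims=True)
    b = emb_post / np.linalg.norm(emb_post, axis=1, keepdims=True)
    return (1.0 - np.sum(a * b, axis=1)).reshape(rows, cols)

# Hypothetical embeddings for a 2x3 grid of tiles:
rng = np.random.default_rng(0)
pre = rng.standard_normal((6, 128))
post = pre.copy()
post[0] *= -1  # simulate one strongly changed (burned) tile
m = distance_map(pre, post, rows=2, cols=3)
# m[0, 0] is ~2.0 (flipped embedding); the remaining entries are ~0.0
```

Thresholding or color-mapping such a distance map yields the burned-area visualizations shown in the demo below.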

Demo

Indication of the severely burned areas (white/black ash) near the center (slightly to the northeast) of the Mesa fire, based on the outputs of the model trained on the PlanetScope datasets. Pixels in yellow have a larger cosine distance between the two representation vectors (in the latent space) computed from the pre- and post-fire multispectral images.

Prefire - Postfire - Predicted changes based on the distance map

Mesa fire based on PlanetScope

Citation

If you find FireCLR useful in your research, please consider citing the following paper:

@inproceedings{fireclr2022,
  title = {Unsupervised Wildfire Change Detection based on Contrastive Learning},
  author = {Zhang, Beichen and Wang, Huiqi and Alabri, Amani and Bot, Karol and McCall, Cole and Hamilton, Dale and Růžička, Vít},
  booktitle = {Artificial {Intelligence} for {Humanitarian} {Assistance} and {Disaster} {Response} {Workshop}, 36th {Conference} on {Neural} {Information} {Processing} {Systems} ({NeurIPS} 2022), {New Orleans}, {USA}},
  month = nov,
  year = {2022},
  url = {https://arxiv.org/abs/2211.14654},
  doi = {10.48550/ARXIV.2211.14654}
}
