
SegLand: Discovering Novel Classes in Land Cover Mapping via Hybrid Semantic Segmentation Framework

Land-cover mapping is a vital application in Earth observation. As natural and human activities reshape the landscape, land-cover maps need to be updated rapidly. However, discovering newly appeared land-cover types that are absent from existing classification systems remains a non-trivial task, hindered by the varied scales of complex land objects and insufficient labeled data over wide-span geographic areas. To address these limitations, we propose a generalized few-shot segmentation-based framework, named SegLand, to update novel classes in high-resolution land-cover mapping.

SegLand was accepted by the CVPR 2024 L3D-IVU Workshop and 🚀won 1st place in the OpenEarthMap Land Cover Mapping Few-Shot Challenge🚀. See you at CVPR (Seattle, 17 June)!

Contact me at ashelee@whu.edu.cn

Our previous works:

  • Paraformer (L2HNet V2): accepted by CVPR 2024 (Highlight), a hybrid CNN-ViT framework for HR land-cover mapping using LR labels. Code
  • L2HNet V1: accepted by ISPRS P&RS in 2022, the low-to-high network for HR land-cover mapping using LR labels.
  • SinoLC-1: accepted by ESSD in 2023, the first 1-m-resolution national-scale land-cover map of China. Data
  • BuildingMap: accepted by IGARSS 2024 (Oral), identifying the function of every building in urban areas. Data

Training Instructions

To train and test SegLand on the contest dataset, follow these steps (illustrative command sketches for each step are given after the list):
  1. Dataset and project preprocessing
  • Replace 'YOUR_PROJECT_ROOT' in ./scripts/train_oem.sh with your POP project directory;
  • Download the OEM trainset, unzip it, and replace 'YOUR_PATH_FOR_OEM_TRAIN_DATA' in ./scripts/train_oem.sh;
  • Download the OEM testset, unzip it, and replace 'YOUR_PATH_FOR_OEM_TEST_DATA' in ./scripts/evaluate_oem_base.sh and ./scripts/evaluate_oem.sh. (The train.txt, val.txt, all_5shot_seed123.txt (the support-set list), and test.txt files are already set according to the released data list and need no modification.)
  2. Base class training and evaluation
  • Train the base model by running CUDA_VISIBLE_DEVICES=0 bash ./scripts/train_oem.sh; the model and the log file are stored in ./model_saved_base;
  • Evaluate the trained base model by running CUDA_VISIBLE_DEVICES=0 bash ./scripts/evaluate_oem_base.sh, replacing 'RESTORE_PATH' with your own saved checkpoint path; the output prediction maps and the log file are stored in ./output;
  3. Novel class updating and evaluation
  • Run python gen_new_samples_for_new_class.py to generate novel-class samples with the CutMix operation. The generated samples and their list are stored in 'YOUR_PATH_OF_CUTMIX_SAMPLES'; copy the samples to 'YOUR_PATH_FOR_OEM_TRAIN_DATA' and append the list to all_5shot_seed123.txt;
  • Update the trained base model by running CUDA_VISIBLE_DEVICES=0 bash ./scripts/ft_oem.sh, replacing 'RESTORE_PATH' with your own saved checkpoint path; the model and the log file are stored in ./model_saved_ft;
  • Evaluate the updated model by running CUDA_VISIBLE_DEVICES=0 bash ./scripts/evaluate_oem.sh, replacing 'RESTORE_PATH' with your own saved checkpoint path; the output prediction maps and the log file are stored in ./output;
  4. Output transformation and probability map fusion
  • Run python trans.py to transform the output maps into the format required by the competition; the results are stored in ./upload;
  • (Optional) If multiple probability outputs (in *.mat format) are generated, fuse them by running python fusemat.py, replacing each 'PATH_OF_PROBABILITY_MAP_*' with your own path (these maps are generated under ./output/prob).
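
The sketches below walk through the four steps with concrete commands. They are illustrative only: the example paths, checkpoint file names, and the sed-based placeholder substitution are assumptions, not something the repository prescribes.

A minimal sketch of step 1 (dataset and project preprocessing), assuming the placeholders appear literally in the scripts and using hypothetical local paths:

```bash
# Step 1 sketch: fill in the path placeholders (example paths are hypothetical).
PROJECT_ROOT=/home/user/SegLand      # your POP project directory
OEM_TRAIN=/data/oem/trainset         # unzipped OEM trainset
OEM_TEST=/data/oem/testset           # unzipped OEM testset

sed -i "s|YOUR_PROJECT_ROOT|${PROJECT_ROOT}|g" ./scripts/train_oem.sh
sed -i "s|YOUR_PATH_FOR_OEM_TRAIN_DATA|${OEM_TRAIN}|g" ./scripts/train_oem.sh
sed -i "s|YOUR_PATH_FOR_OEM_TEST_DATA|${OEM_TEST}|g" ./scripts/evaluate_oem_base.sh ./scripts/evaluate_oem.sh
```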
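A sketch of step 2 (base class training and evaluation); the checkpoint name best.pth under ./model_saved_base is an assumption, so use whatever file your training run actually produces:

```bash
# Step 2 sketch: train the base model, then evaluate it.
CUDA_VISIBLE_DEVICES=0 bash ./scripts/train_oem.sh            # model + log -> ./model_saved_base

# Point the evaluation script at your saved checkpoint (file name below is assumed).
sed -i "s|RESTORE_PATH|./model_saved_base/best.pth|g" ./scripts/evaluate_oem_base.sh
CUDA_VISIBLE_DEVICES=0 bash ./scripts/evaluate_oem_base.sh     # predictions + log -> ./output
```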
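A sketch of step 3 (novel class updating and evaluation); the CutMix sample directory, its list file name, and both checkpoint names are assumptions:

```bash
# Step 3 sketch: generate CutMix samples for the novel classes, fine-tune, and evaluate.
python gen_new_samples_for_new_class.py                        # writes samples + list to YOUR_PATH_OF_CUTMIX_SAMPLES

CUTMIX_DIR=/data/oem/cutmix_samples                            # hypothetical YOUR_PATH_OF_CUTMIX_SAMPLES
cp -r "${CUTMIX_DIR}"/* "${OEM_TRAIN}"/                        # copy generated samples into the trainset
cat "${CUTMIX_DIR}"/list.txt >> all_5shot_seed123.txt          # append the generated list (file name assumed)

sed -i "s|RESTORE_PATH|./model_saved_base/best.pth|g" ./scripts/ft_oem.sh
CUDA_VISIBLE_DEVICES=0 bash ./scripts/ft_oem.sh                # fine-tuned model + log -> ./model_saved_ft

sed -i "s|RESTORE_PATH|./model_saved_ft/best.pth|g" ./scripts/evaluate_oem.sh
CUDA_VISIBLE_DEVICES=0 bash ./scripts/evaluate_oem.sh          # predictions + log -> ./output
```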
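Finally, a sketch of step 4 (output transformation and probability map fusion); the probability map file names under ./output/prob are assumptions:

```bash
# Step 4 sketch: convert predictions to the submission format, then (optionally) fuse probability maps.
python trans.py                                                # submission-ready maps -> ./upload

# Optional fusion of multiple *.mat probability maps; replace the hypothetical file names
# with the ones actually generated under ./output/prob.
sed -i "s|PATH_OF_PROBABILITY_MAP_1|./output/prob/run1.mat|g" fusemat.py
sed -i "s|PATH_OF_PROBABILITY_MAP_2|./output/prob/run2.mat|g" fusemat.py
python fusemat.py
```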

Citation

@article{li2022breaking,
  title     = {Breaking the resolution barrier: A low-to-high network for large-scale high-resolution land-cover mapping using low-resolution labels},
  author    = {Li, Zhuohong and Zhang, Hongyan and Lu, Fangxiao and Xue, Ruoyao and Yang, Guangyi and Zhang, Liangpei},
  journal   = {ISPRS Journal of Photogrammetry and Remote Sensing},
  volume    = {192},
  pages     = {244--267},
  year      = {2022},
  publisher = {Elsevier}
}

@InProceedings{Li_2024_CVPR,
 author    = {Li, Zhuohong and Lu, Fangxiao and Zou, Jiaqi and Hu, Lei and Zhang, Hongyan},
 title     = {Generalized Few-Shot Meets Remote Sensing: Discovering Novel Classes in Land Cover Mapping via Hybrid Semantic Segmentation Framework},
 booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
 month     = {June},
 year      = {2024},
 pages     = {2744--2754}
}
