The project was developed as a graduation project for a computer science degree and has since become a playground for different AI projects.
This is the core part of the original project. The idea is to use two NDVI maps (from two different timestamps) to compare vegetation change and visualize a change map.
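For reference, NDVI is computed per pixel from the near-infrared and red bands. A minimal NumPy sketch (the function name and the epsilon guard are illustrative assumptions, not taken from the project's code):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Compute NDVI = (NIR - Red) / (NIR + Red) per pixel.

    `eps` guards against division by zero where NIR + Red == 0.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```

NDVI values fall in [-1, 1]; dense vegetation tends toward 1, bare soil and water toward 0 or below.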
Two architectures were used in the project:
Three different encoders were used:
| Model | Metrics | Confusion matrix |
| --- | --- | --- |
| VGG16 U-Net | | |
| VGG16 LinkNet | | |
| ResNet18 U-Net | | |
| ResNet18 LinkNet | | |
| EfficientNetB0 U-Net | | |
| EfficientNetB0 LinkNet | | |
| EfficientNetB6 U-Net | | |
Because it would be awkward to use the project only from the command line, a GUI application was also developed.
There are a couple of easy-to-use scripts for the development process, e.g. image cropping, dataset generation, and model training.
Takes two Landsat folders and generates six files:
- ndvi1.TIF - NDVI map from the first Landsat folder;
- ndvi_classification1.TIF - approximated classification of the first Landsat folder based on NDVI (see below for details);
- ndvi2.TIF - NDVI map from the second Landsat folder;
- ndvi_classification2.TIF - approximated classification of the second Landsat folder based on NDVI (see below for details);
- classes.TIF - deforestation classes (each pixel is a 5-dimensional vector);
- dmap.TIF - the actual deforestation map based on classes.TIF.
This script also adjusts and crops both images according to the coordinates stored in their files.
Usage:
python classify.py -fip "PATH_TO_LANDSAT_FOLDER" -sip "PATH_TO_LANDSAT_FOLDER" -op "PATH_TO_OUTPUT_FOLDER"
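The exact rule that turns the class rasters into dmap.TIF is not spelled out here. A minimal sketch of one plausible change rule, flagging pixels whose NDVI dropped by more than a fixed threshold between the two dates (the function name and threshold are illustrative assumptions):

```python
import numpy as np

def ndvi_drop_map(ndvi1, ndvi2, threshold=0.2):
    """Flag pixels whose NDVI dropped by more than `threshold`
    between the first and second timestamps (possible deforestation).

    Returns a binary uint8 mask: 1 = vegetation loss, 0 = no change.
    """
    drop = np.asarray(ndvi1, dtype=np.float64) - np.asarray(ndvi2, dtype=np.float64)
    return (drop > threshold).astype(np.uint8)
```

A real pipeline would also mask out clouds and water before thresholding, since both distort NDVI.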
Generates the actual dataset from the classify.py output. Two sub-directories, x_data and y_data, will be created in the output folder. Both will contain the same number of images; their shapes will be (N, 64, 64, 2) and (N, 64, 64, 1) respectively.
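The (N, 64, 64, 2) shape suggests the two NDVI rasters are stacked as channels and cut into 64×64 patches. A sketch of that tiling step, assuming non-overlapping patches with edge remainders dropped (the project's actual stride and padding are not stated):

```python
import numpy as np

def tile(stack, size=64):
    """Cut an (H, W, C) array into non-overlapping (size, size, C) patches,
    dropping any remainder at the right/bottom edges.

    Returns an array of shape (N, size, size, C).
    """
    h, w, _ = stack.shape
    patches = [
        stack[i:i + size, j:j + size, :]
        for i in range(0, h - size + 1, size)
        for j in range(0, w - size + 1, size)
    ]
    return np.stack(patches)
```

Stacking the two NDVI maps with `np.stack([ndvi1, ndvi2], axis=-1)` before tiling yields the 2-channel inputs; the label raster tiled the same way yields the 1-channel targets.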
Usage:
python dataset.py -ip "PATH_TO_CLASSIFY.PY_OUTPUT" -op "OUTPUT"
This script creates and trains a model with a number of callbacks and logging. The following parameters can be specified:
| Param | Short form | Required | Description |
| --- | --- | --- | --- |
| --x-data | -xd | ✅ | Path to x_data directory |
| --y-data | -yd | ✅ | Path to y_data directory |
| --x-data-t | -xdt | ✅ | Path to x_data directory (for test) |
| --y-data-t | -ydt | ✅ | Path to y_data directory (for test) |
| --backbone | | ✅ | Type of encoder. Available encoders: 'vgg16', 'vgg19', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'seresnet18', 'efficientnetb0', 'efficientnetb1', 'efficientnetb2' |
| --epochs | -e | | Number of epochs. Default is 25 |
| --architecture | -a | | Architecture of the model. Available architectures: unet, linknet |
Usage:
python train.py -xd "x_data/" -yd "y_data/" -xdt "data2/x_data/" -ydt "data2/y_data/" --backbone vgg16 -a linknet
Important notice: the train dataset is used entirely for training. The test dataset is split into test (80%) and validation (20%) subsets. The ModelCheckpoint, EarlyStopping and ReduceLROnPlateau callbacks are used, so the output directory will contain the trained models and training may stop early. After training, the OUTPUT directory will also contain a .csv file with the training history, an ARCHITECTURE-BACKBONE.png file with loss (+val_loss) and accuracy (+val_accuracy) graphs, and the confusion matrix of the model without central picks.
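The 80/20 test/validation split described above can be sketched as follows (the helper name, seed, and shuffling strategy are illustrative assumptions; the script may use a library utility such as scikit-learn's `train_test_split` instead):

```python
import numpy as np

def split_test_val(x, y, val_fraction=0.2, seed=42):
    """Shuffle the test set once, then carve off `val_fraction` of it
    for validation; the rest stays as the test subset."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = int(len(x) * val_fraction)
    val_idx, test_idx = idx[:n_val], idx[n_val:]
    return (x[test_idx], y[test_idx]), (x[val_idx], y[val_idx])
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing the seven encoder/architecture combinations in the table above.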