
Wind Turbine Detection via YOLOv7

Summary

Implementation of a transfer learning approach using the YOLOv7 framework to detect and rapidly quantify wind turbines in raw satellite imagery.

Figure 1: Turbine detections shown in a wind-producing region of the Southwest United States.

Overview & Problem Addressed

    Using raw LANDSAT and NAIP satellite imagery, a wind turbine object detection model was developed via transfer learning from the state-of-the-art YOLOv7 architecture, with the goal of automating on-shore U.S. wind turbine count estimation.
    Existing databases that monitor wind turbine development in the United States, such as the U.S. Wind Turbine Database, offer exceptional accuracy but suffer from poor temporal resolution (they are updated quarterly). Paired with sufficiently recent satellite imagery, this model can provide leading estimates of U.S. on-shore wind resources for investors, both foreign and domestic, and for government officials, and is especially valuable in regions of ongoing development.
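The transfer learning step follows the standard fine-tuning workflow of the upstream YOLOv7 repository: start from pretrained COCO weights and retrain on the custom turbine dataset described by `data.yaml`. A minimal sketch is below; flag values (image size, batch size, epochs) and paths are illustrative, not the exact settings used in this project.

```shell
# Clone the upstream YOLOv7 implementation used as the transfer-learning base.
git clone https://github.com/WongKinYiu/yolov7
cd yolov7
pip install -r requirements.txt

# Fine-tune from pretrained COCO weights on the custom turbine dataset.
# data.yaml defines the train/valid/test split paths and the class names.
python train.py --weights yolov7.pt --data ../notebooks/data.yaml \
    --img 640 640 --batch-size 16 --epochs 100 --name turbine-detector
```

Starting from pretrained weights rather than training from scratch lets the model reuse generic visual features, which matters here given the relatively small annotated turbine dataset.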

Model Performance

Figure 2: Model precision, recall, and mean Average Precision (mAP) as evaluated during training.

    The final trained model achieves 0.651 mean Average Precision (mAP) at 0.5 intersection-over-union (IoU), corresponding to a Mean Absolute Error (MAE) of 0.97 (~1 turbine) per image at inference. Note that the number of turbines in a given training image ranged from 0 to 36, and that the MAE increases significantly above 10 turbines per image. Of the 407 turbines in the test set, the model correctly detected 358, an 88% detection rate; however, it performed demonstrably better on smaller-scale imagery containing fewer turbines.
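Count-based metrics like the ones above can be computed from per-image turbine counts along these lines. This is a simplified sketch: the counts below are made up for illustration, and the "detection rate" here caps detections at the true count per image, whereas the project's evaluation matches detections to ground truth via IoU.

```python
def count_error_metrics(true_counts, pred_counts):
    """MAE of per-image turbine counts and an overall count-based detection rate."""
    assert len(true_counts) == len(pred_counts)
    abs_errors = [abs(t - p) for t, p in zip(true_counts, pred_counts)]
    mae = sum(abs_errors) / len(abs_errors)
    # Detection rate: detected turbines (capped at the true count per image)
    # over the total number of true turbines across the test set.
    detected = sum(min(t, p) for t, p in zip(true_counts, pred_counts))
    return mae, detected / sum(true_counts)

# Illustrative example with made-up per-image counts:
true_counts = [3, 0, 12, 5]
pred_counts = [3, 1, 9, 5]
mae, rate = count_error_metrics(true_counts, pred_counts)
print(f"MAE: {mae:.2f} turbines/image, detection rate: {rate:.0%}")
# → MAE: 1.00 turbines/image, detection rate: 85%
```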
    To address this, future development could include substantial training dataset augmentation: increased mosaic augmentation to improve small-object detection rates, the addition of off-shore wind turbine images, or biome classification to identify backgrounds in which the model performs more poorly. New classes (e.g., solar arrays) could also be added for use cases such as estimating total renewable generation capacity.

Project Organization

├── LICENSE
├── README.md                                                <- Top-level README for developers using this project.
│
├── notebooks                                                <- Notebooks folder.
│   ├── 1.0-TurbineDetection-data-wrangling.ipynb            <- Imagery data wrangling & EDA notebook.
│   ├── 2.0-TurbineDetection-traning-evaluation.ipynb        <- Google Colab model training notebook.
│   ├── 3.0-TurbineDetection-inference-visualization.ipynb   <- Model inference and evaluation notebook.
│   ├── data.yaml                                            <- YAML configuration file for the custom model.
│   ├── detect.py
│   ├── export.py
│   ├── hubconf.py
│   ├── inference.py
│   ├── test.py
│   ├── train.py
│   ├── requirements.txt                                     <- Required dependencies.
│   ├── models                                               <- Additional models and experimental features.
│   └── utils                                                <- Additional utility functions.
│
├── data                                                     <- Data and results folder.
│   ├── cleaned                                              <- Cleaned/augmented image data.
│   │   ├── train                                            <- Training data split.
│   │   │   ├── labels
│   │   │   └── images
│   │   ├── test                                             <- Test data split.
│   │   │   ├── labels
│   │   │   └── images
│   │   └── valid                                            <- Validation data split.
│   │       ├── labels
│   │       └── images
│   ├── raw                                                  <- Raw annotated image data.
│   │   ├── images
│   │   └── labels
│   └── results                                              <- Model metrics and inference images.
│       ├── detections                                       <- Output model inference images.
│       └── metrics                                          <- Model performance metrics.
│
├── reports                                                  <- Generated analysis as PPTX.
│   └── TurbineDetection_SlideDeck.pptx
│
└── src                                                      <- Source code from notebooks developed for this project.
    ├── 1.0-TurbineDetection-data-wrangling.py
    ├── 2.0-TurbineDetection-traning-evaluation.py
    └── 3.0-TurbineDetection-inference-visualization.py

Built With

Python
Jupyter Notebook
Google Colab
PyTorch
scikit-learn
Pandas

Contact

Noah Vriese
Email: noah@datawhirled.com
Github: nvriese1
LinkedIn: noah-vriese
Facebook: noah.vriese
Twitter: @nvriese

Acknowledgements

WongKinYiu: YOLOv7 implementation
License: MIT
