RSNA Bone Age - Artifact and Confounder removal by Hand Segmentation


This repository contains three Deep Learning approaches to segment hands in the RSNA Pediatric Bone Age Dataset. It is intended to remove potential artifacts from scanning or department-specific markings that could disturb or bias downstream analysis or learning. In all models, an array of data augmentations was employed to cope with challenges such as white borders from scanning, boxes, intensity gradients, and inverted intensities; an illustrative sketch of such an augmentation stage follows below.
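As an illustration only (the exact pipeline used in this repository may differ), white scanning borders and inverted intensities can be simulated with torchvision:

import torchvision.transforms as T

# Illustrative augmentation pipeline (assumed, not the repository's exact one):
# simulates white scanning borders, inverted intensities, and intensity shifts.
augment = T.Compose([
    T.RandomApply([T.Pad(padding=20, fill=255)], p=0.3),  # white border from scanning
    T.RandomInvert(p=0.3),                                # inverted intensities
    T.ColorJitter(brightness=0.3, contrast=0.3),          # rough proxy for gradients
    T.RandomRotation(degrees=10),                         # small pose variations
    T.Resize((512, 512)),
])

In an actual segmentation setting, the geometric transforms would of course have to be applied to the image and the mask jointly.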

The ground truth data is available on Zenodo.

On a manually crafted test set drawn from the RSNA training set, we achieve a Dice similarity coefficient of $>0.99$. The models were also qualitatively validated on the Los Angeles Digital Hand Atlas and private datasets.
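For reference, the Dice similarity coefficient of a predicted mask $A$ and a ground-truth mask $B$ is $2\,|A \cap B| / (|A| + |B|)$. A minimal NumPy sketch (the helper dice_score is ours, not part of this repository):

import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    # Dice = 2 * |A ∩ B| / (|A| + |B|) for binary masks.
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))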

FastSurferCNN

The main model is a semantic segmentation model based on FastSurferCNN (Henschel et al., 2020).

[Figure: FSCNN architecture drawing]

The model is rather lightweight and can therefore run in near real-time without GPU acceleration. It was trained on the predictions of the other two models.

Test models:

python FSCNN/predict.py \
    --checkpoint=/path/to/checkpoint.ckpt \
    --input=/path/to/input/ \
    --output=/path/to/target \
    --input_size=512 \
    --use_gpu

The input can be either a directory containing the image files or a single file.

Train / fine-tune:

python FSCNN/train_model.py \
    --train_path=/path/to/train/dataset \
    --val_path=/path/to/val/dataset \
    --size=512

Model training can be configured using the YML files in FSCNN/configs. Note that training will generate pre-computed/cached files containing the loss weights. Input images are expected to be encoded as RGBA, where the alpha channel is the target mask and the color information is ignored.
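For example, such a training sample could be assembled with Pillow along the following lines (a sketch with placeholder file names; the radiograph is simply replicated into the RGB channels, since the color information is ignored anyway):

from PIL import Image

# Pack a grayscale X-ray and its binary mask into one RGBA training image.
# All bands must have the same size; the alpha channel carries the target mask.
xray = Image.open("hand.png").convert("L")       # grayscale radiograph
mask = Image.open("hand_mask.png").convert("L")  # binary mask, 0/255

rgba = Image.merge("RGBA", (xray, xray, xray, mask))
rgba.save("train_sample.png")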

By default, logs are saved to run.log. To specify a different path, run the script with the LOG_FILE environment variable:

$ LOG_FILE=<path/to/log_file.txt> python train_model.py [...]

Efficient-UNet

As a second approach, a semantic segmentation model based on an Efficient-UNet was used.

Test models:

python UNet/predict.py \
    --model=/path/to/checkpoint.ckpt \
    --input=/path/to/input/ \
    --output=/path/to/target \
    --input_size=512 \
    --use_gpu

Train / fine-tune:

python UNet/main.py

Run the script with the --help flag to list the training options.

TensorMask

Here, TensorMask, an instance segmentation model implemented in Detectron2, was used. The models were trained in Colab, so the requirements are specified in the corresponding notebooks.
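A hypothetical inference sketch (not taken from this repository), assuming Detectron2 plus its TensorMask project (detectron2/projects/TensorMask) are installed; all paths are placeholders:

import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from tensormask import add_tensormask_config  # shipped with the TensorMask project

cfg = get_cfg()
add_tensormask_config(cfg)                       # register TensorMask config keys
cfg.merge_from_file("/path/to/tensormask_config.yaml")
cfg.MODEL.WEIGHTS = "/path/to/model_final.pth"

predictor = DefaultPredictor(cfg)
image = cv2.imread("/path/to/xray.png")          # BGR, as Detectron2 expects
outputs = predictor(image)
hand_masks = outputs["instances"].pred_masks     # one boolean mask per detected hand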

Citation

tba
