
Self-supervised pre-trained weights on TCGA

@Jeffkang-94 released this 10 Apr 03:21 · 9 commits to main since this release

Benchmarking Self-Supervised Learning on Diverse Pathology Datasets

We conduct the largest-scale study to date of SSL pre-training on pathology image data. Our study covers the 4 representative SSL methods listed below, evaluated on diverse downstream tasks. We establish that large-scale, domain-aligned pre-training in pathology consistently outperforms ImageNet pre-training.

Pre-trained weights

  1. bt_rn50_ep200.torch: ResNet50 pre-trained using Barlow Twins
  2. mocov2_rn50_ep200.torch: ResNet50 pre-trained using MoCoV2
  3. swav_rn50_ep200.torch: ResNet50 pre-trained using SwAV
  4. dino_small_patch_${patch_size}_ep200.torch: ViT-Small/${patch_size} pre-trained using DINO

md5sum

| Weight | MD5SUM |
| --- | --- |
| bt_rn50_ep200.torch | e5621a2350d4023b78870fd75dc27862 |
| mocov2_rn50_ep200.torch | 54f7a12b63922895face4ef32c370c5e |
| swav_rn50_ep200.torch | b817e5e2875e7097d8bb650168aa4761 |
| dino_small_patch_16_ep200.torch | 8dbbdae7d6413d58bef6aa90c41699dc |
| dino_small_patch_8_ep200.torch | 5b6d6262fb87284fa5b97d171044153a |
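A downloaded checkpoint can be verified against the checksums above with a short helper (standard-library only; the file names below are the ones listed in this release):

```python
import hashlib


def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Example: verify a downloaded weight file against the table above.
# assert md5sum("bt_rn50_ep200.torch") == "e5621a2350d4023b78870fd75dc27862"
```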

Image statistics

We used the following statistics for image intensity standardization (normalization):

mean: [ 0.70322989, 0.53606487, 0.66096631 ]
std: [ 0.21716536, 0.26081574, 0.20723464 ]

These values correspond to the R, G, and B channels respectively, and were estimated from 10% of the training samples.