Tingting Zheng, Kui Jiang, Hongxun Yao
Harbin Institute of Technology
- [2024/11/01] We provide the training and testing code, along with the pre-trained weights on the BRCAS dataset. Features were extracted with a ResNet18 (512-dimensional output) pre-trained on ImageNet. Our results can be reproduced with the preprocessed features below. We will further optimize the code and release the complete pre-trained weights.
- [2024/05/30] We provide data preprocessing methods and pre-processed features.
- [2024/03/30] The repo is created
- Linux (Tested on Ubuntu 18.04)
- NVIDIA GPU (Tested on 3090)
torch
torchvision
numpy
h5py
scipy
scikit-learn
pandas
nystrom_attention
admin_torch
We use the same data preprocessing configuration as DSMIL.
We use CLAM to preprocess CAMELYON16 at 20x magnification. For your own dataset, modify and run create_patches_fp_Lung.py and extract_features_fp_LungRes18Imag.py.
The data used for training, validation and testing are expected to be organized as follows:
DATA_ROOT_DIR/
├── DATASET_1_DATA_DIR/
│   ├── pt_files/
│   │   ├── slide_1.pt
│   │   ├── slide_2.pt
│   │   └── ...
│   └── h5_files/
│       ├── slide_1.h5
│       ├── slide_2.h5
│       └── ...
├── DATASET_2_DATA_DIR/
│   ├── pt_files/
│   │   ├── slide_a.pt
│   │   ├── slide_b.pt
│   │   └── ...
│   └── h5_files/
│       ├── slide_a.h5
│       ├── slide_b.h5
│       └── ...
└── ...
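Under this layout, each `.pt` file is assumed to hold one slide's bag of patch features as an `[N, D]` tensor (D = 512 for the ResNet18 extractor). A minimal sketch of writing and reading such a bag (with a dummy tensor and a temporary path standing in for a real slide):

```python
import os
import tempfile
import torch

# Hypothetical slide file; real .pt files come from the feature-extraction step.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "slide_1.pt")

# Assumed format: an [N, 512] float tensor, one row per patch.
torch.save(torch.randn(100, 512), path)

bag = torch.load(path)
print(bag.shape)  # torch.Size([100, 512])
```

The matching `h5_files` entries additionally store patch coordinates alongside the features.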
We provide part of the extracted features to reproduce our results.
| Model | Download Link |
| --- | --- |
| ImageNet ResNet50 Testing | Download |
| ImageNet ResNet50 Training and validation | Download |
The preprocessed features, as well as the training, validation, and testing splits, are all derived from MMIL-Transformer.
| Model | Download Link |
| --- | --- |
| SimCLR ResNet18 | Download |
The preprocessed features, as well as the training, validation, and testing splits, are all derived from MMIL-Transformer.
| Model | Download Link |
| --- | --- |
| ImageNet supervised ResNet18 | Download |
| SSL ViT-S/16 | Download |
The preprocessed features, as well as the training, validation, and testing splits, are all derived from ACMIL.
We appreciate their contributions and outstanding work for the entire community.
@inproceedings{zheng2024dynamic,
  title={SMILE: Self-Motivated Multi-instance Learning for Whole Slide Image Classification},
  author={Zheng, Tingting and Jiang, Kui and Yao, Hongxun},
  year={2024}
}