Code for paper "SMILE: Self-Motivated Multi-instance Learning for Whole Slide Image Classification" submission under review.

SMILE

SMILE: Self-Motivated Multi-instance Learning for Whole Slide Image Classification

Tingting Zheng, Kui Jiang, Hongxun Yao

Harbin Institute of Technology

News

  • [2024/11/01] We release the training and testing code, along with pre-trained weights for the BRACS dataset, based on 512-dimensional features from an ImageNet-pretrained ResNet18. Our results can be reproduced with the preprocessed features provided below. We will further optimize the code and release the complete pre-trained weights.
  • [2024/05/30] We provide the data preprocessing methods and extracted features.
  • [2024/03/30] The repo is created.

Pre-requisites:

  • Linux (Tested on Ubuntu 18.04)
  • NVIDIA GPU (Tested on 3090)

Dependencies:

torch
torchvision
numpy
h5py
scipy
scikit-learn
pandas
nystrom_attention
admin_torch
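The dependencies above can be installed with pip; a minimal sketch (package names taken from the list above, unpinned — pin versions as needed for your CUDA setup):

```shell
# Install all listed dependencies in one step
pip install torch torchvision numpy h5py scipy scikit-learn pandas \
    nystrom_attention admin_torch
```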

Usage

Dataset

Preprocess TCGA Dataset

We use the same configuration of data preprocessing as DSMIL.

Preprocess CAMELYON16 Dataset

We use CLAM to preprocess CAMELYON16 at 20× magnification. For your own dataset, you can modify and run create_patches_fp_Lung.py and extract_features_fp_LungRes18Imag.py.
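For reference, stock CLAM preprocessing is invoked roughly as below; the renamed scripts in this repo are modified copies, so their exact flags may differ, and all paths here are placeholders:

```shell
# Segment tissue and extract patch coordinates (CLAM-style)
python create_patches_fp_Lung.py --source WSI_DIR --save_dir PATCH_DIR \
    --patch_size 256 --seg --patch --stitch

# Extract per-patch features with an ImageNet-pretrained ResNet18
python extract_features_fp_LungRes18Imag.py --data_h5_dir PATCH_DIR \
    --data_slide_dir WSI_DIR --feat_dir FEAT_DIR --batch_size 512 --slide_ext .svs
```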

The data used for training, validation and testing are expected to be organized as follows:

DATA_ROOT_DIR/
    ├── DATASET_1_DATA_DIR/
    │   ├── pt_files
    │   │   ├── slide_1.pt
    │   │   ├── slide_2.pt
    │   │   └── ...
    │   └── h5_files
    │       ├── slide_1.h5
    │       ├── slide_2.h5
    │       └── ...
    ├── DATASET_2_DATA_DIR/
    │   ├── pt_files
    │   │   ├── slide_a.pt
    │   │   ├── slide_b.pt
    │   │   └── ...
    │   └── h5_files
    │       ├── slide_a.h5
    │       ├── slide_b.h5
    │       └── ...
    └── ...
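Given this layout, the slide IDs of a dataset can be enumerated by pairing the pt_files and h5_files entries. A minimal sketch (the helper `list_slides` is ours for illustration, not part of the repo):

```python
from pathlib import Path
import tempfile

def list_slides(dataset_dir):
    """Return slide IDs that have both a .pt feature file and a .h5 file."""
    pt = {p.stem for p in (Path(dataset_dir) / "pt_files").glob("*.pt")}
    h5 = {p.stem for p in (Path(dataset_dir) / "h5_files").glob("*.h5")}
    return sorted(pt & h5)

# Demo on a throwaway directory mimicking the layout above.
root = Path(tempfile.mkdtemp()) / "DATASET_1_DATA_DIR"
(root / "pt_files").mkdir(parents=True)
(root / "h5_files").mkdir()
for sid in ("slide_1", "slide_2"):
    (root / "pt_files" / f"{sid}.pt").touch()
    (root / "h5_files" / f"{sid}.h5").touch()
print(list_slides(root))  # ['slide_1', 'slide_2']
```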

Dataset Preparation

We provide part of the extracted features to reproduce our results.

Camelyon16 Dataset (20× magnification)

| Model | Download Link |
| --- | --- |
| ImageNet ResNet50 (testing) | Download |
| ImageNet ResNet50 (training and validation) | Download |

The preprocessed features, as well as the training, validation, and testing splits, are all derived from MMIL-Transformer.

TCGA Dataset (20× magnification)

| Model | Download Link |
| --- | --- |
| SimCLR ResNet18 | Download |

The preprocessed features, as well as the training, validation, and testing splits, are all derived from MMIL-Transformer.

BRACS Dataset (10× magnification)

| Model | Download Link |
| --- | --- |
| ImageNet supervised ResNet18 | Download |
| SSL ViT-S/16 | Download |

The preprocessed features, as well as the training, validation, and testing splits, are all derived from ACMIL.

We appreciate their outstanding contributions to the community.

Cite this work

@inproceedings{zheng2024dynamic,
    title={SMILE: Self-Motivated Multi-instance Learning for Whole Slide Image Classification},
    author={Zheng, Tingting and
            Jiang, Kui and
            Yao, Hongxun},
    year={2024}
}
