TetrisNet: Tetris Kernels for Sketch Recognition and Beyond



This is the official implementation of "TetrisNet: Tetris Kernels for Sketch Recognition and Beyond".

Folder Overview

CAN_yang: Code and data for handwritten mathematical expression recognition on the CROHME dataset.

Swin-Transformer-Object-Detection: Code and data for instance segmentation on the COCO dataset.

MMSegmentation: Code and data for vessel segmentation on the DRIVE dataset.

TetrisNet: Code and data for sketch recognition on the QuickDraw-414k and TU-Berlin datasets.

Prerequisites

The code is built with the following libraries:

CAN_yang

Datasets

Download the CROHME dataset from BaiduYun (extraction code: 1234) and place it in datasets/.

The HME100K dataset can be downloaded from the official website HME100K.

Training

Check the config file config.yaml and train with the CROHME dataset:

python train.py --dataset CROHME

By default the batch size is 8, so you may need a GPU with 32 GB of memory to train the model.

Testing

Set checkpoint (the pretrained model path) in the config file config.yaml, then test on the CROHME dataset:

python inference.py --dataset CROHME

Note that the test dataset path is set in inference.py.

Note: Tetris Kernels are applied directly to the DenseNet backbone (CAN_yang/models/densenet.py).
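The repository's actual kernel construction lives in CAN_yang/models/densenet.py and is not reproduced here. As a rough, hypothetical illustration of the idea (my assumption, not the official code): a Tetris kernel can be viewed as a 3x3 convolution kernel masked to a four-cell tetromino shape, so each output position aggregates only a tetromino-shaped neighborhood:

```python
# Hypothetical sketch (not the repository's code): a "Tetris kernel"
# modeled as a 3x3 kernel masked to a 4-cell tetromino shape.
T_MASK = [[1, 1, 1],
          [0, 1, 0],
          [0, 0, 0]]  # T-shaped tetromino

def tetris_conv2d(x, weight, mask=T_MASK):
    """Valid cross-correlation of a 2-D list x with a tetromino-masked 3x3 kernel."""
    h, w = len(x), len(x[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for i in range(h - 2):
        for j in range(w - 2):
            # Only the four taps where mask == 1 contribute.
            out[i][j] = sum(
                x[i + di][j + dj] * weight[di][dj] * mask[di][dj]
                for di in range(3) for dj in range(3)
            )
    return out

x = [[4 * r + c for c in range(4)] for r in range(4)]   # 4x4 ramp input
ones = [[1.0] * 3 for _ in range(3)]                     # all-ones weights
out = tetris_conv2d(x, ones)
print(out[0][0])  # 8.0  (taps 0 + 1 + 2 + 5)
```

In the actual model, several differently shaped tetromino masks would be learned or combined; this sketch fixes a single T-shape purely to show the masking mechanism.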

Swin-Transformer-Object-Detection

Installation

Please refer to get_started.md for installation and dataset preparation.

Training

CUDA_VISIBLE_DEVICES=0,1,3,4 ./tools/dist_train.sh configs/swin/cascade_mask_rcnn_swin_small_patch4_window7_mstrain_480-800_giou_4conv1f_adamw_3x_cocov2.py 4 --work-dir tetrisnet_bench/tetrisnet  

Note: Tetris Kernels are applied directly to the Swin Transformer backbone (Swin-Transformer-Object-Detection/mmdet/models/backbones/swin_transformer.py).

TetrisNet

Training

Modify the config file (e.g. configs/swin_image.yaml) to choose or define the model to train.

Distributed Training

CUDA_VISIBLE_DEVICES=0,1,2,3 torchpack dist-run -np 4 python train_img_single.py configs/swin_image.yaml --run-dir nbenchmark/swin_sce # 4 gpus
CUDA_VISIBLE_DEVICES=2,3 torchpack dist-run -np 2 python train_img2.py configs/quickdraw/sd3b1_image_stroke.yaml --run-dir nbenchmark/trans/resnet50_quickdraw_image_stroke_sd3b1_norm/

Single GPU Training

python train_img_single.py configs/swin_image.yaml --run-dir nbenchmark/swin_sce --distributed False
python train_img2.py configs/quickdraw/sd3b1_image_stroke.yaml --run-dir nbenchmark/trans/resnet50_quickdraw_image_stroke_sd3b1_norm/ --distributed False
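The commands above toggle between distributed and single-GPU runs with a --distributed flag. The README does not show how the training scripts parse it, so the snippet below is only a hypothetical sketch of such a toggle (the flag and path names come from the commands above; the parsing logic is my assumption, not the repository's code):

```python
import argparse

# Hypothetical sketch of the --distributed toggle used in the commands
# above; the real parsing in train_img_single.py may differ.
def parse_args(argv=None):
    p = argparse.ArgumentParser()
    p.add_argument("config", help="YAML config, e.g. configs/swin_image.yaml")
    p.add_argument("--run-dir", required=True,
                   help="directory for logs and checkpoints")
    p.add_argument("--distributed", default="True",
                   help="pass 'False' to force single-GPU training")
    args = p.parse_args(argv)
    # The flag arrives as a string on the command line; coerce to bool.
    args.distributed = args.distributed.lower() != "false"
    return args

args = parse_args(["configs/swin_image.yaml", "--run-dir",
                   "nbenchmark/swin_sce", "--distributed", "False"])
print(args.distributed)  # False
```

Coercing the string explicitly avoids the common argparse pitfall where `type=bool` treats any non-empty string, including "False", as truthy.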

Citation

Issues

If you have any problems, feel free to open an issue.
