This repository contains the PyTorch implementation of the following paper: Stoimchev, M., Ivanovska, M., Štruc, V., "Learning to Combine Local and Global Image Information for Contactless Palmprint Recognition". (This is an initial version; the code will be updated.)
- Python3
- PyTorch
- Torchvision
- Numpy
- Matplotlib
- Pillow / PIL
- imgaug
- scikit-learn
- First, clone the repository:
git clone https://github.com/Marjan1111/tpa-cnn.git
- Create a virtual environment via conda:
conda create -n tpa python=3.9
- Activate the virtual environment:
conda activate tpa
- Install the dependencies:
pip install -r requirements.txt
To list the arguments, run the following command:
python main.py -h
To train a model, run, for example:
python main.py \
--backbone Vgg \
--data IITD \ # switch to CASIA to train on the CASIA dataset
--palm_train left \ # note: if you choose left for palm_train, choose right for palm_test, and vice versa
--palm_test right \
--n_epochs 100 \
--num_trainable 10 \
--metric_head arc_margin \
--patches "[75, 1, 0, 30]" \ # quoted so the shell passes it as a single argument
--lr_centers 0.5 \
--alpha 0.001 \
--save_path saved_models \
--model_type Vgg_16
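The flags above map onto an argument parser roughly like the sketch below. This mirrors only the flags shown in the command; the types, choices, and defaults are assumptions for illustration, not the repository's actual definitions:

```python
import argparse
import ast

def build_parser():
    # Sketch of a parser matching the training flags listed above.
    # All types/defaults are assumptions, not the repo's real code.
    parser = argparse.ArgumentParser(description="TPA-CNN training (sketch)")
    parser.add_argument("--backbone", default="Vgg")
    parser.add_argument("--data", choices=["IITD", "CASIA"], default="IITD")
    parser.add_argument("--palm_train", choices=["left", "right"], default="left")
    parser.add_argument("--palm_test", choices=["left", "right"], default="right")
    parser.add_argument("--n_epochs", type=int, default=100)
    parser.add_argument("--num_trainable", type=int, default=10)
    parser.add_argument("--metric_head", default="arc_margin")
    # "[75, 1, 0, 30]" arrives as one quoted string; parse it into a list.
    parser.add_argument("--patches", type=ast.literal_eval,
                        default=[75, 1, 0, 30])
    parser.add_argument("--lr_centers", type=float, default=0.5)
    parser.add_argument("--alpha", type=float, default=0.001)
    parser.add_argument("--save_path", default="saved_models")
    parser.add_argument("--model_type", default="Vgg_16")
    return parser

args = build_parser().parse_args(["--data", "CASIA",
                                  "--patches", "[75, 1, 0, 30]"])
print(args.data, args.patches)  # CASIA [75, 1, 0, 30]
```

Parsing `--patches` with `ast.literal_eval` is one way to accept a bracketed list as a single command-line token, which is why the value is quoted in the example command.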
To start the all-vs-all evaluation protocol, run the following command:
python test.py
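The core idea of an all-vs-all protocol can be sketched as follows (the function and variable names here are illustrative, not the repository's actual API): every embedding is compared against every other embedding, pairs sharing an identity label yield genuine scores, and pairs with different labels yield impostor scores.

```python
import numpy as np

def all_vs_all_scores(embeddings, labels):
    """Cosine-similarity all-vs-all comparison (illustrative sketch).

    embeddings: (N, D) array of feature vectors
    labels:     (N,) array of identity labels
    Returns (genuine_scores, impostor_scores).
    """
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb @ emb.T                                     # (N, N) cosine similarities
    same = labels[:, None] == labels[None, :]              # same-identity mask
    upper = np.triu(np.ones_like(sims, dtype=bool), k=1)   # each pair once, no self-match
    return sims[same & upper], sims[~same & upper]

# Tiny usage example with made-up 2-D embeddings.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
labels = np.array([0, 0, 1])
genuine, impostor = all_vs_all_scores(emb, labels)
print(genuine.shape, impostor.shape)  # (1,) (2,)
```

From the two score distributions, verification metrics such as EER or ROC curves can then be computed.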
datasets
├── IITD_ROI
│   └── Segmented
│       ├── Left
│       │   ├── 001_1.bmp
│       │   ├── ...
│       │   └── 230_5.bmp
│       └── Right
│           ├── 001_1.bmp
│           ├── ...
│           └── 230_5.bmp
└── CASIA_ROI
    ├── 0001_m_l_01.jpg
    ├── ...
    └── 0312_m_r_11.jpg
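Judging by the listing above, IITD filenames appear to follow `<subject>_<sample>.bmp` and CASIA filenames `<subject>_<gender>_<hand>_<sample>.jpg`; this interpretation is an assumption inferred from the examples, not documented by the repository. A small parsing sketch:

```python
import re

# Assumed filename conventions, inferred from the directory listing:
#   IITD:  <subject>_<sample>.bmp                  e.g. 001_1.bmp
#   CASIA: <subject>_<gender>_<hand>_<sample>.jpg  e.g. 0001_m_l_01.jpg
IITD_RE = re.compile(r"^(\d+)_(\d+)\.bmp$")
CASIA_RE = re.compile(r"^(\d+)_([mf])_([lr])_(\d+)\.jpg$")

def parse_iitd(name):
    m = IITD_RE.match(name)
    return {"subject": m.group(1), "sample": m.group(2)}

def parse_casia(name):
    m = CASIA_RE.match(name)
    return {"subject": m.group(1), "gender": m.group(2),
            "hand": m.group(3), "sample": m.group(4)}

print(parse_iitd("001_1.bmp"))         # {'subject': '001', 'sample': '1'}
print(parse_casia("0001_m_l_01.jpg"))  # {'subject': '0001', 'gender': 'm', 'hand': 'l', 'sample': '01'}
```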