
Diagonalwise Refactorization: 15x speedup Depthwise Convolutions #3908

Open
AlexeyAB opened this issue Sep 12, 2019 · 1 comment
Labels: ToDo RoadMap


@AlexeyAB (Owner)

Use diagonalwise convolution instead of depthwise convolution, or use the TensorFlow depthwise convolution implementation.

Diagonalwise Refactorization: An Efficient Training Method for Depthwise Convolutions

[Figure from the paper: depthwise filters rearranged into a diagonal weight matrix for a single standard convolution]

Depthwise convolutions provide significant performance benefits owing to the reduction in both parameters and mult-adds. However, training depthwise convolution layers with GPUs is slow in current deep learning frameworks because their implementations cannot fully utilize the GPU capacity. To address this problem, in this paper we present an efficient method (called diagonalwise refactorization) for accelerating the training of depthwise convolution layers. Our key idea is to rearrange the weight vectors of a depthwise convolution into a large diagonal weight matrix so as to convert the depthwise convolution into one single standard convolution, which is well supported by the cuDNN library that is highly-optimized for GPU computations. We have implemented our training method in five popular deep learning frameworks. Evaluation results show that our proposed method gains 15.4× training speedup on Darknet, 8.4× on Caffe, 5.4× on PyTorch, 3.5× on MXNet, and 1.4× on TensorFlow, compared to their original implementations of depthwise convolutions.
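To make the key idea concrete, here is a minimal PyTorch sketch of the refactorization described in the abstract. It scatters the per-channel depthwise filters onto the channel diagonal of a standard-convolution weight tensor and checks that one standard convolution reproduces the depthwise result. The shapes and variable names are illustrative assumptions, not taken from the paper's released code:

```python
import torch
import torch.nn.functional as F

C, k = 8, 3                       # channels, kernel size (illustrative values)
x = torch.randn(1, C, 16, 16)     # example input
w_dw = torch.randn(C, 1, k, k)    # depthwise weights: one k x k filter per channel

# Reference: native depthwise convolution (groups = C).
y_dw = F.conv2d(x, w_dw, padding=k // 2, groups=C)

# Diagonalwise refactorization: place the depthwise filters on the
# channel diagonal of a (C, C, k, k) standard-convolution weight tensor;
# all off-diagonal entries are zero.
w_diag = torch.zeros(C, C, k, k)
w_diag[torch.arange(C), torch.arange(C)] = w_dw[:, 0]

# One standard convolution (groups = 1), the case cuDNN optimizes well.
y_std = F.conv2d(x, w_diag, padding=k // 2)

print(torch.allclose(y_dw, y_std, atol=1e-5))  # True
```

The diagonal weight matrix performs redundant mult-adds on zeros, but per the abstract the trade-off pays off on GPUs because cuDNN's standard-convolution kernels utilize the hardware far better than the frameworks' native depthwise implementations, yielding the reported training speedups (15.4x on Darknet).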
