There is a mismatch of the sizes of the feature maps #24

Open
SwordHolderSH opened this issue Apr 20, 2023 · 3 comments

Comments

@SwordHolderSH

 File "D:\anaconda3\envs\mypytorch\lib\site-packages\torch\nn\modules\module.py", line 1488, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\Ubuntu\pyproject\3Dface_commpare\SADRNet-main\src\model\modules.py", line 540, in forward
    out += identity
RuntimeError: The size of tensor a (129) must match the size of tensor b (128) at non-singleton dimension 3

In SADRNet-main\src\model\SADRNv2.py, class SADRNv2, the input to layer0 has size [1, 3, 256, 256], while its output has size [1, 16, 255, 255].

As a result, the feature-map sizes no longer match at the out += identity step shown in the traceback above.

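A quick way to see where the off-by-one enters is to push a dummy tensor through a convolution configured the way the conv4x4 helper further down describes (kernel 4, stride 2, padding 3, circular padding) and compare the result with the standard Conv2d output-size formula. This is only an illustrative sketch, not the actual SADRNet layer0 definition:

import torch
import torch.nn as nn

# Illustrative only: parameters taken from the conv4x4 docstring further down
# (kernel 4, stride 2, padding 3, circular padding), not the real layer0.
conv = nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=3,
                 bias=False, padding_mode='circular')
x = torch.randn(1, 3, 256, 256)
print(conv(x).shape)

# Standard Conv2d output-size formula for comparison:
#   H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
h_out = (256 + 2 * 3 - 1 * (4 - 1) - 1) // 2 + 1
print(h_out)  # 130
# A PyTorch build that applies the full circular padding matches this number;
# one that pads less (what the if/else workaround later in the thread appears to
# compensate for) yields a smaller map, and the shapes then stop lining up at the
# residual addition.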

@chenhao-user

Hello, did you solve this problem? I have the same error.

@SwordHolderSH (Author) commented May 31, 2023

> Hello, did you solve this problem? I have the same error.

This is because different versions of PyTorch handle convolutional kernel sizes and padding differently when computing output sizes. I worked around it by choosing the padding with an if ... else:

import torch.nn as nn


class ConvTranspose2d_BN_AC2(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=4, stride=1, activation=nn.ReLU(inplace=True)):
        super(ConvTranspose2d_BN_AC2, self).__init__()
        if stride % 2 == 0:
            # Even stride: a plain transposed conv with symmetric padding
            # (kernel 4, padding 1) doubles the spatial size.
            self.deconv = nn.ConvTranspose2d(in_channels=in_channels, out_channels=out_channels,
                                             kernel_size=kernel_size, stride=stride,
                                             padding=(kernel_size - 1) // 2, bias=False)
        else:
            # Odd stride: pad asymmetrically (2 on left/top, 1 on right/bottom)
            # so the transposed conv keeps the input's spatial size.
            self.deconv = nn.Sequential(
                nn.ConstantPad2d((2, 1, 2, 1), 0),
                nn.ConvTranspose2d(in_channels=in_channels, out_channels=out_channels,
                                   kernel_size=kernel_size, stride=stride, padding=3,
                                   bias=False))

        self.BN_AC = nn.Sequential(
            nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.5),
            activation)

    def forward(self, x):
        out = self.deconv(x)
        out2 = self.BN_AC(out)
        return out2


def conv4x4(in_planes, out_planes, stride=1, padding=3, dilation=1, padding_mode='circular'):
    '''
    Original defaults: pad = 3, dilate = 1, stride = 2.
    kernel_size and padding are now chosen from the stride below.
    '''
    if stride == 2:
        # Halve the spatial size: 4x4 kernel with padding 1.
        kernel_size = 4
        padding = 1
    elif stride == 1:
        # Keep the spatial size: 3x3 kernel with padding 1.
        kernel_size = 3
        padding = 1
    else:
        # Fall back to the name-giving 4x4 kernel with the caller's padding
        # (the original code left kernel_size undefined here).
        kernel_size = 4

    return nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride,
                     padding=padding, bias=False, dilation=dilation, padding_mode=padding_mode)
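
As a quick sanity check of the two blocks above, dummy tensors can be pushed through them; the shapes in the comments simply follow from the standard conv/deconv output-size formulas, assuming the definitions as written:

import torch

x = torch.randn(1, 3, 256, 256)

# Downsampling path: stride 2 -> kernel 4, padding 1, so 256 -> 128.
down = conv4x4(3, 16, stride=2)
print(down(x).shape)    # torch.Size([1, 16, 128, 128])

f = torch.randn(1, 16, 128, 128)

# Even stride uses the plain deconv branch: 128 -> 256.
up = ConvTranspose2d_BN_AC2(16, 16, kernel_size=4, stride=2)
print(up(f).shape)      # torch.Size([1, 16, 256, 256])

# Odd stride uses the ConstantPad2d branch and keeps the spatial size: 128 -> 128.
same = ConvTranspose2d_BN_AC2(16, 16, kernel_size=4, stride=1)
print(same(f).shape)    # torch.Size([1, 16, 128, 128])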

@AdityaNair17

Hello,
I am new to this field and I am confused about how the version affects the kernel size and padding. When I use the above approach, I still get the same error, but if I keep kernel_size as 3 and padding as 1, I get an output with an incorrect mask.
My PyTorch version is 2.1.0+cu118. Could you please guide me?
