This repository has been archived by the owner on Jul 1, 2024. It is now read-only.

the results in gpu and cpu are different. #751

Open
zhangvia opened this issue May 22, 2024 · 0 comments

I'm using the SAM image encoder (a ViT model) to encode an image, but the result on GPU differs from the result on CPU.
Here is some simple test code:

>>> from segment_anything.modeling.image_encoder import ImageEncoderViT
>>> import torch
>>> from functools import partial
>>> model = ImageEncoderViT(
...     depth=32, embed_dim=1280, img_size=1024, mlp_ratio=4,
...     norm_layer=partial(torch.nn.LayerNorm, eps=1e-6),
...     num_heads=16, patch_size=16, qkv_bias=True, use_rel_pos=True,
...     global_attn_indexes=[7, 15, 23, 31], window_size=14, out_chans=256,
... )
>>> input = torch.randn((1,3,1024,1024))
>>> a = model(input)
>>> model = model.to('cuda:0')
>>> input = input.to('cuda:0')
>>> b = model(input)
>>> a
tensor([[[[ 1.8612e-01,  1.1807e+00,  1.7301e+00,  ...,  1.9374e+00,
            1.0217e+00,  1.0662e+00],
          [-2.9578e-01, -5.7185e-01,  1.4766e-01,  ...,  9.9894e-01,
           -8.1706e-01,  1.1982e+00],
          [-5.4977e-01,  5.0114e-01, -2.0103e-01,  ...,  1.3675e+00,
            3.3389e-01, -7.0931e-01],
          ...,
          [ 1.6346e+00,  9.0330e-01,  1.0282e+00,  ..., -6.7727e-01,
           -2.4132e-01,  7.0905e-01],
          [-6.3068e-01,  4.9860e-01, -7.3669e-01,  ..., -1.4967e-01,
            1.5211e+00,  6.1666e-01],
          [ 3.5227e-01,  1.3268e+00,  1.9159e-01,  ...,  3.1407e-03,
           -7.5928e-01,  5.6686e-01]],

         [[-1.1724e+00, -4.8071e-02,  6.3057e-01,  ...,  4.3397e-01,
            4.8736e-01, -3.8219e-01],
          [ 6.6380e-01, -3.5226e-01, -3.4688e-01,  ...,  1.2240e+00,
            1.1453e+00,  2.5559e-01],
          [-4.9017e-01,  1.3543e-01, -1.9186e-01,  ...,  7.8529e-01,
           -1.0934e+00,  4.1711e-01],
          ...,
          [-1.8451e+00, -1.3379e+00,  7.4616e-01,  ...,  1.5989e-01,
            1.2872e-01,  2.6470e-01],
          [ 8.1469e-01, -1.2960e+00, -1.4881e-01,  ...,  2.3249e-01,
            1.6365e+00,  1.5253e+00],
          [ 1.1759e+00, -7.4470e-02, -2.3624e-01,  ...,  7.0021e-01,
            1.7946e-01, -8.6396e-01]],

         [[-1.6003e-01,  9.0498e-01,  4.7817e-01,  ..., -1.3752e+00,
            8.6951e-01,  6.1081e-02],
          [-7.8159e-01, -8.6177e-02,  2.3580e-01,  ..., -1.1550e-01,
            1.6117e+00,  8.6789e-02],
          [-1.0102e+00,  4.7301e-01, -1.3565e-01,  ...,  1.6855e+00,
            1.0961e+00, -5.8807e-01],
          ...,
          [-1.0291e+00,  2.5171e-02, -3.0832e-01,  ..., -1.0538e+00,
           -7.5840e-01,  7.3209e-01],
          [-8.3241e-02,  1.3793e+00, -2.4078e-01,  ..., -3.6528e-01,
           -1.9410e+00,  3.2880e-01],
          [-1.1042e+00, -3.1801e-01,  3.9523e-01,  ..., -9.5052e-01,
            1.4437e+00,  1.0042e+00]],

         ...,

         [[ 8.1897e-01, -6.1691e-01,  2.1173e-02,  ...,  5.6371e-01,
            6.6797e-01, -1.5076e+00],
          [ 4.8713e-01, -1.4290e+00, -1.2916e+00,  ..., -1.6410e+00,
            3.3599e-01, -2.6969e+00],
          [ 4.1811e-01, -9.8550e-01, -3.5540e-02,  ..., -1.7273e-01,
           -2.0933e-01,  3.4603e-02],
          ...,
          [-1.1345e+00, -3.2653e-01,  6.7625e-01,  ..., -1.5426e+00,
           -1.9519e+00,  5.1214e-01],
          [-1.7559e+00, -6.6151e-01, -8.6155e-01,  ..., -1.1955e-01,
            9.3050e-02, -8.3560e-01],
          [ 5.3564e-01, -1.0189e+00, -1.0242e-01,  ..., -2.9650e+00,
            1.5463e+00, -1.3964e-01]],

         [[ 1.4698e+00,  1.3735e+00,  1.9960e+00,  ..., -6.1414e-02,
           -2.1789e-02, -2.3297e-01],
          [ 1.5900e+00,  7.5189e-01,  1.2917e+00,  ...,  2.9176e-01,
            4.7061e-01, -1.3825e-01],
          [ 3.8520e-02,  8.2141e-02,  8.5229e-01,  ...,  4.3940e-01,
           -8.7757e-01,  1.6680e+00],
          ...,
          [-5.8654e-01,  4.2654e-01,  1.2356e+00,  ...,  4.9412e-01,
           -1.7638e+00,  7.5148e-01],
          [-1.5260e+00,  3.3236e-01, -4.0470e-01,  ..., -4.5623e-01,
            1.0419e+00, -6.7022e-01],
          [-1.4664e-01, -1.3360e+00, -1.3063e-01,  ..., -4.8114e-01,
            8.4078e-01,  4.9047e-01]],

         [[-6.5096e-01, -8.3868e-02, -5.0712e-01,  ..., -1.2956e+00,
           -1.7728e+00,  2.0674e-03],
          [ 5.6505e-01, -4.3940e-01, -5.6637e-01,  ..., -9.7842e-02,
           -1.8229e-01, -9.4668e-01],
          [ 9.9391e-02, -1.8613e+00, -2.3191e-01,  ...,  5.4263e-01,
            3.3098e-01,  5.7215e-01],
          ...,
          [-5.9365e-01, -4.4359e-01,  5.7365e-01,  ...,  1.4440e+00,
           -8.0939e-01,  4.5247e-01],
          [-4.4945e-01,  8.4763e-01, -4.5822e-01,  ...,  1.0057e-01,
           -2.0821e+00, -4.1656e-01],
          [ 1.1467e+00, -2.7241e-01,  2.2380e-01,  ...,  1.3074e+00,
            1.1064e+00, -1.9652e-01]]]], grad_fn=<ToCopyBackward0>)
>>> b
tensor([[[[ 1.8613e-01,  1.1808e+00,  1.7298e+00,  ...,  1.9373e+00,
            1.0220e+00,  1.0663e+00],
          [-2.9474e-01, -5.7186e-01,  1.4721e-01,  ...,  9.9892e-01,
           -8.1679e-01,  1.1981e+00],
          [-5.4976e-01,  5.0110e-01, -2.0095e-01,  ...,  1.3670e+00,
            3.3403e-01, -7.0914e-01],
          ...,
          [ 1.6345e+00,  9.0402e-01,  1.0286e+00,  ..., -6.7679e-01,
           -2.4079e-01,  7.0921e-01],
          [-6.3119e-01,  4.9833e-01, -7.3691e-01,  ..., -1.4916e-01,
            1.5213e+00,  6.1662e-01],
          [ 3.5211e-01,  1.3276e+00,  1.9139e-01,  ...,  3.5384e-03,
           -7.5894e-01,  5.6629e-01]],

         [[-1.1723e+00, -4.8274e-02,  6.2981e-01,  ...,  4.3429e-01,
            4.8750e-01, -3.8201e-01],
          [ 6.6343e-01, -3.5284e-01, -3.4701e-01,  ...,  1.2244e+00,
            1.1450e+00,  2.5615e-01],
          [-4.8977e-01,  1.3493e-01, -1.9164e-01,  ...,  7.8547e-01,
           -1.0936e+00,  4.1790e-01],
          ...,
          [-1.8454e+00, -1.3374e+00,  7.4570e-01,  ...,  1.6032e-01,
            1.2813e-01,  2.6515e-01],
          [ 8.1454e-01, -1.2965e+00, -1.4904e-01,  ...,  2.3186e-01,
            1.6366e+00,  1.5256e+00],
          [ 1.1758e+00, -7.4639e-02, -2.3658e-01,  ...,  6.9997e-01,
            1.7988e-01, -8.6346e-01]],

         [[-1.5887e-01,  9.0525e-01,  4.7847e-01,  ..., -1.3753e+00,
            8.6982e-01,  6.0748e-02],
          [-7.8159e-01, -8.6499e-02,  2.3592e-01,  ..., -1.1533e-01,
            1.6125e+00,  8.6588e-02],
          [-1.0099e+00,  4.7295e-01, -1.3520e-01,  ...,  1.6858e+00,
            1.0969e+00, -5.8796e-01],
          ...,
          [-1.0280e+00,  2.5919e-02, -3.0776e-01,  ..., -1.0534e+00,
           -7.5797e-01,  7.3154e-01],
          [-8.2854e-02,  1.3790e+00, -2.4053e-01,  ..., -3.6519e-01,
           -1.9405e+00,  3.2865e-01],
          [-1.1040e+00, -3.1811e-01,  3.9496e-01,  ..., -9.5097e-01,
            1.4436e+00,  1.0055e+00]],

         ...,

         [[ 8.1913e-01, -6.1696e-01,  2.1298e-02,  ...,  5.6404e-01,
            6.6752e-01, -1.5076e+00],
          [ 4.8712e-01, -1.4296e+00, -1.2915e+00,  ..., -1.6405e+00,
            3.3607e-01, -2.6966e+00],
          [ 4.1837e-01, -9.8499e-01, -3.5222e-02,  ..., -1.7220e-01,
           -2.0918e-01,  3.4825e-02],
          ...,
          [-1.1339e+00, -3.2579e-01,  6.7619e-01,  ..., -1.5426e+00,
           -1.9515e+00,  5.1229e-01],
          [-1.7551e+00, -6.6145e-01, -8.6196e-01,  ..., -1.1973e-01,
            9.2693e-02, -8.3518e-01],
          [ 5.3598e-01, -1.0185e+00, -1.0261e-01,  ..., -2.9648e+00,
            1.5463e+00, -1.3949e-01]],

         [[ 1.4701e+00,  1.3737e+00,  1.9956e+00,  ..., -6.1765e-02,
           -2.1836e-02, -2.3352e-01],
          [ 1.5898e+00,  7.5188e-01,  1.2918e+00,  ...,  2.9170e-01,
            4.7040e-01, -1.3936e-01],
          [ 3.8367e-02,  8.2116e-02,  8.5211e-01,  ...,  4.3911e-01,
           -8.7742e-01,  1.6678e+00],
          ...,
          [-5.8667e-01,  4.2650e-01,  1.2358e+00,  ...,  4.9508e-01,
           -1.7640e+00,  7.5134e-01],
          [-1.5257e+00,  3.3204e-01, -4.0525e-01,  ..., -4.5664e-01,
            1.0411e+00, -6.7079e-01],
          [-1.4586e-01, -1.3357e+00, -1.3083e-01,  ..., -4.8108e-01,
            8.4058e-01,  4.8972e-01]],

         [[-6.5139e-01, -8.3864e-02, -5.0726e-01,  ..., -1.2955e+00,
           -1.7731e+00,  2.0270e-03],
          [ 5.6447e-01, -4.3915e-01, -5.6635e-01,  ..., -9.7440e-02,
           -1.8280e-01, -9.4630e-01],
          [ 9.9024e-02, -1.8613e+00, -2.3198e-01,  ...,  5.4247e-01,
            3.3030e-01,  5.7333e-01],
          ...,
          [-5.9371e-01, -4.4316e-01,  5.7401e-01,  ...,  1.4442e+00,
           -8.1024e-01,  4.5189e-01],
          [-4.4987e-01,  8.4775e-01, -4.5821e-01,  ...,  1.0086e-01,
           -2.0822e+00, -4.1624e-01],
          [ 1.1466e+00, -2.7306e-01,  2.2303e-01,  ...,  1.3072e+00,
            1.1065e+00, -1.9689e-01]]]], device='cuda:2',
       grad_fn=<AddBackward0>)
>>>
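For reference, elementwise discrepancies like the ones above can be quantified with a max-absolute-difference check and `torch.allclose` instead of eyeballing the printouts. A minimal sketch (the tensors here are stand-ins for the real `a` and `b`; in the actual test you would compare `a` against `b.cpu()`):

```python
import torch

torch.manual_seed(0)
# Stand-ins for the CPU and GPU outputs; in the real test, `a = model(input)`
# on CPU and `b = model(input).cpu()` after moving model and input to CUDA.
a = torch.randn(1, 3, 8, 8)
b = a + 1e-5 * torch.randn_like(a)  # simulate float32 accumulation-order noise

max_abs = (a - b).abs().max().item()
print(f"max abs diff: {max_abs:.2e}")

# Small per-element differences are expected in float32 when reduction order
# changes between backends; compare with a tolerance, not exact equality.
print(torch.allclose(a, b, rtol=1e-3, atol=1e-4))
```

Differences on the order of 1e-4 per element, as in the dumps above, are within the range normally attributed to float32 reduction-order differences between CPU and CUDA kernels.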

I also found that the outputs agree closely up to the neck layer of the ViT model, but after the neck layer the GPU and CPU results differ noticeably. I tested on torch 1.13.1+cu118 and torch 2.2.2+cu118, and the bug occurs on both.
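One way to localize where the divergence starts growing is to capture per-layer outputs with forward hooks and compare them between the two runs. A minimal sketch with a toy model (the same pattern would apply to the `ImageEncoderViT` submodules, e.g. its blocks and neck; here both runs are on CPU, so the diffs are zero, while in the real test the second run would be on the CUDA model):

```python
import torch
import torch.nn as nn

# Toy model standing in for the ViT encoder.
model = nn.Sequential(nn.Linear(16, 16), nn.LayerNorm(16), nn.Linear(16, 4))

captured = {}

def make_hook(name):
    # Record each submodule's output on CPU so runs on different
    # devices can be compared directly.
    def hook(module, inputs, output):
        captured[name] = output.detach().cpu()
    return hook

for name, module in model.named_children():
    module.register_forward_hook(make_hook(name))

x = torch.randn(2, 16)
model(x)
cpu_outputs = dict(captured)

# Second run (in the real test: model.to('cuda'), x.to('cuda')).
captured.clear()
model(x)

# Per-layer max absolute difference shows where the gap begins to grow.
for name, ref in cpu_outputs.items():
    diff = (captured[name] - ref).abs().max().item()
    print(f"layer {name}: max abs diff = {diff:.2e}")
```

If the per-layer differences grow sharply at one submodule (e.g. the neck) rather than accumulating gradually through the blocks, that submodule's kernels are the place to look.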
