
[AMP OP&Test] Norm bf16 #51083

Merged 24 commits into PaddlePaddle:develop on Mar 20, 2023
Conversation

201716010711 (Contributor) commented:

PR types

Others

PR changes

Others

Describe

Add a norm op test and support bf16 for the norm op.

paddle-bot bot commented Mar 1, 2023:

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@@ -62,7 +62,7 @@ __global__ void Normalize(const T* x,
MT reduce_result = BlockReduce(temp_storage).Sum(sum);

if (threadIdx.x == 0) {
norm = square_root(reduce_result + static_cast<MT>(eps));
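The hunk above casts `eps` to the higher-precision compute type `MT` before adding it to the block-reduced sum, so for bf16 inputs the accumulation and the square root run in float32 and only the stored result drops back to bf16. A minimal NumPy sketch of that pattern, assuming `MT` is float32; the `to_bf16` helper simulates bf16 storage by truncation and is illustrative, not Paddle code:

```python
import numpy as np

def to_bf16(x):
    # Simulate bfloat16 by keeping only the top 16 bits of the
    # float32 bit pattern (truncation; real hardware rounds).
    u = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (u & np.uint32(0xFFFF0000)).view(np.float32)

def normalize(x_bf16, eps=1e-10):
    # Accumulate the sum of squares in float32 (the "MT" compute
    # type), add eps in that same type, then store back as bf16.
    mt = x_bf16.astype(np.float32)
    norm = np.sqrt(np.sum(mt * mt) + np.float32(eps))
    return to_bf16(mt / norm)

rng = np.random.RandomState(0)
x = to_bf16(rng.randn(8).astype(np.float32))
y = normalize(x)
# sum(y**2) stays close to 1 despite the low-precision storage type
```

Accumulating in the storage type instead would compound rounding error across the reduction, which is the usual motivation for a separate compute type in AMP kernels.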
Contributor:

This line does not need to be changed.

Contributor Author:

done

@@ -86,7 +86,7 @@ void NormKernel(const Context& ctx,

auto xdim = in_x->dims();
if (axis < 0) axis = xdim.size() + axis;
-  T eps = static_cast<T>(epsilon);
+  float eps = epsilon;
Contributor:

Just delete this variable.

Contributor Author:

done

@@ -157,6 +159,40 @@ def init_test_case(self):
self.epsilon = 1e-8


@unittest.skipIf(
not core.is_compiled_with_cuda(),
"core is not compiled with CUDA and not support the bfloat16",
Contributor:

The phrase "and not support the bfloat16" can be removed.
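The looser tolerances used for the bf16 tests in this PR (atol and max_relative_error on the order of 1e-2) follow from bfloat16's format: 8 exponent bits but only 7 explicit mantissa bits. A standalone NumPy sketch of the resulting relative error; the `to_bf16` helper simulates bf16 by truncation and is not Paddle code:

```python
import numpy as np

def to_bf16(x):
    # Keep only the top 16 bits of the float32 pattern: 1 sign bit,
    # 8 exponent bits, 7 explicit mantissa bits (truncation).
    u = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (u & np.uint32(0xFFFF0000)).view(np.float32)

x = np.linspace(0.1, 10.0, 1000, dtype=np.float32)
rel_err = np.abs(to_bf16(x) - x) / x
# With 7 mantissa bits the relative truncation error stays below
# 2**-7 ~ 0.008, which is why bf16 op tests cannot use the tight
# float32 default tolerances.
```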

Shixiaowei02 previously approved these changes Mar 8, 2023
self.check_output_with_place(core.CUDAPlace(0))

def test_check_grad(self):
pass
Contributor:

Why is this just `pass` here?

wanghuancoder previously approved these changes Mar 8, 2023

wanghuancoder (Contributor) left a comment:

LGTM

@@ -119,11 +121,11 @@ def init_dtype(self):
self.dtype = "float16"

def test_check_output(self):
self.check_output_with_place(fluid.core.CUDAPlace(0), atol=5e-2)
Contributor:

No need to set this explicitly; use the default value.


def test_check_grad(self):
self.check_grad_with_place(
fluid.core.CUDAPlace(0), ['X'], 'Out', max_relative_error=0.05
Contributor:

Try using the default value.
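For context, `max_relative_error` bounds the worst-case relative gap between the operator's analytic gradient and a finite-difference estimate. A standalone NumPy sketch of that comparison for an L2 normalize (illustrative, not the OpTest machinery itself):

```python
import numpy as np

def l2_normalize(x, eps=1e-10):
    return x / np.sqrt(np.sum(x * x) + eps)

def numeric_grad(x, delta=1e-5):
    # Central-difference gradient of sum(l2_normalize(x)) w.r.t. x.
    g = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += delta
        xm[i] -= delta
        g[i] = (l2_normalize(xp).sum() - l2_normalize(xm).sum()) / (2 * delta)
    return g

def analytic_grad(x, eps=1e-10):
    # d sum_i(x_i / n) / dx_j = 1/n - x_j * sum(x) / n**3,  n = ||x||
    n = np.sqrt(np.sum(x * x) + eps)
    return 1.0 / n - x * np.sum(x) / n ** 3

x = np.random.RandomState(0).randn(6)
rel = np.abs(numeric_grad(x) - analytic_grad(x)) / (
    np.abs(analytic_grad(x)) + 1e-12)
# rel.max() is the quantity that max_relative_error bounds
```

When the kernel's gradient is correct, this gap is tiny, so the framework default is usually enough; a hand-raised threshold can hide a real precision bug.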

self.python_out_sig = ["out"]

def test_check_output(self):
self.check_output_with_place(core.CUDAPlace(0), atol=1e-2)
Contributor:

The default value does not need to be set.

core.CUDAPlace(0),
['X'],
'Out',
user_defined_grads=self.gradient,
Contributor:

The `self.gradient` argument is never actually computed; try removing it.
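If `user_defined_grads` were kept, it would need an actually computed analytic gradient. For an L2 normalize the vector-Jacobian product has a closed form; a standalone NumPy sketch, with illustrative names, verified against a finite difference:

```python
import numpy as np

def norm_forward(x, eps=1e-10):
    n = np.sqrt(np.sum(x * x) + eps)
    return x / n, n

def norm_backward(x, gy, eps=1e-10):
    # VJP of y = x / ||x||:  gx = (gy - y * <gy, y>) / ||x||
    y, n = norm_forward(x, eps)
    return (gy - y * np.sum(gy * y)) / n

rng = np.random.RandomState(1)
x, gy = rng.randn(5), rng.randn(5)
gx = norm_backward(x, gy)

# Check against a central difference of <gy, y>.
delta, num = 1e-6, np.zeros_like(x)
for i in range(x.size):
    xp, xm = x.copy(), x.copy()
    xp[i] += delta
    xm[i] -= delta
    num[i] = (np.dot(gy, norm_forward(xp)[0])
              - np.dot(gy, norm_forward(xm)[0])) / (2 * delta)
```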

'Out',
user_defined_grads=self.gradient,
check_eager=True,
max_relative_error=1e-2,
Contributor:

The default value does not need to be set.

ZzSean changed the title from "Norm bf16" to "[AMP OP&Test] Norm bf16" on Mar 20, 2023
ZzSean (Contributor) left a comment:

LGTM

ZzSean merged commit 90cb9a0 into PaddlePaddle:develop on Mar 20, 2023
6 participants