
add type promotion for complex and real number. #63842

Merged: 46 commits into PaddlePaddle:develop on May 9, 2024

Conversation

@zxcd (Contributor) commented Apr 24, 2024

PR Category

Others

PR Types

New features

Description

card-78750

Paddle's previous type promotion was unreasonable: the result dtype was simply aligned to the left-hand tensor, for example:

```python
a = paddle.ones([3, 3], dtype='int64')
b = paddle.ones([3, 3], dtype='float32')
a + b  # int64
```

This PR fixes the behavior to follow mathematical convention:

```python
a = paddle.ones([3, 3], dtype='int64')
b = paddle.ones([3, 3], dtype='float32')
a + b  # float32
```

Furthermore, after discussion, automatic type promotion between Tensor and Tensor is limited to floating-point dtypes and to promotion between real and complex numbers. Operations between Tensor and Scalar still support all dtypes.

## Before:

```python
a = paddle.ones([3, 3], dtype='int32')
b = paddle.ones([3, 3], dtype='int64')
a + b  # int64
```

## After:

```python
a = paddle.ones([3, 3], dtype='int32')
b = paddle.ones([3, 3], dtype='int64')
a + b  # raises TypeError
```

PRs #60638 and #59518 fixed this behavior for a few APIs, covering floating-point promotion between Tensor and Tensor.

This PR extends support to all binary operation APIs and adds type promotion between real and complex numbers for Tensor and Tensor. The behavior between Tensor and Scalar is also corrected.
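The Tensor-Tensor rule described above can be sketched as a small dtype lookup. This is a minimal illustrative sketch, not Paddle's actual implementation: the function name and the exact widening choices (e.g. float16 + bfloat16 → float32) are assumptions.

```python
# Illustrative sketch of the Tensor-Tensor promotion rule (hypothetical
# helper, not Paddle's real code). Floating-point dtypes promote among
# themselves; real + complex promotes to complex; anything else raises.
_RANK = {'float16': 1, 'bfloat16': 1, 'float32': 2, 'float64': 3,
         'complex64': 2, 'complex128': 3}
_COMPLEX = {'complex64', 'complex128'}

def promote_types(x: str, y: str) -> str:
    """Return the promoted dtype for a Tensor-Tensor binary op."""
    if x == y:
        return x  # identical dtypes never promote
    if x not in _RANK or y not in _RANK:
        # mismatched integer/bool pairs are no longer promoted
        raise TypeError(f'type promotion between {x} and {y} is not supported')
    rank = max(_RANK[x], _RANK[y])
    if x in _COMPLEX or y in _COMPLEX:
        return 'complex128' if rank == 3 else 'complex64'
    if _RANK[x] == _RANK[y]:
        return 'float32'  # float16 vs bfloat16: widen to a common float32
    return 'float64' if rank == 3 else 'float32'
```

For example, `promote_types('float32', 'complex64')` yields `'complex64'`, while `promote_types('int32', 'int64')` raises `TypeError`, matching the Before/After contrast above.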

paddle-bot commented Apr 24, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

jeff41404 previously approved these changes Apr 26, 2024

@jeff41404 (Contributor) left a comment:

LGTM. All comments in PR #61163 have been addressed; this PR handles conflicts with other PRs.

paddle-ci-bot commented May 8, 2024

Sorry to inform you that 46238aa's CIs passed more than 7 days ago. To prevent PR conflicts, you need to re-run all CIs manually.

@XiaoguangHu01 (Contributor) left a comment:

LGTM

@jeff41404 jeff41404 merged commit 3f10cae into PaddlePaddle:develop May 9, 2024
31 checks passed
co63oc pushed a commit to co63oc/Paddle that referenced this pull request May 10, 2024

* add type promotion for complex and real number.
* fix
* reduce api support
* add more api support
* fix
* fix
* remove matmul
* add T+S logic.
* fix bug
* fix unittest
* fix
* fix
* fix unittest
* fix gumbel
* rm print
* fix more unittests.
* fix test_llama_group_log_softmax.py
* fix bug, and add 0-d + 0-d logic.
* rm print
* fix behavior of bool and int
* add unittest for all type promotion.
* rm unittest with unsupported dtype
* fix
* fix
* add error unittest
* fix increase unittest
* bug fix
* fixed by comment
* remove useless code.
* fix
* fix
* fix TypePromotionForZeroDimTensor
* add inplace API support, add special case that can skip type promotion (add x=float32, y=float16/bfloat16).
* add broadcast support for MultiPrecisionAddKernelImpl.
co63oc pushed a commit to co63oc/Paddle that referenced this pull request May 11, 2024