
Support both use_calc_stream and sync_op in all_reduce #45282

Merged: 1 commit into PaddlePaddle:develop on Aug 31, 2022

Conversation

@LiYuRio (Contributor) commented on Aug 19, 2022

PR types: New features

PR changes: APIs

Describe

In the new communication library, we designed ProcessGroup to manage the different communication groups. Each process group owns its own stream, and all communication in that group is done on it. For a high-level API like distributed.all_reduce, use_calc_stream indicates whether the operation is synchronous. Note that frequently adding unnecessary CUDA events may degrade performance on some models. To achieve high performance, this PR adds a new API, distributed.stream.all_reduce, which provides both use_calc_stream and sync_op (a usage sketch follows the parameter list below):

  • sync_op: indicates whether the communication is synchronous or not.
  • use_calc_stream: perform the communication on the calculation stream, saving the cost of switching streams. Only takes effect when sync_op is true.
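
A minimal usage sketch, assuming the signature described above (sync_op and use_calc_stream) and the standard paddle.distributed launch flow; this is illustrative, not the exact merged code:

```python
# Run with e.g.: python -m paddle.distributed.launch --gpus "0,1" demo.py
import paddle
import paddle.distributed as dist

dist.init_parallel_env()
data = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])

# Synchronous all_reduce on the group's own communication stream (default).
dist.stream.all_reduce(data, sync_op=True)

# Synchronous all_reduce directly on the calculation stream, saving the
# cost of a stream switch; use_calc_stream only takes effect when
# sync_op is true.
dist.stream.all_reduce(data, sync_op=True, use_calc_stream=True)

# Asynchronous all_reduce: returns a task that can be waited on later.
task = dist.stream.all_reduce(data, sync_op=False)
task.wait()  # block until the communication has finished
```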

Corresponding Chinese documentation: PaddlePaddle/docs#5225

@paddle-bot commented on Aug 19, 2022

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@gongweibao (Contributor) left a comment:

Some comments.

@LiYuRio force-pushed the dev_refine_allreduce branch 13 times, most recently from bef166b to be027c4 on August 29, 2022 03:37
@gongweibao (Contributor) left a comment:

LGTM

@@ -89,6 +95,17 @@ class ProcessGroupNCCL : public ProcessGroup {
    return std::string(NCCL_BACKEND_NAME);
  }

  std::shared_ptr<ProcessGroup::Task> AllReduce(
      std::vector<phi::DenseTensor>& in_tensors,  // NOLINT
A contributor commented on this line:

in_tensors will be modified?
@LiYuRio (Contributor, Author) replied:

This will be modified in the following PR.

@XiaoguangHu01 (Contributor) left a comment:

LGTM

@XieYunshen (Contributor) left a comment:

LGTM for the unit test time settings.

@gongweibao merged commit ce4775c into PaddlePaddle:develop on Aug 31, 2022