
[fleet_executor] Add entrance of FleetExecutor in AnalysisPredictor for distributed inference #39992

Merged
merged 7 commits into from
Mar 2, 2022

Conversation

@FeixLiu (Contributor) commented Feb 28, 2022

PR types

Others

PR changes

Others

Describe

Add an entrance for the fleet executor into AnalysisPredictor, and add some helper methods to initialize the NCCL environment for distributed inference.

To use the fleet executor for inference, the following fields should be set in DistConfig:

bool use_dist_model_ // whether use DistModel or not
std::vector<std::string> trainer_endpoints_ // all trainers' endpoints
std::string current_endpoint_ // current trainer's endpoint
int64_t nranks_ // total ranks (number of trainers)
int64_t local_rank_ // local rank
std::string comm_init_config_ // converter config path (used to init the comm)

Note that use_dist_model_ must be set to true by calling

EnableDistModel(true);

nranks and rank are set simultaneously by calling

SetRanks(int64_t nranks, int64_t rank);

trainer_endpoints and current_endpoint are also set simultaneously by calling

SetEndpoints(std::vector<std::string> trainer_endpoints, std::string current_endpoint);

The DistConfig should then be set on the AnalysisConfig by calling

SetDistConfig(dConfig);
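Putting the setters above together, a minimal setup might look like the sketch below. The model path, endpoint values, and variable names are placeholders, and the SetCommInitConfig setter name is an assumption based on the comm_init_config_ field listed above; check the actual DistConfig header for the exact API.

```cpp
// Sketch only: configure distributed inference for a 2-trainer job.
// Endpoints, paths, and the SetCommInitConfig name are assumptions.
paddle_infer::DistConfig dist_config;
dist_config.EnableDistModel(true);                     // use_dist_model_
dist_config.SetRanks(/*nranks=*/2, /*rank=*/0);        // nranks_, local_rank_
dist_config.SetEndpoints({"127.0.0.1:8200", "127.0.0.1:8201"},
                         "127.0.0.1:8200");            // trainer/current endpoints
dist_config.SetCommInitConfig("./comm_init_cfg.csv");  // comm_init_config_

paddle_infer::Config config("./model_dir");
config.SetDistConfig(dist_config);
auto predictor = paddle_infer::CreatePredictor(config);
```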

The converter config should contain sections like this:

[ring_id -> ranks]
0,0,1,2,3,4,5,6,7
1,0,1,2,3
2,4,5,6,7
21,0,1
22,1,2
23,2,3
24,3,4
25,4,5
26,5,6
27,6,7
[rank -> ring_ids]
0,0,1,21
1,0,1,21,22
2,0,1,22,23
3,0,1,23,24
4,0,2,24,25
5,0,2,25,26
6,0,2,26,27
7,0,2,27

@paddle-bot-old commented

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

wangxicoding previously approved these changes Feb 28, 2022

@wangxicoding (Contributor) left a comment

LGTM

@gongweibao (Contributor) left a comment

Don't use VLOG(3) for all output logs, or you will see redundant logs you don't need. Also, a variable's name should clearly express its meaning.

Review threads (resolved):
paddle/fluid/distributed/fleet_executor/carrier.cc
python/paddle/fluid/executor.py
@gongweibao (Contributor) commented Mar 1, 2022

Add a concise description of the PR, like #37725.

wangxicoding
wangxicoding previously approved these changes Mar 1, 2022
gongweibao
gongweibao previously approved these changes Mar 1, 2022
@gongweibao (Contributor) left a comment

LGTM

@qingqing01 (Contributor) left a comment

LGTM

@PaddlePaddle PaddlePaddle locked and limited conversation to collaborators Mar 1, 2022
@PaddlePaddle PaddlePaddle unlocked this conversation Mar 1, 2022
@FeixLiu FeixLiu closed this Mar 1, 2022
@FeixLiu FeixLiu reopened this Mar 1, 2022
@shangzhizhou (Member) left a comment

LGTM

@FeixLiu FeixLiu changed the title [fleet_executor] Update for connect [fleet_executor] Add entrance of FleetExecutor in AnalysisPredictor for distributed inference Mar 2, 2022
@Superjomn (Contributor) left a comment

LGTM

@FeixLiu FeixLiu merged commit 244ae31 into PaddlePaddle:develop Mar 2, 2022
@FeixLiu FeixLiu deleted the update_for_connect branch March 2, 2022 07:24
7 participants