base_score changes predictions significantly #35

Open · 27518 opened this issue Aug 16, 2019 · 0 comments

27518 commented Aug 16, 2019

Hello,
Thanks for creating this useful package. The waterfall plots are quite informative and intuitive.
I found that when I varied the base_score argument to the buildExplainer function, the predicted values output by the showWaterfall function varied significantly. Concerned about accuracy, I compared against the predicted values from the actual xgboost model; those also varied, but not nearly as much.
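
A minimal sketch of what I am doing, with the bundled agaricus data standing in for my own dataset (so the exact numbers below will differ):

```r
library(xgboost)
library(xgboostExplainer)

# binary model trained with the default base_score of 0.5
data(agaricus.train, package = "xgboost")
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
model <- xgboost(data = dtrain, nrounds = 10,
                 objective = "binary:logistic", base_score = 0.5,
                 verbose = 0)

# explainer built with a DIFFERENT base_score than the model above
explainer <- buildExplainer(model, dtrain, type = "binary",
                            base_score = 0.2)

# predict() uses the model's own base_score (0.5) ...
predict(model, dtrain)[1]

# ... but the waterfall's prediction moves with the explainer's base_score
showWaterfall(model, explainer, dtrain,
              as.matrix(agaricus.train$data), idx = 1, type = "binary")
```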

This is what I observed for a single predicted outcome:
base_score = 0.5: pred = 0.48 from both the xgb predict function and the explainer waterfall function
base_score = 0.2: pred = 0.43 from the xgb predict function, but 0.18 from the explainer waterfall function*
base_score = 0.85: pred = 0.53 from the xgb predict function, but 0.83 from the explainer waterfall function*

*Note: in all three examples, the xgb model passed to the explainer was trained with a base_score of 0.5, so in the 2nd and 3rd examples it differed from the base_score entered in the explainer.
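
For what it's worth, these numbers look consistent with the waterfall's intercept simply being the logit of the explainer's base_score: since qlogis(0.5) = 0, shifting the matched case's logit by qlogis(base_score) roughly reproduces the mismatched outputs. This is my own back-of-envelope check, not something from the package docs:

```r
# base R logit / inverse-logit; 0.48 is the prediction in the matched case
plogis(qlogis(0.48) + qlogis(0.20))  # ~0.19, close to the observed 0.18
plogis(qlogis(0.48) + qlogis(0.85))  # ~0.84, close to the observed 0.83
```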

Is this an error, or am I doing something incorrectly? Should the base_score entered in buildExplainer always match whatever was entered in the actual xgboost model? Thanks for any suggestions.
