

🐛 fix: http_request_kwargs in HFClientVLLM #1250

Merged 1 commit into stanfordnlp:main on Jul 8, 2024

Conversation

MohammedAlhajji
Contributor

I was running into SSL certificate issues when connecting to our deployed model via the vLLM client.

I noticed that HFClientTGI has an http_request_kwargs argument that is used in the generate step, and thought it would be helpful to add the same argument to HFClientVLLM.
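
For illustration, here is a minimal sketch of the idea, assuming a simplified vLLM-style client; the class name, method name, endpoint path, and constructor signature are hypothetical and not the actual dspy implementation:

```python
import requests

# Hypothetical, simplified client: it only illustrates threading
# `http_request_kwargs` through to the underlying HTTP call.
class VLLMClientSketch:
    def __init__(self, url, http_request_kwargs=None):
        self.url = url
        # Extra keyword arguments (e.g. `verify`, `cert`, `timeout`)
        # forwarded to every requests.post call made by the client.
        self.http_request_kwargs = http_request_kwargs or {}

    def generate(self, prompt, **payload_kwargs):
        payload = {"prompt": prompt, **payload_kwargs}
        response = requests.post(
            f"{self.url}/generate",
            json=payload,
            **self.http_request_kwargs,  # e.g. verify="/path/to/ca.pem"
        )
        response.raise_for_status()
        return response.json()
```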

Please let me know if any changes are needed; if this PR is not useful, feel free to close it.

@okhat
Collaborator

okhat commented Jul 6, 2024

Happy to merge this, but curious whether it will break anything for @XenonMolecule or @arnavsinghvi11, e.g. caches.

@arnavsinghvi11
Collaborator

Thanks @MohammedAlhajji! This won't break any existing caches, but I left a small comment to check out, to ensure http_request_kwargs is present for all model requests.
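
For the SSL-certificate scenario described in the opening comment, usage of such an argument might look like the following, building on the hypothetical sketch above; the URL and certificate path are assumptions for illustration:

```python
# Hypothetical usage: point `verify` at an internal CA bundle so that
# requests can validate the deployed model's certificate.
client = VLLMClientSketch(
    url="https://my-vllm-host:8000",
    http_request_kwargs={"verify": "/etc/ssl/certs/internal-ca.pem"},
)
result = client.generate("Hello, world!", max_tokens=32)
```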

@MohammedAlhajji
Contributor Author

@arnavsinghvi11 Thanks for catching that. I squashed the commits and updated the PR

@arnavsinghvi11
Collaborator

Thanks @MohammedAlhajji!

arnavsinghvi11 merged commit 28c70ec into stanfordnlp:main on Jul 8, 2024.
4 of 5 checks passed.