Issues: pytorch/serve
Labels: triaged (issue has been reviewed and triaged), workflowx (issues related to workflow / ensemble models), java (pull requests that update Java code), kfserving, grpc, enhancement (new feature or request).

#3206: Docker swarm with TorchServe workflow [triaged, workflowx] (opened Jun 26, 2024 by KD1994)
#3204: WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance. [java] (opened Jun 24, 2024 by aalbersk)
#3202: torchserve bloom7b1 demo Load model failed [kfserving, triaged] (opened Jun 22, 2024 by zqc2011hy)
#3199: Handling of subsequent RegisterModel calls to Management gRPC endpoint with same model & version [grpc] (opened Jun 21, 2024 by mihaidusmanu)
#3191: Worker dead and yet describe_model gives me worker.status: Ready (opened Jun 13, 2024 by MohamedAliRashad)
#3186: Running segment_anything_fast example locally [triaged] (opened Jun 11, 2024 by yousofaly)
#3176: ImportError: cannot import name 'packaging' from 'pkg_resources' (opened Jun 5, 2024 by Sieltek)
#3172: Two-way authentication/Mutual SSL in gRPC [enhancement] (opened Jun 3, 2024 by MohamedAliRashad)
#3168: The service crashes if the model takes a long time to respond [grpc] (opened May 31, 2024 by yurkoff-mv)
#3167: NotImplementedError: Cannot copy out of meta tensor; no data! + Models not generating output text (opened May 31, 2024 by bjorquera1)
#3161: Make torchserve-kfs docker image multiplatform [enhancement, kfserving] (opened May 25, 2024 by DanielTemesgen)
#3150: Limit resource in docker compose and worker in model [triaged] (opened May 21, 2024 by ToanLyHoa)
#3148: Continuous Batching does not work with newest transformer issue (opened May 21, 2024 by udaij12)
#3134: question to model inference optimization [triaged] (opened May 4, 2024 by geraldstanje)
#3120: If micro_batch_size of micro-batch is set to 1, then model inference is still batch processing? (opened Apr 29, 2024 by pengxin233)
#3114: CUDA out of Memory with low Memory Utilization (CUDA error: device-side assert triggered) (opened Apr 25, 2024 by emilwallner)