
Fix fp16 (--half) support for TritonRemoteModel model type (#10787)
* Fix fp16 (--half) support for TritonRemoteModel

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
3 people committed May 16, 2023
1 parent 5deff14 commit ec2b853
Showing 1 changed file with 1 addition and 1 deletion.
models/common.py: 1 addition & 1 deletion
@@ -333,7 +333,7 @@ def __init__(self, weights='yolov5s.pt', device=torch.device('cpu'), dnn=False,
         super().__init__()
         w = str(weights[0] if isinstance(weights, list) else weights)
         pt, jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle, triton = self._model_type(w)
-        fp16 &= pt or jit or onnx or engine  # FP16
+        fp16 &= pt or jit or onnx or engine or triton  # FP16
         nhwc = coreml or saved_model or pb or tflite or edgetpu  # BHWC formats (vs torch BCWH)
         stride = 32  # default stride
         cuda = torch.cuda.is_available() and device.type != 'cpu'  # use CUDA
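
For reference, a minimal usage sketch of the behaviour this change enables: loading a Triton-served model through DetectMultiBackend with FP16 inference. The server URL and input shape below are placeholders, not part of the commit; only DetectMultiBackend and its fp16 argument come from models/common.py.

    # Hypothetical sketch: FP16 (--half) inference against a Triton backend.
    # With this fix, fp16 is no longer forced to False for Triton models.
    import torch
    from models.common import DetectMultiBackend

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model = DetectMultiBackend('http://localhost:8000', device=device, fp16=True)  # placeholder Triton URL

    im = torch.zeros(1, 3, 640, 640, device=device)      # dummy BCHW input
    im = im.half() if model.fp16 else im.float()          # model.fp16 now stays True for Triton
    pred = model(im)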
