
Example for object detection with ultralytics YOLO v8 model #2508

Merged: 13 commits into master from examples/yolov8 on Aug 25, 2023

Conversation

@agunapal (Collaborator) commented Jul 30, 2023

Description

Example for Object Detection with YOLO v8

Fixes #(issue)

Type of change

  • New feature (non-breaking change which adds functionality)

Feature/Issue validation/testing

Logs:

(torchserve) ubuntu@ip-172-31-7-107:~/serve/examples/object_detector/yolo/yolov8$ torchserve --start --model-store model_store --ncs
(torchserve) ubuntu@ip-172-31-7-107:~/serve/examples/object_detector/yolo/yolov8$ WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2023-07-29T23:59:47,802 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2023-07-29T23:59:47,854 [INFO ] main org.pytorch.serve.metrics.configuration.MetricConfiguration - Successfully loaded metrics configuration from /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml
2023-07-29T23:59:47,927 [INFO ] main org.pytorch.serve.ModelServer - 
Torchserve version: 0.8.1
TS Home: /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages
Current directory: /home/ubuntu/serve/examples/object_detector/yolo/yolov8
Temp directory: /tmp
Metrics config path: /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml
Number of GPUs: 1
Number of CPUs: 8
Max heap size: 7936 M
Python executable: /home/ubuntu/anaconda3/envs/torchserve/bin/python
Config file: N/A
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Model Store: /home/ubuntu/serve/examples/object_detector/yolo/yolov8/model_store
Initial Models: N/A
Log dir: /home/ubuntu/serve/examples/object_detector/yolo/yolov8/logs
Metrics dir: /home/ubuntu/serve/examples/object_detector/yolo/yolov8/logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 1
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Enable metrics API: true
Metrics mode: log
Disable system metrics: false
Workflow Store: /home/ubuntu/serve/examples/object_detector/yolo/yolov8/model_store
Model config: N/A
2023-07-29T23:59:47,932 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager -  Loading snapshot serializer plugin...
2023-07-29T23:59:47,949 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2023-07-29T23:59:47,990 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://127.0.0.1:8080
2023-07-29T23:59:47,990 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel.
2023-07-29T23:59:47,991 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081
2023-07-29T23:59:47,991 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel.
2023-07-29T23:59:47,992 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://127.0.0.1:8082
Model server started.

(torchserve) ubuntu@ip-172-31-7-107:~/serve/examples/object_detector/yolo/yolov8$ 
(torchserve) ubuntu@ip-172-31-7-107:~/serve/examples/object_detector/yolo/yolov8$ 2023-07-29T23:59:48,786 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:50.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675188
2023-07-29T23:59:48,787 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:31.636730194091797|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675188
2023-07-29T23:59:48,787 [INFO ] pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:161.9974708557129|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675188
2023-07-29T23:59:48,788 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:83.7|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675188
2023-07-29T23:59:48,788 [INFO ] pool-3-thread-1 TS_METRICS - GPUMemoryUtilization.Percent:0.0|#Level:Host,DeviceId:0|#hostname:ip-172-31-7-107,timestamp:1690675188
2023-07-29T23:59:48,788 [INFO ] pool-3-thread-1 TS_METRICS - GPUMemoryUsed.Megabytes:0.0|#Level:Host,DeviceId:0|#hostname:ip-172-31-7-107,timestamp:1690675188
2023-07-29T23:59:48,788 [INFO ] pool-3-thread-1 TS_METRICS - GPUUtilization.Percent:0.0|#Level:Host,DeviceId:0|#hostname:ip-172-31-7-107,timestamp:1690675188
2023-07-29T23:59:48,789 [INFO ] pool-3-thread-1 TS_METRICS - MemoryAvailable.Megabytes:30267.390625|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675188
2023-07-29T23:59:48,789 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:1068.859375|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675188
2023-07-29T23:59:48,789 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:4.6|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675188

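The TS_METRICS lines above follow a StatsD-like layout: `Name.Unit:value|#dimensions|#hostname:...,timestamp:...`. As a minimal sketch (the field split is inferred from the log lines above, not from a documented TorchServe parser), the metric payload can be pulled apart like this:

```python
def parse_ts_metric(payload):
    """Parse a TS_METRICS payload such as
    'CPUUtilization.Percent:50.0|#Level:Host|#hostname:ip-...,timestamp:...'
    into (name, unit, value). Format inferred from the log lines above."""
    # Everything before the first '|#' is the metric itself; the rest is dimensions.
    metric, _, _dims = payload.partition("|#")
    # 'Name.Unit' and the numeric value are separated by ':'.
    name_unit, _, value = metric.partition(":")
    name, _, unit = name_unit.partition(".")
    return name, unit, float(value)

line = "CPUUtilization.Percent:50.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675188"
print(parse_ts_metric(line))
# → ('CPUUtilization', 'Percent', 50.0)
```

The same split applies to the GPU and memory lines, whose dimension section additionally carries a `DeviceId` tag.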
(torchserve) ubuntu@ip-172-31-7-107:~/serve/examples/object_detector/yolo/yolov8$ curl -X POST "localhost:8081/models?model_name=yolov8n&url=yolov8n.mar&initial_workers=4&batch_size=2"
2023-07-29T23:59:50,827 [DEBUG] epollEventLoopGroup-3-1 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model yolov8n
2023-07-29T23:59:50,828 [DEBUG] epollEventLoopGroup-3-1 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model yolov8n
2023-07-29T23:59:50,828 [INFO ] epollEventLoopGroup-3-1 org.pytorch.serve.wlm.ModelManager - Model yolov8n loaded.
2023-07-29T23:59:50,829 [DEBUG] epollEventLoopGroup-3-1 org.pytorch.serve.wlm.ModelManager - updateModel: yolov8n, count: 4
2023-07-29T23:59:50,834 [DEBUG] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/ubuntu/anaconda3/envs/torchserve/bin/python, /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9000, --metrics-config, /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-07-29T23:59:50,834 [DEBUG] W-9001-yolov8n_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/ubuntu/anaconda3/envs/torchserve/bin/python, /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9001, --metrics-config, /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-07-29T23:59:50,834 [DEBUG] W-9002-yolov8n_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/ubuntu/anaconda3/envs/torchserve/bin/python, /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9002, --metrics-config, /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-07-29T23:59:50,837 [DEBUG] W-9003-yolov8n_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/ubuntu/anaconda3/envs/torchserve/bin/python, /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9003, --metrics-config, /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml]
2023-07-29T23:59:52,159 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9003, pid=3896
2023-07-29T23:59:52,160 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9003
2023-07-29T23:59:52,168 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - Successfully loaded /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-07-29T23:59:52,169 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - [PID]3896
2023-07-29T23:59:52,169 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - Torch worker started.
2023-07-29T23:59:52,169 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - Python runtime: 3.10.0
2023-07-29T23:59:52,169 [DEBUG] W-9003-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-yolov8n_1.0 State change null -> WORKER_STARTED
2023-07-29T23:59:52,173 [INFO ] W-9003-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9003
2023-07-29T23:59:52,173 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9000, pid=3892
2023-07-29T23:59:52,175 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9000
2023-07-29T23:59:52,175 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9001, pid=3893
2023-07-29T23:59:52,176 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9001
2023-07-29T23:59:52,177 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9002, pid=3894
2023-07-29T23:59:52,178 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9002
2023-07-29T23:59:52,180 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9003.
2023-07-29T23:59:52,183 [INFO ] W-9003-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1690675192183
2023-07-29T23:59:52,186 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - Successfully loaded /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-07-29T23:59:52,186 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - Successfully loaded /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-07-29T23:59:52,186 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - [PID]3893
2023-07-29T23:59:52,186 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - [PID]3892
2023-07-29T23:59:52,187 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - Torch worker started.
2023-07-29T23:59:52,187 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - Torch worker started.
2023-07-29T23:59:52,187 [DEBUG] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-yolov8n_1.0 State change null -> WORKER_STARTED
2023-07-29T23:59:52,187 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - Python runtime: 3.10.0
2023-07-29T23:59:52,187 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - Python runtime: 3.10.0
2023-07-29T23:59:52,187 [DEBUG] W-9001-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-yolov8n_1.0 State change null -> WORKER_STARTED
2023-07-29T23:59:52,187 [INFO ] W-9001-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9001
2023-07-29T23:59:52,188 [INFO ] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2023-07-29T23:59:52,188 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - Successfully loaded /home/ubuntu/anaconda3/envs/torchserve/lib/python3.10/site-packages/ts/configs/metrics.yaml.
2023-07-29T23:59:52,188 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - [PID]3894
2023-07-29T23:59:52,189 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - Torch worker started.
2023-07-29T23:59:52,189 [DEBUG] W-9002-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-yolov8n_1.0 State change null -> WORKER_STARTED
2023-07-29T23:59:52,189 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - Python runtime: 3.10.0
2023-07-29T23:59:52,189 [INFO ] W-9001-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1690675192189
2023-07-29T23:59:52,189 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9001.
2023-07-29T23:59:52,189 [INFO ] W-9002-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9002
2023-07-29T23:59:52,190 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9000.
2023-07-29T23:59:52,190 [INFO ] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1690675192190
2023-07-29T23:59:52,190 [INFO ] W-9002-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD to backend at: 1690675192190
2023-07-29T23:59:52,191 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9002.
2023-07-29T23:59:52,212 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - model_name: yolov8n, batchSize: 2
2023-07-29T23:59:52,213 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - model_name: yolov8n, batchSize: 2
2023-07-29T23:59:52,213 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - model_name: yolov8n, batchSize: 2
2023-07-29T23:59:52,218 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - model_name: yolov8n, batchSize: 2
2023-07-29T23:59:53,454 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - Enabled tensor cores
2023-07-29T23:59:53,463 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - Enabled tensor cores
2023-07-29T23:59:53,464 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - Enabled tensor cores
2023-07-29T23:59:53,475 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - ONNX enabled
2023-07-29T23:59:53,479 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - Enabled tensor cores
2023-07-29T23:59:53,480 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - ONNX enabled
2023-07-29T23:59:53,486 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - ONNX enabled
2023-07-29T23:59:53,498 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - ONNX enabled
2023-07-29T23:59:53,776 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - Torch TensorRT enabled
2023-07-29T23:59:53,777 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - Torch TensorRT enabled
2023-07-29T23:59:53,783 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - Torch TensorRT enabled
2023-07-29T23:59:53,792 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - Torch TensorRT enabled
2023-07-29T23:59:55,044 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - '/tmp/models/d3a3edd1c9eb4dce89a442afdfb824cf/index_to_name.json' is missing. Inference output will not include class name.
2023-07-29T23:59:55,046 [DEBUG] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: true
2023-07-29T23:59:55,046 [INFO ] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2833
2023-07-29T23:59:55,046 [DEBUG] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-yolov8n_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2023-07-29T23:59:55,046 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - WorkerLoadTime.Milliseconds:4214.0|#WorkerName:W-9000-yolov8n_1.0,Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675195
2023-07-29T23:59:55,047 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:24.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675195
2023-07-29T23:59:55,074 [INFO ] W-9002-yolov8n_1.0-stdout MODEL_LOG - '/tmp/models/d3a3edd1c9eb4dce89a442afdfb824cf/index_to_name.json' is missing. Inference output will not include class name.
2023-07-29T23:59:55,074 [DEBUG] W-9002-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: true
2023-07-29T23:59:55,075 [INFO ] W-9002-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2856
2023-07-29T23:59:55,075 [DEBUG] W-9002-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-yolov8n_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2023-07-29T23:59:55,075 [INFO ] W-9002-yolov8n_1.0 TS_METRICS - WorkerLoadTime.Milliseconds:4241.0|#WorkerName:W-9002-yolov8n_1.0,Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675195
2023-07-29T23:59:55,075 [INFO ] W-9002-yolov8n_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:29.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675195
2023-07-29T23:59:55,112 [INFO ] W-9001-yolov8n_1.0-stdout MODEL_LOG - '/tmp/models/d3a3edd1c9eb4dce89a442afdfb824cf/index_to_name.json' is missing. Inference output will not include class name.
2023-07-29T23:59:55,112 [DEBUG] W-9001-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: true
2023-07-29T23:59:55,112 [INFO ] W-9001-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2900
2023-07-29T23:59:55,112 [DEBUG] W-9001-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-yolov8n_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2023-07-29T23:59:55,113 [INFO ] W-9001-yolov8n_1.0 TS_METRICS - WorkerLoadTime.Milliseconds:4279.0|#WorkerName:W-9001-yolov8n_1.0,Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675195
2023-07-29T23:59:55,113 [INFO ] W-9001-yolov8n_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:24.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675195
2023-07-29T23:59:55,113 [INFO ] W-9003-yolov8n_1.0-stdout MODEL_LOG - '/tmp/models/d3a3edd1c9eb4dce89a442afdfb824cf/index_to_name.json' is missing. Inference output will not include class name.
2023-07-29T23:59:55,113 [DEBUG] W-9003-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: true
2023-07-29T23:59:55,114 [INFO ] W-9003-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2899
2023-07-29T23:59:55,114 [DEBUG] W-9003-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-yolov8n_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2023-07-29T23:59:55,114 [INFO ] W-9003-yolov8n_1.0 TS_METRICS - WorkerLoadTime.Milliseconds:4280.0|#WorkerName:W-9003-yolov8n_1.0,Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675195
2023-07-29T23:59:55,114 [INFO ] W-9003-yolov8n_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:32.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675195
2023-07-29T23:59:55,120 [INFO ] epollEventLoopGroup-3-1 ACCESS_LOG - /127.0.0.1:37464 "POST /models?model_name=yolov8n&url=yolov8n.mar&initial_workers=4&batch_size=2 HTTP/1.1" 200 4432
2023-07-29T23:59:55,121 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675195
{
  "status": "Model \"yolov8n\" Version: 1.0 registered with 4 initial workers"
}
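The registration call above can also be built programmatically. A minimal sketch using only the Python standard library; the endpoint and every parameter are taken verbatim from the curl call in the log:

```python
from urllib.parse import urlencode

# Management-API address and parameters from the registration call above.
MANAGEMENT_API = "http://127.0.0.1:8081"
params = {
    "model_name": "yolov8n",   # name the model is served under
    "url": "yolov8n.mar",      # archive located in the --model-store directory
    "initial_workers": 4,      # spawn four workers immediately
    "batch_size": 2,           # server may batch up to 2 requests together
}
register_url = f"{MANAGEMENT_API}/models?{urlencode(params)}"
print(register_url)
# → http://127.0.0.1:8081/models?model_name=yolov8n&url=yolov8n.mar&initial_workers=4&batch_size=2
```

Issuing a POST to this URL (for example with `urllib.request.urlopen` and `method="POST"`) has the same effect as the curl command, provided the server is running.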
(torchserve) ubuntu@ip-172-31-7-107:~/serve/examples/object_detector/yolo/yolov8$ curl http://127.0.0.1:8080/predictions/yolov8n -T persons.jpg  & curl http://127.0.0.1:8080/predictions/yolov8n -T bus.jpg
[1] 3992
2023-07-30T00:00:23,184 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - ts_inference_requests_total.Count:1.0|#model_name:yolov8n,model_version:default|#hostname:ip-172-31-7-107,timestamp:1690675223
2023-07-30T00:00:23,187 [INFO ] epollEventLoopGroup-3-3 TS_METRICS - ts_inference_requests_total.Count:1.0|#model_name:yolov8n,model_version:default|#hostname:ip-172-31-7-107,timestamp:1690675223
2023-07-30T00:00:23,190 [INFO ] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT to backend at: 1690675223190
2023-07-30T00:00:23,192 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_LOG - Backend received inference at: 1690675223
2023-07-30T00:00:24,911 [INFO ] W-9000-yolov8n_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]HandlerTime.Milliseconds:1718.66|#ModelName:yolov8n,Level:Model|#hostname:ip-172-31-7-107,1690675224,4a2f2ad7-07b6-48fd-9420-50fa68e46ffe,9e71337f-9f55-4125-9e1e-96333b26842b, pattern=[METRICS]
2023-07-30T00:00:24,911 [WARN ] W-9000-yolov8n_1.0-stderr MODEL_LOG - 
2023-07-30T00:00:24,911 [WARN ] W-9000-yolov8n_1.0-stderr MODEL_LOG - 0: 640x640 4 persons, 3 benchs, 3 handbags, 1: 640x640 4 persons, 1 bus, 1 stop sign, 76.0ms
2023-07-30T00:00:24,912 [WARN ] W-9000-yolov8n_1.0-stderr MODEL_LOG - Speed: 0.0ms preprocess, 38.0ms inference, 1.5ms postprocess per image at shape (1, 3, 640, 640)
2023-07-30T00:00:24,911 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_METRICS - HandlerTime.ms:1718.66|#ModelName:yolov8n,Level:Model|#hostname:ip-172-31-7-107,requestID:4a2f2ad7-07b6-48fd-9420-50fa68e46ffe,9e71337f-9f55-4125-9e1e-96333b26842b,timestamp:1690675224
2023-07-30T00:00:24,912 [INFO ] W-9000-yolov8n_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - result=[METRICS]PredictionTime.Milliseconds:1718.8|#ModelName:yolov8n,Level:Model|#hostname:ip-172-31-7-107,1690675224,4a2f2ad7-07b6-48fd-9420-50fa68e46ffe,9e71337f-9f55-4125-9e1e-96333b26842b, pattern=[METRICS]
2023-07-30T00:00:24,913 [INFO ] W-9000-yolov8n_1.0-stdout MODEL_METRICS - PredictionTime.ms:1718.8|#ModelName:yolov8n,Level:Model|#hostname:ip-172-31-7-107,requestID:4a2f2ad7-07b6-48fd-9420-50fa68e46ffe,9e71337f-9f55-4125-9e1e-96333b26842b,timestamp:1690675224
2023-07-30T00:00:24,912 [INFO ] W-9000-yolov8n_1.0 ACCESS_LOG - /127.0.0.1:39630 "PUT /predictions/yolov8n HTTP/1.1" 200 1729
2023-07-30T00:00:24,913 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675224
2023-07-30T00:00:24,913 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:1722347.547|#model_name:yolov8n,model_version:default|#hostname:ip-172-31-7-107,timestamp:1690675224
2023-07-30T00:00:24,913 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:184.912|#model_name:yolov8n,model_version:default|#hostname:ip-172-31-7-107,timestamp:1690675224
2023-07-30T00:00:24,914 [DEBUG] W-9000-yolov8n_1.0 org.pytorch.serve.job.Job - Waiting time ns: 184912, Backend time ns: 1723718887
2023-07-30T00:00:24,914 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675224
2023-07-30T00:00:24,914 [INFO ] W-9000-yolov8n_1.0 ACCESS_LOG - /127.0.0.1:39628 "PUT /predictions/yolov8n HTTP/1.1" 200 1727
2023-07-30T00:00:24,914 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675224
2023-07-30T00:00:24,914 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - ts_inference_latency_microseconds.Microseconds:1724170.693|#model_name:yolov8n,model_version:default|#hostname:ip-172-31-7-107,timestamp:1690675224
2023-07-30T00:00:24,914 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - ts_queue_latency_microseconds.Microseconds:190.562|#model_name:yolov8n,model_version:default|#hostname:ip-172-31-7-107,timestamp:1690675224
2023-07-30T00:00:24,915 [DEBUG] W-9000-yolov8n_1.0 org.pytorch.serve.job.Job - Waiting time ns: 190562, Backend time ns: 1724750981
2023-07-30T00:00:24,915 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - QueueTime.Milliseconds:0.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675224
2023-07-30T00:00:24,915 [DEBUG] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: true
2023-07-30T00:00:24,915 [INFO ] W-9000-yolov8n_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1720
{
  "person": 4,
  "bus": 1,
  "stop sign": 1
}
{
  "person": 4,
  "handbag": 3,
  "bench": 3
}
2023-07-30T00:00:24,915 [INFO ] W-9000-yolov8n_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:5.0|#Level:Host|#hostname:ip-172-31-7-107,timestamp:1690675224
[1]+  Done                    curl http://127.0.0.1:8080/predictions/yolov8n -T persons.jpg
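The prediction responses above are per-class detection counts. As a minimal sketch of how such a summary can be derived from a list of predicted class labels (the label list below is hypothetical, chosen to match the bus.jpg response; the actual handler's internals are not shown in this PR page):

```python
from collections import Counter

def summarize_detections(labels):
    """Collapse a list of predicted class labels into per-class counts,
    matching the JSON shape of the responses above."""
    return dict(Counter(labels))

# Hypothetical labels for one image, mirroring the bus.jpg response above.
labels = ["person", "person", "person", "person", "bus", "stop sign"]
print(summarize_detections(labels))
# → {'person': 4, 'bus': 1, 'stop sign': 1}
```

Note that with `batch_size=2` both images were processed in a single backend batch (the `0: 640x640 ...` / `1: 640x640 ...` line in the worker log), yet each client still receives only the counts for its own image.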

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

codecov bot commented Jul 30, 2023

Codecov Report

Merging #2508 (951f83c) into master (bb4eb8b) will not change coverage.
The diff coverage is n/a.

❗ Current head 951f83c differs from the pull request's most recent head 07bd3cc. Consider uploading reports for commit 07bd3cc to get more accurate results.

@@           Coverage Diff           @@
##           master    #2508   +/-   ##
=======================================
  Coverage   72.64%   72.64%           
=======================================
  Files          79       79           
  Lines        3733     3733           
  Branches       58       58           
=======================================
  Hits         2712     2712           
  Misses       1017     1017           
  Partials        4        4           


@msaroufim (Member) left a comment
Thanks, this was fun to review; a few minor nits.

@agunapal agunapal requested a review from lxning August 8, 2023 17:52
@agunapal agunapal merged commit 683608b into master Aug 25, 2023
11 of 13 checks passed
@agunapal agunapal deleted the examples/yolov8 branch August 25, 2023 00:42