Add llama.cpp backend (#231) #372
Workflow: test_cli_rocm_pytorch_multi_gpu.yaml
Trigger: on: push
Job: run_cli_rocm_pytorch_multi_gpu_tests
Duration: 0s
Annotations
1 error
run_cli_rocm_pytorch_multi_gpu_tests
The self-hosted runner: amd-multi-gpu-mi250-runners-02 lost communication with the server. Verify the machine is running and has a healthy network connection. Anything in your workflow that terminates the runner process, starves it for CPU/Memory, or blocks its network access can cause this error.
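The causes GitHub lists above (runner process killed, CPU/memory starvation, blocked network) can be partially guarded against in the workflow file itself. A minimal sketch, assuming the job name from this run; the runner labels, timeout value, and test command are illustrative assumptions, not taken from the actual workflow:

```yaml
# Sketch: bound the multi-GPU job so a hung or resource-starved step
# cannot hold the self-hosted runner indefinitely.
on: push

jobs:
  run_cli_rocm_pytorch_multi_gpu_tests:
    # Labels below are assumptions; match them to the actual runner group.
    runs-on: [self-hosted, multi-gpu]
    # `timeout-minutes` is a standard GitHub Actions key; the value is an
    # assumption. The job is cancelled (runner freed) once it elapses.
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
      - name: Run multi-GPU CLI tests
        # Hypothetical test selector for illustration only.
        run: pytest tests/ -k "cli and multi_gpu"
```

A job timeout does not fix a genuinely lost network connection, but it prevents a wedged step from consuming the runner until the server-side session expires.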