Add llama.cpp backend (#231) #372

Re-run triggered July 30, 2024 09:50
Status: Failure
Total duration: 14m 22s
Artifacts
run_cli_rocm_pytorch_multi_gpu_tests (0s)

Annotations

1 error in run_cli_rocm_pytorch_multi_gpu_tests:
The self-hosted runner: amd-multi-gpu-mi250-runners-02 lost communication with the server. Verify the machine is running and has a healthy network connection. Anything in your workflow that terminates the runner process, starves it for CPU/Memory, or blocks its network access can cause this error.