
support different types of computing hardware #5138

hzy46 opened this issue Dec 2, 2020 · 2 comments

hzy46 commented Dec 2, 2020

Motivation

Currently, OpenPAI supports the most widely used computing devices: Nvidia GPU, AMD GPU, and CPU. In addition, it has the potential to support other types of devices, e.g. AI computing chips (NPUs).

Goal

Decouple OpenPAI services from specific hardware types, so that one OpenPAI service container can support a list of hardware types.

Requirements

For every type of computing device, the vendor should guarantee:

  • one machine has only one type of computing device
  • the driver and the k8s device plugin are successfully deployed on each machine
  • devices work correctly with docker and k8s
  • compatible frameworks and docker images are provided

MVP with default scheduler

Assuming there is only one type of computing device in a cluster, we can build a minimum viable solution with the default scheduler by:

  1. configure ComputeDevice (default is nvidia.com/gpu) in deployment and record it in a configmap
  2. add an option to turn off the HiveD scheduler in quick start
  3. bypass (or adjust) pre-checks according to ComputeDevice in quick start
  4. change nvidia.com/gpu to ComputeDevice in the rest server
  5. change the vc resource information when using the default scheduler; the current hard-coded resource spec in the rest-server looks like this (a sketch of the change follows the snippet):

memory: `${config.taskRoles[taskRole].resourcePerInstance.memoryMB}Mi`,
'github.com/fuse': 1,
'nvidia.com/gpu': config.taskRoles[taskRole].resourcePerInstance.gpu,
...(infinibandDevice && { 'rdma/hca': 1 }),
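
A minimal sketch of items 4 and 5, assuming the configured device type is exposed to the rest-server through an environment variable (the names COMPUTE_DEVICE_TYPE and computeDeviceType are illustrative assumptions, not the actual implementation):

// Sketch: replace the hard-coded 'nvidia.com/gpu' key with the configured device type.
// COMPUTE_DEVICE_TYPE is a hypothetical setting populated from the configmap in item 1;
// it defaults to 'nvidia.com/gpu' to preserve the current behavior.
const computeDeviceType = process.env.COMPUTE_DEVICE_TYPE || 'nvidia.com/gpu';

const resources = {
  memory: `${config.taskRoles[taskRole].resourcePerInstance.memoryMB}Mi`,
  'github.com/fuse': 1,
  [computeDeviceType]: config.taskRoles[taskRole].resourcePerInstance.gpu,
  ...(infinibandDevice && { 'rdma/hca': 1 }),
};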

Besides the necessary work, we (the pai-dev team and device vendors) could provide better support by:

  • refactor and organize device-related code in devices subfolders. The basic idea is to quickly locate device-related code and isolate code for different devices (e.g. different device vendors should avoid editing the same file).
    If a component must support diverse types of computing devices, it will contain a devices folder. PAI services should take these files into consideration at build time, so that one container supports a list of different machine models. Other components, such as the deploy script, should check these files at runtime. (See the illustrative layout after this list.)
  • provide monitoring tools, such as an nvidia-smi-like utility and a Prometheus exporter
  • update webportal terms
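
An illustrative layout of such a devices subfolder (the component and vendor folder names below are placeholders, not the actual repository structure):

<component>/
  src/
    common/            # device-agnostic code
    devices/           # device-specific code, isolated per vendor
      nvidia/
      amd/
      <other-vendor>/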

Perfect support with HiveD

By enabling HiveD, we can get better support:

  • allow multiple device types in a cluster
  • support virtual clusters
  • topology-aware scheduling to guarantee sharing safety in DL scenarios

Some extra effort is required to achieve this:

  1. offer a container runtime for every device type. A container runtime here is a modified version of runc that adds a custom pre-start hook to all containers; two examples are nvidia-container-runtime and the runtime for AMD Radeon Open Compute
  2. describe machines and devices in layout.yaml (see "replace master.csv / worker.csv by layout.yaml", #5151)
  3. make sure HiveD config generation is independent of computing devices
  4. add appropriate environment variables in the rest-server when generating the pod spec, in addition to NVIDIA_VISIBLE_DEVICES and PAI_AMD_VISIBLE_DEVICES. The current code is:

if (config.taskRoles[taskRole].resourcePerInstance.gpu > 0) {
  frameworkTaskRole.task.pod.spec.containers[0].env.push(
    {
      name: 'NVIDIA_VISIBLE_DEVICES',
      valueFrom: {
        fieldRef: {
          fieldPath: `metadata.annotations['hivedscheduler.microsoft.com/pod-leaf-cell-isolation']`,
        },
      },
    },
    {
      name: 'PAI_AMD_VISIBLE_DEVICES',
      valueFrom: {
        fieldRef: {
          fieldPath: `metadata.annotations['hivedscheduler.microsoft.com/pod-leaf-cell-isolation']`,
        },
      },
    },
  );
}
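
One way to avoid hard-coding the two variable names would be to drive them from configuration (for example, the rest-server.hived-computing-device-envs list used in the test cases below). This is a sketch under that assumption, not the final design:

// Sketch: push one env var per configured name, all bound to HiveD's
// pod-leaf-cell-isolation annotation. The source of 'hivedDeviceEnvs' is assumed here;
// it could come from service configuration such as hived-computing-device-envs.
const hivedDeviceEnvs = (process.env.HIVED_COMPUTING_DEVICE_ENVS ||
  'NVIDIA_VISIBLE_DEVICES,PAI_AMD_VISIBLE_DEVICES').split(',');

if (config.taskRoles[taskRole].resourcePerInstance.gpu > 0) {
  frameworkTaskRole.task.pod.spec.containers[0].env.push(
    ...hivedDeviceEnvs.map((name) => ({
      name,
      valueFrom: {
        fieldRef: {
          fieldPath: `metadata.annotations['hivedscheduler.microsoft.com/pod-leaf-cell-isolation']`,
        },
      },
    })),
  );
}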

Some optional work items include

  • clarify and unify the machine sku description in layout.yaml and HiveD skus
  • make the SKU-to-(CPU, GPU, memory) conversion simple, predictable, and decoupled from devices (see "CPU/GPU/Memory information to SKU definition API", #5148)
  • health reporting for computing devices. This is not mandatory since node-level health checks are already provided by k8s.

hzy46 commented Dec 9, 2020

Detailed Work Items for this issue:

If all P0 items are done, we can support different hardware types with the default scheduler.
If all P1 items are done, we can support different hardware types with the HiveD scheduler.
P2 items are nice-to-have.

hzy46 commented Dec 24, 2020

Test cases for rest-server:

1. Default Scheduler: Test that the resource requirement is correctly specified in the pod definition.

  • ./paictl.py service stop -n hivedscheduler cluster-configuration rest-server
  • Modify services-configuration.yaml: disable hivedscheduler
  • Modify layout.yaml: set the cluster workers' computing device type to a.b.com/c, e.g.:
machine-sku:
  master-machine: # define a machine sku
    # the resource requirements for all the machines of this sku
    # We use the same memory format as Kubernetes, e.g. Gi, Mi
    # Reference: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory
    mem: 60Gi
    cpu:
      # the number of CPU vcores
      vcore: 24
  gpu-machine:
    computing-device:
      type: a.b.com/c
      model: faked
      count: 4
    mem: 220Gi
    cpu:
      vcore: 24

machine-list:
  - hostname: pai-master # name of the machine, **do not** use upper case alphabet letters for hostname
    hostip: 10.0.0.1
    machine-type: master-machine # only one master-machine supported
    pai-master: "true"
  - hostname: pai-worker1
    hostip: 10.0.0.2
    machine-type: gpu-machine
    pai-worker: "true"
  - hostname: pai-worker2
    hostip: 10.0.0.3
    machine-type: gpu-machine
    pai-worker: "true"
………………
  • push the modified config to k8s
  • ./paictl.py service start -n hivedscheduler cluster-configuration rest-server
  • submit a job from webportal
  • expect an a.b.com/c resource request in the pod spec (see the sketch below)
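
For example, with the gpu-machine sku above and a job requesting one device per instance, the container's resource section should contain something like the following (the cpu and memory values are illustrative and depend on the submitted job):

resources:
  limits:
    cpu: 4
    memory: 8192Mi
    github.com/fuse: 1
    a.b.com/c: 1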

2. HiveD Scheduler: Test that the environment variables are set in the pod spec.

  • ./paictl.py service stop -n hivedscheduler cluster-configuration rest-server
  • Modify services-configuration.yaml: enable hivedscheduler; set rest-server.hived-computing-device-envs to TEST,NVIDIA_VISIBLE_DEVICES,HIVED_VISIBLE_DEVICES (see the configuration sketch after this list)
  • Modify layout.yaml: set the cluster workers' computing device type back to nvidia.com/gpu
  • push the modified config to k8s
  • ./paictl.py service start -n hivedscheduler cluster-configuration rest-server
  • submit a job from webportal
  • In the pod, expect the environment variable TEST to be set to something like 0,1,...
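
The hived-computing-device-envs setting above would look roughly like the following fragment of services-configuration.yaml (the exact nesting is an assumption for illustration):

rest-server:
  hived-computing-device-envs: TEST,NVIDIA_VISIBLE_DEVICES,HIVED_VISIBLE_DEVICES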
