QoS Configuration


The Chinese documentation in this wiki is no longer maintained. Please visit our latest Chinese documentation site for up-to-date docs.

Before v1.9.0, Kube-OVN controlled a Pod's bidirectional bandwidth through the Pod annotations ovn.kubernetes.io/ingress_rate and ovn.kubernetes.io/egress_rate, in Mbit/s. QoS can be set at Pod creation time, or adjusted dynamically at runtime by updating the annotations (see the examples under "Set QoS at Pod creation" and "Adjust QoS dynamically" below).

Starting with v1.9.0, Kube-OVN supports QoS of types linux-htb and linux-netem. linux-htb provides priority-based QoS; linux-netem emulates device impairments such as packet loss.

linux-htb QoS

linux-htb QoS is priority-based. A new CRD has been added to configure the QoS priority. The CRD is defined as follows:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: htbqoses.kubeovn.io
spec:
  group: kubeovn.io
  versions:
    - name: v1
      served: true
      storage: true
      additionalPrinterColumns:
      - name: PRIORITY
        type: string
        jsonPath: .spec.priority
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                priority:
                  type: string					# Value in range 0 to 4,294,967,295.
  scope: Cluster
  names:
    plural: htbqoses
    singular: htbqos
    kind: HtbQos
    shortNames:
      - htbqos

The CRD spec has only one field, htbqoses.spec.priority, whose value represents the priority. Three instances are preinstalled during image initialization:

mac@bogon kube-ovn % kubectl get htbqos
NAME            PRIORITY
htbqos-high     100
htbqos-low      300
htbqos-medium   200

Priority order is relative: the smaller the priority value, the higher the QoS priority.
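
Besides the three preset instances, you can create your own. A minimal sketch following the CRD schema above; the name htbqos-critical and its priority value are illustrative assumptions:

apiVersion: kubeovn.io/v1
kind: HtbQos
metadata:
  name: htbqos-critical     # hypothetical instance, not one of the presets
spec:
  priority: "50"            # string value; smaller number = higher priority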

The Subnet CRD adds a field, subnet.spec.htbqos, which specifies the HtbQos instance bound to the subnet, for example:

mac@bogon kube-ovn % kubectl get subnet test -o yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: test
spec:
  cidrBlock: 192.168.0.0/16
  default: false
  gatewayType: distributed
  htbqos: htbqos-high
  ...

Once a subnet is bound to an HtbQos instance, every Pod in that subnet gets the same priority setting.
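
The binding can also be changed on an existing subnet with a standard merge patch; a sketch, assuming the subnet test from the example above:

kubectl patch subnet test --type=merge -p '{"spec":{"htbqos":"htbqos-high"}}'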

Pods gain a new annotation, ovn.kubernetes.io/priority, whose value is a concrete priority number, e.g. ovn.kubernetes.io/priority: "50". It sets the QoS priority for an individual Pod.

When the Pod's subnet specifies an HtbQos instance and the Pod also sets the priority annotation, the Pod annotation takes precedence.
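
A minimal sketch of a Pod that overrides its subnet's priority; the Pod name and image are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: prio-override                    # hypothetical name
  annotations:
    ovn.kubernetes.io/priority: "50"     # takes precedence over the subnet's HtbQos
spec:
  containers:
  - name: prio-override
    image: nginx:alpine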

Bandwidth limits are still configured per Pod, using the existing annotations ovn.kubernetes.io/ingress_rate and ovn.kubernetes.io/egress_rate to control the Pod's bidirectional bandwidth.

linux-netem QoS

Pods gain the annotations ovn.kubernetes.io/latency, ovn.kubernetes.io/limit and ovn.kubernetes.io/loss for configuring linux-netem QoS parameters.

latency sets the delay applied to the Pod's traffic; the value is an integer, in ms.

limit is the maximum number of packets the qdisc queue can hold; the value is an integer, e.g. 1000.

loss sets the packet loss probability; the value is a float between 0 and 100, e.g. a value of 20 means a 20% chance of packet loss.
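
Putting the three parameters together, a sketch of a Pod manifest; the name and image are illustrative assumptions, and the values mirror the examples above:

apiVersion: v1
kind: Pod
metadata:
  name: netem-test                      # hypothetical name
  annotations:
    ovn.kubernetes.io/latency: "100"    # 100 ms added delay
    ovn.kubernetes.io/limit: "1000"     # qdisc queue holds at most 1000 packets
    ovn.kubernetes.io/loss: "20"        # 20% packet loss probability
spec:
  containers:
  - name: netem-test
    image: nginx:alpine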

Note

linux-htb QoS and linux-netem QoS are two distinct QoS types, and a single Pod cannot use both at once. Take care that the annotations do not conflict: the two sets of QoS annotations must not be set at the same time.

If both sets of annotations are set anyway, the linux-htb QoS settings take effect, since linux-netem QoS is mainly intended for debugging.

Verification

Priority parameters

The subnet specifies the HtbQos instance htbqos: htbqos-high, and the Pod additionally sets the annotation ovn.kubernetes.io/priority: "50". Check the actual priority settings:

mac@bogon kube-ovn % kubectl get pod -n test -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE   READINESS GATES
test-57dbcb6dbd-7z9bc   1/1     Running   0          7d18h   192.168.0.2   kube-ovn-worker   <none>           <none>
test-57dbcb6dbd-vh6dq   1/1     Running   0          7d18h   192.168.0.3   kube-ovn-worker   <none>           <none>
mac@bogon kube-ovn % kubectl ko nbctl lsp-list test
af7553e0-beda-4af1-a5d4-26eb836df6ef (test-57dbcb6dbd-7z9bc.test)
cefd0820-50ee-40e5-acb6-980ea6b1bbfd (test-57dbcb6dbd-vh6dq.test)
b22bf97c-544e-4569-a0ee-6e77386c4181 (test-ovn-cluster)
mac@bogon kube-ovn % kubectl ko vsctl kube-ovn-worker list qos
_uuid               : 90d1a865-887d-4271-9874-b23b06b7d8ff
external_ids        : {iface-id=test-57dbcb6dbd-7z9bc.test, pod="test/test-57dbcb6dbd-7z9bc"}
other_config        : {}
queues              : {0=a8a3dda7-8c08-474a-848e-c9f45faba9e1}
type                : linux-htb

_uuid               : d63bb9b9-e58c-4292-b8af-97743ddc26ef
external_ids        : {iface-id=test-57dbcb6dbd-vh6dq.test, pod="test/test-57dbcb6dbd-vh6dq"}
other_config        : {}
queues              : {0=405e4b3d-38fc-42e8-876b-1db6c1c65aab}
type                : linux-htb

_uuid               : b6a25e6f-5153-4b38-ac5c-1252ace9af28
external_ids        : {}
other_config        : {}
queues              : {}
type                : linux-noop
mac@bogon kube-ovn % kubectl ko vsctl kube-ovn-worker list queue
_uuid               : 405e4b3d-38fc-42e8-876b-1db6c1c65aab
dscp                : []
external_ids        : {iface-id=test-57dbcb6dbd-vh6dq.test, pod="test/test-57dbcb6dbd-vh6dq"}
other_config        : {priority="50"}

_uuid               : a8a3dda7-8c08-474a-848e-c9f45faba9e1
dscp                : []
external_ids        : {iface-id=test-57dbcb6dbd-7z9bc.test, pod="test/test-57dbcb6dbd-7z9bc"}
other_config        : {priority="100"}
mac@bogon kube-ovn %

Set QoS at Pod creation

apiVersion: v1
kind: Pod
metadata:
  name: qos
  namespace: ls1
  annotations:
    ovn.kubernetes.io/ingress_rate: "3"
    ovn.kubernetes.io/egress_rate: "1"
spec:
  containers:
  - name: qos
    image: nginx:alpine

Adjust QoS dynamically

kubectl annotate --overwrite  pod nginx-74d5899f46-d7qkn ovn.kubernetes.io/ingress_rate=3
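
To confirm the annotation was applied, read it back (same Pod name as in the command above; note the escaped dots in the jsonpath key):

kubectl get pod nginx-74d5899f46-d7qkn -o jsonpath='{.metadata.annotations.ovn\.kubernetes\.io/ingress_rate}'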

Test the QoS adjustment

Deploy the containers needed for the performance test:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: perf
  namespace: ls1
  labels:
    app: perf
spec:
  selector:
    matchLabels:
      app: perf
  template:
    metadata:
      labels:
        app: perf
    spec:
      containers:
      - name: nginx
        image: kubeovn/perf

Exec into one of the Pods and start an iperf3 server:

[root@node2 ~]# kubectl exec -it perf-4n4gt -n ls1 sh
/ # iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Exec into another Pod and run a request against the first one:

[root@node2 ~]# kubectl exec -it perf-d4mqc -n ls1 sh
/ # iperf3 -c 10.66.0.12
Connecting to host 10.66.0.12, port 5201
[  4] local 10.66.0.14 port 51544 connected to 10.66.0.12 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  86.4 MBytes   725 Mbits/sec    3    350 KBytes
[  4]   1.00-2.00   sec  89.9 MBytes   754 Mbits/sec  118    473 KBytes
[  4]   2.00-3.00   sec   101 MBytes   848 Mbits/sec  184    586 KBytes
[  4]   3.00-4.00   sec   104 MBytes   875 Mbits/sec  217    671 KBytes
[  4]   4.00-5.00   sec   111 MBytes   935 Mbits/sec  175    772 KBytes
[  4]   5.00-6.00   sec   100 MBytes   840 Mbits/sec  658    598 KBytes
[  4]   6.00-7.00   sec   106 MBytes   890 Mbits/sec  742    668 KBytes
[  4]   7.00-8.00   sec   102 MBytes   857 Mbits/sec  764    724 KBytes
[  4]   8.00-9.00   sec  97.4 MBytes   817 Mbits/sec  1175    764 KBytes
[  4]   9.00-10.00  sec   111 MBytes   934 Mbits/sec  1083    838 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1010 MBytes   848 Mbits/sec  5119             sender
[  4]   0.00-10.00  sec  1008 MBytes   846 Mbits/sec                  receiver

iperf Done.
/ #

Modify the ingress bandwidth QoS of the first Pod:

[root@node2 ~]# kubectl annotate --overwrite  pod perf-4n4gt -n ls1 ovn.kubernetes.io/ingress_rate=30

Test the first Pod's bandwidth again from the second Pod:

/ # iperf3 -c 10.66.0.12
Connecting to host 10.66.0.12, port 5201
[  4] local 10.66.0.14 port 52372 connected to 10.66.0.12 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  3.66 MBytes  30.7 Mbits/sec    2   76.1 KBytes
[  4]   1.00-2.00   sec  3.43 MBytes  28.8 Mbits/sec    0    104 KBytes
[  4]   2.00-3.00   sec  3.50 MBytes  29.4 Mbits/sec    0    126 KBytes
[  4]   3.00-4.00   sec  3.50 MBytes  29.3 Mbits/sec    0    144 KBytes
[  4]   4.00-5.00   sec  3.43 MBytes  28.8 Mbits/sec    0    160 KBytes
[  4]   5.00-6.00   sec  3.43 MBytes  28.8 Mbits/sec    0    175 KBytes
[  4]   6.00-7.00   sec  3.50 MBytes  29.3 Mbits/sec    0    212 KBytes
[  4]   7.00-8.00   sec  3.68 MBytes  30.9 Mbits/sec    0    294 KBytes
[  4]   8.00-9.00   sec  3.74 MBytes  31.4 Mbits/sec    0    398 KBytes
[  4]   9.00-10.00  sec  3.80 MBytes  31.9 Mbits/sec    0    526 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  35.7 MBytes  29.9 Mbits/sec    2             sender
[  4]   0.00-10.00  sec  34.5 MBytes  29.0 Mbits/sec                  receiver

iperf Done.
/ #