diff --git a/README.md b/README.md index 150c55872..fa59a6369 100644 --- a/README.md +++ b/README.md @@ -38,7 +38,7 @@ The gateway provides network egress capabilities for Kubernetes clusters. ### CRDs -* EgressNode +* EgressTunnel * EgressGateway * EgressPolicy * EgressClusterPolicy diff --git a/charts/crds/egressgateway.spidernet.io_egressclusterpolicies.yaml b/charts/crds/egressgateway.spidernet.io_egressclusterpolicies.yaml index 6e3718d98..59a10aa71 100644 --- a/charts/crds/egressgateway.spidernet.io_egressclusterpolicies.yaml +++ b/charts/crds/egressgateway.spidernet.io_egressclusterpolicies.yaml @@ -31,9 +31,9 @@ spec: jsonPath: .status.eip.ipv6 name: ipv6 type: string - - description: egressNode + - description: egressTunnel jsonPath: .status.node - name: egressNode + name: egressTunnel type: string name: v1beta1 schema: diff --git a/docs/README.en.md b/docs/README.en.md index c9e61afc2..5915c2d59 100644 --- a/docs/README.en.md +++ b/docs/README.en.md @@ -24,7 +24,7 @@ There are two clusters A and B. Cluster A is VMWare-based and runs mainly Databa ### CRDs -* EgressNode +* EgressTunnel * EgressGateway * EgressPolicy * EgressClusterPolicy diff --git a/docs/README.zh.md b/docs/README.zh.md index 56cd4f0ef..c5a96771d 100644 --- a/docs/README.zh.md +++ b/docs/README.zh.md @@ -24,7 +24,7 @@ EgressGateway 项目为 Kubernetes 提供 Egress 能力。 ### CRDs -* EgressNode +* EgressTunnel * EgressGateway * EgressPolicy * EgressClusterPolicy diff --git a/docs/concepts/Architecture.zh.md b/docs/concepts/Architecture.zh.md index 89169ec42..ba5407310 100644 --- a/docs/concepts/Architecture.zh.md +++ b/docs/concepts/Architecture.zh.md @@ -5,26 +5,26 @@ EgressGateway 由控制面和数据面 2 部分组成,控制面由 4 个控制 ## Controller -### EgressNode reconcile loop (a) +### EgressTunnel reconcile loop (a) #### 初始化 1. 从 ConfigMap 配置文件中获取双栈开启情况及对应的隧道 CIDR 2. 通过节点名称根据算法生成唯一的标签值 -3. 会检查 Node 是否有对应的 EgressNode,没有的话就创建对应的 EgressNode,且状态设置为 `Pending`。有隧道 IP 则将 IP 与节点绑定,绑定前会检查 IP 是否合法,不合法则将状态设置为 `Pending` +3. 
会检查 Node 是否有对应的 EgressTunnel,没有的话就创建对应的 EgressTunnel,且状态设置为 `Pending`。有隧道 IP 则将 IP 与节点绑定,绑定前会检查 IP 是否合法,不合法则将状态设置为 `Pending` -#### EgressNode Event +#### EgressTunnel Event -- Del:先释放隧道 IP,再删除。如果 EgressNode 对应的节点还存在,重新创建 EgressNode +- Del:先释放隧道 IP,再删除。如果 EgressTunnel 对应的节点还存在,重新创建 EgressTunnel - Other: - phase != `Init` || phase != `Ready`:则分配 IP,分配成功将状态设置为 `Init`,分配失败将状态设置为 `Failed`。这里是全局唯一会分配隧道 IP 的地方 - mark != algorithm(NodeName):该字段禁止修改,直接报错返回 #### Node Event -- Del:删除对应的 EgressNode +- Del:删除对应的 EgressTunnel - Other: - - 节点对应的 EgressNode 不存在,则创建 EgressNode + - 节点对应的 EgressTunnel 不存在,则创建 EgressTunnel - 无隧道 IP,设置 phase 为 `Pending` - 有隧道 IP,校验隧道是否合法,不合法则设置 phase 为 `Pending` - 隧道 IP 合法,校验 IP 是否分配给本节点,不是则设置 phase 为 `Pending` @@ -40,11 +40,11 @@ EgressGateway 由控制面和数据面 2 部分组成,控制面由 4 个控制 - Other: * EIP 减少,如果 EIP 被引用,禁止修改。分配 IPV4 与 IPV6 时,要求一一对应,所以两者的个数需要一致。 - * 如果 nodeSelector 被修改,从 status 获取旧的 Node 信息,与最新的 Node 进行对比。将删除节点上的 EIP 重新分配到新的 Node 上。更新对应 EgressNode 中的 EIP 信息。 + * 如果 nodeSelector 被修改,从 status 获取旧的 Node 信息,与最新的 Node 进行对比。将删除节点上的 EIP 重新分配到新的 Node 上。更新对应 EgressTunnel 中的 EIP 信息。 #### EgressPolicy Event -- Del:列出 EgressPolicy 找到被引用的 EgressGateway,再对 EgressPolicy 与 EgressGateway 解绑。解绑需要做的事情有,找到对应的 EIP 信息。如果使用了 EIP,则判断是否需要回收 EIP。如果此时 EIP 已经没有 policy 使用,则回收 EIP,更新自身及 EgressNode 的 EIP 信息。 +- Del:列出 EgressPolicy 找到被引用的 EgressGateway,再对 EgressPolicy 与 EgressGateway 解绑。解绑需要做的事情有,找到对应的 EIP 信息。如果使用了 EIP,则判断是否需要回收 EIP。如果此时 EIP 已经没有 policy 使用,则回收 EIP,更新自身及 EgressTunnel 的 EIP 信息。 - Other: * EgressPolicy 不能修改绑定的 EgressGateway。如果允许修改,则列出 EgressGateway 找到原先绑定的 EgressGateway,进行解绑。再对新的进行绑定。 * 新增 EgressPolicy,则将 EgressPolicy 与 EgressGateway 进行绑定,绑定中,判断是否需要分配 EIP。 diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 565901bc7..c3b8d7933 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -85,7 +85,7 @@ nav: - Architecture: concepts/Architecture.md - Datapath: concepts/Datapath.md - reference: - - CRD EgressNode: reference/EgressNode.md + - CRD EgressTunnel: reference/EgressTunnel.md - CRD EgressGateway: reference/EgressGateway.md - CRD EgressPolicy: reference/EgressPolicy.md - CRD EgressClusterPolicy: reference/EgressClusterPolicy.md
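The EIP-recycling rule in the EgressPolicy deletion event above (an EIP is released only once no policy still references it) can be made concrete with a small sketch. The helper and map below are hypothetical, for illustration only, and are not code from this repository:

```go
package main

import "fmt"

// shouldReclaimEIP is a hypothetical helper mirroring the unbind rule
// described above: when an EgressPolicy is deleted, its EIP is reclaimed
// only if no remaining policy still references that EIP.
func shouldReclaimEIP(eip string, policiesByEIP map[string][]string) bool {
	return len(policiesByEIP[eip]) == 0
}

func main() {
	policiesByEIP := map[string][]string{
		"10.6.1.55": {"policy-a"}, // still referenced, must not be reclaimed
	}
	fmt.Println(shouldReclaimEIP("10.6.1.55", policiesByEIP)) // false
	fmt.Println(shouldReclaimEIP("10.6.1.56", policiesByEIP)) // true
}
```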
diff --git a/docs/proposal/01-egress-gateway/EgressGateway.md b/docs/proposal/01-egress-gateway/EgressGateway.md index f9bf65b9e..ff1759567 100644 --- a/docs/proposal/01-egress-gateway/EgressGateway.md +++ b/docs/proposal/01-egress-gateway/EgressGateway.md @@ -2,7 +2,7 @@ ### CRDS -The egress gateway model abstracts three Custom Resource Definitions (CRDs): `EgressNode` , `EgressNode` and `EgressGatewayPolicy`. They are cluster scoped CRDs. +The egress gateway model abstracts three Custom Resource Definitions (CRDs): `EgressGateway`, `EgressTunnel` and `EgressGatewayPolicy`. They are cluster scoped CRDs. #### EgressGateway ```yaml @@ -38,10 +38,10 @@ status: * `ipv4` address list. * `ipv6` address list. -#### EgressNode +#### EgressTunnel ```yaml apiVersion: egressgateway.spidernet.io/v1 -kind: EgressNode +kind: EgressTunnel metadata: name: "node1" spec: @@ -55,10 +55,10 @@ status: physicalInterfaceIPv6: "" ``` -The `EgressNode` CRD stores vxlan tunnel information, which is generated by the Controller from the Node CR. +The `EgressTunnel` CRD stores vxlan tunnel information, which is generated by the Controller from the Node CR. * status - * `phase` indicates the status of EgressNode. If 'Ready' has been assigned and the tunnel has been built, 'Pending' is waiting for IP assignment, 'Init' succeeds in assigning the tunnel IP address, and 'Failed' fails to assign the tunnel IP address. + * `phase` indicates the status of the EgressTunnel. `Ready` means the tunnel IP has been assigned and the tunnel has been built, `Pending` means the tunnel is waiting for an IP assignment, `Init` means the tunnel IP address was assigned successfully, and `Failed` means the tunnel IP address could not be assigned. * `vxlanIPv4IP` field represents the IPv4 address of VXLAN tunnel. * `vxlanIPv6IP` field represents the IPv6 address of VXLAN tunnel. * `tunnelMac` field represents the MAC address of IPv4 VXLAN tunnel Interface. @@ -164,10 +164,10 @@ Controller consists of Webhook Validator and Reconcile Flow. -Controller has 2 control processes, the first Watch cluster nodes, generate tunnel IP address and MAC address for Node, then `Create` or `Update` EgressNode CR Status. The second control flow watch `EgressNode` and `Egressgateway`, sync match node list from `labelSelector`, election egress gateway node. +The Controller has two control loops: the first watches cluster Nodes, generates a tunnel IP address and MAC address for each Node, then creates or updates the EgressTunnel CR status; the second watches `EgressTunnel` and `EgressGateway`, syncs the matched node list from the `labelSelector`, and elects the egress gateway node. ### Agent -Agent has two control processes, the first Watch `EgressNode` CR, which manages node tunnel, and node tunnel is a pluggable interface that can be replaced by Geneve. The second control process manages datapath policy, which watches `EgressNode`, `EgressGateway` and `Egresspolicy`, and sends them to the host through the police interface. It is currently implemented by a combination of *ipset*, *iptables*, and *route*, and it can be replaced by *eBPF*. +The Agent has two control loops: the first watches the `EgressTunnel` CR and manages the node tunnel; the node tunnel is a pluggable interface that can be replaced by Geneve. The second manages the datapath policy: it watches `EgressTunnel`, `EgressGateway` and `EgressPolicy`, and applies the result to the host through the police interface. This is currently implemented by a combination of *ipset*, *iptables*, and *route*, and it can be replaced by *eBPF*.
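The four phase values described above correspond to the constants this diff later defines in `pkg/k8s/apis/v1beta1/egresstunnel_types.go`. A minimal sketch of how a consumer might gate on them follows; `usableForEgress` is a hypothetical helper, not part of the API:

```go
package main

import "fmt"

// EgressTunnelPhase mirrors the phase values defined in
// pkg/k8s/apis/v1beta1/egresstunnel_types.go later in this diff.
type EgressTunnelPhase string

const (
	EgressTunnelPending EgressTunnelPhase = "Pending" // waiting for tunnel IP assignment
	EgressTunnelInit    EgressTunnelPhase = "Init"    // tunnel IP assigned successfully
	EgressTunnelFailed  EgressTunnelPhase = "Failed"  // tunnel IP assignment failed
	EgressTunnelReady   EgressTunnelPhase = "Ready"   // tunnel IP assigned and tunnel built
)

// usableForEgress is a hypothetical helper: only a Ready tunnel carries traffic.
func usableForEgress(p EgressTunnelPhase) bool {
	return p == EgressTunnelReady
}

func main() {
	phases := []EgressTunnelPhase{EgressTunnelPending, EgressTunnelInit, EgressTunnelFailed, EgressTunnelReady}
	for _, p := range phases {
		fmt.Printf("%-8s usable=%v\n", p, usableForEgress(p))
	}
}
```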
diff --git a/docs/proposal/02-egress-node/EgressNode-zh_CN.md b/docs/proposal/02-egress-node/EgressTunnel-zh_CN.md similarity index 59% rename from docs/proposal/02-egress-node/EgressNode-zh_CN.md rename to docs/proposal/02-egress-node/EgressTunnel-zh_CN.md index 9392c46cf..72618c2e0 100644 --- a/docs/proposal/02-egress-node/EgressNode-zh_CN.md +++ b/docs/proposal/02-egress-node/EgressTunnel-zh_CN.md @@ -1,8 +1,8 @@ -## EgressNode CRD +## EgressTunnel CRD ```yaml apiVersion: egressgateway.spidernet.io/v1 -kind: EgressNode +kind: EgressTunnel metadata: name: "node1" spec: @@ -20,7 +20,7 @@ status: 字段说明 * status - * `phase` 表示 EgressNode 的状态,’Ready’ 隧道IP已分配,且隧道已建成,’Pending’ 等待分配IP,’Init’ 分配隧道 IP 成功,’Failed’ 隧道 IP 分配失败 + * `phase` 表示 EgressTunnel 的状态,’Ready’ 隧道 IP 已分配,且隧道已建成,’Pending’ 等待分配 IP,’Init’ 分配隧道 IP 成功,’Failed’ 隧道 IP 分配失败 * `vxlanIPv4IP` 隧道 IPV4 地址 * `vxlanIPv6IP` 隧道 IPV6 地址 * `tunnelMac` 隧道 Mac 地址 @@ -33,22 +33,22 @@ status: ### 初始化 1. 从 CM中获取 IPv4、IPv6 及对应的 CIDR -2. 会检查node 是否有对应的 EgressNode,没有的话就创建对应的EgressNode,且状态设置为 “pending”。有隧道 IP 则将 IP 与节点绑定,绑定前会检查 IP 是否合法,不合法则将状态设置为 “Pending” +2. 会检查 Node 是否有对应的 EgressTunnel,没有的话就创建对应的 EgressTunnel,且状态设置为 “Pending”。有隧道 IP 则将 IP 与节点绑定,绑定前会检查 IP 是否合法,不合法则将状态设置为 “Pending” ### 节点事件: -- 删除事件:删除对应的 EgressNode -- 其他事件:如果没有对应的 EgressNode,则创建 EgressNode -- 其他事件:如果有对应的 EgressNode,则对EgressNode进行校验。校验逻辑如下: +- 删除事件:删除对应的 EgressTunnel +- 其他事件:如果没有对应的 EgressTunnel,则创建 EgressTunnel +- 其他事件:如果有对应的 EgressTunnel,则对 EgressTunnel 进行校验。校验逻辑如下: - - 无隧道IP,将状态置为 “Pending” 如果有隧道IP,判断是否合法,不合法,就将状态置为 “Pending” 如果合法,校验 IP 是否已分配,如果已分配,且分配给其他节点了,则将状态置为 “Pending” 未分配给其他节点,就分配给本 “EgressNode”,将状态设置为 “Init” + 未分配给其他节点,就分配给本 “EgressTunnel”,将状态设置为 “Init” 如果已分配,且就是分配给本节点的,则将状态设置为 “Init” -### EgressNode事件: -- 删除事件:先释放IP。如果 EgressNode 对应的节点存在,则释放IP,重新创建 EgressNode。 -- 其他事件:如果 EgressNode 状态为 “Init” 或 者“Ready” 时,不做任何处理。如果不是,则分配 IP,分配成功将状态设置为 “Init”,分配失败将状态设置为 “Failed”。这里是全局唯一会分配隧道 IP 的地方 +### EgressTunnel 事件: +- 删除事件:先释放 IP。如果 EgressTunnel 对应的节点存在,则释放 IP,重新创建 EgressTunnel。 +- 其他事件:如果 EgressTunnel 状态为 “Init” 或者 “Ready” 时,不做任何处理。如果不是,则分配 IP,分配成功将状态设置为 “Init”,分配失败将状态设置为 “Failed”。这里是全局唯一会分配隧道 IP 的地方 ## 分配隧道 IP diff --git a/docs/proposal/03-egress-ip/README_zh-CN.md b/docs/proposal/03-egress-ip/README_zh-CN.md index d4f7ba889..90ab6ad78 100644 --- a/docs/proposal/03-egress-ip/README_zh-CN.md +++ b/docs/proposal/03-egress-ip/README_zh-CN.md @@ -24,13 +24,13 @@ ### CRD -#### EgressNode +#### EgressTunnel 用于记录跨节点通信的隧道网卡信息。集群级资源,与 Kubernetes Node 资源名称一一对应。 ```yaml apiVersion: egressgateway.spidernet.io/v1beta1 -kind: EgressNode +kind: EgressTunnel metadata: name: "node1" status: diff --git a/docs/reference/EgressNode.en.md b/docs/reference/EgressTunnel.en.md similarity index 81% rename from docs/reference/EgressNode.en.md rename to docs/reference/EgressTunnel.en.md index b91b7a5fb..0da5504d5 100644 --- a/docs/reference/EgressNode.en.md +++ b/docs/reference/EgressTunnel.en.md @@ -1,8 +1,8 @@ -The EgressNode CRD is used to record tunnel network interface information for cross-node communication. It is a cluster scope resource that corresponds one-to-one with the Kubernetes Node resource name. +The EgressTunnel CRD is used to record tunnel network interface information for cross-node communication. It is a cluster-scoped resource that corresponds one-to-one with the Kubernetes Node resource name. ```yaml apiVersion: egressgateway.spidernet.io/v1beta1 -kind: EgressNode +kind: EgressTunnel metadata: name: "node1" status: diff --git a/docs/reference/EgressNode.zh.md b/docs/reference/EgressTunnel.zh.md similarity index 84% rename from docs/reference/EgressNode.zh.md rename to docs/reference/EgressTunnel.zh.md index d3b868938..57de2bd2a 100644 --- a/docs/reference/EgressNode.zh.md +++ b/docs/reference/EgressTunnel.zh.md @@ -1,8 +1,8 @@ -EgressNode CRD 用于记录跨节点通信的隧道网卡信息。这是一个集群级资源,它与 Kubernetes Node 资源名称一一对应。 +EgressTunnel CRD 用于记录跨节点通信的隧道网卡信息。这是一个集群级资源,它与 Kubernetes Node 资源名称一一对应。 ```yaml apiVersion: egressgateway.spidernet.io/v1beta1 -kind: EgressNode +kind: EgressTunnel metadata: name: "node1" status:
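The proposal above notes that the controller is the only place a tunnel IP is ever allocated. A minimal sketch of that allocate/release cycle is shown below, using the `github.com/cilium/ipam/service/ipallocator` package this diff's tests already import; the CIDR is an arbitrary example, the two-value `NewCIDRRange` signature is assumed, and the controller's actual call pattern may differ:

```go
package main

import (
	"fmt"
	"net"

	"github.com/cilium/ipam/service/ipallocator"
)

func main() {
	// Arbitrary example CIDR, not the project's default tunnel range.
	_, cidr, err := net.ParseCIDR("172.31.0.0/16")
	if err != nil {
		panic(err)
	}
	alloc, err := ipallocator.NewCIDRRange(cidr)
	if err != nil {
		panic(err)
	}

	// Allocation: per the proposal, only the controller ever does this.
	ip, err := alloc.AllocateNext()
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned tunnel IP:", ip)

	// Release: on EgressTunnel deletion, the IP is freed before the object is removed.
	if err := alloc.Release(ip); err != nil {
		panic(err)
	}
}
```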
diff --git a/docs/usage/Install.zh.md b/docs/usage/Install.zh.md index e93b2baa1..e9e283d5b 100644 --- a/docs/usage/Install.zh.md +++ b/docs/usage/Install.zh.md @@ -169,7 +169,7 @@ EgressPolicy 对象是租户级别的,因此,它务必创建在 selected 应 3. 查看 EgressPolicy 的状态 $ kubectl get EgressPolicy -A - NAMESPACE NAME GATEWAY IPV4 IPV6 EGRESSNODE + NAMESPACE NAME GATEWAY IPV4 IPV6 EGRESSTUNNEL default test default 172.22.0.110 egressgateway-worker2 $ kubectl get EgressPolicy test -o yaml diff --git a/docs/usage/Uninstall.en.md b/docs/usage/Uninstall.en.md index 3f0027d00..e73dc7cd5 100644 --- a/docs/usage/Uninstall.en.md +++ b/docs/usage/Uninstall.en.md @@ -67,10 +67,10 @@ To ensure that the running applications are not affected before uninstalling Egr It is worth noting that before uninstalling EgressGateway, it is recommended to back up related data and ensure that the uninstall operation does not affect the ongoing business applications. -4. During the uninstallation process, sometimes the EgressNodes CRD of EgressGateway may remain in a waiting state for deletion. If you encounter this situation, you can try using the following command to resolve the issue: +4. During the uninstallation process, sometimes the EgressTunnels CRD of EgressGateway may remain in a waiting state for deletion. If you encounter this situation, you can try using the following command to resolve the issue: ```shell - kubectl patch crd egressnodes.egressgateway.spidernet.io -p '{"metadata":{"finalizers": []}}' --type=merge + kubectl patch crd egresstunnels.egressgateway.spidernet.io -p '{"metadata":{"finalizers": []}}' --type=merge ``` This command removes the finalizer in the EgressGateway CRD, allowing Kubernetes to delete it. This issue is caused by the controller-manager, and we are monitoring the Kubernetes team's progress on fixing it. diff --git a/docs/usage/Uninstall.zh.md b/docs/usage/Uninstall.zh.md index 5fdb3b95b..c1b6fcc12 100644 --- a/docs/usage/Uninstall.zh.md +++ b/docs/usage/Uninstall.zh.md @@ -67,10 +67,10 @@ 需要注意的是,在卸载 EgressGateway 之前,建议先备份相关数据,并确保卸载操作不会影响正在使用的业务应用。 -4. 在卸载过程中,有时候会遇到 EgressGateway 的 EgressNodes CRD 一直处于等待删除的情况。如果您遇到了这种情况,可以尝试使用下面的命令解决问题: +4. 
在卸载过程中,有时候会遇到 EgressGateway 的 EgressTunnels CRD 一直处于等待删除的情况。如果您遇到了这种情况,可以尝试使用下面的命令解决问题: ```shell - kubectl patch crd egressnodes.egressgateway.spidernet.io -p '{"metadata":{"finalizers": []}}' --type=merge + kubectl patch crd egresstunnels.egressgateway.spidernet.io -p '{"metadata":{"finalizers": []}}' --type=merge ``` 这个命令的作用是删除 EgressGateway CRD 中的 finalizer,从而允许 Kubernetes 删除这个 CRD。此问题是由 controller-manager 引起的,我们正在关注 Kubernetes 团队对此问题的修复情况。 diff --git a/pkg/agent/agent.go b/pkg/agent/agent.go index 6272c0a0f..b4f2fab02 100644 --- a/pkg/agent/agent.go +++ b/pkg/agent/agent.go @@ -68,7 +68,7 @@ func New(cfg *config.Config) (types.Service, error) { metrics.RegisterMetricCollectors() - err = newEgressNodeController(mgr, cfg, log) + err = newEgressTunnelController(mgr, cfg, log) if err != nil { return nil, fmt.Errorf("failed to create node controller: %w", err) } diff --git a/pkg/agent/police.go b/pkg/agent/police.go index 03ea35544..bef574a72 100644 --- a/pkg/agent/police.go +++ b/pkg/agent/police.go @@ -196,7 +196,7 @@ func (r *policeReconciler) initApplyPolicy() error { node := new(egressv1.EgressTunnel) err := r.client.Get(context.Background(), types.NamespacedName{Name: val.NodeName}, node) if err != nil { - r.log.Error(err, "failed to get egress node, skip building rule of policy") + r.log.Error(err, "failed to get egress tunnel, skip building rule of policy") continue } policyName := policy.Name @@ -521,7 +521,7 @@ func buildNatStaticRule(base uint32) map[string][]iptables.Rule { Match: iptables.MatchCriteria{}.MarkMatchesWithMask(base, 0xffffffff), Action: iptables.AcceptAction{}, Comment: []string{ - "Accept for egress traffic from pod going to EgressNode", + "Accept for egress traffic from pod going to EgressTunnel", }, }, { @@ -712,14 +712,14 @@ func buildFilterStaticRule(base uint32) map[string][]iptables.Rule { Match: iptables.MatchCriteria{}.MarkMatchesWithMask(base, 0xffffffff), Action: iptables.AcceptAction{}, Comment: []string{ - "Accept for egress traffic from pod going to EgressNode", + "Accept for egress traffic from pod going to EgressTunnel", }, }}, "OUTPUT": {{ Match: iptables.MatchCriteria{}.MarkMatchesWithMask(base, 0xffffffff), Action: iptables.AcceptAction{}, Comment: []string{ - "Accept for egress traffic from pod going to EgressNode", + "Accept for egress traffic from pod going to EgressTunnel", }, }}, } @@ -732,14 +732,14 @@ func buildMangleStaticRule(base uint32) map[string][]iptables.Rule { Match: iptables.MatchCriteria{}.MarkMatchesWithMask(base, 0xff000000), Action: iptables.SetMaskedMarkAction{Mark: base, Mask: 0xffffffff}, Comment: []string{ - "Accept for egress traffic from pod going to EgressNode", + "Accept for egress traffic from pod going to EgressTunnel", }, }}, "POSTROUTING": {{ Match: iptables.MatchCriteria{}.MarkMatchesWithMask(base, 0xffffffff), Action: iptables.AcceptAction{}, Comment: []string{ - "Accept for egress traffic from pod going to EgressNode", + "Accept for egress traffic from pod going to EgressTunnel", }, }}, "PREROUTING": {{ diff --git a/pkg/agent/vxlan.go b/pkg/agent/vxlan.go index 572a0d3a2..f3cf0389b 100644 --- a/pkg/agent/vxlan.go +++ b/pkg/agent/vxlan.go @@ -58,7 +58,7 @@ func (r *vxlanReconciler) Reconcile(ctx context.Context, req reconcile.Request) log.Info("reconciling") switch kind { case "EgressTunnel": - return r.reconcileEgressNode(ctx, newReq, log) + return r.reconcileEgressTunnel(ctx, newReq, log) case "EgressGateway": return r.reconcileEgressGateway(ctx, newReq, log) default: @@ -67,14 +67,14 @@ func (r 
*vxlanReconciler) Reconcile(ctx context.Context, req reconcile.Request) } func (r *vxlanReconciler) reconcileEgressGateway(ctx context.Context, req reconcile.Request, log logr.Logger) (reconcile.Result, error) { - egressNodeMap, err := r.getEgressNodeByEgressGateway(ctx, req.Name) + egressTunnelMap, err := r.getEgressTunnelByEgressGateway(ctx, req.Name) if err != nil { r.log.Error(err, "vxlan reconcile egress gateway") return reconcile.Result{}, err } r.peerMap.Range(func(key string, val vxlan.Peer) bool { - if _, ok := egressNodeMap[key]; ok { + if _, ok := egressTunnelMap[key]; ok { err = r.ruleRoute.Ensure(r.cfg.FileConfig.VXLAN.Name, val.IPv4, val.IPv6, val.Mark, val.Mark) if err != nil { r.log.Error(err, "vxlan reconcile EgressGateway with error") @@ -86,8 +86,8 @@ func (r *vxlanReconciler) reconcileEgressGateway(ctx context.Context, req reconc return reconcile.Result{}, nil } -// reconcileEgressNode -func (r *vxlanReconciler) reconcileEgressNode(ctx context.Context, req reconcile.Request, log logr.Logger) (reconcile.Result, error) { +// reconcileEgressTunnel +func (r *vxlanReconciler) reconcileEgressTunnel(ctx context.Context, req reconcile.Request, log logr.Logger) (reconcile.Result, error) { node := new(egressv1.EgressTunnel) deleted := false err := r.client.Get(ctx, req.NamespacedName, node) @@ -108,7 +108,7 @@ func (r *vxlanReconciler) reconcileEgressNode(ctx context.Context, req reconcile r.peerMap.Delete(req.Name) err := r.ensureRoute() if err != nil { - log.Error(err, "delete egress node, ensure route with error") + log.Error(err, "delete egress tunnel, ensure route with error") } } return reconcile.Result{}, nil @@ -153,15 +153,15 @@ func (r *vxlanReconciler) reconcileEgressNode(ctx context.Context, req reconcile r.peerMap.Store(node.Name, peer) err = r.ensureRoute() if err != nil { - log.Error(err, "add egress node, ensure route with error") + log.Error(err, "add egress tunnel, ensure route with error") } - egressNodeMap, err := r.listEgressNode(ctx) + egressTunnelMap, err := r.listEgressTunnel(ctx) if err != nil { return reconcile.Result{}, err } - if _, ok := egressNodeMap[node.Name]; ok { - // if it is egressnode + if _, ok := egressTunnelMap[node.Name]; ok { + // if it is egresstunnel err = r.ruleRoute.Ensure(r.cfg.FileConfig.VXLAN.Name, peer.IPv4, peer.IPv6, peer.Mark, peer.Mark) if err != nil { r.log.Error(err, "ensure vxlan link") @@ -171,7 +171,7 @@ func (r *vxlanReconciler) reconcileEgressNode(ctx context.Context, req reconcile return reconcile.Result{}, nil } - err = r.ensureEgressNodeStatus(node) + err = r.ensureEgressTunnelStatus(node) if err != nil { return reconcile.Result{}, err } @@ -179,7 +179,7 @@ func (r *vxlanReconciler) reconcileEgressNode(ctx context.Context, req reconcile return reconcile.Result{}, nil } -func (r *vxlanReconciler) getEgressNodeByEgressGateway(ctx context.Context, name string) (map[string]struct{}, error) { +func (r *vxlanReconciler) getEgressTunnelByEgressGateway(ctx context.Context, name string) (map[string]struct{}, error) { res := make(map[string]struct{}) egw := &egressv1.EgressGateway{} err := r.client.Get(ctx, types.NamespacedName{Name: name}, egw) @@ -195,7 +195,7 @@ func (r *vxlanReconciler) getEgressNodeByEgressGateway(ctx context.Context, name return res, nil } -func (r *vxlanReconciler) listEgressNode(ctx context.Context) (map[string]struct{}, error) { +func (r *vxlanReconciler) listEgressTunnel(ctx context.Context) (map[string]struct{}, error) { list := &egressv1.EgressGatewayList{} err := r.client.List(ctx, list) if err != 
nil { @@ -211,7 +211,7 @@ func (r *vxlanReconciler) listEgressNode(ctx context.Context) (map[string]struct return res, nil } -func (r *vxlanReconciler) ensureEgressNodeStatus(node *egressv1.EgressTunnel) error { +func (r *vxlanReconciler) ensureEgressTunnelStatus(node *egressv1.EgressTunnel) error { needUpdate := false if r.version() == 4 && node.Status.Tunnel.Parent.IPv4 == "" { @@ -223,7 +223,7 @@ func (r *vxlanReconciler) ensureEgressNodeStatus(node *egressv1.EgressTunnel) er } if needUpdate { - err := r.updateEgressNodeStatus(node, r.version()) + err := r.updateEgressTunnelStatus(node, r.version()) if err != nil { return err } @@ -236,7 +236,7 @@ func (r *vxlanReconciler) ensureEgressNodeStatus(node *egressv1.EgressTunnel) er return nil } -func (r *vxlanReconciler) updateEgressNodeStatus(node *egressv1.EgressTunnel, version int) error { +func (r *vxlanReconciler) updateEgressTunnelStatus(node *egressv1.EgressTunnel, version int) error { parent, err := r.getParent(version) if err != nil { return err @@ -283,7 +283,7 @@ func (r *vxlanReconciler) updateEgressNodeStatus(node *egressv1.EgressTunnel, ve // calculate whether the state has changed, update if the status changes. vtep := r.parseVTEP(node.Status) if vtep != nil { - phase := egressv1.EgressNodeReady + phase := egressv1.EgressTunnelReady if node.Status.Phase != phase { needUpdate = true node.Status.Phase = phase @@ -309,7 +309,7 @@ func (r *vxlanReconciler) updateEgressNodeStatus(node *egressv1.EgressTunnel, ve return nil } -func (r *vxlanReconciler) parseVTEP(status egressv1.EgressNodeStatus) *vxlan.Peer { +func (r *vxlanReconciler) parseVTEP(status egressv1.EgressTunnelStatus) *vxlan.Peer { var ipv4 *net.IP var ipv6 *net.IP ready := true @@ -386,7 +386,7 @@ func (r *vxlanReconciler) keepVXLAN() { } } - err := r.updateEgressNodeStatus(nil, r.version()) + err := r.updateEgressTunnelStatus(nil, r.version()) if err != nil { r.log.Error(err, "update EgressTunnel status") time.Sleep(time.Second) @@ -415,12 +415,12 @@ func (r *vxlanReconciler) keepVXLAN() { markMap := make(map[int]struct{}) r.peerMap.Range(func(key string, val vxlan.Peer) bool { - egressNodeMap, err := r.listEgressNode(context.Background()) + egressTunnelMap, err := r.listEgressTunnel(context.Background()) if err != nil { - r.log.Error(err, "ensure vxlan list EgressNode with error") + r.log.Error(err, "ensure vxlan list EgressTunnel with error") return false } - if _, ok := egressNodeMap[key]; ok && val.Mark != 0 { + if _, ok := egressTunnelMap[key]; ok && val.Mark != 0 { markMap[val.Mark] = struct{}{} err = r.ruleRoute.Ensure(r.cfg.FileConfig.VXLAN.Name, val.IPv4, val.IPv6, val.Mark, val.Mark) if err != nil { @@ -496,7 +496,7 @@ func parseMarkToInt(mark string) (int, error) { return i32, nil } -func newEgressNodeController(mgr manager.Manager, cfg *config.Config, log logr.Logger) error { +func newEgressTunnelController(mgr manager.Manager, cfg *config.Config, log logr.Logger) error { ruleRoute := route.NewRuleRoute(log) r := &vxlanReconciler{ diff --git a/pkg/controller/controller.go b/pkg/controller/controller.go index dafe77747..3cf93c93d 100644 --- a/pkg/controller/controller.go +++ b/pkg/controller/controller.go @@ -15,7 +15,7 @@ import ( runtimeWebhook "sigs.k8s.io/controller-runtime/pkg/webhook" "github.com/spidernet-io/egressgateway/pkg/config" - "github.com/spidernet-io/egressgateway/pkg/controller/egress_cluster_info" + egressclusterinfo "github.com/spidernet-io/egressgateway/pkg/controller/egress_cluster_info" 
"github.com/spidernet-io/egressgateway/pkg/controller/metrics" "github.com/spidernet-io/egressgateway/pkg/controller/webhook" "github.com/spidernet-io/egressgateway/pkg/egressgateway" @@ -79,9 +79,9 @@ func New(cfg *config.Config) (types.Service, error) { return nil, fmt.Errorf("failed to create egress cluster policy controller: %w", err) } - err = newEgressNodeController(mgr, log, cfg) + err = newEgressTunnelController(mgr, log, cfg) if err != nil { - return nil, fmt.Errorf("failed to create egress node controller: %w", err) + return nil, fmt.Errorf("failed to create egress tunnel controller: %w", err) } err = egressclusterinfo.NewEgressClusterInfoController(mgr, log) if err != nil { diff --git a/pkg/controller/egress_tunnel.go b/pkg/controller/egress_tunnel.go index e75d76d55..637f6db83 100644 --- a/pkg/controller/egress_tunnel.go +++ b/pkg/controller/egress_tunnel.go @@ -58,10 +58,10 @@ var ( ) var ( - egressNodeFinalizers = "egressgateway.spidernet.io/egressnode" + egressTunnelFinalizers = "egressgateway.spidernet.io/egresstunnel" ) -func egressNodeControllerMetricCollectors() []prometheus.Collector { +func egressTunnelControllerMetricCollectors() []prometheus.Collector { return []prometheus.Collector{ countNumIPAllocateNextCalls, countNumIPReleaseCalls, @@ -87,11 +87,11 @@ func (r *egReconciler) Reconcile(ctx context.Context, req reconcile.Request) (re } r.doOnce.Do(func() { - r.log.Info("first reconcile of egressnode controller, init egressnode") + r.log.Info("first reconcile of egresstunnel controller, init egresstunnel") redo: - err := r.initEgressNode() + err := r.initEgressTunnel() if err != nil { - r.log.Error(err, "init egress node controller with error") + r.log.Error(err, "init egress tunnel controller with error") time.Sleep(time.Second) goto redo } @@ -109,28 +109,28 @@ func (r *egReconciler) Reconcile(ctx context.Context, req reconcile.Request) (re } } -// reconcileEGN reconcile egress node +// reconcileEGN reconcile egress tunnel // goal: -// - update egress node +// - update egress tunnel func (r *egReconciler) reconcileEGN(ctx context.Context, req reconcile.Request, log logr.Logger) (reconcile.Result, error) { deleted := false - egressnode := new(egressv1.EgressTunnel) - err := r.client.Get(ctx, req.NamespacedName, egressnode) + egresstunnel := new(egressv1.EgressTunnel) + err := r.client.Get(ctx, req.NamespacedName, egresstunnel) if err != nil { if !errors.IsNotFound(err) { return reconcile.Result{Requeue: true}, err } deleted = true } - deleted = deleted || !egressnode.GetDeletionTimestamp().IsZero() + deleted = deleted || !egresstunnel.GetDeletionTimestamp().IsZero() if deleted { - if len(egressnode.Finalizers) > 0 { + if len(egresstunnel.Finalizers) > 0 { // For the existence of Node, when the user manually deletes EgressTunnel, // we first release the EgressTunnel and then regenerate it. 
- err := r.releaseEgressNode(*egressnode, log, func() error { - cleanFinalizers(egressnode) - err = r.client.Update(context.Background(), egressnode) + err := r.releaseEgressTunnel(*egresstunnel, log, func() error { + cleanFinalizers(egresstunnel) + err = r.client.Update(context.Background(), egresstunnel) if err != nil { return err } @@ -144,7 +144,7 @@ func (r *egReconciler) reconcileEGN(ctx context.Context, req reconcile.Request, return reconcile.Result{Requeue: false}, nil } - err = r.keepEgressNode(*egressnode, log) + err = r.keepEgressTunnel(*egresstunnel, log) if err != nil { return reconcile.Result{Requeue: true}, err } @@ -154,7 +154,7 @@ func (r *egReconciler) reconcileEGN(ctx context.Context, req reconcile.Request, func cleanFinalizers(node *egressv1.EgressTunnel) { for i, item := range node.Finalizers { - if item == egressNodeFinalizers { + if item == egressTunnelFinalizers { node.Finalizers = append(node.Finalizers[:i], node.Finalizers[i+1:]...) } } @@ -177,15 +177,15 @@ func (r *egReconciler) reconcileNode(ctx context.Context, req reconcile.Request, deleted = deleted || !node.GetDeletionTimestamp().IsZero() if deleted { - egressNode := new(egressv1.EgressTunnel) - err := r.client.Get(ctx, req.NamespacedName, egressNode) + egressTunnel := new(egressv1.EgressTunnel) + err := r.client.Get(ctx, req.NamespacedName, egressTunnel) if err != nil { if !errors.IsNotFound(err) { return reconcile.Result{Requeue: true}, err } return reconcile.Result{}, nil } - err = r.deleteEgressNode(*egressNode, log) + err = r.deleteEgressTunnel(*egressTunnel, log) if err != nil { return reconcile.Result{Requeue: true}, err } @@ -195,11 +195,11 @@ func (r *egReconciler) reconcileNode(ctx context.Context, req reconcile.Request, en := new(egressv1.EgressTunnel) err = r.client.Get(ctx, req.NamespacedName, en) if err != nil { - log.Info("create egress node") + log.Info("create egress tunnel") if !errors.IsNotFound(err) { return reconcile.Result{Requeue: true}, err } - err := r.createEgressNode(ctx, node.Name, log) + err := r.createEgressTunnel(ctx, node.Name, log) if err != nil { return reconcile.Result{Requeue: true}, err } @@ -209,21 +209,21 @@ func (r *egReconciler) reconcileNode(ctx context.Context, req reconcile.Request, return reconcile.Result{Requeue: false}, nil } -func (r *egReconciler) createEgressNode(ctx context.Context, name string, log logr.Logger) error { - log.V(1).Info("try to create egress node") - egressNode := &egressv1.EgressTunnel{ObjectMeta: metav1.ObjectMeta{ +func (r *egReconciler) createEgressTunnel(ctx context.Context, name string, log logr.Logger) error { + log.V(1).Info("try to create egress tunnel") + egressTunnel := &egressv1.EgressTunnel{ObjectMeta: metav1.ObjectMeta{ Name: name, - Finalizers: []string{egressNodeFinalizers}, + Finalizers: []string{egressTunnelFinalizers}, }} - err := r.client.Create(ctx, egressNode) + err := r.client.Create(ctx, egressTunnel) if err != nil { - return fmt.Errorf("failed to create egress node: %v", err) + return fmt.Errorf("failed to create egress tunnel: %v", err) } - log.V(1).Info("create egress node succeeded") + log.V(1).Info("create egress tunnel succeeded") return nil } -func (r *egReconciler) releaseEgressNode(node egressv1.EgressTunnel, log logr.Logger, commit func() error) error { +func (r *egReconciler) releaseEgressTunnel(node egressv1.EgressTunnel, log logr.Logger, commit func() error) error { rollback := make([]func(), 0) var err error @@ -236,12 +236,12 @@ func (r *egReconciler) releaseEgressNode(node egressv1.EgressTunnel, log 
logr.Lo }() if node.Status.Mark != "" { - log.V(1).Info("try to release egress node mark", "mark", node.Status.Mark) + log.V(1).Info("try to release egress tunnel mark", "mark", node.Status.Mark) err := r.mark.Release(node.Status.Mark) if err != nil { - return fmt.Errorf("failed to release egress node mark: %v", err) + return fmt.Errorf("failed to release egress tunnel mark: %v", err) } - log.V(1).Info("release egress node mark succeeded", "mark", node.Status.Mark) + log.V(1).Info("release egress tunnel mark succeeded", "mark", node.Status.Mark) countNumMarkReleaseCalls.Inc() rollback = append(rollback, func() { @@ -249,33 +249,33 @@ func (r *egReconciler) releaseEgressNode(node egressv1.EgressTunnel, log logr.Lo }) } if node.Status.Tunnel.IPv4 != "" && r.allocatorV4 != nil { - log.V(1).Info("try to release egress node tunnel ipv4", "ipv4", node.Status.Tunnel.IPv4) + log.V(1).Info("try to release egress tunnel ipv4", "ipv4", node.Status.Tunnel.IPv4) ip := net.ParseIP(node.Status.Tunnel.IPv4) if ipv4 := ip.To4(); ipv4 != nil { err := r.allocatorV4.Release(ipv4) if err != nil { - return fmt.Errorf("failed to release egress node tunnel ipv4: %v", err) + return fmt.Errorf("failed to release egress tunnel ipv4: %v", err) } countNumIPReleaseCallsIpv4.Inc() } - log.V(1).Info("release egress node ipv4 succeeded", "ipv4", node.Status.Tunnel.IPv4) + log.V(1).Info("release egress tunnel ipv4 succeeded", "ipv4", node.Status.Tunnel.IPv4) rollback = append(rollback, func() { _ = r.allocatorV4.Allocate(ip) }) } if node.Status.Tunnel.IPv6 != "" && r.allocatorV6 != nil { - log.V(1).Info("try to release egress node tunnel ipv6", "ipv6", node.Status.Tunnel.IPv6) + log.V(1).Info("try to release egress tunnel ipv6", "ipv6", node.Status.Tunnel.IPv6) ip := net.ParseIP(node.Status.Tunnel.IPv6) if ipv6 := ip.To16(); ipv6 != nil { err := r.allocatorV6.Release(ipv6) if err != nil { - return fmt.Errorf("failed to release egress node tunnel ipv6: %v", err) + return fmt.Errorf("failed to release egress tunnel ipv6: %v", err) } countNumIPReleaseCallsIpv6.Inc() } - log.V(1).Info("release egress node ipv6 succeeded", "ipv6", node.Status.Tunnel.IPv6) + log.V(1).Info("release egress tunnel ipv6 succeeded", "ipv6", node.Status.Tunnel.IPv6) rollback = append(rollback, func() { _ = r.allocatorV6.Allocate(ip) @@ -285,14 +285,14 @@ func (r *egReconciler) releaseEgressNode(node egressv1.EgressTunnel, log logr.Lo return commit() } -func (r *egReconciler) deleteEgressNode(node egressv1.EgressTunnel, log logr.Logger) error { - err := r.releaseEgressNode(node, log, func() error { - log.V(1).Info("try to delete egress node") +func (r *egReconciler) deleteEgressTunnel(node egressv1.EgressTunnel, log logr.Logger) error { + err := r.releaseEgressTunnel(node, log, func() error { + log.V(1).Info("try to delete egress tunnel") err := r.client.Delete(context.Background(), &node) if err != nil { return err } - log.V(1).Info("delete egress node succeeded") + log.V(1).Info("delete egress tunnel succeeded") return nil }) if err != nil {
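The hunks below keep `generateMACAddress` unchanged. For readers unfamiliar with it, one plausible way to derive a stable, locally administered MAC from a node name is sketched here; the SHA-256-based derivation is an assumption for illustration, not the repository's actual algorithm:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"net"
)

// stableMACForNode sketches a deterministic MAC derivation from a node name.
// Hypothetical: the repository's generateMACAddress may differ in detail.
func stableMACForNode(nodeName string) net.HardwareAddr {
	sum := sha256.Sum256([]byte(nodeName))
	mac := net.HardwareAddr{sum[0], sum[1], sum[2], sum[3], sum[4], sum[5]}
	// Set the locally administered bit and clear the multicast bit,
	// so the address is a valid unicast MAC that cannot collide with
	// vendor-assigned hardware addresses.
	mac[0] = (mac[0] | 0x02) &^ 0x01
	return mac
}

func main() {
	fmt.Println(stableMACForNode("node1")) // same input always yields the same MAC
}
```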
succeeded") + log.V(1).Info("update egress tunnel succeeded") } return nil } -func (r *egReconciler) keepEgressNode(node egressv1.EgressTunnel, log logr.Logger) error { +func (r *egReconciler) keepEgressTunnel(node egressv1.EgressTunnel, log logr.Logger) error { rollback := make([]func(), 0) var err error needUpdate := false @@ -465,37 +465,37 @@ func (r *egReconciler) keepEgressNode(node egressv1.EgressTunnel, log logr.Logge } if needUpdate { - err := r.updateEgressNode(*newNode) + err := r.updateEgressTunnel(*newNode) if err != nil { - return fmt.Errorf("rebuild failed to update egress node: %v", err) + return fmt.Errorf("rebuild failed to update egress tunnel: %v", err) } } return nil } -func (r *egReconciler) updateEgressNode(node egressv1.EgressTunnel) error { - phase := egressv1.EgressNodeInit +func (r *egReconciler) updateEgressTunnel(node egressv1.EgressTunnel) error { + phase := egressv1.EgressTunnelInit if node.Status.Tunnel.Parent.Name == "" { - phase = egressv1.EgressNodeInit + phase = egressv1.EgressTunnelInit } if node.Status.Mark == "" { - phase = egressv1.EgressNodePending + phase = egressv1.EgressTunnelPending } if node.Status.Tunnel.IPv4 == "" && r.allocatorV4 != nil { - phase = egressv1.EgressNodePending + phase = egressv1.EgressTunnelPending } if node.Status.Tunnel.IPv6 == "" && r.allocatorV6 != nil { - phase = egressv1.EgressNodePending + phase = egressv1.EgressTunnelPending } if node.Status.Tunnel.MAC == "" { - phase = egressv1.EgressNodePending + phase = egressv1.EgressTunnelPending } node.Status.Phase = phase err := r.client.Status().Update(context.Background(), &node) if err != nil { - return fmt.Errorf("rebuild failed to update egress node: %v", err) + return fmt.Errorf("rebuild failed to update egress tunnel: %v", err) } return nil } @@ -511,7 +511,7 @@ func generateMACAddress(nodeName string) (string, error) { return hw.String(), nil } -func (r *egReconciler) initEgressNode() error { +func (r *egReconciler) initEgressTunnel() error { nodes := &egressv1.EgressTunnelList{} err := r.client.List(context.Background(), nodes) if err != nil { @@ -539,12 +539,12 @@ func (r *egReconciler) initEgressNode() error { end := time.Now() delta := end.Sub(start) - r.log.Info("rebuild egressnode cache", "total", len(nodes.Items), "speed", delta) + r.log.Info("rebuild egresstunnel cache", "total", len(nodes.Items), "speed", delta) return nil } -func newEgressNodeController(mgr manager.Manager, log logr.Logger, cfg *config.Config) error { +func newEgressTunnelController(mgr manager.Manager, log logr.Logger, cfg *config.Config) error { if cfg == nil { return fmt.Errorf("cfg can not be nil") } @@ -583,19 +583,19 @@ func newEgressNodeController(mgr manager.Manager, log logr.Logger, cfg *config.C } } - log.Info("new egressnode controller") - c, err := controller.New("egressnode", mgr, controller.Options{Reconciler: r}) + log.Info("new egresstunnel controller") + c, err := controller.New("egresstunnel", mgr, controller.Options{Reconciler: r}) if err != nil { return err } - log.Info("egressnode controller watch EgressTunnel") + log.Info("egresstunnel controller watch EgressTunnel") if err := c.Watch(source.Kind(mgr.GetCache(), &egressv1.EgressTunnel{}), handler.EnqueueRequestsFromMapFunc(utils.KindToMapFlat("EgressTunnel"))); err != nil { return fmt.Errorf("failed to watch EgressTunnel: %w", err) } - log.Info("egressnode controller watch Node") + log.Info("egresstunnel controller watch Node") if err := c.Watch(source.Kind(mgr.GetCache(), &corev1.Node{}), 
handler.EnqueueRequestsFromMapFunc(utils.KindToMapFlat("Node"))); err != nil { return fmt.Errorf("failed to watch Node: %w", err) diff --git a/pkg/controller/egress_node_test.go b/pkg/controller/egress_tunnel_test.go similarity index 85% rename from pkg/controller/egress_node_test.go rename to pkg/controller/egress_tunnel_test.go index 92fb5c849..38a1c4a0b 100644 --- a/pkg/controller/egress_node_test.go +++ b/pkg/controller/egress_tunnel_test.go @@ -11,7 +11,7 @@ import ( "github.com/cilium/ipam/service/ipallocator" "github.com/stretchr/testify/assert" corev1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/apis/meta/v1" + v1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/types" "sigs.k8s.io/controller-runtime/pkg/client" "sigs.k8s.io/controller-runtime/pkg/client/fake" @@ -30,7 +30,7 @@ type TestNodeReq struct { expRequeue bool } -func TestEgressNodeCtrlForEgressNode(t *testing.T) { +func TestEgressTunnelCtrlForEgressTunnel(t *testing.T) { cfg := &config.Config{ EnvConfig: config.EnvConfig{}, FileConfig: config.FileConfig{EnableIPv4: true, EnableIPv6: false}, @@ -40,7 +40,7 @@ func TestEgressNodeCtrlForEgressNode(t *testing.T) { &corev1.Node{ObjectMeta: v1.ObjectMeta{Name: "node1"}}, &egressv1.EgressTunnel{ ObjectMeta: v1.ObjectMeta{Name: "node1"}, - Status: egressv1.EgressNodeStatus{}, + Status: egressv1.EgressTunnelStatus{}, }, } @@ -88,36 +88,36 @@ func TestEgressNodeCtrlForEgressNode(t *testing.T) { assert.Equal(t, req.expRequeue, res.Requeue) } - egressNode := &egressv1.EgressTunnel{} + egressTunnel := &egressv1.EgressTunnel{} - err = reconciler.client.Get(ctx, types.NamespacedName{Name: "node1"}, egressNode) + err = reconciler.client.Get(ctx, types.NamespacedName{Name: "node1"}, egressTunnel) if err != nil { t.Fatal(err) } - if egressNode.Status.Mark == "" { + if egressTunnel.Status.Mark == "" { t.Fatal("mark is empty") } - if egressNode.Status.Tunnel.MAC == "" { + if egressTunnel.Status.Tunnel.MAC == "" { t.Fatal("mac is empty") } - if egressNode.Status.Tunnel.IPv4 == "" { + if egressTunnel.Status.Tunnel.IPv4 == "" { t.Fatal("ipv4 is empty") } - err = reconciler.client.Delete(ctx, egressNode) + err = reconciler.client.Delete(ctx, egressTunnel) if err != nil { t.Fatal(err) } - err = reconciler.client.Get(ctx, types.NamespacedName{Name: "node1"}, egressNode) + err = reconciler.client.Get(ctx, types.NamespacedName{Name: "node1"}, egressTunnel) if err != nil { } else { - t.Fatal("expect deleted egress node, but got one") + t.Fatal("expect deleted egress tunnel, but got one") } } -func TestEgressNodeCtrlForNode(t *testing.T) { +func TestEgressTunnelCtrlForNode(t *testing.T) { cfg := &config.Config{} node := &corev1.Node{ObjectMeta: v1.ObjectMeta{Name: "node1"}} initialObjects := []client.Object{node} @@ -161,8 +161,8 @@ func TestEgressNodeCtrlForNode(t *testing.T) { assert.Equal(t, req.expRequeue, res.Requeue) } - egressNode := &egressv1.EgressTunnel{} - err = reconciler.client.Get(ctx, types.NamespacedName{Name: "node1"}, egressNode) + egressTunnel := &egressv1.EgressTunnel{} + err = reconciler.client.Get(ctx, types.NamespacedName{Name: "node1"}, egressTunnel) if err != nil { t.Fatal(err) } @@ -180,9 +180,9 @@ func TestEgressNodeCtrlForNode(t *testing.T) { assert.Equal(t, req.expRequeue, res.Requeue) } - err = reconciler.client.Get(ctx, types.NamespacedName{Name: "node1"}, egressNode) + err = reconciler.client.Get(ctx, types.NamespacedName{Name: "node1"}, egressTunnel) if err != nil { - } else if egressNode.DeletionTimestamp.IsZero() { - t.Fatal("expect deleted egress 
node, but got one") + } else if egressTunnel.DeletionTimestamp.IsZero() { + t.Fatal("expect deleted egress tunnel, but got one") } } diff --git a/pkg/controller/webhook/validate_test.go b/pkg/controller/webhook/validate_test.go index 14273cd2b..a008bf18d 100644 --- a/pkg/controller/webhook/validate_test.go +++ b/pkg/controller/webhook/validate_test.go @@ -457,7 +457,7 @@ func TestUpdateEgressPolicy(t *testing.T) { } } -func TestValidateEgressNode(t *testing.T) { +func TestValidateEgressTunnel(t *testing.T) { ctx := context.Background() cases := map[string]struct { @@ -470,7 +470,7 @@ func TestValidateEgressNode(t *testing.T) { ObjectMeta: metav1.ObjectMeta{ Name: "node1", }, - Spec: v1beta1.EgressNodeSpec{}, + Spec: v1beta1.EgressTunnelSpec{}, }, expAllow: true, }, diff --git a/pkg/egressgateway/egress_gateway.go b/pkg/egressgateway/egress_gateway.go index 616922807..062d839f0 100644 --- a/pkg/egressgateway/egress_gateway.go +++ b/pkg/egressgateway/egress_gateway.go @@ -116,7 +116,7 @@ func (r egnReconciler) reconcileNode(ctx context.Context, req reconcile.Request, if err == nil { egw.Status.NodeList = append(egw.Status.NodeList, egress.EgressIPStatus{Name: node.Name, Status: string(egt.Status.Phase)}) } else { - egw.Status.NodeList = append(egw.Status.NodeList, egress.EgressIPStatus{Name: node.Name, Status: string(egress.EgressNodeFailed)}) + egw.Status.NodeList = append(egw.Status.NodeList, egress.EgressIPStatus{Name: node.Name, Status: string(egress.EgressTunnelFailed)}) } r.log.V(1).Info("update egress gateway status", "status", egw.Status) @@ -208,7 +208,7 @@ func (r egnReconciler) reconcileEGW(ctx context.Context, req reconcile.Request, if err == nil { perNodeMap[node.Name] = egress.EgressIPStatus{Name: node.Name, Status: string(egt.Status.Phase)} } else { - perNodeMap[node.Name] = egress.EgressIPStatus{Name: node.Name, Status: string(egress.EgressNodeFailed)} + perNodeMap[node.Name] = egress.EgressIPStatus{Name: node.Name, Status: string(egress.EgressTunnelFailed)} } isUpdate = true } @@ -246,7 +246,7 @@ func (r egnReconciler) reconcileEGW(ctx context.Context, req reconcile.Request, readyNum := 0 policyNum := 0 for _, node := range perNodeMap { - if node.Status == string(egress.EgressNodeReady) { + if node.Status == string(egress.EgressTunnelReady) { readyNum++ policyNum += len(node.Eips) } @@ -338,7 +338,7 @@ func (r egnReconciler) reconcileEGT(ctx context.Context, req reconcile.Request, egw := item.DeepCopy() // If the node is not in success state, the policy on the node is reassigned - if egt.Status.Phase != egress.EgressNodeReady { + if egt.Status.Phase != egress.EgressTunnelReady { for _, node := range egw.Status.NodeList { if node.Name != egt.Name { perNodeMap[node.Name] = node @@ -357,14 +357,14 @@ func (r egnReconciler) reconcileEGT(ctx context.Context, req reconcile.Request, } else { for _, node := range egw.Status.NodeList { if node.Name == egt.Name { - if node.Status != string(egress.EgressNodeReady) { - perNodeMap[node.Name] = egress.EgressIPStatus{Name: node.Name, Eips: node.Eips, Status: string(egress.EgressNodeReady)} + if node.Status != string(egress.EgressTunnelReady) { + perNodeMap[node.Name] = egress.EgressIPStatus{Name: node.Name, Eips: node.Eips, Status: string(egress.EgressTunnelReady)} // When the first gateway node of an egw recovers, you need to rebind the policy that references the egw readyNum := 0 policyNum := 0 for _, node := range perNodeMap { - if node.Status == string(egress.EgressNodeReady) { + if node.Status == string(egress.EgressTunnelReady) { 
readyNum++ policyNum += len(node.Eips) } @@ -734,7 +734,7 @@ func (r egnReconciler) reAllocatorPolicy(ctx context.Context, policy egress.Poli ipv4 = pi.ipv4 if len(ipv4) != 0 { perNode = GetNodeByIP(ipv4, *egw) - if nodeMap[perNode].Status != string(egress.EgressNodeReady) { + if nodeMap[perNode].Status != string(egress.EgressTunnelReady) { perNode = "" } @@ -766,7 +766,7 @@ func (r egnReconciler) reAllocatorPolicy(ctx context.Context, policy egress.Poli ipv6 = egw.Spec.Ippools.Ipv6DefaultEIP perNode = GetNodeByIP(ipv4, *egw) - if nodeMap[perNode].Status != string(egress.EgressNodeReady) { + if nodeMap[perNode].Status != string(egress.EgressTunnelReady) { perNode = "" } @@ -798,7 +798,7 @@ func (r egnReconciler) allocatorNode(selNodePolicy string, nodeMap map[string]eg perNodePolicyNum := 0 i := 0 for _, node := range nodeMap { - if node.Status != string(egress.EgressNodeReady) { + if node.Status != string(egress.EgressTunnelReady) { continue } diff --git a/pkg/k8s/apis/v1beta1/egressclusterpolicy_types.go b/pkg/k8s/apis/v1beta1/egressclusterpolicy_types.go index a469b93bb..1a02a3362 100644 --- a/pkg/k8s/apis/v1beta1/egressclusterpolicy_types.go +++ b/pkg/k8s/apis/v1beta1/egressclusterpolicy_types.go @@ -23,7 +23,7 @@ type EgressClusterPolicyList struct { // +kubebuilder:printcolumn:JSONPath=".spec.egressGatewayName",description="egressGatewayName",name="gateway",type=string // +kubebuilder:printcolumn:JSONPath=".status.eip.ipv4",description="ipv4",name="ipv4",type=string // +kubebuilder:printcolumn:JSONPath=".status.eip.ipv6",description="ipv6",name="ipv6",type=string -// +kubebuilder:printcolumn:JSONPath=".status.node",description="egressNode",name="egressNode",type=string +// +kubebuilder:printcolumn:JSONPath=".status.node",description="egressTunnel",name="egressTunnel",type=string type EgressClusterPolicy struct { metav1.TypeMeta `json:",inline"` metav1.ObjectMeta `json:"metadata"` diff --git a/pkg/k8s/apis/v1beta1/egressnode_types.go b/pkg/k8s/apis/v1beta1/egresstunnel_types.go similarity index 75% rename from pkg/k8s/apis/v1beta1/egressnode_types.go rename to pkg/k8s/apis/v1beta1/egresstunnel_types.go index feeaf3981..6caf917f6 100644 --- a/pkg/k8s/apis/v1beta1/egressnode_types.go +++ b/pkg/k8s/apis/v1beta1/egresstunnel_types.go @@ -7,7 +7,7 @@ import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) -// EgressTunnelList egress node list +// EgressTunnelList egress tunnel list // +kubebuilder:object:root=true type EgressTunnelList struct { metav1.TypeMeta `json:",inline"` @@ -29,17 +29,17 @@ type EgressTunnel struct { metav1.TypeMeta `json:",inline"` metav1.ObjectMeta `json:"metadata"` - Spec EgressNodeSpec `json:"spec,omitempty"` - Status EgressNodeStatus `json:"status,omitempty"` + Spec EgressTunnelSpec `json:"spec,omitempty"` + Status EgressTunnelStatus `json:"status,omitempty"` } -type EgressNodeSpec struct{} +type EgressTunnelSpec struct{} -type EgressNodeStatus struct { +type EgressTunnelStatus struct { // +kubebuilder:validation:Optional Tunnel Tunnel `json:"tunnel,omitempty"` // +kubebuilder:validation:Enum=Pending;Init;Failed;Ready;"" - Phase EgressNodePhase `json:"phase,omitempty"` + Phase EgressTunnelPhase `json:"phase,omitempty"` // +kubebuilder:validation:Optional Mark string `json:"mark,omitempty"` } @@ -64,17 +64,17 @@ type Parent struct { IPv6 string `json:"ipv6,omitempty"` } -type EgressNodePhase string +type EgressTunnelPhase string const ( - // EgressNodePending wait for tunnel address available - EgressNodePending EgressNodePhase = "Pending" - // EgressNodeInit 
Init tunnel address - EgressNodeInit EgressNodePhase = "Init" - // EgressNodeFailed allocate tunnel address failed - EgressNodeFailed EgressNodePhase = "Failed" - // EgressNodeReady tunnel is available - EgressNodeReady EgressNodePhase = "Ready" + // EgressTunnelPending wait for tunnel address available + EgressTunnelPending EgressTunnelPhase = "Pending" + // EgressTunnelInit Init tunnel address + EgressTunnelInit EgressTunnelPhase = "Init" + // EgressTunnelFailed allocate tunnel address failed + EgressTunnelFailed EgressTunnelPhase = "Failed" + // EgressTunnelReady tunnel is available + EgressTunnelReady EgressTunnelPhase = "Ready" ) func init() { diff --git a/pkg/k8s/apis/v1beta1/zz_generated.deepcopy.go b/pkg/k8s/apis/v1beta1/zz_generated.deepcopy.go index 97ce29e6b..f819bf8bf 100644 --- a/pkg/k8s/apis/v1beta1/zz_generated.deepcopy.go +++ b/pkg/k8s/apis/v1beta1/zz_generated.deepcopy.go @@ -572,37 +572,6 @@ func (in *EgressIPStatus) DeepCopy() *EgressIPStatus { return out } -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *EgressNodeSpec) DeepCopyInto(out *EgressNodeSpec) { - *out = *in -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EgressNodeSpec. -func (in *EgressNodeSpec) DeepCopy() *EgressNodeSpec { - if in == nil { - return nil - } - out := new(EgressNodeSpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *EgressNodeStatus) DeepCopyInto(out *EgressNodeStatus) { - *out = *in - out.Tunnel = in.Tunnel -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EgressNodeStatus. -func (in *EgressNodeStatus) DeepCopy() *EgressNodeStatus { - if in == nil { - return nil - } - out := new(EgressNodeStatus) - in.DeepCopyInto(out) - return out -} - // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. func (in *EgressPolicy) DeepCopyInto(out *EgressPolicy) { *out = *in @@ -759,6 +728,37 @@ func (in *EgressTunnelList) DeepCopyObject() runtime.Object { return nil } +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *EgressTunnelSpec) DeepCopyInto(out *EgressTunnelSpec) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EgressTunnelSpec. +func (in *EgressTunnelSpec) DeepCopy() *EgressTunnelSpec { + if in == nil { + return nil + } + out := new(EgressTunnelSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *EgressTunnelStatus) DeepCopyInto(out *EgressTunnelStatus) { + *out = *in + out.Tunnel = in.Tunnel +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new EgressTunnelStatus. +func (in *EgressTunnelStatus) DeepCopy() *EgressTunnelStatus { + if in == nil { + return nil + } + out := new(EgressTunnelStatus) + in.DeepCopyInto(out) + return out +} + // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
func (in *Eip) DeepCopyInto(out *Eip) { *out = *in diff --git a/test/doc/reliability.md b/test/doc/reliability.md index 075830d4c..43890048f 100644 --- a/test/doc/reliability.md +++ b/test/doc/reliability.md @@ -6,6 +6,6 @@ | R00002 | Use `kwok` to create 10 `Node`, create `Deployment` with 1000 replicas, create `Policy` and set `PodSelector` to match `Deployment`,
After restarting `Deployment` successfully, all matched `Pod`'s egress IP in the real node is `eip` | p3 | false | | | | R00003 | Use `kwok` to create 1000 `Node`, create EgressGateway and set `NodeSelector` to match the created 1000 `node`, `EgressGatewayStatus.NodeList` will be updated as expected.
Change `NodeSelector` to not match the created `node`, `EgressGatewayStatus.NodeList` will be updated as expected | p3 | false | | | | R00004 | Use `kwok` to create 10 `Node`, create 1000 single-replicas `Deployment`, and create 1000 `Policy` correspondingly, set `EgressIP.AllocatorPolicy` to `RR` mode,
after creating successfully `eip` will be evenly distributed on each node | p3 | false | | | -| R00005 | When the node where `eip` takes effect is shut down, `eip` will take effect to another node matching `NodeSelector`, and `egressGatewayStatus` and `EgressClusterStatus` are updated as expected, and the `EgressNode` corresponding to the shutdown node ` will be deleted and the egress IP will be accessed as expected | p3 | false | | | -| R00006 | After shutting down all nodes matched by `NodeSelector` in `egressGateway`,
`Pod`’s egress IP will be changed from `eip` to non-`eip`, `egressGatewayStatus.NodeList` will be empty, and the related `EgressIgnoreCIDR.NodeIP` will be deleted and the `EgressNode` corresponding to the shutdown node will be deleted.
After one of the `node` is turned on, `egressgateway` will recover in a short time and record the recovery time, and `eip` will be revalidated as the egress IP of `Pod`, and the `nodeIP` will be added to `EgressIgnoreCIDR.NodeIP` and `node` related information in `egressGatewayStatus.NodeList` is updated correctly,
after all boots, `eip` will only take effect on the first recovered `node`, and `EgressIgnoreCIDR.NodeIP` is updated correct | p3 | false | | | +| R00005 | When the node where `eip` takes effect is shut down, `eip` will take effect to another node matching `NodeSelector`, and `egressGatewayStatus` and `EgressClusterStatus` are updated as expected, and the `EgressTunnel` corresponding to the shutdown node will be deleted and the egress IP will be accessed as expected | p3 | false | | | +| R00006 | After shutting down all nodes matched by `NodeSelector` in `egressGateway`,

`Pod`’s egress IP will be changed from `eip` to non-`eip`, `egressGatewayStatus.NodeList` will be empty, and the related `EgressIgnoreCIDR.NodeIP` will be deleted and the `EgressTunnel` corresponding to the shutdown node will be deleted.
After one of the `node` is turned on, `egressgateway` will recover in a short time and record the recovery time, and `eip` will be revalidated as the egress IP of `Pod`, and the `nodeIP` will be added to `EgressIgnoreCIDR.NodeIP` and `node` related information in `egressGatewayStatus.NodeList` is updated correctly,
after all boots, `eip` will only take effect on the first recovered `node`, and `EgressIgnoreCIDR.NodeIP` is updated correctly | p3 | false | | | | R00007 | Restart each component in the cluster (including calico, kube-proxy) `Pod` in turn. During the restart process, the access IP to outside the cluster is the set `eip` before, and the traffic cannot be interrupted. After the cluster returns to normal, `egressgateway` The individual `cr` state of the component is correct | p1 | false | | | diff --git a/test/doc/reliability_zh.md b/test/doc/reliability_zh.md index 7795c126a..32c8a01c6 100644 --- a/test/doc/reliability_zh.md +++ b/test/doc/reliability_zh.md @@ -7,8 +7,8 @@ | R00002 | Use `kwok` to create 10 `Node`, create `Deployment` with 1000 replicas, create `Policy` and set `PodSelector` to match `Deployment`,

After restarting `Deployment` successfully, all matched `Pod`'s egress IP in the real node is `eip` | p3 | false | | | | R00003 | Use `kwok` to create 1000 `Node`, create EgressGateway and set `NodeSelector` to match the created 1000 `node`, `EgressGatewayStatus.NodeList` will be updated as expected.
Change `NodeSelector` to not match the created `node`, `EgressGatewayStatus.NodeList` will be updated as expected | p3 | false | | | | R00004 | Use `kwok` to create 10 `Node`, create 1000 single-replicas `Deployment`, and create 1000 `Policy` correspondingly, set `EgressIP.AllocatorPolicy` to `RR` mode,
after creating successfully `eip` will be evenly distributed on each node | p3 | false | | | -| R00005 | When the node where `eip` takes effect is shut down, `eip` will take effect to another node matching `NodeSelector`, and `egressGatewayStatus` and `EgressClusterStatus` are updated as expected, and the `EgressNode` corresponding to the shutdown node ` will be deleted and the egress IP will be accessed as expected | p3 | false | | | -| R00006 | After shutting down all nodes matched by `NodeSelector` in `egressGateway`,
`Pod`’s egress IP will be changed from `eip` to non-`eip`, `egressGatewayStatus.NodeList` will be empty, and the related `EgressIgnoreCIDR.NodeIP` will be deleted and the `EgressNode` corresponding to the shutdown node will be deleted.
After one of the `node` is turned on, `egressgateway` will recover in a short time and record the recovery time, and `eip` will be revalidated as the egress IP of `Pod`, and the `nodeIP` will be added to `EgressIgnoreCIDR.NodeIP` and `node` related information in `egressGatewayStatus.NodeList` is updated correctly,
after all boots, `eip` will only take effect on the first recovered `node`, and `EgressIgnoreCIDR.NodeIP` is updated correct | p3 | false | | |
+| R00005 | When the node where `eip` takes effect is shut down, `eip` will take effect on another node matching `NodeSelector`, `egressGatewayStatus` and `EgressClusterStatus` are updated as expected, the `EgressTunnel` corresponding to the shutdown node will be deleted, and the egress IP can be accessed as expected | p3 | false | | |
+| R00006 | After shutting down all nodes matched by `NodeSelector` in `egressGateway`,
`Pod`’s egress IP will change from `eip` to a non-`eip` address, `egressGatewayStatus.NodeList` will be empty, the related `EgressIgnoreCIDR.NodeIP` entries will be deleted, and the `EgressTunnel` corresponding to each shutdown node will be deleted.
After one of the nodes is turned on, `egressgateway` recovers within a short time (the recovery time is recorded), `eip` takes effect again as the egress IP of the `Pod`, the corresponding `nodeIP` is added back to `EgressIgnoreCIDR.NodeIP`, and the `node` information in `egressGatewayStatus.NodeList` is updated correctly,
after all nodes boot, `eip` will only take effect on the first recovered `node`, and `EgressIgnoreCIDR.NodeIP` is updated correctly | p3 | false | | |
| R00007 | Restart each component in the cluster (including calico, kube-proxy) `Pod` in turn. During the restart process, the access IP to outside the cluster is the set `eip` before, and the traffic cannot be interrupted. After the cluster returns to normal, `egressgateway` The individual `cr` state of the component is correct | p1 | false | | |
-->
# Reliability E2E 用例
@@ -19,6 +19,6 @@
| R00002 | 使用 `kwok` 创建 10 个 `Node`,创建 1000 个副本的 `Deployment`,创建 `Policy` 并设置 `PodSelector`,使之与 `Deployment` 匹配,
重启 `Deployment` 成功后, 真实节点中匹配到的所有 `Pod` 的出口 IP 为 `eip` | p3 | false | | |
| R00003 | 使用 `kwok` 创建 1000 个 `Node`,创建 EgressGateway 并设置 `NodeSelector` 匹配创建的 1000 个 `node`,`EgressGatewayStatus.NodeList` 会如期更新。
更改 `NodeSelector` 使之与创建的 `node` 不匹配,`EgressGatewayStatus.NodeList` 会如期更新 | p3 | false | | |
| R00004 | 使用 `kwok` 创建 10 个 `Node`,创建 1000 个单副本的 `Deployment`,并对应创建 1000 个 `Policy` 设置 `EgressIP.AllocatorPolicy` 为轮询模式,
创建成功后 `eip` 会在各个节点上平均分配 | p3 | false | | |
-| R00005 | 当关机 `eip` 生效的节点后,`eip` 会生效到另外匹配 `NodeSelector` 的节点上,
并且 `egressGatewayStatus` 及 `EgressClusterStatus` 如预期更新,与被关机的节点对应的 `EgressNode` 将被删除,出口 IP 如预期访问 | p3 | false | | |
-| R00006 | 当关机 `egressGateway` 中 `NodeSelector` 匹配的所有节点后,
`Pod` 的出口 IP 将由 `eip` 改为非 `eip`,`egressGatewayStatus.NodeList` 将为空,相关的 `EgressIgnoreCIDR.NodeIP` 将被删除,与被关机的节点对应的 `EgressNode` 将被删除。
将其中一个 `node` 开机后,`egressgateway` 会在短时间内恢复并记录恢复时间,并且 `eip` 重新生效为 `Pod` 的出口 IP,`EgressIgnoreCIDR.NodeIP` 将对应的 `nodeIP` 添加并且 `egressGatewayStatus.NodeList` 中 `node` 相关信息更新正确,
全部开机最后 `eip` 只会生效在第一个恢复的 `node` 上,`EgressIgnoreCIDR.NodeIP` 更新正确 | p3 | false | | |
+| R00005 | 当关机 `eip` 生效的节点后,`eip` 会生效到另外匹配 `NodeSelector` 的节点上,
并且 `egressGatewayStatus` 及 `EgressClusterStatus` 如预期更新,与被关机的节点对应的 `EgressTunnel` 将被删除,出口 IP 如预期访问 | p3 | false | | |
+| R00006 | 当关机 `egressGateway` 中 `NodeSelector` 匹配的所有节点后,
`Pod` 的出口 IP 将由 `eip` 改为非 `eip`,`egressGatewayStatus.NodeList` 将为空,相关的 `EgressIgnoreCIDR.NodeIP` 将被删除,与被关机的节点对应的 `EgressTunnel` 将被删除。
将其中一个 `node` 开机后,`egressgateway` 会在短时间内恢复并记录恢复时间,并且 `eip` 重新生效为 `Pod` 的出口 IP,`EgressIgnoreCIDR.NodeIP` 将对应的 `nodeIP` 添加并且 `egressGatewayStatus.NodeList` 中 `node` 相关信息更新正确,
全部开机最后 `eip` 只会生效在第一个恢复的 `node` 上,`EgressIgnoreCIDR.NodeIP` 更新正确 | p3 | false | | |
| R00007 | 依次重启集群中各个组件(包含 calico,kube-proxy)`Pod`, 重启过程中访问集群外部的出口 IP 为设置好的 `eip`,并且业务不能断流, 等待集群恢复正常后,`egressgateway` 组件的各个 `cr` 状态正确 | p1 | false | | |
diff --git a/test/e2e/common/egressnode.go b/test/e2e/common/egressnode.go
deleted file mode 100644
index c95a3ef23..000000000
--- a/test/e2e/common/egressnode.go
+++ /dev/null
@@ -1,90 +0,0 @@
-// Copyright 2022 Authors of spidernet-io
-// SPDX-License-Identifier: Apache-2.0
-
-package common
-
-import (
-	"time"
-
-	"sigs.k8s.io/controller-runtime/pkg/client"
-
-	. "github.com/onsi/ginkgo/v2"
-	. "github.com/onsi/gomega"
-
-	"github.com/spidernet-io/e2eframework/framework"
-	egressv1 "github.com/spidernet-io/egressgateway/pkg/k8s/apis/v1beta1"
-	"github.com/spidernet-io/egressgateway/test/e2e/tools"
-)
-
-func GetEgressNode(f *framework.Framework, name string, egressNode *egressv1.EgressTunnel) error {
-	key := client.ObjectKey{
-		Name: name,
-	}
-	return f.GetResource(key, egressNode)
-}
-
-func ListEgressNodes(f *framework.Framework, opt ...client.ListOption) (*egressv1.EgressTunnelList, error) {
-	egressNodeList := &egressv1.EgressTunnelList{}
-	e := f.ListResource(egressNodeList, opt...)
-	if e != nil {
-		return nil, e
-	}
-	return egressNodeList, nil
-}
-
-// GetEgressNodes return []string of the egressNodes name
-func GetEgressNodes(f *framework.Framework, opt ...client.ListOption) (egressNodes []string, e error) {
-	egressNodeList, e := ListEgressNodes(f, opt...)
-	if e != nil {
-		return nil, e
-	}
-	for _, item := range egressNodeList.Items {
-		egressNodes = append(egressNodes, item.Name)
-	}
-	return
-}
-
-// CheckEgressNodeStatus check the status of the egressNode cr, parameter 'nodes' is the cluster's nodes name
-func CheckEgressNodeStatus(f *framework.Framework, nodes []string, opt ...client.ListOption) {
-	egressNodes, e := GetEgressNodes(f, opt...)
-	Expect(e).NotTo(HaveOccurred())
-
-	Expect(tools.IsSameSlice(egressNodes, nodes)).To(BeTrue())
-
-	// get IP version
-	enableV4, enableV6, e := GetIPVersion(f)
-	Expect(e).NotTo(HaveOccurred())
-
-	for _, node := range nodes {
-		egressNodeObj := &egressv1.EgressTunnel{}
-		e = GetEgressNode(f, node, egressNodeObj)
-		Expect(e).NotTo(HaveOccurred())
-		GinkgoWriter.Printf("egressNodeObj: %v\n", egressNodeObj)
-
-		// check egressNode status
-		status := egressNodeObj.Status
-		// check phase
-		Expect(status.Phase).To(Equal(egressv1.EgressNodeReady))
-		// check physicalInterface
-		Expect(CheckEgressNodeInterface(node, status.Tunnel.Parent.Name, time.Second*10)).To(BeTrue())
-		// check mac
-		Expect(CheckEgressNodeMac(node, status.Tunnel.MAC, time.Second*10)).To(BeTrue())
-
-		if enableV4 {
-			// check vxlan ip
-			Expect(CheckEgressNodeIP(node, status.Tunnel.IPv4, time.Second*10)).To(BeTrue())
-			// check node ip
-			Expect(CheckNodeIP(node, status.Tunnel.Parent.Name, status.Tunnel.Parent.IPv4, time.Second*10)).To(BeTrue())
-		}
-		if enableV6 && !enableV4 {
-			// check vxlan ip
-			Expect(CheckEgressNodeIP(node, status.Tunnel.IPv6, time.Second*10)).To(BeTrue())
-			// check node ip
-			Expect(CheckNodeIP(node, status.Tunnel.Parent.Name, status.Tunnel.Parent.IPv6, time.Second*10)).To(BeTrue())
-		}
-		if enableV6 && enableV4 {
-			// check vxlan ip
-			Expect(CheckEgressNodeIP(node, status.Tunnel.IPv6, time.Second*10)).To(BeTrue())
-		}
-	}
-}
diff --git a/test/e2e/common/egresstunnel.go b/test/e2e/common/egresstunnel.go
new file mode 100644
index 000000000..ce5680bab
--- /dev/null
+++ b/test/e2e/common/egresstunnel.go
@@ -0,0 +1,90 @@
+// Copyright 2022 Authors of spidernet-io
+// SPDX-License-Identifier: Apache-2.0
+
+package common
+
+import (
+	"time"
+
+	"sigs.k8s.io/controller-runtime/pkg/client"
+
+	. "github.com/onsi/ginkgo/v2"
+	. "github.com/onsi/gomega"
+
+	"github.com/spidernet-io/e2eframework/framework"
+	egressv1 "github.com/spidernet-io/egressgateway/pkg/k8s/apis/v1beta1"
+	"github.com/spidernet-io/egressgateway/test/e2e/tools"
+)
+
+func GetEgressTunnel(f *framework.Framework, name string, egressTunnel *egressv1.EgressTunnel) error {
+	key := client.ObjectKey{
+		Name: name,
+	}
+	return f.GetResource(key, egressTunnel)
+}
+
+func ListEgressTunnels(f *framework.Framework, opt ...client.ListOption) (*egressv1.EgressTunnelList, error) {
+	egressTunnelList := &egressv1.EgressTunnelList{}
+	e := f.ListResource(egressTunnelList, opt...)
+	if e != nil {
+		return nil, e
+	}
+	return egressTunnelList, nil
+}
+
+// GetEgressTunnels returns []string of the egressTunnel names
+func GetEgressTunnels(f *framework.Framework, opt ...client.ListOption) (egressTunnels []string, e error) {
+	egressTunnelList, e := ListEgressTunnels(f, opt...)
+	if e != nil {
+		return nil, e
+	}
+	for _, item := range egressTunnelList.Items {
+		egressTunnels = append(egressTunnels, item.Name)
+	}
+	return
+}
+
+// CheckEgressTunnelStatus checks the status of the egressTunnel CRs; the 'nodes' parameter is the list of cluster node names
+func CheckEgressTunnelStatus(f *framework.Framework, nodes []string, opt ...client.ListOption) {
+	egressTunnels, e := GetEgressTunnels(f, opt...)
+	Expect(e).NotTo(HaveOccurred())
+
+	Expect(tools.IsSameSlice(egressTunnels, nodes)).To(BeTrue())
+
+	// get IP version
+	enableV4, enableV6, e := GetIPVersion(f)
+	Expect(e).NotTo(HaveOccurred())
+
+	for _, node := range nodes {
+		egressTunnelObj := &egressv1.EgressTunnel{}
+		e = GetEgressTunnel(f, node, egressTunnelObj)
+		Expect(e).NotTo(HaveOccurred())
+		GinkgoWriter.Printf("egressTunnelObj: %v\n", egressTunnelObj)
+
+		// check egressTunnel status
+		status := egressTunnelObj.Status
+		// check phase
+		Expect(status.Phase).To(Equal(egressv1.EgressTunnelReady))
+		// check physicalInterface
+		Expect(CheckEgressTunnelInterface(node, status.Tunnel.Parent.Name, time.Second*10)).To(BeTrue())
+		// check mac
+		Expect(CheckEgressTunnelMac(node, status.Tunnel.MAC, time.Second*10)).To(BeTrue())
+
+		if enableV4 {
+			// check vxlan ip
+			Expect(CheckEgressTunnelIP(node, status.Tunnel.IPv4, time.Second*10)).To(BeTrue())
+			// check node ip
+			Expect(CheckNodeIP(node, status.Tunnel.Parent.Name, status.Tunnel.Parent.IPv4, time.Second*10)).To(BeTrue())
+		}
+		if enableV6 && !enableV4 {
+			// check vxlan ip
+			Expect(CheckEgressTunnelIP(node, status.Tunnel.IPv6, time.Second*10)).To(BeTrue())
+			// check node ip
+			Expect(CheckNodeIP(node, status.Tunnel.Parent.Name, status.Tunnel.Parent.IPv6, time.Second*10)).To(BeTrue())
+		}
+		if enableV6 && enableV4 {
+			// check vxlan ip
+			Expect(CheckEgressTunnelIP(node, status.Tunnel.IPv6, time.Second*10)).To(BeTrue())
+		}
+	}
+}
diff --git a/test/e2e/common/ip.go b/test/e2e/common/ip.go
index 269f9367f..45c45c56f 100644
--- a/test/e2e/common/ip.go
+++ b/test/e2e/common/ip.go
@@ -19,7 +19,7 @@ import (
 	"github.com/spidernet-io/egressgateway/test/e2e/tools"
 )
 
-func CheckEgressNodeIP(nodeName string, ip string, duration time.Duration) bool {
+func CheckEgressTunnelIP(nodeName string, ip string, duration time.Duration) bool {
 	command := fmt.Sprintf("ip a show %s | grep %s", EGRESS_VXLAN_INTERFACE_NAME, ip)
 	if _, err := tools.ExecInKindNode(nodeName, command, duration); err != nil {
 		return false
@@ -27,7 +27,7 @@ func CheckEgressNodeIP(nodeName string, ip string, duration time.Duration) bool
 	return true
 }
 
-func CheckEgressNodeMac(nodeName string, mac string, duration time.Duration) bool {
+func CheckEgressTunnelMac(nodeName string, mac string, duration time.Duration) bool {
 	command := fmt.Sprintf("ip l show %s | grep %s", EGRESS_VXLAN_INTERFACE_NAME, mac)
 	if _, err := tools.ExecInKindNode(nodeName, command, duration); err != nil {
 		return false
@@ -35,7 +35,7 @@ func CheckEgressNodeMac(nodeName string, mac string, duration time.Duration) boo
 	return true
 }
 
-func CheckEgressNodeInterface(nodeName string, nic string, duration time.Duration) bool {
+func CheckEgressTunnelInterface(nodeName string, nic string, duration time.Duration) bool {
 	command := fmt.Sprintf("ip r l default | grep %s", nic)
 	if _, err := tools.ExecInKindNode(nodeName, command, duration); err != nil {
 		return false
diff --git a/test/e2e/egressgateway/default_egressgateway_test.go b/test/e2e/egressgateway/default_egressgateway_test.go
index 3008bdfae..1acd92be3 100644
--- a/test/e2e/egressgateway/default_egressgateway_test.go
+++ b/test/e2e/egressgateway/default_egressgateway_test.go
@@ -85,6 +85,10 @@ var _ = Describe("Test default egress gateway", Label("DefaultEgressGateway", "G
 	})
 
 	It("test namespace default egress gateway", func() {
+		// create the default egress gateway of default ns
+		err := f.CreateResource(nsDefaultEgw)
+		Expect(err).NotTo(HaveOccurred())
+
 		ns, err := f.GetNamespace("default")
 		Expect(err).NotTo(HaveOccurred())
diff --git a/test/e2e/egressnode/egressnode_suite_test.go b/test/e2e/egresstunnel/egresstunnel_suite_test.go
similarity index 85%
rename from test/e2e/egressnode/egressnode_suite_test.go
rename to test/e2e/egresstunnel/egresstunnel_suite_test.go
index 309a9387e..127e37991 100644
--- a/test/e2e/egressnode/egressnode_suite_test.go
+++ b/test/e2e/egresstunnel/egresstunnel_suite_test.go
@@ -1,7 +1,7 @@
 // Copyright 2022 Authors of spidernet-io
 // SPDX-License-Identifier: Apache-2.0
 
-package egressnode_test
+package egresstunnel_test
 
 import (
 	"testing"
@@ -10,9 +10,9 @@
 	. "github.com/onsi/gomega"
 )
 
-func TestEgressnode(t *testing.T) {
+func TestEgresstunnel(t *testing.T) {
 	RegisterFailHandler(Fail)
-	RunSpecs(t, "Egressnode Suite")
+	RunSpecs(t, "Egresstunnel Suite")
 }
 
 //
diff --git a/test/e2e/egressnode/egressnode_test.go b/test/e2e/egresstunnel/egresstunnel_test.go
similarity index 51%
rename from test/e2e/egressnode/egressnode_test.go
rename to test/e2e/egresstunnel/egresstunnel_test.go
index 2d550c3ac..7144593c7 100644
--- a/test/e2e/egressnode/egressnode_test.go
+++ b/test/e2e/egresstunnel/egresstunnel_test.go
@@ -1,16 +1,16 @@
 // Copyright 2022 Authors of spidernet-io
 // SPDX-License-Identifier: Apache-2.0
 
-package egressnode_test
+package egresstunnel_test
 
 //import (
 //	. "github.com/onsi/ginkgo/v2"
 //	"github.com/spidernet-io/egressgateway/test/e2e/common"
 //)
 //
-//var _ = Describe("Egressnode", func() {
-//	PIt("get and check egressnodes", func() {
-//		// check egressnode status
-//		common.CheckEgressNodeStatus(f, nodes)
+//var _ = Describe("Egresstunnel", func() {
+//	PIt("get and check egresstunnels", func() {
+//		// check egresstunnel status
+//		common.CheckEgressTunnelStatus(f, nodes)
 //	})
 //})
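
Once the commented-out spec above is revived, the renamed helpers would be wired in roughly as follows. This is a minimal sketch, not code from this PR: it assumes the suite initializes a `framework.Framework` instance `f` and the cluster node-name list `nodes` elsewhere (e.g. in a `BeforeSuite`), as the commented-out example implies, and that `nodes` is non-empty.

```go
// Sketch only: `f` and `nodes` are assumed to be populated by suite setup.
package egresstunnel_test

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"

	"github.com/spidernet-io/e2eframework/framework"
	egressv1 "github.com/spidernet-io/egressgateway/pkg/k8s/apis/v1beta1"
	"github.com/spidernet-io/egressgateway/test/e2e/common"
)

var (
	f     *framework.Framework // assumption: created in BeforeSuite
	nodes []string             // assumption: the cluster's node names
)

var _ = Describe("Egresstunnel", func() {
	It("gets and checks egresstunnels", func() {
		// one EgressTunnel CR is expected per node, named after the node
		tunnel := new(egressv1.EgressTunnel)
		err := common.GetEgressTunnel(f, nodes[0], tunnel)
		Expect(err).NotTo(HaveOccurred())

		// full status check: phase, parent interface, MAC, and tunnel IPs
		common.CheckEgressTunnelStatus(f, nodes)
	})
})
```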
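For inspecting an `EgressTunnel` outside the e2e framework, a standalone controller-runtime client works too. The status fields below are exactly those exercised by `CheckEgressTunnelStatus`; the `AddToScheme` helper (typical of kubebuilder-generated API groups) and the `node1` name are assumptions for illustration, not confirmed API.

```go
// Illustrative sketch: read an EgressTunnel's status with a bare client.
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"

	egressv1 "github.com/spidernet-io/egressgateway/pkg/k8s/apis/v1beta1"
)

func main() {
	scheme := runtime.NewScheme()
	// assumption: the v1beta1 API group exposes an AddToScheme helper
	if err := egressv1.AddToScheme(scheme); err != nil {
		panic(err)
	}

	c, err := client.New(config.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	// EgressTunnel CRs are cluster-scoped and named after their node
	tunnel := new(egressv1.EgressTunnel)
	if err := c.Get(context.TODO(), client.ObjectKey{Name: "node1"}, tunnel); err != nil {
		panic(err)
	}

	fmt.Println("phase:", tunnel.Status.Phase) // expect EgressTunnelReady
	fmt.Println("parent nic:", tunnel.Status.Tunnel.Parent.Name)
	fmt.Println("vxlan mac:", tunnel.Status.Tunnel.MAC)
	fmt.Println("vxlan ipv4:", tunnel.Status.Tunnel.IPv4)
	fmt.Println("vxlan ipv6:", tunnel.Status.Tunnel.IPv6)
}
```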
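The R00005/R00006 reliability cases above hinge on one observable: the `EgressTunnel` named after a powered-off node should eventually be deleted. A hedged sketch of that assertion follows; `downNode` is a hypothetical variable for the node that was shut down, the two-minute window is an arbitrary choice, and it assumes `GetEgressTunnel` surfaces the API server's NotFound error unchanged.

```go
// Sketch of the R00005 check: poll until the shutdown node's tunnel CR is gone.
package egresstunnel_test

import (
	"time"

	. "github.com/onsi/gomega"
	apierrors "k8s.io/apimachinery/pkg/api/errors"

	"github.com/spidernet-io/e2eframework/framework"
	egressv1 "github.com/spidernet-io/egressgateway/pkg/k8s/apis/v1beta1"
	"github.com/spidernet-io/egressgateway/test/e2e/common"
)

// expectTunnelGone polls until the EgressTunnel named after downNode is
// deleted, or fails the spec after two minutes (an arbitrary window).
func expectTunnelGone(fw *framework.Framework, downNode string) {
	Eventually(func() bool {
		tunnel := new(egressv1.EgressTunnel)
		err := common.GetEgressTunnel(fw, downNode, tunnel)
		// assumption: the helper returns the underlying client error
		return apierrors.IsNotFound(err)
	}, time.Minute*2, time.Second*5).Should(BeTrue())
}
```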