Cordon a node, drain a node, delete a node (Part 3)

3.3 uncordon a node

To let a node schedule pods again, simply uncordon it.

Uncordon the k8scloude2 node; its status returns to Ready and scheduling resumes.
#uncordon is needed to restore scheduling
[root@k8scloude1 deploy]# kubectl uncordon k8scloude2
node/k8scloude2 uncordoned

[root@k8scloude1 deploy]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   8d    v1.21.0
k8scloude2   Ready    <none>                 8d    v1.21.0
k8scloude3   Ready    <none>                 8d    v1.21.0

4. drain a node

4.1 Overview of draining a node

Before performing maintenance on a node (for example, a kernel upgrade or hardware maintenance), you can use kubectl drain to safely evict all pods from it. Safe eviction lets each pod's containers terminate gracefully and honors any PodDisruptionBudgets you have specified. A PodDisruptionBudget is an object that defines the maximum disruption that may be inflicted on a group of pods.

Note: by default, kubectl drain ignores certain system pods on the node that cannot be killed. drain evicts or deletes all pods except mirror pods, which cannot be deleted through the API server. If there are DaemonSet-managed pods, drain will not proceed without --ignore-daemonsets, and in any case it never deletes DaemonSet-managed pods: the DaemonSet controller ignores the unschedulable mark and would immediately replace them. If any pod is neither a mirror pod nor managed by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet, or Job, drain refuses to delete anything unless you pass --force; --force also allows deletion to proceed when the managing resource of one or more pods is missing.
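For reference, a minimal PodDisruptionBudget manifest might look like the sketch below. The nginx-pdb name, the app=nginx label, and the minAvailable value are illustrative assumptions, not objects taken from this cluster; policy/v1 is available from Kubernetes v1.21, which matches the version used here.

[root@k8scloude1 deploy]# kubectl apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb        # illustrative name
spec:
  minAvailable: 6        # drain's evictions stall if they would drop ready replicas below 6
  selector:
    matchLabels:
      app: nginx         # assumed label; must match the pods you want to protect
EOF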
When kubectl drain returns successfully, it indicates that all pods (except those excluded as described in the previous paragraph) have been safely evicted, respecting the desired graceful termination period and any PodDisruptionBudget you defined. It is then safe to bring the node down, for example by powering off the physical machine or, if it runs on a cloud platform, deleting its virtual machine.
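Putting this together, a typical node-maintenance flow looks like the sketch below. The flags mirror the ones used later in this article; adjust them to your workloads.

kubectl drain k8scloude2 --ignore-daemonsets --delete-emptydir-data   # evict pods, leave the node cordoned
# ... perform the maintenance: kernel upgrade, reboot, hardware work ...
kubectl uncordon k8scloude2                                           # hand the node back to the scheduler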
4.2 drain a node

Check the node status and pods:
[root@k8scloude1 deploy]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   8d    v1.21.0
k8scloude2   Ready    <none>                 8d    v1.21.0
k8scloude3   Ready    <none>                 8d    v1.21.0

[root@k8scloude1 deploy]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
nginx-6cf858f6cf-58wnd   1/1     Running   0          65s   10.244.112.167   k8scloude2   <none>           <none>
nginx-6cf858f6cf-5rrk4   1/1     Running   0          65s   10.244.112.164   k8scloude2   <none>           <none>
nginx-6cf858f6cf-86wxr   1/1     Running   0          65s   10.244.251.237   k8scloude3   <none>           <none>
nginx-6cf858f6cf-89wj9   1/1     Running   0          65s   10.244.112.168   k8scloude2   <none>           <none>
nginx-6cf858f6cf-9njrj   1/1     Running   0          65s   10.244.251.236   k8scloude3   <none>           <none>
nginx-6cf858f6cf-hchtb   1/1     Running   0          65s   10.244.251.234   k8scloude3   <none>           <none>
nginx-6cf858f6cf-mb2ft   1/1     Running   0          65s   10.244.112.166   k8scloude2   <none>           <none>
nginx-6cf858f6cf-nq6zv   1/1     Running   0          65s   10.244.112.169   k8scloude2   <none>           <none>
nginx-6cf858f6cf-pl7ww   1/1     Running   0          65s   10.244.251.235   k8scloude3   <none>           <none>
nginx-6cf858f6cf-sf2w6   1/1     Running   0          65s   10.244.112.165   k8scloude2   <none>           <none>
pod1                     1/1     Running   0          36m   10.244.251.216   k8scloude3   <none>           <none>

Draining evicts the pods from a node: drain = cordon + evict.
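Conceptually, drain is just a cordon followed by evicting the node's pods. A rough by-hand equivalent is sketched below; note that kubectl drain additionally goes through the Eviction API so that PodDisruptionBudgets are respected, which a plain delete does not. The <pod-name> placeholder is illustrative.

kubectl cordon k8scloude2                                             # step 1: mark the node unschedulable
kubectl get pod -o wide --field-selector spec.nodeName=k8scloude2     # step 2: list what is running there
kubectl delete pod <pod-name> --grace-period=30                       # step 3: remove each pod; its controller recreates it elsewhere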
Drain the k8scloude2 node. --delete-emptydir-data allows deleting pods' emptyDir data; --ignore-daemonsets skips DaemonSet-managed pods.
[root@k8scloude1 deploy]# kubectl drain k8scloude2
node/k8scloude2 cordoned
error: unable to drain node "k8scloude2", aborting command...

There are pending nodes to be drained:
 k8scloude2
cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-bcfb98c76-k5dmj
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-nsbfs, kube-system/kube-proxy-lpj8z

[root@k8scloude1 deploy]# kubectl get node
NAME         STATUS                     ROLES                  AGE   VERSION
k8scloude1   Ready                      control-plane,master   8d    v1.21.0
k8scloude2   Ready,SchedulingDisabled   <none>                 8d    v1.21.0
k8scloude3   Ready                      <none>                 8d    v1.21.0

[root@k8scloude1 deploy]# kubectl drain k8scloude2 --ignore-daemonsets
node/k8scloude2 already cordoned
error: unable to drain node "k8scloude2", aborting command...

There are pending nodes to be drained:
 k8scloude2
error: cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-bcfb98c76-k5dmj

[root@k8scloude1 deploy]# kubectl drain k8scloude2 --ignore-daemonsets --force --delete-emptydir-data
node/k8scloude2 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-nsbfs, kube-system/kube-proxy-lpj8z
evicting pod pod/nginx-6cf858f6cf-sf2w6
evicting pod pod/nginx-6cf858f6cf-5rrk4
evicting pod kube-system/metrics-server-bcfb98c76-k5dmj
evicting pod pod/nginx-6cf858f6cf-58wnd
evicting pod pod/nginx-6cf858f6cf-mb2ft
evicting pod pod/nginx-6cf858f6cf-89wj9
evicting pod pod/nginx-6cf858f6cf-nq6zv
pod/nginx-6cf858f6cf-5rrk4 evicted
pod/nginx-6cf858f6cf-mb2ft evicted
pod/nginx-6cf858f6cf-sf2w6 evicted
pod/nginx-6cf858f6cf-58wnd evicted
pod/nginx-6cf858f6cf-nq6zv evicted
pod/nginx-6cf858f6cf-89wj9 evicted
pod/metrics-server-bcfb98c76-k5dmj evicted
node/k8scloude2 evicted
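Once the drain succeeds, the node can be taken offline for maintenance. A quick check, sketched below (not part of the original capture), would confirm the result: only DaemonSet pods such as calico-node and kube-proxy should remain on k8scloude2, and the nginx replicas should have been recreated on k8scloude3, the only remaining schedulable worker.

# list every pod still on the drained node, across all namespaces
kubectl get pod -A -o wide --field-selector spec.nodeName=k8scloude2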
