cordon a node, drain (evict) a node, delete a node (Part 4)

Check the pods: after the k8scloude2 node is drained, all of the pods have been scheduled onto the k8scloude3 node.
Draining a node boils down to deleting the pods running on it: once k8scloude2 is drained, the pods that were running on k8scloude2 are deleted.
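For context, the drain itself was performed in an earlier step of this series; a minimal sketch of what such a command looks like (the flags shown are common companions of drain, not a verbatim copy of the original command):

# Drain k8scloude2: cordon it, then evict its pods
# --ignore-daemonsets lets the drain proceed even though DaemonSet pods cannot be evicted;
# add --force only if the node hosts bare pods that no controller manages
kubectl drain k8scloude2 --ignore-daemonsets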
The Deployment is a controller that watches the pod replica count: when the pods on k8scloude2 are evicted, the replica count drops below 10, so the controller creates new pods on the schedulable nodes to bring the count back up.
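A quick way to watch this reconciliation, assuming the Deployment is named nginx as in the listings here (a sketch, not part of the original transcript):

# Desired vs. available replicas; the gap closes as the controller recreates evicted pods
kubectl get deploy nginx -o jsonpath='{.spec.replicas} {.status.availableReplicas}{"\n"}'
# Or watch the pod list converge live
kubectl get pods -o wide -w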
A bare pod, by contrast, is not self-healing: once deleted, it is gone for good. If k8scloude3 were drained, the pod pod1 would be deleted, and no other schedulable node would recreate pod1.
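The difference is easy to demonstrate (pod names taken from the listing below; do not run this against pods you still need):

# A Deployment-managed pod is replaced almost immediately by the controller
kubectl delete pod nginx-6cf858f6cf-7gh4z
# The bare pod is gone for good; nothing recreates it
kubectl delete pod pod1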
[root@k8scloude1 deploy]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
nginx-6cf858f6cf-7gh4z   1/1     Running   0          84s     10.244.251.240   k8scloude3   <none>           <none>
nginx-6cf858f6cf-7lmfd   1/1     Running   0          85s     10.244.251.238   k8scloude3   <none>           <none>
nginx-6cf858f6cf-86wxr   1/1     Running   0          6m14s   10.244.251.237   k8scloude3   <none>           <none>
nginx-6cf858f6cf-9bn2b   1/1     Running   0          85s     10.244.251.243   k8scloude3   <none>           <none>
nginx-6cf858f6cf-9njrj   1/1     Running   0          6m14s   10.244.251.236   k8scloude3   <none>           <none>
nginx-6cf858f6cf-bqk2w   1/1     Running   0          84s     10.244.251.241   k8scloude3   <none>           <none>
nginx-6cf858f6cf-hchtb   1/1     Running   0          6m14s   10.244.251.234   k8scloude3   <none>           <none>
nginx-6cf858f6cf-hjddp   1/1     Running   0          84s     10.244.251.244   k8scloude3   <none>           <none>
nginx-6cf858f6cf-pl7ww   1/1     Running   0          6m14s   10.244.251.235   k8scloude3   <none>           <none>
nginx-6cf858f6cf-sgxfg   1/1     Running   0          84s     10.244.251.242   k8scloude3   <none>           <none>
pod1                     1/1     Running   0          41m     10.244.251.216   k8scloude3   <none>           <none>

Check the node status:
[root@k8scloude1 deploy]# kubectl get nodes
NAME         STATUS                     ROLES                  AGE   VERSION
k8scloude1   Ready                      control-plane,master   8d    v1.21.0
k8scloude2   Ready,SchedulingDisabled   <none>                 8d    v1.21.0
k8scloude3   Ready                      <none>                 8d    v1.21.0

4.3 uncordon a node

To cancel a drain on a node, simply uncordon it; there is no undrain operation.
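Under the hood, cordon and uncordon merely toggle the node's spec.unschedulable field (drain additionally evicts the pods), so uncordoning is equivalent to patching that field directly; a sketch:

# Equivalent to 'kubectl uncordon k8scloude2'
kubectl patch node k8scloude2 -p '{"spec":{"unschedulable":false}}'

Trying an undrain subcommand confirms that it does not exist: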
[root@k8scloude1 deploy]# kubectl undrain k8scloude2
Error: unknown command "undrain" for "kubectl"

Did you mean this?
	drain

Run 'kubectl --help' for usage.

Uncordon the k8scloude2 node, and it becomes schedulable again:
[root@k8scloude1 deploy]# kubectl uncordon k8scloude2
node/k8scloude2 uncordoned
[root@k8scloude1 deploy]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   8d    v1.21.0
k8scloude2   Ready    <none>                 8d    v1.21.0
k8scloude3   Ready    <none>                 8d    v1.21.0

Scale the Deployment down to 0 replicas, then back up to 10, and observe the pod distribution:
[root@k8scloude1 deploy]# kubectl scale deploy nginx --replicas=0
deployment.apps/nginx scaled
[root@k8scloude1 deploy]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          52m   10.244.251.216   k8scloude3   <none>           <none>
[root@k8scloude1 deploy]# kubectl scale deploy nginx --replicas=10
deployment.apps/nginx scaled

The k8scloude2 node can now host pods again:
[root@k8scloude1 deploy]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
nginx-6cf858f6cf-4sqj8   1/1     Running   0          6s    10.244.112.172   k8scloude2   <none>           <none>
nginx-6cf858f6cf-cjqxv   1/1     Running   0          6s    10.244.112.176   k8scloude2   <none>           <none>
nginx-6cf858f6cf-fk69r   1/1     Running   0          6s    10.244.112.175   k8scloude2   <none>           <none>
nginx-6cf858f6cf-ghznd   1/1     Running   0          6s    10.244.112.173   k8scloude2   <none>           <none>
nginx-6cf858f6cf-hnxzs   1/1     Running   0          6s    10.244.251.246   k8scloude3   <none>           <none>
nginx-6cf858f6cf-hshnm   1/1     Running   0          6s    10.244.112.171   k8scloude2   <none>           <none>
nginx-6cf858f6cf-jb5sh   1/1     Running   0          6s    10.244.112.170   k8scloude2   <none>           <none>
nginx-6cf858f6cf-l9xlm   1/1     Running   0          6s    10.244.112.174   k8scloude2   <none>           <none>
nginx-6cf858f6cf-pgjlb   1/1     Running   0          6s    10.244.251.247   k8scloude3   <none>           <none>
nginx-6cf858f6cf-rlnh6   1/1     Running   0          6s    10.244.251.245   k8scloude3   <none>           <none>
pod1                     1/1     Running   0          52m   10.244.251.216   k8scloude3   <none>           <none>
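To check the distribution at a glance rather than reading the NODE column by eye, the output can be tallied; a small shell sketch:

# Count pods per node: NODE is the 7th column of 'kubectl get pods -o wide'
kubectl get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c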
