Cordon a node, drain (evict) a node, delete a node (Part 5)

Delete the deploy and delete the pod:
[root@k8scloude1 deploy]# kubectl delete -f nginx.yaml
deployment.apps "nginx" deleted
[root@k8scloude1 deploy]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 deploy]# kubectl get pods -o wide
No resources found in pod namespace.

5. Delete a node

5.1 Overview of deleting a node

Deleting a node removes it from the Kubernetes cluster outright. Before you delete a node, you should drain it first.
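Before moving on, here is the overall flow this section follows: drain the node first, then delete it from the cluster. A minimal sketch of the sequence (the optional kubeadm reset step, run on the removed machine itself to wipe its local Kubernetes state, is standard kubeadm practice and not part of the original walkthrough):

# On the control plane: evict the workloads, then remove the node object
kubectl drain k8scloude3 --ignore-daemonsets
kubectl delete node k8scloude3

# Optional, on the removed machine itself: clean up kubelet/kubeadm state
kubeadm reset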
For details on deleting a node and then reinstalling it, see the blog post 《模拟重装Kubernetes(k8s)集群:删除k8s集群然后重装》 (simulating a Kubernetes (k8s) cluster reinstall: delete the k8s cluster, then reinstall it): https://www.cnblogs.com/renshengdezheli/p/16686997.html
5.2 Deleting a node

kubectl drain safely evicts all pods on a node. --ignore-daemonsets usually has to be specified, because DaemonSets ignore the SchedulingDisabled mark (kubectl drain automatically marks the node as unschedulable, SchedulingDisabled). Without it, after a DaemonSet-managed pod is deleted, the DaemonSet controller may immediately start it again on the same node, which turns into an endless loop. So DaemonSet pods are ignored here.
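As a quick sanity check before draining, you can list which pods on k8scloude3 are owned by a DaemonSet; with --ignore-daemonsets set, these are exactly the pods that drain will leave in place. A minimal sketch using standard kubectl selectors (these commands are not from the original post):

# All pods currently scheduled on k8scloude3, across every namespace
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=k8scloude3

# Show each pod together with its owner kind; DaemonSet-owned pods are the ones drain skips
kubectl get pods --all-namespaces --field-selector spec.nodeName=k8scloude3 \
  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].kind

Now drain the node: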
[root@k8scloude1 ~]# kubectl drain k8scloude3 --ignore-daemonsets
node/k8scloude3 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wmz4r, kube-system/kube-proxy-84gcx
evicting pod kube-system/calico-kube-controllers-6b9fbfff44-rl2mh
pod/calico-kube-controllers-6b9fbfff44-rl2mh evicted
node/k8scloude3 evicted

k8scloude3 is now SchedulingDisabled:
[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS                     ROLES                  AGE   VERSION
k8scloude1   Ready                      control-plane,master   64m   v1.21.0
k8scloude2   Ready                      <none>                 56m   v1.21.0
k8scloude3   Ready,SchedulingDisabled   <none>                 56m   v1.21.0
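At this point the node is only cordoned and drained. If the intention were merely to take it out of scheduling temporarily, you could stop here and later restore it with kubectl uncordon instead of deleting it. A brief sketch (standard kubectl commands, not part of the original walkthrough; the taint shown is the node.kubernetes.io/unschedulable taint that cordon/drain applies):

# Inspect the unschedulable taint set by cordon/drain
kubectl describe node k8scloude3 | grep -i -A1 taints

# Undo the cordon and make the node schedulable again
kubectl uncordon k8scloude3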
Delete node k8scloude3:

[root@k8scloude1 ~]# kubectl delete nodes k8scloude3
node "k8scloude3" deleted
[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   65m   v1.21.0
k8scloude2   Ready    <none>                 57m   v1.21.0
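kubectl delete node only removes the node object from the API server; the kubelet, container runtime, and old certificates on k8scloude3 itself are left untouched. If you later want to bring the machine back into the cluster, the usual kubeadm flow is to reset it and join it again. A hedged sketch (standard kubeadm commands, not taken from this post; the exact join flags come from the output of the token command):

# On k8scloude3: wipe the old kubelet/kubeadm state left over from the previous membership
kubeadm reset

# On the control-plane node: print a fresh "kubeadm join ..." command with a new token
kubeadm token create --print-join-command

# On k8scloude3: run the printed kubeadm join command to rejoin the cluster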
