9 Pod: taints and tolerations (Part 4)

Add a second taint to the k8scloude2 node:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang=shide:NoSchedule
node/k8scloude2 tainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A1 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
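Note that plain grep only shows the first taint; the -A flag is needed to see the rest. If you want the full taint list without guessing how many context lines to print, a jsonpath query works as well (a minimal sketch against the same node):

# Print the taints array straight from the node's spec
kubectl get node k8scloude2 -o jsonpath='{.spec.taints}'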
Create a pod whose tolerations cover both taints, wudian=true:NoSchedule and zang=shide:NoSchedule; the nodeSelector: taint: T constraint restricts the pod to nodes labeled taint=T.

[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  - key: "zang"
    operator: "Equal"
    value: "shide"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
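Listing every value with operator: "Equal" is explicit but verbose. Kubernetes also accepts operator: "Exists", which tolerates any value of the key; a sketch of an equivalent tolerations block under that assumption:

tolerations:
- key: "wudian"
  operator: "Exists"    # matches wudian=<any value>:NoSchedule
  effect: "NoSchedule"
- key: "zang"
  operator: "Exists"    # matches zang=<any value>:NoSchedule
  effect: "NoSchedule"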
Check the pod: even with two taints on it, k8scloude2 can still run the pod, because both taints are tolerated.

[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          6s    10.244.112.179   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
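Scheduling succeeded because both constraints held at once: every NoSchedule taint on the node was tolerated, and the node carries the taint=T label demanded by the nodeSelector. To double-check which nodes satisfy the label half (assuming the taint=T label applied earlier in this series):

# List only the nodes labeled taint=T
kubectl get nodes -l taint=T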
Now create a pod that tolerates only one taint, wudian=true:NoSchedule; the nodeSelector: taint: T constraint again restricts the pod to nodes labeled taint=T.

[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created

Check the pod: k8scloude2 carries two taints but the manifest tolerates only one, so the pod stays Pending instead of being scheduled.
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
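When a pod sits in Pending like this, the scheduler records the reason as an event on the pod. A sketch of how to pull it up (the exact wording of the FailedScheduling message varies across Kubernetes versions):

# The Events section should show a FailedScheduling entry naming
# the untolerated taint (here zang=shide:NoSchedule)
kubectl describe pod pod1 -n pod | grep -A5 Events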
Remove the taints from k8scloude2 and verify the taint state of all three nodes.

[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
# Remove the taints
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -A2 Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude3 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:
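For reference, the general syntax for adding and removing taints (a sketch; the key, value, and node names are placeholders, and effect is one of NoSchedule, PreferNoSchedule, or NoExecute):

# Add a taint
kubectl taint node <node-name> <key>=<value>:<effect>
# Remove every taint with the given key, as done above
kubectl taint node <node-name> <key>-
# Remove only a specific key:effect pair
kubectl taint node <node-name> <key>:<effect>-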
