Add a second taint to the k8scloude2 node
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang=shide:NoSchedule
node/k8scloude2 tainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep Taints
Taints:             wudian=true:NoSchedule
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A1 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
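Besides grep-ing the describe output, the taints can also be read straight from the node object. A minimal sketch (just an alternative way to check, not part of the original steps):

# print the taints stored in the node's spec
kubectl get node k8scloude2 -o jsonpath='{.spec.taints}'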
Create a pod. The tolerations field tolerates both taints, wudian=true:NoSchedule and zang=shide:NoSchedule, and nodeSelector: taint: T means the pod may only run on nodes carrying the label taint=T.
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  - key: "zang"
    operator: "Equal"
    value: "shide"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
Check the pod: even though k8scloude2 carries two taints, the pod still runs on it because it tolerates both.
[root@k8scloude1 pod]# kubectl get pods -o wideNAMEREADYSTATUSRESTARTSAGEIPNODENOMINATED NODEREADINESS GATESpod11/1Running06s10.244.112.179k8scloude2<none><none>[root@k8scloude1 pod]# kubectl delete pod pod1 --forcewarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.pod "pod1" force deleted
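If you do not want to enumerate every taint value, a toleration can also use operator: "Exists", which matches any value of the key. A minimal sketch of an equivalent tolerations block (an alternative, not used in the walkthrough above):

  tolerations:
  # tolerate any value of the key "wudian" with effect NoSchedule
  - key: "wudian"
    operator: "Exists"
    effect: "NoSchedule"
  # tolerate any value of the key "zang" with effect NoSchedule
  - key: "zang"
    operator: "Exists"
    effect: "NoSchedule"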
Now create a pod whose tolerations only tolerate the taint wudian=true:NoSchedule; nodeSelector: taint: T again restricts the pod to nodes labeled taint=T.
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  tolerations:
  - key: "wudian"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  nodeSelector:
    taint: T
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
Check the pod: the node carries two taints, but the YAML only tolerates one of them, so the pod cannot be scheduled and stays Pending.
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pod1   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pods -o wide
No resources found in pod namespace.
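To see why the pod stays Pending, look at its events; the scheduler records a FailedScheduling event explaining that a node taint was not tolerated (output omitted here):

# show pod details, including the Events section at the bottom
kubectl describe pod pod1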
Remove the taints from k8scloude2
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             wudian=true:NoSchedule
                    zang=shide:NoSchedule
Unschedulable:      false
# remove the taints
[root@k8scloude1 pod]# kubectl taint node k8scloude2 zang-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl taint node k8scloude2 wudian-
node/k8scloude2 untainted
[root@k8scloude1 pod]# kubectl describe nodes k8scloude1 | grep -A2 Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude2 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:
[root@k8scloude1 pod]# kubectl describe nodes k8scloude3 | grep -A2 Taints
Taints:             <none>
Unschedulable:      false
Lease:
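Note that the key-only form `kubectl taint node k8scloude2 zang-` removes every taint with the key zang. A taint can also be removed more precisely by key plus effect, a sketch:

# remove only the NoSchedule taint whose key is zang
kubectl taint node k8scloude2 zang:NoSchedule-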