8 Pod: Pod scheduling — assigning Pods to nodes (5)

Add the label k8snodename=k8scloude2 to node k8scloude2, list the nodes that carry it, then remove the label:

```shell
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
node/k8scloude2 labeled

# List nodes with the label k8snodename=k8scloude2
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
NAME         STATUS   ROLES    AGE    VERSION
k8scloude2   Ready    <none>   7d1h   v1.21.0

# A trailing hyphen removes the label
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled
```

Set a label on all nodes at once:
```shell
[root@k8scloude1 pod]# kubectl label nodes --all k8snodename=cloude
node/k8scloude1 labeled
node/k8scloude2 labeled
node/k8scloude3 labeled
```

List the nodes that carry the label k8snodename=cloude:
```shell
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=cloude
NAME         STATUS   ROLES                  AGE    VERSION
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0
k8scloude2   Ready    <none>                 7d1h   v1.21.0
k8scloude3   Ready    <none>                 7d1h   v1.21.0

# Remove the label from all nodes
[root@k8scloude1 pod]# kubectl label nodes --all k8snodename-
node/k8scloude1 labeled
node/k8scloude2 labeled
node/k8scloude3 labeled

[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=cloude
No resources found
```

Overwriting an existing label value requires the --overwrite flag:
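The `-l key=value` option is an equality-based label selector: a node is listed only if its label set contains exactly that key/value pair. As a rough illustration (not kubectl's actual implementation), the filtering logic can be sketched in Python; the node names and labels below are the example values from this article:

```python
def match_selector(labels, selector):
    """True if every key=value pair in the selector appears in the label set."""
    return all(labels.get(k) == v for k, v in selector.items())

# Labels as they stand after `kubectl label nodes --all k8snodename=cloude`
nodes = {
    "k8scloude1": {"kubernetes.io/hostname": "k8scloude1", "k8snodename": "cloude"},
    "k8scloude2": {"kubernetes.io/hostname": "k8scloude2", "k8snodename": "cloude"},
    "k8scloude3": {"kubernetes.io/hostname": "k8scloude3", "k8snodename": "cloude"},
}

# Equivalent of: kubectl get nodes -l k8snodename=cloude
selected = [name for name, labels in nodes.items()
            if match_selector(labels, {"k8snodename": "cloude"})]
print(selected)  # ['k8scloude1', 'k8scloude2', 'k8scloude3']

# After removing the label (kubectl label nodes --all k8snodename-)
for labels in nodes.values():
    labels.pop("k8snodename", None)
remaining = [n for n, l in nodes.items()
             if match_selector(l, {"k8snodename": "cloude"})]
print(remaining)  # [] -- matches "No resources found" above
```

An empty selector matches every node, which is why `kubectl get nodes` without `-l` lists them all.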
```shell
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
node/k8scloude2 labeled

# Changing the value without --overwrite fails
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude
error: 'k8snodename' already has a value (k8scloude2), and --overwrite is false

# With --overwrite the old value is replaced
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude --overwrite
node/k8scloude2 labeled

[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
No resources found
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude
NAME         STATUS   ROLES    AGE    VERSION
k8scloude2   Ready    <none>   7d1h   v1.21.0

[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled
```

Tip: if you do not want control-plane to appear in the ROLES column for k8scloude1, remove the corresponding label with `kubectl label nodes k8scloude1 node-role.kubernetes.io/control-plane-`:
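The `--overwrite` behaviour seen above boils down to a simple rule: setting a key that already holds a different value is rejected unless overwriting is explicitly requested. A minimal Python sketch of that rule (not kubectl's real code):

```python
def label_node(labels, key, value, overwrite=False):
    """Set a label, refusing to change an existing value unless overwrite is requested."""
    if key in labels and labels[key] != value and not overwrite:
        raise ValueError(
            f"'{key}' already has a value ({labels[key]}), and --overwrite is false")
    labels[key] = value

node = {"k8snodename": "k8scloude2"}

# Without overwrite: rejected, just like kubectl
try:
    label_node(node, "k8snodename", "k8scloude")
except ValueError as e:
    print(e)  # 'k8snodename' already has a value (k8scloude2), and --overwrite is false

# With overwrite: the value is replaced
label_node(node, "k8snodename", "k8scloude", overwrite=True)
print(node["k8snodename"])  # k8scloude
```

Re-applying the same key with the same value is always allowed, which is why repeating an identical `kubectl label` command does not error.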
```shell
[root@k8scloude1 pod]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux
k8scloude3   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux

[root@k8scloude1 pod]# kubectl label nodes k8scloude1 node-role.kubernetes.io/control-plane-
```

3.4.3 Using labels to control which node a pod runs on

Label node k8scloude2 with k8snodename=k8scloude2:
```shell
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
NAME         STATUS   ROLES    AGE    VERSION
k8scloude2   Ready    <none>   7d1h   v1.21.0
[root@k8scloude1 pod]# kubectl get pods
No resources found in pod namespace.
```

Create a pod whose `nodeSelector` is `k8snodename: k8scloude2`, so the scheduler will only place it on a node carrying the label k8snodename=k8scloude2:
```shell
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeSelector:
    k8snodename: k8scloude2
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
```

You can see that the pod runs on node k8scloude2.
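Conceptually, `nodeSelector` is a hard filter during scheduling: a node is feasible for the pod only if its labels contain every key/value pair in the selector. A small Python sketch of that filtering step (an illustration under this article's example cluster, not the real scheduler code):

```python
def feasible_nodes(node_labels_by_name, node_selector):
    """Return nodes whose labels contain every key/value pair in the pod's nodeSelector."""
    return [name for name, labels in node_labels_by_name.items()
            if all(labels.get(k) == v for k, v in node_selector.items())]

# Cluster state: only k8scloude2 carries k8snodename=k8scloude2
cluster = {
    "k8scloude1": {"kubernetes.io/hostname": "k8scloude1"},
    "k8scloude2": {"kubernetes.io/hostname": "k8scloude2", "k8snodename": "k8scloude2"},
    "k8scloude3": {"kubernetes.io/hostname": "k8scloude3"},
}

# pod1's nodeSelector from schedulepod4.yaml
candidates = feasible_nodes(cluster, {"k8snodename": "k8scloude2"})
print(candidates)  # ['k8scloude2']

# An empty nodeSelector matches every node
print(feasible_nodes(cluster, {}))  # ['k8scloude1', 'k8scloude2', 'k8scloude3']
```

This also explains a common pitfall: if no node carries the selected label, the pod stays Pending because the feasible set is empty.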
