Ubuntu 22.04

Kubernetes : Remove Nodes    2023/09/04

 
Remove a Node from an existing Kubernetes cluster.
This example is based on the environment below and removes one node [snode03.srv.world] from it.
-----------+---------------------------+--------------------------+--------------+
           |                           |                          |              |
       eth0|10.0.0.25              eth0|10.0.0.71             eth0|10.0.0.72     |
+----------+-----------+   +-----------+-----------+   +-----------+-----------+ |
|  [ ctrl.srv.world ]  |   |  [snode01.srv.world]  |   |  [snode02.srv.world]  | |
|     Control Plane    |   |      Worker Node      |   |      Worker Node      | |
+----------------------+   +-----------------------+   +-----------------------+ |
                                                                                 |
------------+--------------------------------------------------------------------+
            |
        eth0|10.0.0.73
+-----------+-----------+
|  [snode03.srv.world]  |
|      Worker Node      |
+-----------------------+

[1] Remove a Node on the Master Node.
# prepare to remove the target node
# --ignore-daemonsets ⇒ ignore Pods managed by a DaemonSet
# --delete-emptydir-data ⇒ also evict Pods that use emptyDir volumes (their local data is deleted)
# --force ⇒ also delete Pods that were created directly as Pods, not managed by a Deployment or other controller

root@ctrl:~#
kubectl drain snode03.srv.world --ignore-daemonsets --delete-emptydir-data --force

Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-762x8, kube-system/kube-proxy-lm25n
node/snode03.srv.world drained
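
# (optional) To confirm which Pods are still running on the drained node (DaemonSet-managed Pods
# such as calico-node and kube-proxy remain), a field-selector query like the following can be used.
# The node name matches this example's environment.

root@ctrl:~#
kubectl get pods -A -o wide --field-selector spec.nodeName=snode03.srv.world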

# verify the status after a few minutes

root@ctrl:~#
kubectl get nodes snode03.srv.world

NAME                STATUS                     ROLES    AGE   VERSION
snode03.srv.world   Ready,SchedulingDisabled   <none>   14m   v1.25.3
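
# (optional) If the drain was only for temporary maintenance and you decide to keep the node,
# the [SchedulingDisabled] state can be reverted with [uncordon] instead of deleting the node.

root@ctrl:~#
kubectl uncordon snode03.srv.world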

# delete the node

root@ctrl:~#
kubectl delete node snode03.srv.world

node "snode03.srv.world" deleted

root@ctrl:~#
kubectl get nodes

NAME                STATUS   ROLES           AGE   VERSION
ctrl.srv.world      Ready    control-plane   26h   v1.25.3
snode01.srv.world   Ready    <none>          25h   v1.25.3
snode02.srv.world   Ready    <none>          25h   v1.25.3
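
# (optional) If you later want to add the removed node back to the cluster, a join command can be
# generated on the Master Node and run on the worker. This assumes kubeadm and kubelet are still
# installed on [snode03.srv.world] and that it has been reset as in step [2] below.

root@ctrl:~#
kubeadm token create --print-join-command

# run the printed [kubeadm join ...] command on snode03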
[2] On the removed Node, reset the kubeadm settings.
root@snode03:~#
kubeadm reset

[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
W1103 04:37:53.839439    7438 removeetcdmember.go:85] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
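
# As the [kubeadm reset] output notes, CNI configuration, iptables/IPVS rules, and kubeconfig files
# are not cleaned automatically. The following is one possible manual cleanup on the removed node;
# the IPVS step applies only if kube-proxy was running in IPVS mode.

root@snode03:~#
rm -rf /etc/cni/net.d

root@snode03:~#
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

root@snode03:~#
ipvsadm --clear

root@snode03:~#
rm -f $HOME/.kube/config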