CentOS Stream 10

Kubernetes : Add Control Plane Node (2025/01/24)

 

Add a new Control Plane Node to an existing Kubernetes Cluster.

This example is based on the cluster environment shown below.
It adds [dlp-1.srv.world (10.0.0.31)] as a Control Plane Node to this cluster.

*Note
When etcd runs on the Control Plane Nodes, its fault tolerance is 0 for 1 or 2 members, because etcd needs a majority of members (a quorum) to operate. So in a configuration with 2 Control Planes, if one of them goes down, etcd loses quorum, it is no longer possible to connect to etcd, and the cluster cannot be used normally.
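
For reference, the quorum arithmetic behind this note (general etcd behavior, not specific to this cluster):

quorum(n) = floor(n/2) + 1 , fault tolerance = n - quorum(n)
n = 1 : quorum 1, tolerance 0
n = 2 : quorum 2, tolerance 0
n = 3 : quorum 2, tolerance 1

So 3 Control Plane Nodes are the minimum configuration that can tolerate the loss of one member.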

+----------------------+   +----------------------+
|  [ ctrl.srv.world ]  |   |   [ dlp.srv.world ]  |
|     Manager Node     |   |     Control Plane    |
+-----------+----------+   +-----------+----------+
        eth0|10.0.0.25             eth0|10.0.0.30
            |                          |
------------+--------------------------+-----------
            |                          |
        eth0|10.0.0.51             eth0|10.0.0.52
+-----------+----------+   +-----------+----------+
| [ node01.srv.world ] |   | [ node02.srv.world ] |
|     Worker Node#1    |   |     Worker Node#2    |
+----------------------+   +----------------------+

[1] On the new Node, configure the common settings required to join the Cluster (refer to here).

[2] Add a proxy setting for the new Control Plane on the Manager Node.
[root@ctrl ~]#
vi /etc/nginx/nginx.conf
# add new Control Plane
stream {
    upstream k8s-api {
        server 10.0.0.30:6443;
        server 10.0.0.31:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-api;
    }
}
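
Before reloading, it's a good idea to validate the edited file. [nginx -t] only checks the configuration syntax and does not affect the running service, so it should print output like below.

[root@ctrl ~]#
nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful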

[root@ctrl ~]#
systemctl reload nginx
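
After the reload, you can make a quick sanity check that the proxy is listening on the API port (the process details in the output will vary by environment).

[root@ctrl ~]#
ss -lnpt | grep 6443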
[3] Confirm the join command on an existing Control Plane Node, and also transfer the certificate files to the new Node with any user.
[root@dlp ~]#
cd /etc/kubernetes/pki

[root@dlp pki]#
tar czvf kube-certs.tar.gz sa.pub sa.key ca.crt ca.key front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key

[root@dlp pki]#
scp kube-certs.tar.gz centos@10.0.0.31:/tmp
[root@dlp pki]#
kubeadm token create --print-join-command

kubeadm join 10.0.0.25:6443 --token 6m8ev5.3jrji3mal6c7mgw8 --discovery-token-ca-cert-hash sha256:17b33be257174fc86fa06066a5ebdbdb84d9b397f86d893a54d328ac3a1a44dd
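
For reference, recent kubeadm can also distribute these certificates through the cluster itself instead of copying them by hand: it uploads them as a short-lived encrypted Secret, and the new Node fetches them during join. A minimal sketch ([<certificate-key>] is a placeholder for the key the command prints):

[root@dlp ~]#
kubeadm init phase upload-certs --upload-certs

[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
<certificate-key>

# on the new Node, the manual certificate copy can then be skipped:
# kubeadm join 10.0.0.25:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
#   --control-plane --certificate-key <certificate-key>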
[4] On the new Node, run the join command you confirmed above with the [--control-plane] option.
# copy certificates transferred from existing Control Plane

[root@dlp-1 ~]#
mkdir /etc/kubernetes/pki

[root@dlp-1 ~]#
tar zxvf /tmp/kube-certs.tar.gz -C /etc/kubernetes/pki
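
It does not hurt to verify that the archive extracted where kubeadm expects it and that the private keys are still readable by root only.

[root@dlp-1 ~]#
ls -l /etc/kubernetes/pki /etc/kubernetes/pki/etcd
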
# if Firewalld is running, allow related services

[root@dlp-1 ~]#
firewall-cmd --add-service={kube-apiserver,kube-control-plane,kube-control-plane-secure,kube-controller-manager,kube-controller-manager-secure,kube-scheduler,kube-scheduler-secure,kubelet,kubelet-readonly,etcd-server,etcd-client,http,https,dns}

success
[root@dlp-1 ~]#
firewall-cmd --add-port={179/tcp,4789/udp}

success
[root@dlp-1 ~]#
firewall-cmd --add-masquerade

success
[root@dlp-1 ~]#
firewall-cmd --runtime-to-permanent

success
[root@dlp-1 ~]#
kubeadm join 10.0.0.25:6443 --token 6m8ev5.3jrji3mal6c7mgw8 \
--discovery-token-ca-cert-hash sha256:17b33be257174fc86fa06066a5ebdbdb84d9b397f86d893a54d328ac3a1a44dd \
--control-plane

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dlp-1.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.31 10.0.0.25]

.....
.....

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
[5] Verify settings on the Manager Node. It's OK if the status of the new Node turns to [STATUS = Ready].
[root@ctrl ~]#
kubectl get nodes

NAME               STATUS   ROLES           AGE    VERSION
dlp-1.srv.world    Ready    control-plane   63s    v1.31.5
dlp.srv.world      Ready    control-plane   2d2h   v1.31.5
node01.srv.world   Ready    <none>          2d2h   v1.31.5
node02.srv.world   Ready    <none>          2d2h   v1.31.5

[root@ctrl ~]#
kubectl get pods -A -o wide | grep dlp-1

kube-system   calico-node-8ckkz                          1/1     Running   0          2m6s   10.0.0.31        dlp-1.srv.world    <none>           <none>
kube-system   etcd-dlp-1.srv.world                       1/1     Running   0          2m4s   10.0.0.31        dlp-1.srv.world    <none>           <none>
kube-system   kube-apiserver-dlp-1.srv.world             1/1     Running   0          2m4s   10.0.0.31        dlp-1.srv.world    <none>           <none>
kube-system   kube-controller-manager-dlp-1.srv.world    1/1     Running   0          2m4s   10.0.0.31        dlp-1.srv.world    <none>           <none>
kube-system   kube-proxy-thzrl                           1/1     Running   0          2m6s   10.0.0.31        dlp-1.srv.world    <none>           <none>
kube-system   kube-scheduler-dlp-1.srv.world             1/1     Running   0          2m1s   10.0.0.31        dlp-1.srv.world    <none>           <none>
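
To additionally confirm that the new Node joined the stacked etcd cluster, you can list the etcd members from inside an etcd Pod. This is a sketch: it assumes the existing etcd Pod is named [etcd-dlp.srv.world] like the [etcd-dlp-1.srv.world] Pod above, and that [etcdctl] is included in the etcd image (true for the stock kubeadm images).

[root@ctrl ~]#
kubectl -n kube-system exec etcd-dlp.srv.world -- etcdctl \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key \
member list

Both [dlp.srv.world] and [dlp-1.srv.world] should be listed as started members.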