Debian 12 bookworm

Kubernetes : Configure Control Plane Node 2023/07/28

 

Install Kubeadm to configure a multi-node Kubernetes cluster.

This example is based on the following environment.

-----------+---------------------------+--------------------------+------------
           |                           |                          |
       eth0|10.0.0.25              eth0|10.0.0.71             eth0|10.0.0.72
+----------+-----------+   +-----------+-----------+   +-----------+-----------+
|  [ ctrl.srv.world ]  |   |  [snode01.srv.world]  |   |  [snode02.srv.world]  |
|     Control Plane    |   |      Worker Node      |   |      Worker Node      |
+----------------------+   +-----------------------+   +-----------------------+

[1] Install Kubeadm on all Nodes first.
[2]
Run the initial setup on the Control Plane Node.
For the [--control-plane-endpoint] option, specify the hostname or IP address where etcd and the Kubernetes API server run.
For the [--pod-network-cidr] option, specify the network that the Pod network uses.
There are several plugins for the Pod network. (refer to the details below)
  ⇒ https://kubernetes.io/docs/concepts/cluster-administration/networking/
This example uses Calico.
root@ctrl:~#
kubeadm init --control-plane-endpoint=10.0.0.25 --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///run/containerd/containerd.sock

[init] Using Kubernetes version: v1.30.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ctrl.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.25]

.....
.....

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.0.0.25:6443 --token wmgxmn.abjab1upv8da9bp5 \
        --discovery-token-ca-cert-hash sha256:6b0bceac20f9f9e4dea0c52d8ba3b50d565d7e59ddbdeee6fd7544d140ac78fe \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.25:6443 --token wmgxmn.abjab1upv8da9bp5 \
        --discovery-token-ca-cert-hash sha256:6b0bceac20f9f9e4dea0c52d8ba3b50d565d7e59ddbdeee6fd7544d140ac78fe
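
For reference, the same initialization can also be driven by a kubeadm configuration file instead of command line flags. The sketch below only illustrates that approach and was not used in this example; the file name [kubeadm-config.yaml] is an arbitrary choice.

root@ctrl:~#
cat > kubeadm-config.yaml <<'EOF'
# illustrative equivalent of the flags used above (assumption, not from the original steps)
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "10.0.0.25"
networking:
  podSubnet: "192.168.0.0/16"
EOF

root@ctrl:~#
kubeadm init --config kubeadm-config.yaml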

# set cluster admin user
# if you set a common user as the cluster admin, log in as that user and run [sudo cp/chown ***]

root@ctrl:~#
mkdir -p $HOME/.kube

root@ctrl:~#
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

root@ctrl:~#
chown $(id -u):$(id -g) $HOME/.kube/config
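
# optional check (not part of the original steps) : confirm that the copied kubeconfig works
# before deploying the Pod network; the API server address should be listed

root@ctrl:~#
kubectl cluster-info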
[3] Configure Pod Network with Calico.
root@ctrl:~#
wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml

root@ctrl:~#
kubectl apply -f calico.yaml

poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
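
# optional, not in the original steps : wait for the Calico DaemonSet rollout to finish
# before checking the node status

root@ctrl:~#
kubectl -n kube-system rollout status daemonset/calico-node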

# show state : OK if STATUS = Ready

root@ctrl:~#
kubectl get nodes

NAME             STATUS   ROLES           AGE    VERSION
ctrl.srv.world   Ready    control-plane   2m9s   v1.30.3
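
# optional, not in the original steps : if the node is still NotReady right after applying Calico,
# you can wait for the Ready condition instead of polling

root@ctrl:~#
kubectl wait --for=condition=Ready node/ctrl.srv.world --timeout=300s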

# show state : OK if all pods are Running

root@ctrl:~#
kubectl get pods -A

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-86996b59f4-8zjlx   1/1     Running   0          35s
kube-system   calico-node-9kklj                          1/1     Running   0          35s
kube-system   coredns-7db6d8ff4d-jmdnv                   1/1     Running   0          2m7s
kube-system   coredns-7db6d8ff4d-klrl7                   1/1     Running   0          2m7s
kube-system   etcd-ctrl.srv.world                        1/1     Running   0          2m21s
kube-system   kube-apiserver-ctrl.srv.world              1/1     Running   0          2m21s
kube-system   kube-controller-manager-ctrl.srv.world     1/1     Running   0          2m21s
kube-system   kube-proxy-gpkt2                           1/1     Running   0          2m7s
kube-system   kube-scheduler-ctrl.srv.world              1/1     Running   0          2m21s
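
# reference, not part of the original output : if the join token shown by [kubeadm init] has expired
# when you configure the Worker Nodes (default token lifetime is 24 hours), generate a new join command

root@ctrl:~#
kubeadm token create --print-join-command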