Kubernetes : Configure Control Plane Node (2022/11/22)
Install Kubeadm to configure a multi-node Kubernetes cluster.
This example is based on the following environment.
-----------+---------------------------+--------------------------+------------
           |                           |                          |
       eth0|10.0.0.30              eth0|10.0.0.51             eth0|10.0.0.52
+----------+-----------+   +-----------+----------+   +-----------+----------+
|   [ dlp.srv.world ]  |   | [ node01.srv.world ] |   | [ node02.srv.world ] |
|     Control Plane    |   |     Worker Node      |   |     Worker Node      |
+----------------------+   +----------------------+   +----------------------+
[1]
[2] Configure the initial setup on the Control Plane Node.
For the [--control-plane-endpoint] option, specify the hostname or IP address on which etcd and the Kubernetes API server run.
For the [--pod-network-cidr] option, specify the network that the Pod network uses.
There are several plugins for the Pod network (refer to the link below for details).
⇒ https://kubernetes.io/docs/concepts/cluster-administration/networking/
This example uses Calico.
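As an alternative to the flag form used in the next command, the same settings can also be written in a kubeadm configuration file and passed with [--config]. This is only a rough sketch and not part of the original article; the file name [kubeadm-config.yaml] is an arbitrary choice, and the values simply mirror the flags below.

[root@dlp ~]# cat > kubeadm-config.yaml <<'EOF'
# node-local settings : register the node with the CRI-O socket
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
# cluster-wide settings : API endpoint and Pod network CIDR for Calico
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "10.0.0.30"
networking:
  podSubnet: "192.168.0.0/16"
EOF
[root@dlp ~]# kubeadm init --config=kubeadm-config.yaml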
[root@dlp ~]# kubeadm init --control-plane-endpoint=10.0.0.30 --pod-network-cidr=192.168.0.0/16 --cri-socket=unix:///var/run/crio/crio.sock
[init] Using Kubernetes version: v1.25.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [dlp.srv.world kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.30]
.....
.....
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.0.0.30:6443 --token 2i0pks.r5lkvs55ezi792kl \
--discovery-token-ca-cert-hash sha256:6847590e54931d188e6a7b3fbda35ab7e8c1ca8fa915cdd377c44cfc0a027e1a \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
# the command below must be run on each Worker Node when it joins the cluster, so make a note of it
kubeadm join 10.0.0.30:6443 --token 2i0pks.r5lkvs55ezi792kl \
--discovery-token-ca-cert-hash sha256:6847590e54931d188e6a7b3fbda35ab7e8c1ca8fa915cdd377c44cfc0a027e1a
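Note that the bootstrap token in the join command above expires after 24 hours by default. If a Worker Node joins later than that, a fresh join command (with a new token) can be printed on the Control Plane Node; this step is not in the original output and is only needed in that case.

# print a new [kubeadm join ...] command with a freshly created token
[root@dlp ~]# kubeadm token create --print-join-command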
# set cluster admin user
# if you set a common user as cluster admin, log in with that user and run [sudo cp/chown ***]
[root@dlp ~]# mkdir -p $HOME/.kube
[root@dlp ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@dlp ~]# chown $(id -u):$(id -g) $HOME/.kube/config
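As an optional sanity check that the copied kubeconfig works (not part of the original steps), query the cluster endpoint; it should report the control plane at https://10.0.0.30:6443.

[root@dlp ~]# kubectl cluster-info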
[3] Configure the Pod network with Calico.
[root@dlp ~]# wget https://docs.projectcalico.org/manifests/calico.yaml
[root@dlp ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

# show state : OK if STATUS = Ready
[root@dlp ~]# kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
dlp.srv.world   Ready    control-plane   9m14s   v1.25.3

# show state : OK if all are Running
[root@dlp ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-798cc86c47-wrdml   1/1     Running   0          80s
kube-system   calico-node-59pfz                          1/1     Running   0          80s
kube-system   coredns-565d847f94-kk9n7                   1/1     Running   0          9m30s
kube-system   coredns-565d847f94-qkmqc                   1/1     Running   0          9m30s
kube-system   etcd-dlp.srv.world                         1/1     Running   1          9m44s
kube-system   kube-apiserver-dlp.srv.world               1/1     Running   1          9m45s
kube-system   kube-controller-manager-dlp.srv.world      1/1     Running   1          9m40s
kube-system   kube-proxy-bwd2p                           1/1     Running   0          9m30s
kube-system   kube-scheduler-dlp.srv.world               1/1     Running   1          9m39s
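As an optional check that Calico really assigns Pod addresses from [192.168.0.0/16] (not part of the original steps), run a throw-away Pod and look at its IP. The Pod name [test-nginx] is an arbitrary choice; on a cluster with only the Control Plane Node it stays [Pending] until a Worker Node joins, because the Control Plane Node carries a [NoSchedule] taint by default.

[root@dlp ~]# kubectl run test-nginx --image=nginx --restart=Never
[root@dlp ~]# kubectl get pod test-nginx -o wide
# OK if the [IP] column shows an address inside 192.168.0.0/16
[root@dlp ~]# kubectl delete pod test-nginx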