OpenShift Origin (OKD) 3.10 : Add Nodes to a Cluster (2018/08/22)
Add Nodes to an existing OpenShift Cluster.
This example is based on the following environment.
In this tutorial, a Compute Node [node03.srv.world (10.0.0.53)] is added as an example.
-----------+------------------------+------------------------+------------
           |10.0.0.25               |10.0.0.51               |10.0.0.52
+----------+-----------+ +----------+-----------+ +----------+-----------+
|  [ ctrl.srv.world ]  | | [ node01.srv.world ] | | [ node02.srv.world ] |
|     (Master Node)    | |    (Compute Node)    | |    (Compute Node)    |
|     (Infra Node)     | |                      | |                      |
|    (Compute Node)    | |                      | |                      |
+----------------------+ +----------------------+ +----------------------+
[1] On the Node to be added, create the same user for Cluster administration as on the other nodes, and grant it root privileges.
[root@node03 ~]# useradd origin
[root@node03 ~]# passwd origin
[root@node03 ~]# echo -e 'Defaults:origin !requiretty\norigin ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/openshift
[root@node03 ~]# chmod 440 /etc/sudoers.d/openshift
# if Firewalld is running, allow SSH
[root@node03 ~]# firewall-cmd --add-service=ssh --permanent
[root@node03 ~]# firewall-cmd --reload
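The sudoers drop-in above can be sanity-checked before it is installed. A minimal sketch (the temp-file path is illustrative; the real file belongs at /etc/sudoers.d/openshift):

```shell
# Build the drop-in in a temporary file, set the strict mode sudo requires,
# and verify both before installing it under /etc/sudoers.d.
tmp=$(mktemp)
printf 'Defaults:origin !requiretty\norigin ALL = (root) NOPASSWD:ALL\n' > "$tmp"
chmod 440 "$tmp"                    # sudoers drop-ins must be mode 0440
lines=$(grep -c 'origin' "$tmp")    # both lines should name the origin user
mode=$(stat -c '%a' "$tmp")
rm -f "$tmp"
echo "lines=$lines mode=$mode"
```

A badly permissioned or malformed file under /etc/sudoers.d can lock sudo out entirely, which is why checking before installing is worthwhile.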
[2] On the Node to be added, install the OpenShift Origin 3.10 repository, Docker, and other required packages.
[root@node03 ~]# yum -y install centos-release-openshift-origin310 epel-release docker git pyOpenSSL
[root@node03 ~]# systemctl start docker
[root@node03 ~]# systemctl enable docker
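For reference, the scaleup playbooks use the openshift_docker_insecure_registries inventory variable (see step [4]) to let Docker pull from the internal registry network. The snippet below only sketches what the resulting daemon option looks like; it writes to a temporary file for illustration, not to /etc/docker/daemon.json, since the playbooks manage the real configuration:

```shell
# Illustration only: openshift-ansible configures Docker itself.
d=$(mktemp)
printf '{\n  "insecure-registries": ["172.30.0.0/16"]\n}\n' > "$d"
# pull the registry CIDR back out to confirm the file says what we expect
reg=$(grep -o '172\.30\.0\.0/16' "$d")
rm -f "$d"
echo "$reg"
```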
[3] On the Master Node, log in as the Cluster admin user and copy the SSH public key to the new Node.
[origin@ctrl ~]$ vi ~/.ssh/config
# add the new node
Host ctrl
    Hostname ctrl.srv.world
    User origin
Host node01
    Hostname node01.srv.world
    User origin
Host node02
    Hostname node02.srv.world
    User origin
Host node03
    Hostname node03.srv.world
    User origin
[origin@ctrl ~]$ ssh-copy-id node03
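Since the [~/.ssh/config] stanzas above all follow the same pattern, they can be generated from a host list. A small sketch (the host names mirror this environment; the output goes to a temp file here rather than ~/.ssh/config):

```shell
# Generate one Host/Hostname/User stanza per node.
cfg=$(mktemp)
for n in ctrl node01 node02 node03; do
  printf 'Host %s\n    Hostname %s.srv.world\n    User origin\n' "$n" "$n" >> "$cfg"
done
entries=$(grep -c '^Host ' "$cfg")   # one "Host" stanza per node
rm -f "$cfg"
echo "$entries entries"
```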
[4] On the Master Node, log in as the Cluster admin user and run the Ansible playbooks to scale out the Cluster. For the [/etc/ansible/hosts] file, use the latest one from when you set up or last scaled out the Cluster.
# add [new_nodes] into the OSEv3 section
[OSEv3:children]
masters
nodes
etcd
new_nodes

[OSEv3:vars]
ansible_ssh_user=origin
ansible_become=true
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_default_subdomain=apps.srv.world
openshift_docker_insecure_registries=172.30.0.0/16

[masters]
ctrl.srv.world openshift_schedulable=true containerized=false

[etcd]
ctrl.srv.world

[nodes]
ctrl.srv.world openshift_node_group_name='node-config-master-infra'
node01.srv.world openshift_node_group_name='node-config-compute'
node02.srv.world openshift_node_group_name='node-config-compute'

# add a definition for the new node (this example adds the Infra node role)
[new_nodes]
node03.srv.world openshift_node_group_name='node-config-infra'

# run the Prerequisites playbook
[origin@ctrl ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
................
................
PLAY RECAP *********************************************************************
ctrl.srv.world             : ok=70   changed=6    unreachable=0    failed=0
localhost                  : ok=11   changed=0    unreachable=0    failed=0
node01.srv.world           : ok=32   changed=5    unreachable=0    failed=0
node02.srv.world           : ok=32   changed=5    unreachable=0    failed=0
node03.srv.world           : ok=67   changed=20   unreachable=0    failed=0

INSTALLER STATUS ***************************************************************
Initialization  : Complete (0:01:40)

# run the Scaleup playbook
[origin@ctrl ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml
................
................
PLAY RECAP *********************************************************************
ctrl.srv.world             : ok=67   changed=1    unreachable=0    failed=0
localhost                  : ok=23   changed=0    unreachable=0    failed=0
node03.srv.world           : ok=159  changed=58   unreachable=0    failed=0

INSTALLER STATUS ***************************************************************
Initialization             : Complete (0:01:11)
Node Bootstrap Preparation : Complete (0:04:34)
Node Join                  : Complete (0:00:14)

# show status
[origin@ctrl ~]$ oc get nodes --show-labels=true
NAME               STATUS    ROLES          AGE       VERSION           LABELS
ctrl.srv.world     Ready     infra,master   2h        v1.10.0+b81c8f8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ctrl.srv.world,node-role.kubernetes.io/infra=true,node-role.kubernetes.io/master=true
node01.srv.world   Ready     compute        2h        v1.10.0+b81c8f8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node01.srv.world,node-role.kubernetes.io/compute=true
node02.srv.world   Ready     compute        2h        v1.10.0+b81c8f8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node02.srv.world,node-role.kubernetes.io/compute=true
node03.srv.world   Ready     infra          2m        v1.10.0+b81c8f8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node03.srv.world,node-role.kubernetes.io/infra=true
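After the scaleup it is worth confirming that every node reports [Ready]. The sketch below filters saved `oc get nodes` output with awk; the sample data mirrors the status shown above:

```shell
# Count Ready nodes in saved `oc get nodes` output (column 2 is STATUS).
nodes=$(mktemp)
cat <<'EOF' > "$nodes"
ctrl.srv.world Ready infra,master 2h v1.10.0+b81c8f8
node01.srv.world Ready compute 2h v1.10.0+b81c8f8
node02.srv.world Ready compute 2h v1.10.0+b81c8f8
node03.srv.world Ready infra 2m v1.10.0+b81c8f8
EOF
ready=$(awk '$2 == "Ready"' "$nodes" | wc -l)
total=$(wc -l < "$nodes")
rm -f "$nodes"
echo "$ready/$total nodes Ready"
```

On a live cluster the same filter can be fed directly from `oc get nodes --no-headers`.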
[5] After the new Node has been added, open [/etc/ansible/hosts] again and move the new definition into the existing [nodes] section as follows.
# remove [new_nodes] from the children list
[OSEv3:children]
masters
nodes
etcd
new_nodes

[OSEv3:vars]
ansible_ssh_user=origin
ansible_become=true
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_default_subdomain=apps.srv.world
openshift_docker_insecure_registries=172.30.0.0/16

[masters]
ctrl.srv.world openshift_schedulable=true containerized=false

[etcd]
ctrl.srv.world

[nodes]
ctrl.srv.world openshift_node_group_name='node-config-master-infra'
node01.srv.world openshift_node_group_name='node-config-compute'
node02.srv.world openshift_node_group_name='node-config-compute'
node03.srv.world openshift_node_group_name='node-config-infra'

# remove this [new_nodes] section after moving its definition into the [nodes] section above
[new_nodes]
node03.srv.world openshift_node_group_name='node-config-infra'
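Before deleting the [new_nodes] section, it is easy to verify that each of its hosts now also appears under [nodes]. A hypothetical helper (hosts_in is not part of openshift-ansible; the sample inventory is abbreviated):

```shell
# Print the host names of one INI-style inventory section.
hosts_in() {  # usage: hosts_in SECTION FILE
  awk -v s="[$1]" '$0 == s {f=1; next} /^\[/ {f=0} f && NF && !/^#/ {print $1}' "$2"
}

inv=$(mktemp)
cat <<'EOF' > "$inv"
[nodes]
ctrl.srv.world openshift_node_group_name='node-config-master-infra'
node03.srv.world openshift_node_group_name='node-config-infra'
[new_nodes]
node03.srv.world openshift_node_group_name='node-config-infra'
EOF

# every [new_nodes] host must already be listed under [nodes]
ok=yes
for h in $(hosts_in new_nodes "$inv"); do
  hosts_in nodes "$inv" | grep -qx "$h" || ok=no
done
rm -f "$inv"
echo "$ok"
```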
[6] By the way, if you'd like to add a Master Node, it can be configured the same way, as follows.
[OSEv3:children]
masters
nodes
new_masters
.....
.....
[new_masters]
node03.srv.world openshift_node_group_name='node-config-master'

[origin@ctrl ~]$ ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-master/scaleup.yml