Pacemaker : Set LVM Shared Storage (2020/02/24)
Configure an Active/Passive HA-LVM (High Availability LVM) volume in the Cluster.
This example is based on the environment below.
Before this setting, configure the basic Cluster settings and set up a Fence device first.

                        +--------------------+
                        | [ ISCSI Target ]   |
                        | storage.srv.world  |
                        +---------+----------+
                         10.0.0.50|
                                  |
+----------------------+          |          +----------------------+
| [ Cluster Node#1 ]   |10.0.0.51 | 10.0.0.52| [ Cluster Node#2 ]   |
|  node01.srv.world    +----------+----------+  node02.srv.world    |
|                      |                     |                      |
+----------------------+                     +----------------------+
[1]
Create a storage volume to share on the ISCSI Target, refer to here.
In this example, an ISCSI storage of [10G] size was created with the IQN [iqn.2020-02.world.srv:storage.target02].
[2]
On all Cluster Nodes, log in to the ISCSI Target, refer to here.
[3]
On all Cluster Nodes, change the LVM System ID source.

[root@node01 ~]# vi /etc/lvm/lvm.conf
# line 1217: change
system_id_source = "uname"
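If you prefer not to edit the file interactively, the same change can be scripted. This is only a sketch: it assumes the stock commented default line is [system_id_source = "none"], and it is shown here against a temporary copy so it can be tried safely; on a real node you would point it at /etc/lvm/lvm.conf.

```shell
#!/bin/sh
# Sketch: flip system_id_source from "none" to "uname" non-interactively.
# Done on a throwaway copy here; use /etc/lvm/lvm.conf on a real node.
conf=$(mktemp)
printf '%s\n' 'system_id_source = "none"' > "$conf"   # stand-in for the stock line
sed -i 's/system_id_source = "none"/system_id_source = "uname"/' "$conf"
grep system_id_source "$conf"   # expect: system_id_source = "uname"
rm -f "$conf"
```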
[4]
On one Node in the Cluster, set up LVM on the shared storage. [sdb] in the example below is the shared storage from the ISCSI Target.
# set LVM partition
[root@node01 ~]# parted --script /dev/sdb "mklabel msdos"
[root@node01 ~]# parted --script /dev/sdb "mkpart primary 0% 100%"
[root@node01 ~]# parted --script /dev/sdb "set 1 lvm on"
# create physical volume
[root@node01 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.

# create volume group
[root@node01 ~]# vgcreate vg_ha /dev/sdb1
  Volume group "vg_ha" successfully created with system ID node01.srv.world

# confirm the value of [System ID] equals the value of [$ uname -n]
[root@node01 ~]# vgs -o+systemid
  VG    #PV #LV #SN Attr   VSize   VFree System ID
  cl      1   2   0 wz--n- <29.00g     0
  vg_ha   1   0   0 wz--n-   9.99g 9.99g node01.srv.world

# create logical volume
[root@node01 ~]# lvcreate -l 100%FREE -n lv_ha vg_ha
  Logical volume "lv_ha" created.

# format with ext4
[root@node01 ~]# mkfs.ext4 /dev/vg_ha/lv_ha
# deactivate volume group
[root@node01 ~]# vgchange vg_ha -an
  0 logical volume(s) in volume group "vg_ha" now active
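Before handing the volume group to the cluster, it is worth verifying that its System ID really matches this node's hostname. A minimal sketch of that check follows; the [sample_vgs] text is canned output matching the [vgs] listing above, and on a real node you would instead parse the live output of [vgs --noheadings -o vg_name,systemid] and compare it against [uname -n].

```shell
#!/bin/sh
# Sketch: extract the System ID that owns vg_ha from (canned) vgs output.
# Real usage: vgs --noheadings -o vg_name,systemid | awk '$1=="vg_ha"{print $2}'
sample_vgs='cl
vg_ha  node01.srv.world'
owner=$(printf '%s\n' "$sample_vgs" | awk '$1 == "vg_ha" { print $2 }')
echo "vg_ha System ID: $owner"
# On the real node, then confirm: [ "$owner" = "$(uname -n)" ]
```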
[5]
On the other Nodes (except the Node used in [4]), scan LVM volumes to find the new volume.
[root@node02 ~]# lvm pvscan --cache --activate ay
  pvscan[1932] PV /dev/vda2 online, VG cl is complete.
  pvscan[1932] PV /dev/sdb1 ignore foreign VG.
  pvscan[1932] VG cl run autoactivation.
  2 logical volume(s) in volume group "cl" now active
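The [ignore foreign VG.] line is the expected result here: with [system_id] access control, the passive node sees [vg_ha] as owned by another host and refuses to auto-activate it, leaving activation to the cluster resource agent. A small sketch of checking for that line, using canned output from the transcript above ([pvscan_out] is a stand-in for piping the real [lvm pvscan] output):

```shell
#!/bin/sh
# Sketch: count the foreign-VG lines in (canned) pvscan output.
# Real usage: lvm pvscan --cache --activate ay 2>&1 | grep -c 'foreign VG'
pvscan_out='pvscan[1932] PV /dev/vda2 online, VG cl is complete.
pvscan[1932] PV /dev/sdb1 ignore foreign VG.'
# Exactly one foreign VG is correct: the shared vg_ha, owned by node01.
printf '%s\n' "$pvscan_out" | grep -c 'foreign VG'
```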
[6]
On the Node used in [4], set the shared storage as a Cluster resource.
# [lvm_ha] ⇒ any name
# [vgname=***] ⇒ volume group name
# [--group] ⇒ any name
[root@node01 ~]# pcs resource create lvm_ha ocf:heartbeat:LVM-activate vgname=vg_ha vg_access_mode=system_id --group ha_group

# confirm status
# OK if LVM resource is [Started]
[root@node01 ~]# pcs status
Cluster name: ha_cluster
Stack: corosync
Current DC: node02.srv.world (version 2.0.2-3.el8_1.2-744a30d655) - partition with quorum
Last updated: Fri Feb 20 01:34:06 2020
Last change: Fri Feb 20 01:34:01 2020 by root via cibadmin on node01.srv.world

2 nodes configured
2 resources configured

Online: [ node01.srv.world node02.srv.world ]

Full list of resources:

 scsi-shooter   (stonith:fence_scsi):   Started node01.srv.world
 Resource Group: ha_group
     lvm_ha     (ocf::heartbeat:LVM-activate):  Started node02.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
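For scripted monitoring, the [Started] check can be automated. The sketch below matches against a canned status line taken from the transcript above; on a live cluster you would feed it the real line, e.g. [pcs status | grep lvm_ha].

```shell
#!/bin/sh
# Sketch: verify the LVM resource reached Started, given one status line.
# Real usage: status_line=$(pcs status | grep lvm_ha)
status_line='lvm_ha (ocf::heartbeat:LVM-activate): Started node02.srv.world'
case "$status_line" in
  *Started*) echo "lvm_ha is Started" ;;
  *)         echo "lvm_ha did not start" >&2; exit 1 ;;
esac
```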