Ubuntu 24.04

Pacemaker : Set LVM Shared Storage (2024/07/23)

 

Configure an Active/Passive HA-LVM (High Availability LVM) volume in the Cluster.

This example is based on the following environment.
Before this setup, configure the basic settings of the Cluster and configure a Fence device first.

                        +--------------------+
                        | [  ISCSI Target  ] |
                        |    dlp.srv.world   |
                        +---------+----------+
                         10.0.0.30|
                                  |
+----------------------+          |          +----------------------+
| [  Cluster Node#1  ] |10.0.0.51 | 10.0.0.52| [  Cluster Node#2  ] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+

[1]
Create the storage to share on the ISCSI Target, refer to here.
In this example, an ISCSI storage of [10G] size was created as IQN [iqn.2024-04.world.srv:dlp.target02].
[2] On all Cluster Nodes, change the LVM System ID source.
root@node01:~#
vi /etc/lvm/lvm.conf
# line 1357 : uncomment and change

system_id_source = "uname"
[3] On a Node in the Cluster, set up LVM on the shared storage.
[sdb] in the example below is the shared storage from the ISCSI Target.
# current session

root@node01:~#
iscsiadm -m session -o show

tcp: [1] 10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target01 (non-flash)
# discover

root@node01:~#
iscsiadm -m discovery -t sendtargets -p 10.0.0.30

10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target01
10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target02
# login

root@node01:~#
iscsiadm -m node --login --target iqn.2024-04.world.srv:dlp.target02
root@node01:~#
iscsiadm -m session -o show

tcp: [1] 10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target01 (non-flash)
tcp: [2] 10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target02 (non-flash)
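
As an optional check, you can confirm which device node the new LUN was assigned (assumed to be [sdb] in this example); [lsblk --scsi] lists SCSI devices together with their transport type.

# (optional) identify the device node of the new iSCSI LUN

root@node01:~#
lsblk --scsi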
# set LVM

root@node01:~#
parted --script /dev/sdb "mklabel gpt"

root@node01:~#
parted --script /dev/sdb "mkpart primary 0% 100%"

root@node01:~#
parted --script /dev/sdb "set 1 lvm on"
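
If you want to double-check the result before creating the physical volume, the [print] command of [parted] shows the new partition table.

# (optional) confirm the new partition layout

root@node01:~#
parted --script /dev/sdb "print"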
# create physical volume

root@node01:~#
pvcreate /dev/sdb1

  Physical volume "/dev/sdb1" successfully created.

# create volume group

root@node01:~#
vgcreate vg_ha /dev/sdb1

  Volume group "vg_ha" successfully created with system ID node01.srv.world

# confirm that the value of [System ID] equals the output of [uname -n]

root@node01:~#
vgs -o+systemid

  VG        #PV #LV #SN Attr   VSize   VFree  System ID
  ubuntu-vg   1   1   0 wz--n- <28.00g     0
  vg_ha       1   0   0 wz--n-  <9.98g <9.98g node01.srv.world

# create logical volume

root@node01:~#
lvcreate -l 100%FREE -n lv_ha vg_ha

  Logical volume "lv_ha" created.

# format with ext4

root@node01:~#
mkfs.ext4 /dev/vg_ha/lv_ha
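
As an optional sanity check, you can mount the new filesystem once and unmount it again; from this point on, the cluster should be the one activating and mounting it.

# (optional) test-mount the new filesystem, then unmount it again

root@node01:~#
mount /dev/vg_ha/lv_ha /mnt

root@node01:~#
df -hT /mnt

root@node01:~#
umount /mnt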
[4] On the other Nodes, except the Node used in [3], scan LVM volumes to find the new volume.
root@node02:~#
iscsiadm -m session -o show

tcp: [1] 10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target01 (non-flash)
root@node02:~#
iscsiadm -m discovery -t sendtargets -p 10.0.0.30

10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target01
10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target02
root@node02:~#
iscsiadm -m node --login --target iqn.2024-04.world.srv:dlp.target02
root@node02:~#
iscsiadm -m session -o show

tcp: [1] 10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target01 (non-flash)
tcp: [2] 10.0.0.30:3260,1 iqn.2024-04.world.srv:dlp.target02 (non-flash)
root@node02:~#
lvm pvscan --cache --activate ay

  pvscan[2446] PV /dev/vda3 online, VG ubuntu-vg is complete.
  pvscan[2446] PV /dev/sdb1 ignore foreign VG.
  pvscan[2446] VG ubuntu-vg run autoactivation.
  1 logical volume(s) in volume group "ubuntu-vg" now active
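
The [ignore foreign VG] message above is expected: [vg_ha] carries the system ID of [node01.srv.world], so [node02] can see the volume group but does not activate it. You can confirm the ownership with [vgs] if you like.

# (optional) the shared VG is visible here but owned by node01

root@node02:~#
vgs -o+systemid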
[5] On the Node used in [3], set the shared storage as a Cluster resource.
# [lvm_ha] : any name
# [vgname=***] : volume group name
# [group] : any name

root@node01:~#
pcs resource create lvm_ha ocf:heartbeat:LVM-activate vgname=vg_ha vg_access_mode=system_id group ha_group --future

# confirm status
# OK if LVM resource is [Started]

root@node01:~#
pcs status

Cluster name: ha_cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node02.srv.world (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Tue Jul 23 00:37:00 2024 on node01.srv.world
  * Last change:  Tue Jul 23 00:36:55 2024 by root via cibadmin on node01.srv.world
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ node01.srv.world node02.srv.world ]

Full List of Resources:
  * scsi-shooter        (stonith:fence_scsi):    Started node02.srv.world
  * Resource Group: ha_group:
    * lvm_ha    (ocf:heartbeat:LVM-activate):    Started node01.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

# set [lvm_ha] and [scsi-shooter] to start on the same node

root@node01:~#
pcs constraint colocation add lvm_ha with scsi-shooter

root@node01:~#
pcs status

Cluster name: ha_cluster
Cluster Summary:
  * Stack: corosync (Pacemaker is running)
  * Current DC: node02.srv.world (version 2.1.6-6fdc9deea29) - partition with quorum
  * Last updated: Tue Jul 23 00:40:19 2024 on node01.srv.world
  * Last change:  Tue Jul 23 00:40:02 2024 by root via cibadmin on node01.srv.world
  * 2 nodes configured
  * 2 resource instances configured

Node List:
  * Online: [ node01.srv.world node02.srv.world ]

Full List of Resources:
  * scsi-shooter        (stonith:fence_scsi):    Started node02.srv.world
  * Resource Group: ha_group:
    * lvm_ha    (ocf:heartbeat:LVM-activate):    Started node02.srv.world

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
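
To confirm that failover works, you can put the node currently holding the resources into standby mode, verify with [pcs status] that the group moves to the other node, and then bring the node back online. [node02.srv.world] below is simply the node that was running the resources in this example.

# (optional) failover test : the resources should move to the other node

root@node01:~#
pcs node standby node02.srv.world

root@node01:~#
pcs status

# bring the node back online after the test

root@node01:~#
pcs node unstandby node02.srv.world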