DRBD 9 : Configure (2018/07/12)
Configure DRBD after installing.
This example is based on the following environment.

+----------------------+          |          +----------------------+
| [ DRBD Node#1 ]      |10.0.0.51 | 10.0.0.52|   [ DRBD Node#2 ]    |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+
The servers on which you install DRBD must have a free block device (a spare disk or partition that is not in use).
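If you are not sure whether a spare block device is available, you can check with [lsblk]. The sketch below also shows one way to create the [/dev/sdb1] partition used in this example with [parted]; it assumes the spare disk is [/dev/sdb] and destroys any data on it, so adjust it to your own environment.

# list block devices and confirm a spare disk exists (assumed here to be [/dev/sdb])
[root@node01 ~]# lsblk
# create a single partition spanning the disk and mark it for LVM use
[root@node01 ~]# parted --script /dev/sdb mklabel gpt mkpart primary 0% 100% set 1 lvm on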
[1] Create a Volume Group for DRBD on all Nodes.
[root@node01 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
[root@node01 ~]# vgcreate drbdpool /dev/sdb1
  Volume group "drbdpool" successfully created
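To confirm the Volume Group exists on every Node, the standard LVM listing commands can be used (a quick extra check, not part of the original procedure):

# list physical volumes and the [drbdpool] volume group
[root@node01 ~]# pvs
[root@node01 ~]# vgs drbdpool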
[2] If Firewalld is running, allow service ports.
[root@node01 ~]# firewall-cmd --add-port=6996-7800/tcp --permanent
success
[root@node01 ~]# firewall-cmd --reload
success
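The same port range must be opened on the other Node as well, since both Nodes communicate over 6996-7800/tcp. To verify the rule after reloading (an extra check, not in the original steps):

# confirm the port range has been added to the runtime configuration
[root@node01 ~]# firewall-cmd --list-ports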
[3] On one Node, initialize the DRBD Cluster and add the other Nodes.
[root@node01 ~]# drbdmanage init 10.0.0.51
You are going to initialize a new drbdmanage cluster.
CAUTION! Note that:
* Any previous drbdmanage cluster information may be removed
* Any remaining resources managed by a previous drbdmanage installation
that still exist on this system will no longer be managed by drbdmanage
Confirm:
yes/no: yes
[ 2420.329129] drbd .drbdctrl: Starting worker thread (from drbdsetup [13925])
[ 2420.451483] drbd .drbdctrl/0 drbd0: disk( Diskless -> Attaching )
[ 2420.453513] drbd .drbdctrl/0 drbd0: Maximum number of peer devices = 31
.....
.....
[ 2421.677920] drbd .drbdctrl: Preparing cluster-wide state change 2615618191 (0->-1 3/1)
[ 2421.680499] drbd .drbdctrl: Committing cluster-wide state change 2615618191 (2ms)
[ 2421.682894] drbd .drbdctrl: role( Secondary -> Primary )
Waiting for server: .
Operation completed successfully
# add a Node to the DRBD Cluster
[root@node01 ~]# drbdmanage add-node node02.srv.world 10.0.0.52
[ 2452.989515] drbd .drbdctrl node02.srv.world: Starting sender thread (from drbdsetup [14020])
[ 2452.997518] drbd .drbdctrl node02.srv.world: conn( StandAlone -> Unconnected )
Operation completed successfully
Operation completed successfully
[ 2453.037235] drbd .drbdctrl node02.srv.world: Starting receiver thread (from drbd_w_.drbdctr [13926])
[ 2453.040902] drbd .drbdctrl node02.srv.world: conn( Unconnected -> Connecting )
Host key verification failed.
Give leader time to contact the new node
Operation completed successfully
Operation completed successfully

Join command for node node02.srv.world:
drbdmanage join -p 6999 10.0.0.52 1 node01.srv.world 10.0.0.51 0 twuZE5BAthnZIRyEAAS/
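The [Host key verification failed.] line above indicates that node01 could not reach node02 over SSH to run the join command automatically, which is why the join command is printed for manual execution in the next step. If you would rather let drbdmanage join the new Node automatically, passwordless root SSH between the Nodes would be needed beforehand; a minimal sketch (an assumption, not part of the original procedure):

# generate a key pair on node01 and copy it to node02 (run before [drbdmanage add-node])
[root@node01 ~]# ssh-keygen -t rsa
[root@node01 ~]# ssh-copy-id root@node02.srv.world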
[4] Move to the other Nodes (not the one used in [3]) and join the Cluster with the command shown at the end of [3].
[root@node02 ~]# drbdmanage join -p 6999 10.0.0.52 1 node01.srv.world 10.0.0.51 0 twuZE5BAthnZIRyEAAS/
You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
* Any previous drbdmanage cluster information may be removed
* Any remaining resources managed by a previous drbdmanage installation
that still exist on this system will no longer be managed by drbdmanage
Confirm:
yes/no: yes
[ 2491.338532] drbd: loading out-of-tree module taints kernel.
[ 2491.343082] drbd: module verification failed: signature and/or required key missing - tainting kernel
[ 2491.364065] drbd: initialized. Version: 9.0.14-1 (api:2/proto:86-113)
.....
.....
[ 2553.012505] drbd .drbdctrl node01.srv.world: conn( StandAlone -> Unconnected )
[ 2553.025846] drbd .drbdctrl node01.srv.world: Starting receiver thread (from drbd_w_.drbdctr [13762])
[ 2553.028899] drbd .drbdctrl node01.srv.world: conn( Unconnected -> Connecting )
Operation completed successfully
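You can also confirm the control volume from the node02 side before moving on; [drbdadm status] should show [.drbdctrl] connected to node01 (an extra check, not in the original text):

# verify the connection to node01 from the joined Node
[root@node02 ~]# drbdadm status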
[5] Verify the state of the Cluster. It is OK if all states are [ok].
[root@node01 ~]# drbdadm status

 --== Thank you for participating in the global usage survey ==--
The server's response is:

you are the 1527th user to install this version

.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node02.srv.world role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

[root@node01 ~]# drbdmanage list-nodes
+------------------------------------------------------+
| Name             | Pool Size | Pool Free | State     |
|------------------------------------------------------|
| node01.srv.world |     81916 |     81908 | ok        |
| node02.srv.world |     81916 |     81908 | ok        |
+------------------------------------------------------+
[6] Create a Resource and Volume on the DRBD Cluster.
# create [resource01] resource
[root@node01 ~]# drbdmanage add-resource resource01
Operation completed successfully
[root@node01 ~]# drbdmanage list-resources
+--------------------+
| Name       | State |
|--------------------|
| resource01 | ok    |
+--------------------+

# add a volume with 10G
[root@node01 ~]# drbdmanage add-volume resource01 10GB
Operation completed successfully
[root@node01 ~]# drbdmanage list-volumes
+-------------------------------------------------+
| Name       | Vol ID | Size     | Minor | State  |
|-------------------------------------------------|
| resource01 |      0 | 9.31 GiB |   100 | ok     |
+-------------------------------------------------+

# deploy the resource
# the last number is the number of Nodes you'd like to use for this resource
[root@node01 ~]# drbdmanage deploy-resource resource01 2
Operation completed successfully

# show status : right after deploying, the state is [Inconsistent] (syncing)
[root@node01 ~]# drbdadm status
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node02.srv.world role:Primary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

resource01 role:Secondary
  disk:Inconsistent
  node02.srv.world role:Secondary
    replication:SyncTarget peer-disk:UpToDate done:0.07

# after syncing finishes, the state looks like this
[root@node01 ~]# drbdadm status
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node02.srv.world role:Primary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

resource01 role:Secondary
  disk:UpToDate
  node02.srv.world role:Secondary
    peer-disk:UpToDate
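To see which Nodes the resource was deployed to, [drbdmanage list-assignments] can also be used (an extra check, not part of the original steps):

# show the assignment of [resource01] to Nodes
[root@node01 ~]# drbdmanage list-assignments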
[7] The DRBD configuration is now complete. Create a filesystem on the DRBD device and mount it to use it.
# the number in [/dev/drbd***] is the [Minor] value shown by the [drbdmanage list-volumes] command
[root@node01 ~]# mkfs.xfs /dev/drbd100
[root@node01 ~]# mkdir /drbd_disk
[root@node01 ~]# mount /dev/drbd100 /drbd_disk
[root@node01 ~]# df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        26G  1.7G   25G   7% /
devtmpfs            devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs               tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs               tmpfs     2.0G  8.5M  2.0G   1% /run
tmpfs               tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1           xfs      1014M  238M  777M  24% /boot
tmpfs               tmpfs     396M     0  396M   0% /run/user/0
/dev/drbd100        xfs       9.4G   33M  9.3G   1% /drbd_disk

# create a test file
[root@node01 ~]# echo 'test file' > /drbd_disk/test.txt
[root@node01 ~]# ll /drbd_disk
total 4
-rw-r--r--. 1 root root 10 Jul 12 19:54 test.txt
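Note that the DRBD device can be mounted on only one Node at a time. In DRBD 9, the configuration generated by drbdmanage usually has auto-promote enabled, so mounting the device should promote the Node to Primary automatically; if you want to confirm the role and disk state first (an extra check, not in the original steps):

# show the role and disk state of [resource01] on the local Node
[root@node01 ~]# drbdadm status resource01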
[8] To mount the DRBD device on the secondary Host, switch the roles as follows.
########### on primary Node ###########
# unmount and get the Secondary role
[root@node01 ~]# umount /drbd_disk
[root@node01 ~]# drbdadm secondary resource01
########### on secondary Node ###########
# get the Primary role and mount
[root@node02 ~]# drbdadm primary resource01
[root@node02 ~]# mount /dev/drbd100 /drbd_disk
[root@node02 ~]# df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        26G  1.6G   25G   6% /
devtmpfs            devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs               tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs               tmpfs     2.0G  8.5M  2.0G   1% /run
tmpfs               tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda1           xfs      1014M  238M  777M  24% /boot
tmpfs               tmpfs     396M     0  396M   0% /run/user/0
/dev/drbd100        xfs       9.4G   33M  9.3G   1% /drbd_disk

[root@node02 ~]# ll /drbd_disk
total 4
-rw-r--r--. 1 root root 10 Jul 12 19:54 test.txt
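To move the filesystem back to node01 later, simply reverse the procedure above (based on the same commands):

########### on node02 (currently Primary) ###########
# unmount and return to the Secondary role
[root@node02 ~]# umount /drbd_disk
[root@node02 ~]# drbdadm secondary resource01

########### on node01 ###########
# take the Primary role again and mount
[root@node01 ~]# drbdadm primary resource01
[root@node01 ~]# mount /dev/drbd100 /drbd_disk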