GlusterFS 8 : Geo Replication (2021/04/05)
Configure GlusterFS Geo Replication to create a replicated copy of a GlusterFS volume on a remote site.
This configuration is a general Primary-Replica (read-only replica) setup.
As an example, configure Geo Replication for the Gluster volume [vol_distributed] created in the example linked here.
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.53| [GlusterFS Server#3] |
|   node01.srv.world   +----------+----------+   node03.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
  [vol_distributed]               |      [secondary-vol_distributed]
+----------------------+          |          +----------------------+
| [GlusterFS Server#2] |10.0.0.52 | 10.0.0.54| [GlusterFS Server#4] |
|   node02.srv.world   +----------+----------+   node04.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+
[1] Create a volume for replication on the Replica Nodes (node03, node04); refer to here.
This example proceeds with a volume named [secondary-vol_distributed].
[2] On all Nodes, install the Geo Replication packages.
[root@node01 ~]# dnf --enablerepo=centos-gluster8,powertools -y install glusterfs-geo-replication
[3] On all Nodes, if SELinux is enabled, change the policy.
[root@node01 ~]# setsebool -P rsync_full_access on
[root@node01 ~]# setsebool -P rsync_client on
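Since `setsebool -P` is persistent, the booleans should survive a reboot. As a quick sanity check, a small sketch can parse `getsebool`-style output; note that `bools_on` is an illustrative helper (not an SELinux tool) that reads "name --> on" lines from stdin, so it can be tried even on a machine without SELinux:

```shell
# Illustrative helper (assumption: getsebool output format "name --> on"):
# counts how many of the listed booleans are on.
bools_on() {
  awk '$3 == "on" { n++ } END { print n+0 " of " NR " boolean(s) on" }'
}
# On a live node you would pipe the real command into it:
#   getsebool rsync_full_access rsync_client | bools_on
```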
[4] On a Node that has the replication volume, change the settings of the volume.
[root@node03 ~]# gluster volume set secondary-vol_distributed performance.quick-read off
volume set: success
[5] On a Node that has the primary volume, set up an SSH key-pair. This example proceeds with the [root] account and no passphrase.
[root@node01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9iknf49kFVUrMPoEzMjWKPNsWxn9Il2a+RfnrFQJjS0 root@node01.srv.world
The key's randomart image is:

[root@node01 ~]# ssh-copy-id node01
root@node01's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

[root@node01 ~]# ssh-copy-id node02
[root@node01 ~]# ssh-copy-id node03
[root@node01 ~]# ssh-copy-id node04
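The four ssh-copy-id runs above can also be wrapped in a small loop. This is a sketch, not part of the original procedure; the `copy_keys` name and the `DRY_RUN` switch are illustrative additions that let the loop be previewed before any SSH connection is made:

```shell
# Illustrative loop over the four nodes (assumption: root SSH from node01).
NODES="node01 node02 node03 node04"
copy_keys() {
  for h in $NODES; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      # Dry-run: only show what would be executed.
      echo "would run: ssh-copy-id $h"
    else
      ssh-copy-id "$h"
    fi
  done
}
# Preview first; drop DRY_RUN=1 to actually distribute the key.
DRY_RUN=1 copy_keys
```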
[6] On a Node that has the primary volume, create a Geo Replication session.
[root@node01 ~]# gluster system:: execute gsec_create
Common secret pub file present at /var/lib/glusterd/geo-replication/common_secret.pem.pub

# gluster volume geo-replication [primary volume] [(replication host)::(replication volume)] ***
[root@node01 ~]# gluster volume geo-replication vol_distributed node03::secondary-vol_distributed create push-pem
Creating geo-replication session between vol_distributed & node03::secondary-vol_distributed has been successful
[7] On a Node that has the primary volume, and also on a Node that has the replication volume, enable the shared storage feature.
[root@node01 ~]# gluster volume set all cluster.enable-shared-storage enable
volume set: success
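Enabling [cluster.enable-shared-storage] makes glusterd create a [gluster_shared_storage] volume and mount it on the nodes (on recent releases under /run/gluster/shared_storage); the meta-volume configured in the next step depends on it. A hedged sketch to confirm the mount; `shared_storage_mounted` is an illustrative helper that reads mount-table text from stdin, so it can be tried without a live cluster:

```shell
# Illustrative check: does the mount table contain the shared storage volume?
shared_storage_mounted() {
  if grep -q 'gluster_shared_storage'; then
    echo "shared storage mounted"
  else
    echo "shared storage NOT mounted"
  fi
}
# On a live node:
#   shared_storage_mounted < /proc/mounts
```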
[8] On a Node that has the primary volume, configure the meta-volume and start Geo Replication. After starting Geo Replication, try to write files on the primary volume to verify that replication works normally.
# gluster volume geo-replication [primary volume] [(replication host)::(replication volume)] ***
[root@node01 ~]# gluster volume geo-replication vol_distributed node03::secondary-vol_distributed config use_meta_volume true
geo-replication config updated successfully
[root@node01 ~]# gluster volume geo-replication vol_distributed node03::secondary-vol_distributed start
Starting geo-replication session between vol_distributed & node03::secondary-vol_distributed has been successful
[root@node01 ~]# gluster volume geo-replication vol_distributed node03::secondary-vol_distributed status

MASTER NODE    MASTER VOL         MASTER BRICK              SLAVE USER    SLAVE                                SLAVE NODE    STATUS    CRAWL STATUS       LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
node01         vol_distributed    /glusterfs/distributed    root          node03::secondary-vol_distributed    node03        Active    Changelog Crawl    2021-04-04 12:32:51
node02         vol_distributed    /glusterfs/distributed    root          node03::secondary-vol_distributed    node03        Active    Changelog Crawl    2021-04-04 12:32:51
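A quick way to confirm that every brick session stays [Active] (rather than Faulty) is to scan the STATUS column of the output above. The `georep_all_active` helper below is an illustrative sketch, not a gluster command; it reads the status table from stdin, so on a live node you would pipe the status command into it:

```shell
# Illustrative parser for the geo-replication status table:
# skips the header and separator lines, then counts Active sessions
# (STATUS is the 7th whitespace-separated field of each data row).
georep_all_active() {
  awk '$1 != "MASTER" && $1 !~ /^-/ && NF >= 8 {
         total++
         if ($7 == "Active") active++
       }
       END { printf "%d/%d sessions Active\n", active+0, total+0 }'
}
# On a live node:
#   gluster volume geo-replication vol_distributed \
#       node03::secondary-vol_distributed status | georep_all_active
```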