GlusterFS 11 : Geo Replication (2023/03/31)
Configure GlusterFS Geo Replication to maintain a replicated copy of a GlusterFS volume on a remote site.
This is a general Primary-Replica (read-only replica) configuration.
As an example, Geo Replication is configured here for a Gluster volume [vol_distributed], created as in the example here.
In this example, both Primary and Replica are on an internal network, but it is also possible to configure Geo Replication over the internet.
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.53| [GlusterFS Server#3] |
|   node01.srv.world   +----------+----------+   node03.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
   [vol_distributed]              |     [secondary-vol_distributed]
+----------------------+          |          +----------------------+
| [GlusterFS Server#2] |10.0.0.52 | 10.0.0.54| [GlusterFS Server#4] |
|   node02.srv.world   +----------+----------+   node04.srv.world   |
|                      |                     |                      |
+----------------------+                     +----------------------+
[1] Create a volume for replication on the Replica nodes (node03, node04); refer to here.
This example proceeds with a volume named [secondary-vol_distributed].
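For reference, creating the secondary volume might look like the following minimal sketch; the brick path [/glusterfs/distributed] is an assumption borrowed from the primary side and may differ in your environment.

# run on node03; the brick paths are assumptions for illustration
[root@node03 ~]# gluster peer probe node04
[root@node03 ~]# gluster volume create secondary-vol_distributed node03:/glusterfs/distributed node04:/glusterfs/distributed
[root@node03 ~]# gluster volume start secondary-vol_distributed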
[2] On all nodes, install the Geo Replication packages.
[root@node01 ~]# dnf --enablerepo=centos-gluster11 -y install glusterfs-geo-replication
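To confirm the package was installed on each node, a simple query works:

[root@node01 ~]# rpm -q glusterfs-geo-replication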
[3] On all nodes, if SELinux is enabled, change the policy.
[root@node01 ~]# setsebool -P rsync_full_access on
[root@node01 ~]# setsebool -P rsync_client on
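You can verify that both booleans took effect with [getsebool]:

[root@node01 ~]# getsebool rsync_full_access rsync_client
rsync_full_access --> on
rsync_client --> on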
[4] On a node in the Replica cluster, change the settings of the replication volume.
[root@node03 ~]# gluster volume set secondary-vol_distributed performance.quick-read off
volume set: success
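To confirm the change, [gluster volume get] displays the current value of a single option; it should now report [off].

[root@node03 ~]# gluster volume get secondary-vol_distributed performance.quick-read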
[5] On a node in the Primary cluster, set up an SSH key pair. This example proceeds with the [root] account and no passphrase. Note that proceeding this way requires temporarily setting [PermitRootLogin yes] in the sshd configuration. However, if the servers are internet-facing, it is better to transfer the keys by another secure method rather than changing the sshd settings, even temporarily.
[root@node01 ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@node01 ~]# ssh-copy-id node01
root@node01's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

[root@node01 ~]# ssh-copy-id node02
[root@node01 ~]# ssh-copy-id node03
[root@node01 ~]# ssh-copy-id node04
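Before continuing, it is worth confirming that key-based login works without a password prompt; [BatchMode=yes] makes ssh fail instead of asking interactively.

[root@node01 ~]# ssh -o BatchMode=yes node03 hostname
node03.srv.world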
[6] On a node in the Primary cluster, create the Geo Replication session.
[root@node01 ~]# gluster system:: execute gsec_create
Common secret pub file present at /var/lib/glusterd/geo-replication/common_secret.pem.pub

# gluster volume geo-replication [primary volume] [(replication host)::(replication volume)] ***
[root@node01 ~]# gluster volume geo-replication vol_distributed node03::secondary-vol_distributed create push-pem
Creating geo-replication session between vol_distributed & node03::secondary-vol_distributed has been successful
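At this point the session exists but has not been started yet; running the [status] subcommand here should show the session in the [Created] state.

[root@node01 ~]# gluster volume geo-replication vol_distributed node03::secondary-vol_distributed status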
[7] On a node in the Primary cluster, and also on a node in the Replica cluster, enable the shared storage feature.
[root@node01 ~]# gluster volume set all cluster.enable-shared-storage enable
volume set: success
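Enabling this option creates and mounts a volume named [gluster_shared_storage] on the cluster nodes; as a quick check (the mount point [/run/gluster/shared_storage] is the usual default, but it may vary by version):

[root@node01 ~]# gluster volume info gluster_shared_storage
[root@node01 ~]# df -h /run/gluster/shared_storage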
[8] On a node in the Primary cluster, configure the meta-volume and start Geo Replication. After starting Geo Replication, try to write files on the primary volume to verify that replication works normally (see the example after the status output below).
# gluster volume geo-replication [primary volume] [(replication host)::(replication volume)] ***
[root@node01 ~]# gluster volume geo-replication vol_distributed node03::secondary-vol_distributed config use_meta_volume true
geo-replication config updated successfully
[root@node01 ~]# gluster volume geo-replication vol_distributed node03::secondary-vol_distributed start
Starting geo-replication session between vol_distributed & node03::secondary-vol_distributed has been successful
[root@node01 ~]# gluster volume geo-replication vol_distributed node03::secondary-vol_distributed status

PRIMARY NODE    PRIMARY VOL        PRIMARY BRICK             SECONDARY USER    SECONDARY                             SECONDARY NODE    STATUS    CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
node01          vol_distributed    /glusterfs/distributed    root              node03::secondary-vol_distributed                      Active    Changelog Crawl    2023-03-31 14:16:53
node02          vol_distributed    /glusterfs/distributed    root              node03::secondary-vol_distributed                      Active    Changelog Crawl    2023-03-31 14:16:53
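As a simple verification, write a file through a client mount of the primary volume and confirm it appears on the replica; the mount points and test file below are assumptions for illustration.

# on any client of the primary volume; the paths are hypothetical
[root@client ~]# mount -t glusterfs node01:/vol_distributed /mnt
[root@client ~]# echo "geo-rep test" > /mnt/testfile.txt
# after a short delay, check on the Replica side
[root@node03 ~]# mount -t glusterfs node03:/secondary-vol_distributed /mnt
[root@node03 ~]# cat /mnt/testfile.txt
geo-rep test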