GlusterFS 11 : Remove Nodes (Bricks)
2023/11/13
Remove Nodes (Bricks) from an existing Cluster.
For example, remove the Node [node03] from the existing Cluster.
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                     ⇑
     file1, file3 ...             |              file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   node03.srv.world   +----------+
|                      |
+----------------------+
[1] Remove a Node from the existing Cluster. (You can run this on any existing node except the one being removed.)
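Before starting, it can be helpful to confirm which nodes are currently in the trusted pool. This check is an optional aside, not part of the original steps:

# list the current members of the trusted pool (run on any node)
[root@node01 ~]# gluster pool list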
# confirm volume info
[root@node01 ~]# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: b194bdab-76cc-45e3-90ed-007809c4e8fb
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Brick3: node03:/glusterfs/distributed
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

# start removing the node from the volume
# this also starts a rebalance of the volume's data
[root@node01 ~]# gluster volume remove-brick vol_distributed node03:/glusterfs/distributed start
It is recommended that remove-brick be run with cluster.force-migration option disabled to prevent possible data corruption. Doing so will ensure that files that receive writes during migration will not be migrated and will need to be manually copied after the remove-brick commit operation. Please check the value of the option and update accordingly.
Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: 9f17da5b-90ee-46d1-a4a1-9ece10b28fe4
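The prompt above refers to the [cluster.force-migration] volume option. As a sketch not included in the original page, its current value can be checked, and the option disabled, before answering the prompt; [vol_distributed] is the volume name from this example:

# check the current value of cluster.force-migration (output omitted here)
[root@node01 ~]# gluster volume get vol_distributed cluster.force-migration
# explicitly disable it to avoid the data-corruption risk described in the prompt
[root@node01 ~]# gluster volume set vol_distributed cluster.force-migration off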
# confirm status
[root@node01 ~]# gluster volume remove-brick vol_distributed node03:/glusterfs/distributed status
        Node  Rebalanced-files          size       scanned      failures       skipped        status   run time in h:m:s
   ---------       -----------   -----------   -----------   -----------   -----------  ------------      --------------
      node03                 0        0Bytes             0             0             0     completed             0:00:00

# after [status] turns to [completed], commit the removal
[root@node01 ~]# gluster volume remove-brick vol_distributed node03:/glusterfs/distributed commit
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.

# confirm volume info
[root@node01 ~]# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: b194bdab-76cc-45e3-90ed-007809c4e8fb
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
performance.client-io-threads: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
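The commit output asks you to check the removed brick for files that were not migrated. The following is a minimal sketch of that cleanup, not part of the original article: the leftover file name is a hypothetical placeholder, and the final peer detach assumes node03 hosts no bricks of any other volume.

# on node03 : list real files left on the old brick, ignoring GlusterFS internal metadata under .glusterfs
[root@node03 ~]# find /glusterfs/distributed -type f -not -path '*/.glusterfs/*'

# if anything remains, copy it back into the volume through a client mount point
# (/glusterfs/distributed/leftover.txt is a hypothetical example file)
[root@node03 ~]# mount -t glusterfs node01:/vol_distributed /mnt
[root@node03 ~]# cp -a /glusterfs/distributed/leftover.txt /mnt/
[root@node03 ~]# umount /mnt

# finally, remove node03 from the trusted pool (run on any remaining node)
[root@node01 ~]# gluster peer detach node03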