Fedora 41

GlusterFS 11 : Remove a Node (2024/11/01)

 
This is the procedure for removing a node from an existing cluster.
As an example, remove [node03] from the distributed cluster built as shown in the link.
                                  |
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   node01.srv.world   +----------+----------+   node02.srv.world   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                      ⇑
     file1, file3 ...             |               file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   node03.srv.world   +----------+
|                      |
+----------------------+
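Before starting, it can be useful to confirm that all three nodes are currently in the trusted storage pool. This optional check is not part of the original steps; run it on any node:

# (optional) confirm the current members of the trusted storage pool

[root@node01 ~]#
gluster peer status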

[1] On any existing node other than the one being removed, configure the node removal.
# check the volume information

[root@node01 ~]#
gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: 3d95b074-bd22-4e37-8c14-8ac6e41e25a5
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Brick3: node03:/glusterfs/distributed
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

# start removing the target node's brick from the volume
# the volume rebalance runs automatically

[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed start

It is recommended that remove-brick be run with cluster.force-migration option disabled to prevent possible data corruption. Doing so will ensure that files that receive writes during migration will not be migrated and will need to be manually copied after the remove-brick commit operation. Please check the value of the option and update accordingly.
Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: 2e89b5f8-d9cc-49d5-9d7c-b7fdbd8ecabb
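The warning above refers to the [cluster.force-migration] volume option. The original steps simply answer [y], but if you want to inspect or disable the option first, the following commands should work (an optional aside, not part of the original procedure):

# (optional) check the current value of cluster.force-migration
[root@node01 ~]#
gluster volume get vol_distributed cluster.force-migration

# (optional) disable it so files receiving writes are not migrated
[root@node01 ~]#
gluster volume set vol_distributed cluster.force-migration off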

# check the progress

[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed status

     Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
   node03                0        0Bytes             0             0             0            completed        0:00:00
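If the removal needs to be aborted before committing, [remove-brick] also accepts a [stop] subcommand (optional; not used in this example):

# (optional) abort the pending brick removal
[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed stop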

# after [status] shows [completed] in the progress check, commit the removal

[root@node01 ~]#
gluster volume remove-brick vol_distributed node03:/glusterfs/distributed commit

volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
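As the message above suggests, verify that no data files remain on the removed brick. One possible check on [node03], following this example's brick path (the internal [.glusterfs] directory can be ignored):

# on node03 : list any regular files left on the removed brick
[root@node03 ~]#
find /glusterfs/distributed -path /glusterfs/distributed/.glusterfs -prune -o -type f -print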

# check the volume information

[root@node01 ~]#
gluster volume info


Volume Name: vol_distributed
Type: Distribute
Volume ID: 3d95b074-bd22-4e37-8c14-8ac6e41e25a5
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
performance.client-io-threads: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
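
At this point [node03] no longer serves a brick, but it is still a member of the trusted storage pool. To remove it from the pool as well, detach the peer (an additional step beyond the output shown above):

# detach node03 from the trusted storage pool
[root@node01 ~]#
gluster peer detach node03

# verify the remaining pool members
[root@node01 ~]#
gluster pool list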