Ceph Quincy : Add or Remove Monitor Nodes
2022/06/14
To add or remove Monitor nodes in an existing Ceph cluster, configure them as follows.
                                         |
        +--------------------+           |           +----------------------+
        |  [dlp.srv.world]   |10.0.0.30  |  10.0.0.31|   [www.srv.world]    |
        |     Ceph Client    +-----------+-----------+        RADOSGW       |
        |                    |           |           |                      |
        +--------------------+           |           +----------------------+
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|  [node01.srv.world]   |    |  [node02.srv.world]   |    |  [node03.srv.world]   |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
[1] As an example, add a new Monitor Daemon to the [node04] node, working from the admin node.
# transfer the public key
[root@node01 ~]# ssh-copy-id node04
# if Firewalld is running, allow the service
[root@node01 ~]# ssh node04 "firewall-cmd --add-service=ceph-mon; firewall-cmd --runtime-to-permanent"
# install the required packages
[root@node01 ~]# ssh node04 "dnf -y install centos-release-ceph-quincy; dnf -y install ceph"
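If you want a quick sanity check before continuing, the installed release can be confirmed with the standard [ceph --version] command. This check is an addition and is not part of the original procedure.

# optional: verify the installed Ceph release on the new node (should report a Quincy build)
[root@node01 ~]# ssh node04 "ceph --version"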
# configure the monitor map
[root@node01 ~]# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk '{print $NF}')
[root@node01 ~]# NODENAME="node04"
[root@node01 ~]# NODEIP="10.0.0.54"
[root@node01 ~]# monmaptool --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to fe4fb100-abec-488d-93fe-71b7ae7d9b81
monmaptool: writing epoch 0 to /etc/ceph/monmap (2 monitors)

# configure the Monitor Daemon
[root@node01 ~]# scp /etc/ceph/ceph.conf node04:/etc/ceph/ceph.conf
[root@node01 ~]# scp /etc/ceph/ceph.mon.keyring node04:/etc/ceph
[root@node01 ~]# scp /etc/ceph/monmap node04:/etc/ceph
[root@node01 ~]# ssh node04 "ceph-mon --cluster ceph --mkfs -i node04 --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring"
[root@node01 ~]# ssh node04 "chown -R ceph. /etc/ceph /var/lib/ceph/mon"
[root@node01 ~]# ssh node04 "ceph auth get mon. -o /etc/ceph/ceph.mon.keyring"
[root@node01 ~]# ssh node04 "systemctl enable --now ceph-mon@node04"
[root@node01 ~]# ssh node04 "ceph mon enable-msgr2"
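Before distributing the map, it can also be inspected with the same [monmaptool] used above; [--print] dumps the monitors recorded in the file. This verification step is an optional addition.

# optional: print the monitor map and confirm [node04] is listed
[root@node01 ~]# monmaptool --print /etc/ceph/monmap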
[root@node01 ~]# ceph -s
  cluster:
    id:     fe4fb100-abec-488d-93fe-71b7ae7d9b81
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum node01,node04 (age 106s)
    mgr: node01(active, since 71m)
    mds: 1/1 daemons up
    osd: 3 osds: 3 up (since 5m), 3 in (since 10m)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   8 pools, 225 pgs
    objects: 219 objects, 464 KiB
    usage:   150 MiB used, 480 GiB / 480 GiB avail
    pgs:     225 active+clean

  io:
    client:   7.6 KiB/s rd, 0 B/s wr, 7 op/s rd, 5 op/s wr
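To confirm the new daemon has actually joined quorum, the standard [ceph mon stat] and [ceph quorum_status] commands can also be used. These checks are optional additions, not part of the original steps.

# optional: one-line summary of the monitor map and current quorum
[root@node01 ~]# ceph mon stat
# optional: detailed quorum state in JSON
[root@node01 ~]# ceph quorum_status --format json-pretty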
[2] If SELinux is enabled on the node where the new Monitor Daemon is added, a policy change is required.
[root@node04 ~]# vi cephmon.te
# create a new file with the following content
module cephmon 1.0;

require {
        type ceph_t;
        type ptmx_t;
        type initrc_var_run_t;
        type sudo_exec_t;
        type chkpwd_exec_t;
        type shadow_t;
        class file { execute execute_no_trans lock getattr map open read };
        class capability { audit_write sys_resource };
        class process setrlimit;
        class netlink_audit_socket { create nlmsg_relay };
        class chr_file getattr;
}

#============= ceph_t ==============
allow ceph_t initrc_var_run_t:file { lock open read };
allow ceph_t self:capability { audit_write sys_resource };
allow ceph_t self:netlink_audit_socket { create nlmsg_relay };
allow ceph_t self:process setrlimit;
allow ceph_t sudo_exec_t:file { execute execute_no_trans open read map };
allow ceph_t ptmx_t:chr_file getattr;
allow ceph_t chkpwd_exec_t:file { execute execute_no_trans open read map };
allow ceph_t shadow_t:file { getattr open read };

[root@node04 ~]# checkmodule -m -M -o cephmon.mod cephmon.te
[root@node04 ~]# semodule_package --outfile cephmon.pp --module cephmon.mod
[root@node04 ~]# semodule -i cephmon.pp
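As an alternative to writing the policy by hand, a module like the one above can usually be generated from the actual AVC denials with [audit2allow] (from the policycoreutils-python-utils package); this is a common alternative, not the method used above, and the exact rules produced depend on the denials logged on your system. Confirming the module loaded is also a quick check.

# optional alternative: generate an equivalent module from denials in the audit log,
# then install the resulting cephmon.pp with semodule -i as above
[root@node04 ~]# grep ceph /var/log/audit/audit.log | audit2allow -M cephmon
# optional: confirm the module is installed
[root@node04 ~]# semodule -l | grep cephmon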
[3] To remove a Monitor node from an existing cluster, run the following. As an example, remove the [node04] node, working from the admin node.
[root@node01 ~]# ceph -s
  cluster:
    id:     fe4fb100-abec-488d-93fe-71b7ae7d9b81
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum node01,node04 (age 2m)
    mgr: node01(active, since 72m)
    mds: 1/1 daemons up
    osd: 3 osds: 3 up (since 6m), 3 in (since 11m)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   8 pools, 225 pgs
    objects: 219 objects, 464 KiB
    usage:   150 MiB used, 480 GiB / 480 GiB avail
    pgs:     225 active+clean

  io:
    client:   7.1 KiB/s rd, 0 B/s wr, 7 op/s rd, 4 op/s wr

# remove the [node04] Monitor Daemon from the cluster
[root@node01 ~]# ceph mon remove node04
removing mon.node04 at [v2:10.0.0.54:3300/0,v1:10.0.0.54:6789/0], there will be 1 monitors
# disable the Monitor Daemon on the target node
[root@node01 ~]# ssh node04 "systemctl disable --now ceph-mon@node04.service"
[root@node01 ~]# ceph -s
  cluster:
    id:     fe4fb100-abec-488d-93fe-71b7ae7d9b81
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum node01 (age 1.92858s)
    mgr: node01(active, since 73m)
    mds: 1/1 daemons up
    osd: 3 osds: 3 up (since 7m), 3 in (since 12m)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   8 pools, 225 pgs
    objects: 219 objects, 464 KiB
    usage:   150 MiB used, 480 GiB / 480 GiB avail
    pgs:     225 active+clean
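If [node04] will no longer host a monitor, the leftover monitor data directory and the firewall rule opened in step [1] can also be removed. This cleanup is an optional addition, not part of the original steps; the path below assumes the default cluster name [ceph].

# optional cleanup (path assumes the default cluster name [ceph])
[root@node01 ~]# ssh node04 "rm -rf /var/lib/ceph/mon/ceph-node04"
[root@node01 ~]# ssh node04 "firewall-cmd --remove-service=ceph-mon; firewall-cmd --runtime-to-permanent"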