Ceph Octopus : Configure Cluster #1
2020/08/31
Install the Distributed File System Ceph to configure a Storage Cluster.
In this example, configure a Ceph Cluster with 3 Nodes as follows.
Furthermore, each Storage Node has a free block device to use on the Ceph Nodes.
(use [/dev/sdb] in this example)

                                         |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|  [node01.srv.world]   |    |  [node02.srv.world]   |    |  [node03.srv.world]   |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|     Manager Daemon    |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
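Before starting, it may help to confirm that the spare block device really is unused on every Node. A minimal check, assuming the device is [/dev/sdb] as in the diagram above:

# run on each Node : the device should show no partitions, no mountpoint, and no filesystem signatures
root@node01:~# lsblk /dev/sdb
root@node01:~# wipefs --no-act /dev/sdb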
[1] Generate an SSH key-pair on the [Monitor Daemon] Node (called the Admin Node here) and set it on each Node. Configure the key-pair with no passphrase as the [root] account here. If you use a common account, it also needs sudo configured; if you set a passphrase on the SSH key-pair, it also needs SSH Agent configured (see the example after this step's commands).
root@node01:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:jKbVHHWzfLv+IO4vBI12U2/6GhLjo7/zOupYtbhfmNc root@node01.srv.world
The key's randomart image is:
.....
.....
root@node01:~# vi ~/.ssh/config
# create new (define each Node and SSH user)
Host node01
    Hostname node01.srv.world
    User root
Host node02
    Hostname node02.srv.world
    User root
Host node03
    Hostname node03.srv.world
    User root
root@node01:~# chmod 600 ~/.ssh/config
# transfer public key
root@node01:~# ssh-copy-id node01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node01.srv.world (10.0.0.51)' can't be established.
ECDSA key fingerprint is SHA256:0TfV//D7JSem+SOtN5rksAvfE0bXlFw3dWX+w5ri8i8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node01.srv.world's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.
root@node01:~# ssh-copy-id node02
root@node01:~# ssh-copy-id node03
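The commands above assume the [root] account and a key with no passphrase. If you instead work as a common account or keep a passphrase on the key, a minimal sketch like the following covers the sudo and SSH Agent setup mentioned in [1] (the account name [ubuntu] is only an example):

# allow the common account to use sudo without a password prompt (run on every Node)
root@node01:~# echo "ubuntu ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ceph-admin
root@node01:~# chmod 440 /etc/sudoers.d/ceph-admin

# load a passphrase-protected key into ssh-agent once per login session
ubuntu@node01:~$ eval "$(ssh-agent -s)"
ubuntu@node01:~$ ssh-add ~/.ssh/id_rsa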
[2] Install Ceph on each Node from the Admin Node.
root@node01:~# for NODE in node01 node02 node03
do
ssh $NODE "apt update; apt -y install ceph"
done
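After the installation, it may be worth confirming that the same Ceph release is on every Node; for example (the exact version string depends on the configured repository):

root@node01:~# for NODE in node01 node02 node03
do
    ssh $NODE "ceph --version"
done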
[3] Configure [Monitor Daemon], [Manager Daemon] on the Admin Node.
root@node01:~# uuidgen
72840c24-3a82-4e28-be87-cf9f905918fb

# create new config
# file name ⇒ (any Cluster Name).conf
# set Cluster Name [ceph] (default) in this example ⇒ [ceph.conf]
root@node01:~# vi /etc/ceph/ceph.conf
[global]
# specify cluster network for monitoring
cluster network = 10.0.0.0/24
# specify public network
public network = 10.0.0.0/24
# specify UUID generated above
fsid = 72840c24-3a82-4e28-be87-cf9f905918fb
# specify IP address of Monitor Daemon
mon host = 10.0.0.51
# specify Hostname of Monitor Daemon
mon initial members = node01
osd pool default crush rule = -1

# mon.(Node name)
[mon.node01]
# specify Hostname of Monitor Daemon
host = node01
# specify IP address of Monitor Daemon
mon addr = 10.0.0.51
# allow to delete pools
mon allow pool delete = true

# generate secret key for Cluster monitoring
root@node01:~# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring

# generate secret key for Cluster admin
root@node01:~# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring

# generate key for bootstrap
root@node01:~# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring

# import generated key
root@node01:~# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
root@node01:~# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring

# generate monitor map
root@node01:~# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk {'print $NF'})
root@node01:~# NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk {'print $NF'})
root@node01:~# NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk {'print $NF'})
root@node01:~# monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to 72840c24-3a82-4e28-be87-cf9f905918fb
monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors)

# create a directory for Monitor Daemon
# directory name ⇒ (Cluster Name)-(Node Name)
root@node01:~# mkdir /var/lib/ceph/mon/ceph-node01
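# (optional) print the generated monmap to confirm the fsid and the Monitor address
root@node01:~# monmaptool --print /etc/ceph/monmap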
# associate key and monmap to Monitor Daemon
# --cluster (Cluster Name)
root@node01:~# ceph-mon --cluster ceph --mkfs -i $NODENAME --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
root@node01:~# chown ceph. /etc/ceph/ceph.*
root@node01:~# chown -R ceph. /var/lib/ceph/mon/ceph-node01 /var/lib/ceph/bootstrap-osd
root@node01:~# systemctl enable --now ceph-mon@$NODENAME
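# (optional) a quick check that the Monitor daemon is running and answering; the exact output varies
root@node01:~# systemctl status ceph-mon@$NODENAME --no-pager
root@node01:~# ceph mon stat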
# enable Messenger v2 Protocol
root@node01:~# ceph mon enable-msgr2
# enable Placement Groups auto scale module
root@node01:~# ceph mgr module enable pg_autoscaler
# create a directory for Manager Daemon
# directory name ⇒ (Cluster Name)-(Node Name)
root@node01:~# mkdir /var/lib/ceph/mgr/ceph-node01
# create auth key
root@node01:~# ceph auth get-or-create mgr.$NODENAME mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.node01]
        key = AQC3IEZfESetLhAA/rnFCLkpvopkARxyLKLJAA==

root@node01:~# ceph auth get-or-create mgr.node01 | tee /etc/ceph/ceph.mgr.admin.keyring
root@node01:~# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-node01/keyring
root@node01:~# chown ceph. /etc/ceph/ceph.mgr.admin.keyring
root@node01:~# chown -R ceph. /var/lib/ceph/mgr/ceph-node01
root@node01:~# systemctl enable --now ceph-mgr@$NODENAME
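Optionally, before the status check in [4], confirm that both daemons came up; these commands are only an example check and their output differs per environment:

root@node01:~# systemctl is-active ceph-mon@node01 ceph-mgr@node01
root@node01:~# ceph auth get mgr.node01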
[4] Confirm the Cluster status. It's OK if [Monitor Daemon] and [Manager Daemon] are enabled as follows. OSDs (Object Storage Devices) are configured in the next section, so it's no problem if the status is [HEALTH_WARN] at this point.
root@node01:~# ceph -s
  cluster:
    id:     72840c24-3a82-4e28-be87-cf9f905918fb
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum node01 (age 2m)
    mgr: node01(active, since 23s)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown
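Besides [ceph -s], narrower commands can show the same pieces of information; for example:

root@node01:~# ceph health
root@node01:~# ceph mon dump
root@node01:~# ceph mgr stat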