Ceph Pacific : Use File System (2023/06/19)
Configure a Client Host [dlp] to use the Ceph Storage as follows.
        +--------------------+
        |  [dlp.srv.world]   |10.0.0.30
        |    Ceph Client     +-----------+
        |                    |           |
        +--------------------+           |
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|  [node01.srv.world]   |    |  [node02.srv.world]   |    |  [node03.srv.world]   |
|    Object Storage     +----+    Object Storage     +----+    Object Storage     |
|    Monitor Daemon     |    |                       |    |                       |
|    Manager Daemon     |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
This example mounts CephFS as a filesystem on the Client Host.
[1] Transfer the SSH public key to the Client Host and configure the client from the Admin Node.
# transfer public key
root@node01:~# ssh-copy-id dlp

# install required packages
root@node01:~# ssh dlp "apt -y install ceph-fuse"

# transfer required files to Client Host
root@node01:~# scp /etc/ceph/ceph.conf dlp:/etc/ceph/
ceph.conf                                     100%  273   277.9KB/s   00:00
root@node01:~# scp /etc/ceph/ceph.client.admin.keyring dlp:/etc/ceph/
ceph.client.admin.keyring                     100%  151   199.8KB/s   00:00
root@node01:~# ssh dlp "chown ceph:ceph /etc/ceph/ceph.*"
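As a quick sanity check (an addition, not part of the original steps), confirm that the Client Host can reach the cluster with the files just copied. This assumes the [ceph] CLI is available on [dlp]; if it is not, install the [ceph-common] package there as well.

# run [ceph -s] on the Client Host; it should report the current cluster status
root@node01:~# ssh dlp "ceph -s"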
[2] Configure an MDS (MetaData Server) on a node. This example configures it on the [node01] node.
# create directory
# directory name ⇒ (Cluster Name)-(Node Name)
root@node01:~# mkdir -p /var/lib/ceph/mds/ceph-node01
root@node01:~# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node01/keyring --gen-key -n mds.node01
creating /var/lib/ceph/mds/ceph-node01/keyring
root@node01:~# chown -R ceph:ceph /var/lib/ceph/mds/ceph-node01
root@node01:~# ceph auth add mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node01/keyring
added key for mds.node01
root@node01:~# systemctl enable --now ceph-mds@node01
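Optionally (this check is an addition to the original procedure), verify that the MDS daemon started; until a filesystem is created in the next step, it stays in standby state.

# check the MDS service unit
root@node01:~# systemctl status ceph-mds@node01
# before [ceph fs new] is run, the daemon is listed as standby
root@node01:~# ceph mds stat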
[3] Create 2 RADOS pools for Data and MetaData on the MDS node. Refer to the official documentation to choose the placement group count (32 in the example below) ⇒ http://docs.ceph.com/docs/master/rados/operations/placement-groups/
root@node01:~# ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
root@node01:~# ceph osd pool create cephfs_metadata 32
pool 'cephfs_metadata' created
root@node01:~# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 5 and data pool 4
root@node01:~# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
root@node01:~# ceph mds stat
cephfs:1 {0=node01=up:active}
root@node01:~# ceph fs status cephfs
cephfs - 0 clients
======
RANK  STATE    MDS      ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  node01  Reqs:    0 /s    10     13     12      0
      POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata  96.0k   151G
  cephfs_data      data       0    151G
MDS version: ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)
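The placement group count (32 above) does not have to be final; on Pacific the pg_autoscaler module is enabled by default and can adjust it later. A minimal way to inspect the current value and the autoscaler's recommendations (an addition to the original steps):

# show the current pg_num of the data pool
root@node01:~# ceph osd pool get cephfs_data pg_num
# show what the autoscaler recommends for each pool
root@node01:~# ceph osd pool autoscale-status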
[4] Mount CephFS on the Client Host.
# write the admin secret key (stored base64-encoded in the keyring) to a file
root@dlp:~# ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
root@dlp:~# chmod 600 admin.key

root@dlp:~# mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key
root@dlp:~# df -hT
Filesystem                   Type      Size  Used Avail Use% Mounted on
udev                         devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                        tmpfs     392M  568K  391M   1% /run
/dev/mapper/debian--vg-root  ext4       28G  1.4G   26G   6% /
tmpfs                        tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs                        tmpfs     5.0M     0  5.0M   0% /run/lock
/dev/vda1                    ext2      455M   58M  373M  14% /boot
tmpfs                        tmpfs     392M     0  392M   0% /run/user/0
10.0.0.51:6789:/             ceph      152G     0  152G   0% /mnt
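To mount CephFS automatically at boot, an [/etc/fstab] entry like the one below can be used; it assumes [admin.key] was saved under [/root] as in the step above. Alternatively, the [ceph-fuse] client installed in step [1] can mount the filesystem in userspace. Both lines are sketches to adapt to your environment.

# example /etc/fstab entry for the kernel client
node01.srv.world:6789:/  /mnt  ceph  name=admin,secretfile=/root/admin.key,noatime,_netdev  0 0

# or mount with the FUSE client instead (uses /etc/ceph/ceph.client.admin.keyring by default)
root@dlp:~# ceph-fuse -m node01.srv.world:6789 /mnt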