
CephFS: Deploy MDS Daemons, Create CephFS Filesystem, Create Ceph User, Mount CephFS on Clients


Overview
#

I'm using the following Rocky Linux 9.4-based VM setup from the previous post, with the addition of two Ubuntu 24.04 servers that are used to mount the CephFS filesystem.

192.168.30.100 rocky1 # Initial / Bootstrap Node
192.168.30.101 rocky2 # Node 2
192.168.30.102 rocky3 # Node 3

192.168.30.10 ubuntu1 # Ceph Client 1
192.168.30.11 ubuntu2 # Ceph Client 2

Ceph Metadata Server Daemons
#

List Cluster Nodes
#

# List cluster nodes
ceph orch host ls

# Shell output:
HOST    ADDR            LABELS  STATUS
rocky1  192.168.30.100  _admin
rocky2  192.168.30.101
rocky3  192.168.30.102
3 hosts in cluster

Deploy Metadata Servers
#

Each CephFS file system requires at least one MDS daemon.

# Deploy MDS daemons
ceph orch apply mds cephfs-1 --placement="rocky1,rocky2,rocky3"

# Shell output:
Scheduled mds.cephfs-1 update...

Verify Running MDS Daemons
#

# List running Metadata Server daemons
sudo ceph orch ps --daemon-type mds

# Shell output:
NAME                        HOST    PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
mds.cephfs-1.rocky1.zfpvjj  rocky1         running (75s)    72s ago  75s    12.4M        -  18.2.2   3c937764e6f5  799385b3d6c4
mds.cephfs-1.rocky2.ifqdlu  rocky2         running (78s)    73s ago  78s    11.7M        -  18.2.2   3c937764e6f5  76aa98659881
mds.cephfs-1.rocky3.aoiraz  rocky3         running (76s)    73s ago  76s    14.0M        -  18.2.2   3c937764e6f5  5849ec9c8b38

# Check the overall health
ceph mds stat

# Shell output:
3 up:standby

This output means all three daemons are running in standby mode, prepared to take over from each other if one fails.
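
Optionally, the MDS service specification and its placement can also be inspected through the orchestrator:

# Show the MDS service specification and placement
ceph orch ls mds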



CephFS Filesystem
#

Create the Filesystem
#

# Create the RADOS metadata and data pools
ceph osd pool create cephfs_data 64 &&
ceph osd pool create cephfs_meta 64

# Shell output:
pool 'cephfs_data' created
pool 'cephfs_meta' created

# Deploy the CephFS filesystem
ceph fs new cephfs-storage-1 cephfs_meta cephfs_data

# Shell output:
  Pool 'cephfs_data' (id '2') has pg autoscale mode 'on' but is not marked as bulk.
  Consider setting the flag by running
    # ceph osd pool set cephfs_data bulk true
new fs with metadata pool 3 and data pool 2
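
The autoscaler hint in the output above can optionally be followed by marking the data pool as bulk; the pool settings can then be reviewed:

# Optional: mark the data pool as bulk, as suggested in the output above
ceph osd pool set cephfs_data bulk true

# Review the pool settings
ceph osd pool ls detail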

Verify the Filesystem, Check Status
#

# List CephFS filesystems
ceph fs ls

# Shell output:
name: cephfs-storage-1, metadata pool: cephfs_meta, data pools: [cephfs_data ]

# Check the filesystem status
ceph fs status cephfs-storage-1

# Shell output:
cephfs-storage-1 - 0 clients
================
RANK  STATE            MDS               ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  cephfs-1.rocky1.zfpvjj  Reqs:    0 /s    10     13     12      0
    POOL       TYPE     USED  AVAIL
cephfs_meta  metadata  96.0k  37.9G
cephfs_data    data       0   37.9G
     STANDBY MDS
cephfs-1.rocky2.ifqdlu
cephfs-1.rocky3.aoiraz
MDS version: ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)

Verify MDS Daemon Status
#

# Check the overall health
ceph mds stat

# Shell output:
cephfs-storage-1:1 {0=cephfs-1.rocky1.zfpvjj=up:active} 2 up:standby
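
With a single filesystem, one MDS rank is active and the remaining daemons stay in standby. If more active ranks are wanted later, the filesystem settings can be adjusted; the values below are just examples:

# Example: allow two active MDS ranks for the filesystem
ceph fs set cephfs-storage-1 max_mds 2

# Example: keep at least one standby daemon available
ceph fs set cephfs-storage-1 standby_count_wanted 1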



Authentication
#

List Ceph Users & Details
#

# List all users in the Ceph cluster
ceph auth ls

# Show the details of a user, including its key
ceph auth get client.admin

# Shell output:
[client.admin]
        key = AQCWMIBmcz3lIxAA4MpKiRYxDQwoUokLAUs2bg==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

Create User
#

# Create a new user that is allowed to access the CephFS "cephfs-storage-1" filesystem
ceph auth get-or-create client.ubuntu1 mds 'allow *' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_meta' mon 'allow r' -o /etc/ceph/ceph.client.ubuntu1.keyring
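
As an alternative (assuming the client does not already exist), the capabilities can also be scoped to the filesystem with the fs authorize helper instead of listing the pools manually; the result can then be verified:

# Alternative: create a client with caps scoped to the filesystem root
ceph fs authorize cephfs-storage-1 client.ubuntu1 / rw -o /etc/ceph/ceph.client.ubuntu1.keyring

# Verify the user and its capabilities
ceph auth get client.ubuntu1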

Copy User Keyring
#

Copy the user keyring manually or via SSH to the client where CephFS will be mounted:

# Print the keyring of the new user
cat /etc/ceph/ceph.client.ubuntu1.keyring

# Shell output:
[client.ubuntu1]
        key = AQDDP4BmBwRxJxAA3KV05/bZAz8luN1fSfznAA==

Copy the key AQDDP4BmBwRxJxAA3KV05/bZAz8luN1fSfznAA== and add it to the client where the CephFS filesystem will be mounted.


Copy Ceph Configuration
#

Copy the Ceph configuration manually or via SSH to the client where CephFS will be mounted:

# Print the Ceph configuration
cat /etc/ceph/ceph.conf

# Shell output:
# minimal ceph.conf for 3b1f0e44-3631-11ef-8ac6-000c29ad85a9
[global]
        fsid = 3b1f0e44-3631-11ef-8ac6-000c29ad85a9
        mon_host = [v2:192.168.30.100:3300/0,v1:192.168.30.100:6789/0] [v2:192.168.30.101:3300/0,v1:192.168.30.101:6789/0] [v2:192.168.30.102:3300/0,v1:192.168.30.102:6789/0]
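
For example, both files can be pushed from the bootstrap node to a client with scp (a sketch, assuming root SSH access to the client and that the /etc/ceph directory exists there, e.g. after installing ceph-common in the next section):

# Copy the keyring and the Ceph configuration to client "ubuntu1"
scp /etc/ceph/ceph.client.ubuntu1.keyring /etc/ceph/ceph.conf root@192.168.30.10:/etc/ceph/

Note that the mount example below pastes only the base64 key into the client-side file; if the full keyring is copied instead, recent versions of the mount helper can usually look up the key via the name= option alone.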

Mount CephFS on Client
#

Install Ceph Client
#

# Install the ceph-common package
sudo apt install ceph-common -y
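
Optionally, verify that the client tools and the CephFS kernel module are available:

# Check the installed client version
ceph --version

# Check that the CephFS kernel module is available
modinfo ceph | head -n 2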

Add User Keyring
#

Paste the key of the “ubuntu1” user created previously. Note that only the base64 key is pasted here (not the full keyring), since the mount command below points its secretfile option at this file:

# Create a file for the user key
sudo vi /etc/ceph/ceph.client.ubuntu1.keyring

# Paste the key (base64 string only, not the full keyring)
AQDDP4BmBwRxJxAA3KV05/bZAz8luN1fSfznAA==
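
Since the file contains the plain secret key, it is reasonable to restrict its permissions (optional hardening, not strictly required for the mount):

# Restrict access to the key file
sudo chmod 600 /etc/ceph/ceph.client.ubuntu1.keyring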

Add Ceph Configuration
#

# Create a file for the Ceph configuration
sudo vi /etc/ceph/ceph.conf

# Paste the config
# minimal ceph.conf for 3b1f0e44-3631-11ef-8ac6-000c29ad85a9
[global]
        fsid = 3b1f0e44-3631-11ef-8ac6-000c29ad85a9
        mon_host = [v2:192.168.30.100:3300/0,v1:192.168.30.100:6789/0] [v2:192.168.30.101:3300/0,v1:192.168.30.101:6789/0] [v2:192.168.30.102:3300/0,v1:192.168.30.102:6789/0]

Mount CephFS
#

Create Directory for the Mount Point
#

# Create a directory for the mountpoint
sudo mkdir -p /mnt/cephfs-storage-1

Mount CephFS Filesystem
#

# Mount CephFS using the kernel driver
sudo mount -t ceph 192.168.30.100:6789:/ /mnt/cephfs-storage-1 -o name=ubuntu1,secretfile=/etc/ceph/ceph.client.ubuntu1.keyring

# Unmount CephFS
sudo umount /mnt/cephfs-storage-1
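
To verify the mount and optionally make it persistent across reboots, something like the following can be used; the fstab line is an illustrative sketch (listing all three monitors adds resilience):

# Verify the mount
df -h /mnt/cephfs-storage-1

# Optional: /etc/fstab entry for a persistent mount (example)
# 192.168.30.100:6789,192.168.30.101:6789,192.168.30.102:6789:/ /mnt/cephfs-storage-1 ceph name=ubuntu1,secretfile=/etc/ceph/ceph.client.ubuntu1.keyring,_netdev,noatime 0 0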

Verify through the Web Interface
#

Hosts that have mounted the CephFS filesystem can also be verified through the Ceph web interface:
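
The same information is available from the CLI on a cluster node; the number of connected clients appears in the filesystem status shown earlier:

# Show the filesystem status, including the number of connected clients
ceph fs status cephfs-storage-1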



Links
#

# CephFS
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/3/html/ceph_file_system_guide/deploying-ceph-file-systems#creating-ceph-file-system-client-users

# Ceph Client Authentication
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/2/html/ceph_file_system_guide_technology_preview/mounting_and_unmounting_ceph_file_systems#client_authentication