Distributed Storage: Ceph Cluster RBD Basics (Part 3)


Using the Linux kernel rbd module to connect to a Ceph cluster and use an RBD disk
1. Install the ceph-common package on the client host
[root@ceph-admin ~]# yum install -y ceph-common
Loaded plugins: fastestmirror
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Ceph                                | 1.5 kB  00:00:00
Ceph-noarch                         | 1.5 kB  00:00:00
base                                | 3.6 kB  00:00:00
ceph-source                         | 1.5 kB  00:00:00
epel                                | 4.7 kB  00:00:00
extras                              | 2.9 kB  00:00:00
updates                             | 2.9 kB  00:00:00
(1/2): epel/x86_64/updateinfo       | 1.0 MB  00:00:08
(2/2): epel/x86_64/primary_db       | 7.0 MB  00:00:52
Package 2:ceph-common-13.2.10-0.el7.x86_64 already installed and latest version
Nothing to do
[root@ceph-admin ~]#

Tip: installing the package above requires the ceph and epel yum repositories to be configured on the client first.
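For reference, a minimal ceph.repo along the lines of the one used here might look like the sketch below. The baseurl paths and the Mimic release name are assumptions based on the aliyun mirrors and ceph-common 13.2.10 seen in the transcript; adjust them to your own mirror and Ceph release, and enable gpgcheck with the proper key in production.

```ini
# /etc/yum.repos.d/ceph.repo -- sketch only; baseurl/release are assumptions
[Ceph]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64
enabled=1
gpgcheck=0

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=0
```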
2. On the Ceph cluster, create a client user for connecting to the cluster, and grant it capabilities
[root@ceph-admin ~]# ceph auth get-or-create client.test mon 'allow r' osd 'allow * pool=ceph-rbdpool'
[client.test]
        key = AQB0Gztj63xwGhAAq7JFXnK2mQjBfhq0/kB5uA==
[root@ceph-admin ~]# ceph auth get client.test
exported keyring for client.test
[client.test]
        key = AQB0Gztj63xwGhAAq7JFXnK2mQjBfhq0/kB5uA==
        caps mon = "allow r"
        caps osd = "allow * pool=ceph-rbdpool"
[root@ceph-admin ~]#

Tip: for an RBD client to connect to the cluster, it first needs read permission on the mon; to store data on the OSDs it can be granted '*', which allows all operations (including read and write), but the grant should be restricted to the relevant pool with pool=ceph-rbdpool.
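If you later decide the '*' grant is broader than needed, a user's capabilities can be replaced in place with `ceph auth caps`. The sketch below (illustrative only; it needs a live cluster and admin privileges) narrows client.test to read/write/execute on the same pool. Recent Ceph releases also ship ready-made cap profiles such as `profile rbd`, which are generally preferred over hand-written caps for RBD clients.

```
# Replace client.test's caps: read on mon, rwx limited to ceph-rbdpool on osd
ceph auth caps client.test mon 'allow r' osd 'allow rwx pool=ceph-rbdpool'
# Confirm the new grant took effect
ceph auth get client.test
```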
Export the keyring file for the client.test user and copy it to the client
[root@ceph-admin ~]# ceph --user test -s
2022-10-04 01:31:24.776 7faddac3e700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.test.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2022-10-04 01:31:24.776 7faddac3e700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
[errno 2] error connecting to the cluster
[root@ceph-admin ~]# ceph auth get client.test
exported keyring for client.test
[client.test]
        key = AQB0Gztj63xwGhAAq7JFXnK2mQjBfhq0/kB5uA==
        caps mon = "allow r"
        caps osd = "allow * pool=ceph-rbdpool"
[root@ceph-admin ~]# ceph auth get client.test -o /etc/ceph/ceph.client.test.keyring
exported keyring for client.test
[root@ceph-admin ~]# ceph --user test -s
  cluster:
    id:     7fd4a619-9767-4b46-9cee-78b9dfe88f34
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03
    mgr: ceph-mgr01(active), standbys: ceph-mon01, ceph-mgr02
    mds: cephfs-1/1/1 up {0=ceph-mon02=up:active}
    osd: 10 osds: 10 up, 10 in
    rgw: 1 daemon active

  data:
    pools:   10 pools, 464 pgs
    objects: 250 objects, 3.8 KiB
    usage:   10 GiB used, 890 GiB / 900 GiB avail
    pgs:     464 active+clean

[root@ceph-admin ~]#

Note: I am using the admin host as the client here, so its /etc/ceph/ directory already holds the cluster's configuration file. A client host must have both the keyring file for the authorized user and the cluster configuration file in order to connect to the Ceph cluster. If `ceph -s` run on the client with the given user shows the cluster status, the keyring and configuration file are both in order.
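The exported keyring is a small INI-style file: a section per client, holding a base64 `key` line plus the caps. Sometimes only the bare secret is wanted (for example, `rbd` accepts a `--keyfile` containing just the key). Since the key line splits on whitespace into `key`, `=`, and the secret, awk can pull it out; the sketch below recreates the file locally with the sample key from the transcript above, so it runs anywhere:

```shell
# Recreate the exported keyring locally (sample key from the transcript above)
cat > /tmp/ceph.client.test.keyring <<'EOF'
[client.test]
    key = AQB0Gztj63xwGhAAq7JFXnK2mQjBfhq0/kB5uA==
    caps mon = "allow r"
    caps osd = "allow * pool=ceph-rbdpool"
EOF

# Only the key line matches "key ="; its third whitespace-split field is the secret
awk '/key =/ {print $3}' /tmp/ceph.client.test.keyring
```

On a real deployment you would instead copy the keyring to the client with scp and leave it in /etc/ceph/ under the name the client libraries search for (ceph.client.test.keyring).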
3. Map the image on the client
[root@ceph-admin ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a7984

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1050623      524288   83  Linux
/dev/sda2         1050624   104857599    51903488   8e  Linux LVM

Disk /dev/mapper/centos-root: 52.1 GB, 52072284160 bytes, 101703680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@ceph-admin ~]# rbd map --user test ceph-rbdpool/vol01
/dev/rbd0
[root@ceph-admin ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a7984

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1050623      524288   83  Linux
/dev/sda2         1050624   104857599    51903488   8e  Linux LVM

Disk /dev/mapper/centos-root: 52.1 GB, 52072284160 bytes, 101703680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/rbd0: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

[root@ceph-admin ~]#
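After `rbd map` succeeds, /dev/rbd0 behaves like any other block device. The usual next steps are to create a filesystem, mount it, and, when finished, unmount and release the mapping. The sketch below is illustrative only: it needs the mapped device and root privileges, and the mount point and filesystem choice are examples, not anything from the transcript.

```
# See which images are currently mapped on this host
rbd showmapped
# Put a filesystem on the mapped device and mount it (mount point is an example)
mkfs.xfs /dev/rbd0
mkdir -p /mnt/rbd0
mount /dev/rbd0 /mnt/rbd0
# When done: unmount, then release the kernel mapping
umount /mnt/rbd0
rbd unmap --user test ceph-rbdpool/vol01
```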
