Dynamic subtree partitioning relies on shared storage to migrate hot workloads between MDS daemons, so Ceph stores MDS metadata in a dedicated pool on the backing RADOS cluster, which all MDS daemons can share. The MDS does not access metadata directly through RADOS; instead, it keeps an in-memory cache for hot metadata, and cached entries remain in memory until the corresponding journal entries have expired.
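Because hot metadata is served from this in-memory cache, the cache size is worth checking on a busy file system. A minimal sketch, assuming a Mimic-or-later cluster where the centralized config store is available (the 4 GiB value below is only an illustration, not a recommendation from this article):

    # Show the current MDS cache memory limit, in bytes
    ceph config get mds mds_cache_memory_limit
    # Example only: raise the limit to 4 GiB for all MDS daemons
    ceph config set mds mds_cache_memory_limit 4294967296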
CephFS uses a metadata journal for fault tolerance.
Journal entries are streamed into a metadata journal file in the CephFS metadata pool, in a manner similar to LFS (Log-Structured File System) and WAFL (Write Anywhere File Layout). The journal file may grow without bound so that entries can always be written to RADOS sequentially, and the daemon is additionally able to trim stale or irrelevant journal entries.
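To see which RADOS pools back a given file system, and to sanity-check the journal that lives in the metadata pool, the following read-only commands can be used; the rank spec cephfs:0 assumes a file system named cephfs with rank 0, matching the single-rank setup shown later in this article:

    # List each file system with its metadata and data pools
    ceph fs ls
    # Read-only consistency check of rank 0's journal in the metadata pool
    cephfs-journal-tool --rank=cephfs:0 journal inspect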
Multi MDS
Each CephFS has a human-readable file system name and a numeric identifier called the FSCID, and by default each CephFS is configured with only one Active MDS daemon. The maximum number of MDS daemons that can be Active at the same time in an MDS cluster is set by the max_mds parameter, which controls the number of available ranks; its default value is 1.

A rank is the slot number of an MDS daemon that may be Active on a CephFS at a given time, ranging from 0 to max_mds-1. Each rank corresponds to one Active ceph-mds daemon that manages the metadata of a directory subtree of the CephFS hierarchy; when max_mds is 1, only rank 0 is available.

A freshly started ceph-mds daemon holds no rank; the MON assigns one to it on demand. A ceph-mds daemon can occupy only one rank at a time and releases it when the daemon terminates, i.e. once assigned, a rank is held exclusively. A rank is in one of three states:

- Up: the rank is held by a ceph-mds daemon;
- Failed: the rank is not held by any ceph-mds daemon;
- Damaged: the rank is damaged and its metadata is corrupted or lost; a Damaged rank cannot be assigned to any other MDS daemon until an administrator fixes the problem and runs "ceph mds repaired" on it.
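Raising max_mds is how additional ranks are made available. A minimal sketch, assuming the file system is named cephfs and that enough standby daemons exist to fill the new rank:

    # Allow two Active MDS daemons (ranks 0 and 1)
    ceph fs set cephfs max_mds 2
    # Verify the setting and see the current rank assignments
    ceph fs get cephfs | grep max_mds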
Check the MDS status of the Ceph cluster
[root@ceph-admin ~]# ceph mds stat
cephfs-1/1/1 up  {0=ceph-mon02=up:active}
[root@ceph-admin ~]#
Hint: the cluster currently has one MDS, running on node ceph-mon02 and in the up:active state.
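For a fuller view that also lists standby daemons and pool usage, ceph fs status can be used as well; its output was not captured in the original session, so only the command is shown:

    ceph fs status cephfs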
Deploy multiple MDS daemons
[root@ceph-admin ~]# ceph-deploy mds create ceph-mon01 ceph-mon03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph-mon01 ceph-mon03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9478f34830>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mds at 0x7f947918d050>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [('ceph-mon01', 'ceph-mon01'), ('ceph-mon03', 'ceph-mon03')]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No such file or directory: 'ceph.conf'; has `ceph-deploy new` been run in this directory?
[root@ceph-admin ~]# su - cephadm
Last login: Thu Sep 29 23:09:04 CST 2022 on pts/0
[cephadm@ceph-admin ~]$ ls
cephadm@ceph-mgr01  cephadm@ceph-mgr02  cephadm@ceph-mon01  cephadm@ceph-mon02  cephadm@ceph-mon03  ceph-cluster
[cephadm@ceph-admin ~]$ cd ceph-cluster/
[cephadm@ceph-admin ceph-cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf  ceph.mon.keyring
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mds create ceph-mon01 ceph-mon03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephadm/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy mds create ceph-mon01 ceph-mon03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2c575ba7e8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mds at 0x7f2c57813050>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [('ceph-mon01', 'ceph-mon01'), ('ceph-mon03', 'ceph-mon03')]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph-mon01:ceph-mon01 ceph-mon03:ceph-mon03
[ceph-mon01][DEBUG ] connection detected need for sudo
[ceph-mon01][DEBUG ] connected to host: ceph-mon01
[ceph-mon01][DEBUG ] detect platform information from remote host
[ceph-mon01][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph-mon01
[ceph-mon01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mds][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph-mon03][DEBUG ] connection detected need for sudo
[ceph-mon03][DEBUG ] connected to host: ceph-mon03
[ceph-mon03][DEBUG ] detect platform information from remote host
[ceph-mon03][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph-mon03
[ceph-mon03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon03][WARNIN] mds keyring does not exist yet, creating one
[ceph-mon03][DEBUG ] create a keyring file
[ceph-mon03][DEBUG ] create path if it doesn't exist
[ceph-mon03][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph-mon03 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph-mon03/keyring
[ceph-mon03][INFO ] Running command: sudo systemctl enable ceph-mds@ceph-mon03
[ceph-mon03][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph-mon03.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph-mon03][INFO ] Running command: sudo systemctl start ceph-mds@ceph-mon03
[ceph-mon03][INFO ] Running command: sudo systemctl enable ceph.target
[ceph_deploy][ERROR ] GenericError: Failed to create 1 MDSs
[cephadm@ceph-admin ceph-cluster]$
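Two problems show up in the session above: the first run failed because ceph-deploy was invoked as root outside the directory that holds ceph.conf, and the second run deployed the MDS to ceph-mon03 but skipped ceph-mon01 because /etc/ceph/ceph.conf on that host differs from the admin node's copy. As the error message itself suggests, rerunning from the ceph-cluster directory with the --overwrite-conf flag pushes the admin node's config and lets the remaining MDS be created:

    # Rerun for the host that failed, overwriting its stale ceph.conf
    ceph-deploy --overwrite-conf mds create ceph-mon01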