Distributed Storage with Ceph: Cluster Deployment (Part 9)

However, on releases prior to Luminous, an administrator has to remove an OSD device by running the following steps by hand, in order (a combined sketch follows the list):
1. Remove the device from the CRUSH map: ceph osd crush remove {name}
2. Delete the OSD's authentication key: ceph auth del osd.{osd-num}
3. Finally, remove the OSD device: ceph osd rm {osd-num}
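Putting the steps together: in practice the OSD is usually marked out and its daemon stopped before it is removed. A minimal sketch, assuming osd.5 on node ceph-mon01 is the device being retired (the ID and hostname are placeholders, not values from this cluster):

# mark the OSD out so its data is rebalanced to other OSDs
ceph osd out 5
# stop the OSD daemon on the node that hosts it
ssh ceph-mon01 systemctl stop ceph-osd@5
# remove it from the CRUSH map, delete its key, and drop it from the OSD map
ceph osd crush remove osd.5
ceph auth del osd.5
ceph osd rm 5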
Testing object upload and download
1. Create a pool with 16 placement groups (PGs)
[root@ceph-mon01 ~]# ceph osd pool create testpool 16 16
pool 'testpool' created
[root@ceph-mon01 ~]# ceph osd pool ls
testpool
[root@ceph-mon01 ~]#
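The two 16s are the pg_num and pgp_num of the new pool. If the values need to be verified or grown later, the pool settings can be read and changed with ceph osd pool get/set; a small sketch using the testpool name from above (the new value 32 is only an example):

# read the current PG count
ceph osd pool get testpool pg_num
# raise pg_num and pgp_num (on this release both must be set explicitly)
ceph osd pool set testpool pg_num 32
ceph osd pool set testpool pgp_num 32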
2. Upload a file to testpool
[root@ceph-mon01 ~]# rados put test /etc/issue -p testpool
[root@ceph-mon01 ~]# rados ls -p testpool
test
[root@ceph-mon01 ~]#
Note: the /etc/issue file was uploaded to the testpool pool as an object named test, and the object now shows up when the pool is listed, so the upload works as expected.
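Beyond listing the pool, the object's size and modification time, and the pool's overall usage, can be checked directly; a quick sketch using the same pool and object names:

# metadata (size, mtime) of a single object
rados -p testpool stat test
# per-pool object counts and space usage
rados df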
3. Get the placement details of an object in the pool
[root@ceph-mon01 ~]# ceph osd map testpool test
osdmap e44 pool 'testpool' (1) object 'test' -> pg 1.40e8aab5 (1.5) -> up ([4,0,6], p4) acting ([4,0,6], p4)
[root@ceph-mon01 ~]#
Note: the test object in testpool maps to PG 1.5 and is stored on the OSDs numbered 4, 0 and 6, with osd.4 acting as the primary.
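To relate those OSD IDs back to physical hosts, or to inspect the placement group itself, the following commands help; a sketch, assuming the PG ID 1.5 reported above:

# show the CRUSH hierarchy, i.e. which host each OSD lives on
ceph osd tree
# show the up/acting OSD sets for the placement group
ceph pg map 1.5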
4. Download the object to a local file
[root@ceph-mon01 ~]# ls
[root@ceph-mon01 ~]# rados get test test-down -p testpool
[root@ceph-mon01 ~]# ls
test-down
[root@ceph-mon01 ~]# cat test-down
\S
Kernel \r on an \m
[root@ceph-mon01 ~]#
5. Delete the object
[root@ceph-mon01 ~]# rados rm test -p testpool
[root@ceph-mon01 ~]# rados ls -p testpool
[root@ceph-mon01 ~]#
6. Delete the pool
[root@ceph-mon01 ~]# ceph osd pool rm testpool
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool testpool.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
[root@ceph-mon01 ~]# ceph osd pool rm testpool --yes-i-really-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool testpool.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
[root@ceph-mon01 ~]#
Note: because deleting a pool permanently destroys its data, Ceph refuses the operation by default. The administrator has to explicitly enable pool deletion in the monitor configuration (ceph.conf) before a command like the one above can succeed; a sketch of the procedure follows.
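A minimal sketch of enabling pool deletion and then actually removing the pool. The ceph.conf line goes under the [mon] section on the monitor nodes (followed by a restart of the mon daemons); alternatively the same option can be injected into the running monitors. The testpool name is the one used above:

# ceph.conf, [mon] section (restart the mon daemons afterwards)
mon allow pool delete = true

# or inject the option at runtime without a restart
ceph tell mon.* injectargs '--mon-allow-pool-delete=true'

# then pass the pool name twice plus the confirmation flag
ceph osd pool rm testpool testpool --yes-i-really-really-mean-it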
Expanding the Ceph cluster
Adding mon nodes
A Ceph storage cluster needs at least one Ceph Monitor and one Ceph Manager to run. In production, a cluster usually runs several monitors for high availability, so that the failure of a single monitor does not take down the whole cluster. Ceph uses the Paxos algorithm, which requires a majority of the monitors (more than n/2, where n is the total number of monitors) to form a quorum; for example, a cluster with 3 monitors keeps quorum with one monitor down, and one with 5 monitors tolerates two failures. Although not strictly required, an odd number of monitors is therefore preferable. The "ceph-deploy mon add {ceph-node}" command adds one monitor node to the cluster at a time.
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon add ceph-mon02
[cephadm@ceph-admin ceph-cluster]$ ceph-deploy mon add ceph-mon03
Check the monitors and the quorum status:
[root@ceph-mon01 ~]# ceph quorum_status --format json-pretty
{
  "election_epoch": 12,
  "quorum": [0, 1, 2],
  "quorum_names": ["ceph-mon01", "ceph-mon02", "ceph-mon03"],
  "quorum_leader_name": "ceph-mon01",
  "monmap": {
    "epoch": 3,
    "fsid": "7fd4a619-9767-4b46-9cee-78b9dfe88f34",
    "modified": "2022-09-24 01:56:24.196075",
    "created": "2022-09-24 00:36:13.210155",
    "features": {
      "persistent": ["kraken", "luminous", "mimic", "osdmap-prune"],
      "optional": []
    },
    "mons": [
      {"rank": 0, "name": "ceph-mon01", "addr": "192.168.0.71:6789/0", "public_addr": "192.168.0.71:6789/0"},
      {"rank": 1, "name": "ceph-mon02", "addr": "192.168.0.72:6789/0", "public_addr": "192.168.0.72:6789/0"},
      {"rank": 2, "name": "ceph-mon03", "addr": "192.168.0.73:6789/0", "public_addr": "192.168.0.73:6789/0"}
    ]
  }
}
[root@ceph-mon01 ~]#
Note: there are now three mon nodes, ceph-mon01 is the quorum leader, and all three monitors are voting members of the quorum.
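For a quicker summary than the full JSON, the monitor membership can also be checked with the following commands; a small sketch, run from any node with a working admin keyring:

# one-line monitor map summary, including the current quorum
ceph mon stat
# overall cluster status; the mon line should list all three monitors
ceph -s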
