
Deploy a Ceph Cluster with cephadm, Pitfall-Free, in One Article!

2024-02-27


Preface

Hi everyone, this is 浩道Linux, a platform that mainly shares IT knowledge about Linux, Python, network communication, and network security.

Today 浩道 shares some hands-on material on Ceph cluster deployment. By following this article you can use cephadm to deploy a complete Ceph cluster without hitting the usual pitfalls. Let's get started!

Article source: https://blog.51cto.com/zengyi/6059434

1. Introduction to cephadm

Starting with Red Hat Ceph Storage 5, cephadm replaces ceph-ansible as the tool that manages the entire cluster lifecycle: deployment, management, and monitoring.

The cephadm bootstrap process creates a small storage cluster on a single node (the bootstrap node), consisting of one Ceph Monitor and one Ceph Manager plus any required dependencies.


cephadm can log in to a container registry to pull the Ceph image and then use that image to deploy services on the corresponding Ceph nodes. The Ceph container image is required for deployment, because every deployed Ceph container is based on it.
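Optionally, you can pre-pull the image on each node before bootstrapping so the deployment does not stall on slow registry access. A minimal sketch, assuming Docker is the container runtime and the quay.io/ceph/ceph:v16 image that the bootstrap in section 5 pulls:

# pre-pull the Pacific container image (optional)
docker pull quay.io/ceph/ceph:v16
# confirm the image is available locally
docker images | grep ceph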

To communicate with the cluster nodes, cephadm uses SSH. Over these SSH connections, cephadm can add hosts to the cluster, add storage, and monitor those hosts.

The packages a node needs so that cephadm can bring the cluster up are cephadm itself, podman or docker, python3, and chrony. This containerized approach reduces the complexity and the dependencies of deploying a Ceph cluster.

1) python3

yum -y install python3

2) podman or docker to run the containers

# Install docker-ce from the Aliyun mirror
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker --now
# Configure a registry mirror (accelerator)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bp1bh1ga.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
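After restarting Docker, it is worth confirming that the daemon is running and that the mirror was picked up. A quick check, assuming the daemon.json above was written as shown:

# the daemon should report "active"
systemctl is-active docker
# the configured mirror should appear under "Registry Mirrors"
docker info | grep -A1 "Registry Mirrors"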

3) Time synchronization (for example chrony or NTP)
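The original post gives no commands for this item, so here is a minimal sketch using chrony, which is usually already present on Rocky Linux 8; point it at your own NTP servers if needed:

# install and enable chrony on every node
yum -y install chrony
systemctl enable chronyd --now
# verify that time sources are reachable and the clock is synchronized
chronyc sources -v
chronyc tracking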

2. Preparation before deploying the Ceph cluster

2.1 Node preparation

Node name | OS                      | IP address  | Ceph role                          | Disks
node1     | Rocky Linux release 8.6 | 172.24.1.6  | mon, mgr, server side, admin node  | /dev/vdb, /dev/vdc, /dev/vdd
node2     | Rocky Linux release 8.6 | 172.24.1.7  | mon, mgr                           | /dev/vdb, /dev/vdc, /dev/vdd
node3     | Rocky Linux release 8.6 | 172.24.1.8  | mon, mgr                           | /dev/vdb, /dev/vdc, /dev/vdd
node4     | Rocky Linux release 8.6 | 172.24.1.9  | client, admin node                 | (none)


2.2 Modify /etc/hosts on every node

172.24.1.6 node1
172.24.1.7 node2
172.24.1.8 node3
172.24.1.9 node4
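If you edit the file on node1 first, one convenient way to keep the other nodes consistent is to copy it over. This is only a sketch; scp will still prompt for passwords here, because key-based login is only set up in the next step:

# push node1's /etc/hosts to the other nodes
for h in node2 node3 node4; do
  scp /etc/hosts root@$h:/etc/hosts
done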

2.3 Set up passwordless SSH login from node1

[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id root@node2
[root@node1 ~]# ssh-copy-id root@node3
[root@node1 ~]# ssh-copy-id root@node4
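An optional quick check that key-based login works from node1 to every other node (hostnames as assumed in the table above):

[root@node1 ~]# for h in node2 node3 node4; do ssh root@$h hostname; done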

3. Install cephadm on node1

1. Install the EPEL repository
[root@node1 ~]# yum -y install epel-release
2. Install the Ceph repository
[root@node1 ~]# yum search release-ceph
Last metadata expiration check: 0:57:14 ago on Tue 14 Feb 2023 14:22:00.
================= Name Matched: release-ceph ============================================
centos-release-ceph-nautilus.noarch : Ceph Nautilus packages from the CentOS Storage SIG repository
centos-release-ceph-octopus.noarch : Ceph Octopus packages from the CentOS Storage SIG repository
centos-release-ceph-pacific.noarch : Ceph Pacific packages from the CentOS Storage SIG repository
centos-release-ceph-quincy.noarch : Ceph Quincy packages from the CentOS Storage SIG repository
[root@node1 ~]# yum -y install centos-release-ceph-pacific.noarch
3. Install cephadm
[root@node1 ~]# yum -y install cephadm
4. Install ceph-common
[root@node1 ~]# yum -y install ceph-common
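Optionally, confirm that cephadm is on the PATH and which Ceph release it targets; note that cephadm version may pull the Ceph container image the first time it runs:

[root@node1 ~]# which cephadm
[root@node1 ~]# cephadm version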

4. Install docker-ce and python3 on the other nodes

The exact steps are the same as in section 1.

5. Deploy the Ceph cluster

5.1 Bootstrap the Ceph cluster and install the dashboard (the graphical management interface) at the same time

[root@node1 ~]# cephadm bootstrap --mon-ip 172.24.1.6 --allow-fqdn-hostname --initial-dashboard-user admin --initial-dashboard-password redhat --dashboard-password-noupdate
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 0b565668-ace4-11ed-960c-5254000de7a0
Verifying IP 172.24.1.6 port 3300 ...
Verifying IP 172.24.1.6 port 6789 ...
Mon IP `172.24.1.6` is in CIDR network `172.24.1.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v16...
Ceph version: ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 172.24.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host node1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
        URL: https://node1.domain1.example.com:8443/
        User: admin
        Password: redhat
Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:
        sudo /usr/sbin/cephadm shell --fsid 0b565668-ace4-11ed-960c-5254000de7a0 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/pacific/mgr/telemetry/
Bootstrap complete.
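At this point it can be useful to confirm that the bootstrap daemons are actually running on node1. A quick check; ceph-common was installed in section 3, so ceph can be run directly on node1:

# list the cephadm-managed daemons on this host
[root@node1 ~]# cephadm ls
# the containers behind them
[root@node1 ~]# docker ps
# overall cluster state (one mon and one mgr so far, no OSDs yet)
[root@node1 ~]# ceph -s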

5.2 Copy the cluster public key to the nodes that will become cluster members

[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node4

5.3 Add nodes node2, node3 and node4 (docker-ce and python3 must be installed on each node first)

[root@node1 ~]# ceph orch host add node2 172.24.1.7
Added host 'node2' with addr '172.24.1.7'
[root@node1 ~]# ceph orch host add node3 172.24.1.8
Added host 'node3' with addr '172.24.1.8'
[root@node1 ~]# ceph orch host add node4 172.24.1.9
Added host 'node4' with addr '172.24.1.9'

5.4 Add the admin label to node1 and node4, and copy the Ceph configuration file and keyring to node4

[root@node1 ~]# ceph orch host label add node1 _admin
Added label _admin to host node1
[root@node1 ~]# ceph orch host label add node4 _admin
Added label _admin to host node4
[root@node1 ~]# scp /etc/ceph/{*.conf,*.keyring} root@node4:/etc/ceph
[root@node1 ~]# ceph orch host ls
HOST   ADDR        LABELS  STATUS
node1  172.24.1.6  _admin
node2  172.24.1.7
node3  172.24.1.8
node4  172.24.1.9  _admin

5.5 Add mon daemons

[root@node1 ~]# ceph orch apply mon "node1,node2,node3"
Scheduled mon update...

5.6 Add mgr daemons

[root@node1 ~]# ceph orch apply mgr --placement="node1,node2,node3"
Scheduled mgr update...
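The orchestrator applies these placements asynchronously, so it can take a moment for the additional daemons to show up. One way to watch the result, using only standard status commands:

# service specs and how many daemons are running for each
[root@node1 ~]# ceph orch ls
# current mon quorum and active/standby mgr
[root@node1 ~]# ceph mon stat
[root@node1 ~]# ceph mgr stat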

5.7 Add OSDs

[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdd
Or, equivalently, in a loop:
[root@node1 ~]# for i in node1 node2 node3; do for j in vdb vdc vdd; do ceph orch daemon add osd $i:/dev/$j; done; done
Created osd(s) 0 on host 'node1'
Created osd(s) 1 on host 'node1'
Created osd(s) 2 on host 'node1'
Created osd(s) 3 on host 'node2'
Created osd(s) 4 on host 'node2'
Created osd(s) 5 on host 'node2'
Created osd(s) 6 on host 'node3'
Created osd(s) 7 on host 'node3'
Created osd(s) 8 on host 'node3'
[root@node1 ~]# ceph orch device ls
HOST   PATH      TYPE  DEVICE ID  SIZE   AVAILABLE  REFRESHED  REJECT REASONS
node1  /dev/vdb  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node1  /dev/vdc  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node1  /dev/vdd  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdb  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdc  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdd  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdb  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdc  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdd  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
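Instead of adding every device by hand, cephadm can also consume all eligible disks automatically. This is an optional alternative, not what the original post does, so use one approach or the other. Afterwards, ceph osd tree shows how the OSDs ended up distributed across hosts:

# let the orchestrator create OSDs on all clean, unused devices (optional alternative)
[root@node1 ~]# ceph orch apply osd --all-available-devices
# verify the resulting OSD layout
[root@node1 ~]# ceph osd tree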

5.8 At this point, the Ceph cluster deployment is complete!

[root@node1 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean

5.9 Manage Ceph from node4

# The Ceph configuration file and keyring were already copied to node4 in step 5.4
[root@node4 ~]# ceph -s
-bash: ceph: command not found    # ceph-common still needs to be installed
# Install the Ceph repository
[root@node4 ~]# yum -y install centos-release-ceph-pacific.noarch
# Install ceph-common
[root@node4 ~]# yum -y install ceph-common
[root@node4 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean
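As an optional sanity check that node4 really has admin access, you can create a small test pool and write an object into it; the pool name testpool and the object name hosts-copy are just examples for illustration:

# create a test pool and write/read one object (optional)
[root@node4 ~]# ceph osd pool create testpool
[root@node4 ~]# rados -p testpool put hosts-copy /etc/hosts
[root@node4 ~]# rados -p testpool ls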
