
Deploying a Ceph Cluster with cephadm, Pitfall-Free, in One Article!



Preface


Today I'd like to share some hands-on, hard-core material on Ceph cluster deployment: with this article you will be able to use cephadm to deploy a complete Ceph cluster without running into the usual pitfalls. Let's work through it together!

Article source: https://blog.51cto.com/zengyi/6059434

1. Introduction to cephadm

Starting with Red Hat Ceph Storage 5, cephadm replaces ceph-ansible as the tool that manages the entire cluster lifecycle, including deployment, management and monitoring.

The cephadm bootstrap process creates a small storage cluster on a single node (the bootstrap node), consisting of one Ceph Monitor and one Ceph Manager plus any required dependencies.


cephadm can log in to a container registry to pull Ceph images and use them to deploy daemons to the corresponding Ceph nodes. A Ceph container image is required for deployment, because the deployed Ceph containers run from that image.

To communicate with the cluster nodes, cephadm uses SSH. Over these SSH connections, cephadm can add hosts to the cluster, add storage, and monitor those hosts.

The only packages a node needs in order to bring the cluster up are cephadm, podman or docker, python3, and chrony. This containerized approach reduces the complexity and the dependencies of a Ceph cluster deployment.

1. python3

yum -y install python3

2. podman or docker, to run the containers

# Install docker-ce from the Aliyun mirror
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker --now
# Configure a registry mirror (image accelerator)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bp1bh1ga.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
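As an optional sanity check (not part of the original article), you can confirm that Docker is running and that the registry mirror was picked up; note that the mirror URL above is the original author's and may need to be replaced with your own:

docker version
docker info | grep -A1 "Registry Mirrors"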

3. Time synchronization (for example chrony or NTP)
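The original article does not list the commands for this step; a minimal sketch using chrony from the distribution repositories (run on every node) would be:

yum -y install chrony
systemctl enable chronyd --now
chronyc sources     # verify that at least one time source is reachable and in sync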

2. Preparation before deploying the Ceph cluster

2.1 Node preparation

| Node name | OS | IP address | Ceph role | Disks |
| node1 | Rocky Linux release 8.6 | 172.24.1.6 | mon, mgr, server, admin node | /dev/vdb, /dev/vdc, /dev/vdd |
| node2 | Rocky Linux release 8.6 | 172.24.1.7 | mon, mgr | /dev/vdb, /dev/vdc, /dev/vdd |
| node3 | Rocky Linux release 8.6 | 172.24.1.8 | mon, mgr | /dev/vdb, /dev/vdc, /dev/vdd |
| node4 | Rocky Linux release 8.6 | 172.24.1.9 | client, admin node | - |


2.2 Modify /etc/hosts on every node

172.24.1.6 node1
172.24.1.7 node2
172.24.1.8 node3
172.24.1.9 node4
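A quick optional check (not in the original article) that name resolution works on each node:

for h in node1 node2 node3 node4; do getent hosts $h; done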

2.3 Set up passwordless SSH login from node1

[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id root@node2
[root@node1 ~]# ssh-copy-id root@node3
[root@node1 ~]# ssh-copy-id root@node4
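To verify that key-based login works without a password prompt (an optional check):

[root@node1 ~]# for h in node2 node3 node4; do ssh root@$h hostname; done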

3. Install cephadm on node1

1. Install the EPEL repository
[root@node1 ~]# yum -y install epel-release

2. Install the Ceph repository
[root@node1 ~]# yum search release-ceph
Last metadata expiration check: 0:57:14 ago on Tue 14 Feb 2023 02:22:00 PM.
================= Name Matched: release-ceph ============================================
centos-release-ceph-nautilus.noarch : Ceph Nautilus packages from the CentOS Storage SIG repository
centos-release-ceph-octopus.noarch : Ceph Octopus packages from the CentOS Storage SIG repository
centos-release-ceph-pacific.noarch : Ceph Pacific packages from the CentOS Storage SIG repository
centos-release-ceph-quincy.noarch : Ceph Quincy packages from the CentOS Storage SIG repository
[root@node1 ~]# yum -y install centos-release-ceph-pacific.noarch

3. Install cephadm
[root@node1 ~]# yum -y install cephadm

4. Install ceph-common
[root@node1 ~]# yum -y install ceph-common

4. Install docker-ce and python3 on the other nodes

For the detailed steps, see Section 1; a remote-installation sketch follows below.
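Since passwordless SSH from node1 is already in place, one way to push the basic prerequisites to the remaining nodes is a simple loop. This is only a sketch: the docker-ce repository setup from Section 1 still has to be repeated on each node.

[root@node1 ~]# for h in node2 node3 node4; do ssh root@$h "yum -y install python3 chrony && systemctl enable chronyd --now"; done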

5. Deploy the Ceph cluster

5.1 Deploy the Ceph cluster and install the dashboard (the graphical control panel) at the same time

[root@node1 ~]# cephadm bootstrap --mon-ip 172.24.1.6 --allow-fqdn-hostname --initial-dashboard-user admin --initial-dashboard-password redhat --dashboard-password-noupdate
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 0b565668-ace4-11ed-960c-5254000de7a0
Verifying IP 172.24.1.6 port 3300 ...
Verifying IP 172.24.1.6 port 6789 ...
Mon IP `172.24.1.6` is in CIDR network `172.24.1.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v16...
Ceph version: ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 172.24.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host node1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
        URL: https://node1.domain1.example.com:8443/
        User: admin
        Password: redhat
Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:
        sudo /usr/sbin/cephadm shell --fsid 0b565668-ace4-11ed-960c-5254000de7a0 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/pacific/mgr/telemetry/
Bootstrap complete.
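Because ceph-common was already installed on node1 in Section 3, you can check the state of the freshly bootstrapped one-node cluster right away. These two commands are just an optional verification, not part of the original walkthrough:

[root@node1 ~]# ceph -s           # should show 1 mon and 1 mgr; health stays in warning until OSDs are added
[root@node1 ~]# ceph orch ps      # lists the containerized daemons cephadm has deployed on node1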

5.2 Copy the cluster's public key to the nodes that will join the cluster

[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node4

5.3 Add nodes node2, node3 and node4 (docker-ce and python3 must be installed on each node first)

[root@node1 ~]# ceph orch host add node2 172.24.1.7
Added host 'node2' with addr '172.24.1.7'
[root@node1 ~]# ceph orch host add node3 172.24.1.8
Added host 'node3' with addr '172.24.1.8'
[root@node1 ~]# ceph orch host add node4 172.24.1.9
Added host 'node4' with addr '172.24.1.9'
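Optionally, you can watch cephadm start rolling out its baseline daemons (crash, node-exporter, and so on) to a newly added host; this check is not in the original article:

[root@node1 ~]# ceph orch ps node2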

5.4 Add the _admin label to node1 and node4, and copy the Ceph config file and keyring to node4

[root@node1 ~]# ceph orch host label add node1 _admin
Added label _admin to host node1
[root@node1 ~]# ceph orch host label add node4 _admin
Added label _admin to host node4
[root@node1 ~]# scp /etc/ceph/{*.conf,*.keyring} root@node4:/etc/ceph
[root@node1 ~]# ceph orch host ls
HOST   ADDR        LABELS  STATUS
node1  172.24.1.6  _admin
node2  172.24.1.7
node3  172.24.1.8
node4  172.24.1.9  _admin
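On Pacific, the _admin label also tells cephadm to distribute ceph.conf and the client.admin keyring to the labeled host on its own, so the scp above is mostly a belt-and-braces step. An optional check that the files are in place on node4:

[root@node1 ~]# ssh root@node4 ls /etc/ceph/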

5.5 Add MONs

[root@node1 ~]# ceph orch apply mon "node1,node2,node3"
Scheduled mon update...
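Once the monitors are up, quorum across the three nodes can be confirmed with either of the following (an optional check):

[root@node1 ~]# ceph mon stat
[root@node1 ~]# ceph quorum_status --format json-pretty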

5.6 Add MGRs

[root@node1 ~]# ceph orch apply mgr --placement="node1,node2,node3"
Scheduled mgr update...
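A quick way to confirm that one manager is active and the other two are on standby (optional check):

[root@node1 ~]# ceph mgr stat
[root@node1 ~]# ceph orch ls mgr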

5.7 Add OSDs

[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdd

Or, equivalently:

[root@node1 ~]# for i in node1 node2 node3; do for j in vdb vdc vdd; do ceph orch daemon add osd $i:/dev/$j; done; done
Created osd(s) 0 on host 'node1'
Created osd(s) 1 on host 'node1'
Created osd(s) 2 on host 'node1'
Created osd(s) 3 on host 'node2'
Created osd(s) 4 on host 'node2'
Created osd(s) 5 on host 'node2'
Created osd(s) 6 on host 'node3'
Created osd(s) 7 on host 'node3'
Created osd(s) 8 on host 'node3'

[root@node1 ~]# ceph orch device ls
HOST   PATH      TYPE  DEVICE ID  SIZE   AVAILABLE  REFRESHED  REJECT REASONS
node1  /dev/vdb  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node1  /dev/vdc  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node1  /dev/vdd  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdb  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdc  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdd  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdb  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdc  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdd  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
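As an alternative to adding each device by hand, cephadm can also consume every eligible disk in the cluster with a single command. This is a standard orchestrator feature, just not the route the original article takes, so treat it as an optional variant:

[root@node1 ~]# ceph orch apply osd --all-available-devices
[root@node1 ~]# ceph osd tree    # verify that all 9 OSDs are up and mapped to the expected hosts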

5.8 At this point, the Ceph cluster deployment is complete!

[root@node1 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean

5.9 Manage Ceph from node4

# The Ceph config file and keyring were already copied to node4 in step 5.4
[root@node4 ~]# ceph -s
-bash: ceph: command not found   # ceph-common needs to be installed
# Install the Ceph repository
[root@node4 ~]# yum -y install centos-release-ceph-pacific.noarch
# Install ceph-common
[root@node4 ~]# yum -y install ceph-common
[root@node4 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean
