Ceph Deployment
Deploying a three-node Ceph cluster with ceph-deploy
Versions
Ceph: Nautilus (14.2.22)
ceph-deploy: 2.0.1
Machine information
OS version:
[root@node-1 ~]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
IP address | Hostname | Extra disk (OSD) | Cluster role |
---|---|---|---|
172.16.214.169 | node-1 | one 20 GB disk (/dev/sdb) | mon, mgr, osd0 (admin node) |
172.16.214.170 | node-2 | one 20 GB disk (/dev/sdb) | osd1 |
172.16.214.171 | node-3 | one 20 GB disk (/dev/sdb) | osd2 |
If the environment allows, a dedicated ceph-admin node can host the mon, mgr, mds, and other control components, with the OSDs placed on the other nodes; this is easier to manage.
Disable the firewall and SELinux (on all nodes)
sed -i "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
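A quick check that both changes took effect (expected results in the comments):
getenforce                       # should now print Permissive
systemctl is-active firewalld    # should print inactive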
Configure the hosts file
Append the following to /etc/hosts on all nodes:
172.16.214.169 node-1
172.16.214.170 node-2
172.16.214.171 node-3
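A quick way to confirm the names resolve from every node (a sketch):
for h in node-1 node-2 node-3; do ping -c 1 $h; done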
Configure passwordless SSH access
On node-1, generate a key (press Enter at every prompt, leaving the passphrase empty), then copy it to every node:
[root@node-1 ~]# ssh-keygen
[root@node-1 ~]# ssh-copy-id root@node-1
[root@node-1 ~]# ssh-copy-id root@node-2
[root@node-1 ~]# ssh-copy-id root@node-3
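To confirm passwordless login works, each of these should print the remote hostname without asking for a password:
for h in node-1 node-2 node-3; do ssh root@$h hostname; done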
Configure NTP time synchronization
Install ntp on all nodes:
yum -y install ntp
On node-1:
vim /etc/ntp.conf
# Comment out the default server entries:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# Add this entry instead:
server ntp1.aliyun.com    # Aliyun public NTP server
On node-2 and node-3:
vim /etc/ntp.conf
# Comment out the default server entries in the same way
# and add this entry instead:
server 172.16.214.169    # use node-1 as the NTP server
# On all nodes:
systemctl restart ntpd
systemctl enable ntpd
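If a node's clock is far off, ntpd can take a long time to converge. Optionally stop ntpd, step the clock once with ntpdate, and start it again (a sketch; on node-2/node-3 use 172.16.214.169 as the source instead):
systemctl stop ntpd
ntpdate ntp1.aliyun.com
systemctl start ntpd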
# Check NTP peer status and sync state
[root@node-1 etc]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
120.25.115.20 .INIT. 16 u - 64 0 0.000 0.000 0.000
[root@node-2 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
node-1 .INIT. 16 u - 64 0 0.000 0.000 0.000
[root@node-3 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
node-1 .INIT. 16 u - 64 0 0.000 0.000 0.000
[root@node-1 ~]# ntpstat
synchronised to NTP server (84.16.73.33) at stratum 2
time correct to within 139659 ms
polling server every 64 s
Deploy the Ceph cluster
Add the Aliyun base and epel repos
On all nodes:
Back up the system's original repo files:
[root@node-1 ~]# mkdir /mnt/repo_bak
[root@node-1 ~]# mv /etc/yum.repos.d/* /mnt/repo_bak
Add the new repos:
[root@node-1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@node-1 ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Add the Ceph yum repo
On all nodes:
vim /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph
baseurl=http://download.ceph.com/rpm-nautilus/el7/x86_64
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
# Refresh the yum cache and update system packages
yum makecache
yum -y update
List the available ceph versions to confirm the repo is configured correctly:
[root@node-1 ~]# yum list ceph --showduplicates | sort -r
* updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
Installed Packages
* extras: mirrors.aliyun.com
ceph.x86_64 2:14.2.9-0.el7 Ceph
ceph.x86_64 2:14.2.8-0.el7 Ceph
ceph.x86_64 2:14.2.7-0.el7 Ceph
ceph.x86_64 2:14.2.6-0.el7 Ceph
ceph.x86_64 2:14.2.5-0.el7 Ceph
ceph.x86_64 2:14.2.4-0.el7 Ceph
ceph.x86_64 2:14.2.3-0.el7 Ceph
ceph.x86_64 2:14.2.22-0.el7 Ceph
ceph.x86_64 2:14.2.22-0.el7 @Ceph
ceph.x86_64 2:14.2.21-0.el7 Ceph
ceph.x86_64 2:14.2.2-0.el7 Ceph
ceph.x86_64 2:14.2.20-0.el7 Ceph
ceph.x86_64 2:14.2.19-0.el7 Ceph
ceph.x86_64 2:14.2.18-0.el7 Ceph
ceph.x86_64 2:14.2.17-0.el7 Ceph
ceph.x86_64 2:14.2.16-0.el7 Ceph
ceph.x86_64 2:14.2.15-0.el7 Ceph
ceph.x86_64 2:14.2.14-0.el7 Ceph
ceph.x86_64 2:14.2.13-0.el7 Ceph
ceph.x86_64 2:14.2.12-0.el7 Ceph
ceph.x86_64 2:14.2.11-0.el7 Ceph
ceph.x86_64 2:14.2.1-0.el7 Ceph
ceph.x86_64 2:14.2.10-0.el7 Ceph
ceph.x86_64 2:14.2.0-0.el7 Ceph
ceph.x86_64 2:14.1.1-0.el7 Ceph
ceph.x86_64 2:14.1.0-0.el7 Ceph
* base: mirrors.aliyun.com
Available Packages
[root@node-1 ~]# yum list ceph-deploy --showduplicates | sort -r
* updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
Installed Packages
* extras: mirrors.aliyun.com
ceph-deploy.noarch 2.0.1-0 Ceph-noarch
ceph-deploy.noarch 2.0.1-0 @Ceph-noarch
ceph-deploy.noarch 2.0.0-0 Ceph-noarch
ceph-deploy.noarch 1.5.39-0 Ceph-noarch
ceph-deploy.noarch 1.5.38-0 Ceph-noarch
ceph-deploy.noarch 1.5.37-0 Ceph-noarch
ceph-deploy.noarch 1.5.36-0 Ceph-noarch
ceph-deploy.noarch 1.5.35-0 Ceph-noarch
ceph-deploy.noarch 1.5.34-0 Ceph-noarch
ceph-deploy.noarch 1.5.33-0 Ceph-noarch
ceph-deploy.noarch 1.5.32-0 Ceph-noarch
ceph-deploy.noarch 1.5.31-0 Ceph-noarch
ceph-deploy.noarch 1.5.30-0 Ceph-noarch
ceph-deploy.noarch 1.5.29-0 Ceph-noarch
ceph-deploy.noarch 1.5.25-1.el7 epel
* base: mirrors.aliyun.com
Available Packages
Install ceph-deploy
On the admin node node-1:
yum -y install python-setuptools    # dependency required by ceph-deploy
yum -y install ceph-deploy          # installs the latest 2.x release by default
# Check the installed ceph-deploy version
[root@node-1 ceph-deploy]# ceph-deploy --version
2.0.1
Initialize the cluster
On the admin node node-1:
mkdir /root/ceph-deploy
cd /root/ceph-deploy
ceph-deploy new --public-network 172.16.214.0/24 --cluster-network 172.16.214.0/24 node-1
# Configuration files for the new cluster are generated in this directory
# (the bootstrap-* keyrings in the listing below are created later, by mon create-initial):
[root@node-1 ceph-deploy]# ll
total 280
-rw------- 1 root root 113 Mar 10 20:33 ceph.bootstrap-mds.keyring
-rw------- 1 root root 113 Mar 10 20:33 ceph.bootstrap-mgr.keyring
-rw------- 1 root root 113 Mar 10 20:33 ceph.bootstrap-osd.keyring
-rw------- 1 root root 113 Mar 10 20:33 ceph.bootstrap-rgw.keyring
-rw------- 1 root root 151 Mar 10 20:33 ceph.client.admin.keyring
-rw-r--r-- 1 root root 300 Mar 10 21:44 ceph.conf
-rw-r--r--. 1 root root 255706 Mar 10 21:44 ceph-deploy-ceph.log
-rw-------. 1 root root 73 Mar 10 11:50 ceph.mon.keyring
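For reference, the generated ceph.conf should look roughly like this (a sketch; the fsid is cluster-specific, taken here from the ceph -s output further down):
[global]
fsid = 707ccbc6-e01b-4722-89bc-7c98e04a80a2
public_network = 172.16.214.0/24
cluster_network = 172.16.214.0/24
mon_initial_members = node-1
mon_host = 172.16.214.169
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx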
Install ceph
Install the ceph packages on all nodes, using the repos configured above (--no-adjust-repos keeps ceph-deploy from rewriting them):
ceph-deploy install --no-adjust-repos node-1 node-2 node-3
Initialize the ceph monitor
ceph-deploy mon create-initial
Push the configuration files
Push the config file and admin keyring from the current working directory to /etc/ceph/ on all nodes:
ceph-deploy admin node-1 node-2 node-3
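A quick check that the push worked; the admin keyring and config should now be on every node, and ceph commands should work there:
ls /etc/ceph
ceph health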
Install ceph mgr
ceph-deploy mgr create node-1
Configure the mgr dashboard module
# Install ceph-mgr-dashboard (on the mgr node)
yum -y install ceph-mgr-dashboard
# Enable the dashboard module
ceph mgr module enable dashboard
# By default, all HTTP connections to the dashboard are secured with SSL/TLS.
# To get the dashboard up and running quickly, generate and install a self-signed certificate:
[root@node-1 ceph-deploy]# ceph dashboard create-self-signed-cert
Self-signed certificate created
# Create a user with the administrator role:
ceph dashboard set-login-credentials admin admin
# Note: recent Nautilus builds no longer accept the password on the command line;
# if the command above is rejected, write the password to a file and use:
#   echo admin > /tmp/pass.txt && ceph dashboard set-login-credentials admin -i /tmp/pass.txt
# Check the ceph-mgr services:
[root@node-1 ceph-deploy]# ceph mgr services
{
"dashboard": "https://node-1:8443/"
}
# Test access in a browser (the dashboard serves HTTPS):
https://172.16.214.169:8443
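Optionally, the dashboard bind address and port can be adjusted through the standard mgr/dashboard config options and the module restarted (a sketch; the values shown match this cluster's defaults):
ceph config set mgr mgr/dashboard/server_addr 172.16.214.169
ceph config set mgr mgr/dashboard/server_port 8443
ceph mgr module disable dashboard
ceph mgr module enable dashboard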
Install ceph mds
ceph-deploy mds create node-1
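The mds will stay in standby (as the ceph -s output below shows) until a CephFS filesystem exists. Once the OSDs from the next step are in place, a filesystem can be created with something like this (pool names and PG counts are illustrative):
ceph osd pool create cephfs_metadata 32
ceph osd pool create cephfs_data 32
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat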
Add OSDs to the cluster
Each of the three VMs in this environment has one extra disk dedicated to the Ceph cluster, /dev/sdb. Find the correct disk for your own environment; the lsblk command can help.
ceph-deploy osd create --data /dev/sdb node-1
ceph-deploy osd create --data /dev/sdb node-2
ceph-deploy osd create --data /dev/sdb node-3
Check the cluster status
# Cluster health: ceph health
[root@node-1 ceph-deploy]# ceph health
HEALTH_OK
# Detailed cluster status: ceph -s
[root@node-1 ceph-deploy]# ceph -s
  cluster:
    id:     707ccbc6-e01b-4722-89bc-7c98e04a80a2
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum node-1 (age 64m)
    mgr: node-1(active, since 63m)
    mds: 1 up:standby
    osd: 3 osds: 3 up (since 12m), 3 in (since 33h)

  data:
    pools:   4 pools, 64 pgs
    objects: 48 objects, 31 MiB
    usage:   3.2 GiB used, 57 GiB / 60 GiB avail
    pgs:     64 active+clean
# Cluster OSD status: ceph osd tree
[root@node-1 ceph-deploy]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.05846 root default
-3       0.01949     host node-1
 0   hdd 0.01949         osd.0       up  1.00000 1.00000
-5       0.01949     host node-2
 1   hdd 0.01949         osd.1       up  1.00000 1.00000
-7       0.01949     host node-3
 2   hdd 0.01949         osd.2       up  1.00000 1.00000
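As a final smoke test, write an object through RADOS and read it back (a sketch; the pool name is arbitrary):
ceph osd pool create test 16
rados -p test put hello /etc/hosts
rados -p test ls
rados -p test get hello /tmp/hello && diff /etc/hosts /tmp/hello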