How to Deploy a Ceph Storage Cluster on CentOS 7

This guide deploys Ceph with the ceph-deploy tool.

Cluster info
ceph-001   10.16.70.6   ceph-deploy, admin, osd, mon
ceph-002   10.16.70.5   osd, mon
ceph-003   10.16.70.7   osd, mon

 

Step 1: Basic configuration of the Ceph cluster
Add the Ceph yum repository on every node:
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
yum clean all
yum makecache
yum update
Set the hostname on each node (run the matching command on that node):
hostnamectl set-hostname ceph-001
hostnamectl set-hostname ceph-002
hostnamectl set-hostname ceph-003
Add all three nodes to the deploy node's /etc/hosts:
10.16.70.6 ceph-001
10.16.70.5 ceph-002
10.16.70.7 ceph-003
Set up passwordless SSH from the deploy node to all nodes
ssh-keygen
ssh-copy-id ceph-001
ssh-copy-id ceph-002
ssh-copy-id ceph-003
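If ceph-deploy is later run as a dedicated non-root user rather than root, a ~/.ssh/config on the deploy node avoids retyping the user name for every command. A minimal sketch; the user name cephuser is an assumption, not something this guide creates:

```
Host ceph-001
    Hostname ceph-001
    User cephuser
Host ceph-002
    Hostname ceph-002
    User cephuser
Host ceph-003
    Hostname ceph-003
    User cephuser
```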
Install and synchronize NTP on all nodes
yum install -y ntp ntpdate ntp-doc
ntpdate 0.us.pool.ntp.org
hwclock --systohc
systemctl enable ntpd.service
systemctl start ntpd.service
Disable SELinux (the config edit takes effect on reboot; setenforce disables enforcement immediately)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld
Configure the disks
Attach a 100 GB data disk (/dev/vdb here) to every node. It does not need to be partitioned or formatted in advance: the osd prepare step below is run with --zap-disk, which wipes and repartitions the disk. To inspect it first:
fdisk -l /dev/vdb
Step 2: Deploy the Ceph cluster
On the deploy node, install ceph-deploy:
yum -y install ceph-deploy
Create a working directory for the cluster configuration
mkdir cluster
cd cluster/
Create the cluster (writes ceph.conf and keyrings to the current directory)
ceph-deploy new ceph-001 ceph-002 ceph-003
Modify ceph.conf (allow up to 2 seconds of monitor clock drift)
echo "mon_clock_drift_allowed = 2" >> ceph.conf
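After ceph-deploy new and the echo above, ceph.conf should look roughly like the fragment below. The fsid is generated per cluster; the placeholder here is not a real value:

```
[global]
fsid = <generated-uuid>
mon_initial_members = ceph-001, ceph-002, ceph-003
mon_host = 10.16.70.6,10.16.70.5,10.16.70.7
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon_clock_drift_allowed = 2
```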
Install Ceph packages on all nodes
ceph-deploy install ceph-001 ceph-002 ceph-003
Initialize the monitors and gather the keys
ceph-deploy mon create-initial
Add OSD to the cluster
List the available disks on each OSD node
ceph-deploy disk list ceph-001 ceph-002 ceph-003
Prepare the OSDs
ceph-deploy --overwrite-conf osd prepare ceph-001:/dev/vdb ceph-002:/dev/vdb ceph-003:/dev/vdb --zap-disk
Activate the OSDs
ceph-deploy --overwrite-conf osd activate ceph-001:/dev/vdb1 ceph-002:/dev/vdb1 ceph-003:/dev/vdb1
Check the OSDs and overall cluster status
ceph health
ceph -s
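When scripting the deployment, the string printed by ceph health can be turned into an exit status so later steps can gate on cluster health. A minimal sketch; the check_health helper is illustrative and not part of Ceph:

```shell
#!/bin/sh
# Map a `ceph health` status string to an exit code and a short message
# (helper name and messages are illustrative, not Ceph output).
check_health() {
    case "$1" in
        HEALTH_OK*)   echo "cluster healthy";  return 0 ;;
        HEALTH_WARN*) echo "cluster degraded"; return 1 ;;
        *)            echo "cluster in error"; return 2 ;;
    esac
}
```

Typical use on the admin node would be: check_health "$(ceph health)".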
