Basic concept
Ceph currently offers three official ways to deploy a cluster: ceph-deploy, cephadm, and manual installation.
- ceph-deploy
A long-standing automated cluster deployment tool. It is mature and stable, is integrated by many automation tools, and can be used for production deployments.
- cephadm
A new cluster deployment tool introduced with Octopus that supports adding nodes through a graphical user interface (GUI) or a command-line interface (CLI). It is not yet recommended for production environments.
- manual
Manual deployment builds a Ceph cluster step by step, allowing more customization and control over deployment details. Installation is harder, but you come away with a clear understanding of how everything is installed and deployed.
Here we use the mature and simple ceph-deploy to deploy the Ceph cluster.
ceph-deploy infrastructure
# Query the IP address
ip addr
# Query the gateway
ip route show
# Query DNS
cat /etc/resolv.conf
Note: Adjust the network segments above to match your own environment.
Public network & Cluster network
- The public network configuration explicitly defines the IP addresses and subnet of the public network (by default, Ceph assumes all hosts run on the public network).
- The cluster network handles heartbeat communication between OSD nodes, object replication, and recovery traffic.
OSDs handle data replication for clients, and this traffic can put enough load on the network to affect communication between clients and the Ceph cluster. For performance and security, the cluster network should therefore be kept separate from the public network.
In this article, the public network and the cluster network share the same subnet.
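For reference, the two networks are declared in the [global] section of ceph.conf. A minimal illustrative fragment, using the shared subnet from this article:

[global]
# Both networks share one subnet in this deployment
public network = 192.168.168.0/24
cluster network = 192.168.168.0/24

ceph-deploy writes these entries automatically when the networks are passed to ceph-deploy new, as shown later.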
Roles in the cluster
- admin-node
The installation and management node, responsible for deploying the cluster. In this article, cephnode-01 serves as both the admin-node and a ceph-mon node.
- mon
The monitor node, which handles monitoring and management of the Ceph cluster and carries out its key administrative tasks. Production clusters normally run 3 or 5 monitors; for simplicity, a single monitor is deployed here.
- osd
OSD stands for Object Storage Daemon. Each of the three nodes contributes a 20 GB disk that serves as an OSD.
Note: In a production environment with more nodes, you can keep scaling out horizontally; if disk capacity is insufficient, add capacity as needed.
Install a 3-node Ceph cluster
- Hardware environment
VM: 2 cores + 4 GB RAM + 20 GB disk
- The operating system
CentOS Linux 7.9.2009 (Core)
- Deployment version
ceph-deploy 2.0.1
Cluster planning
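Pulling together the role and address assignments used throughout this article:
- cephnode-01, 192.168.168.138: admin-node, mon, osd (20 GB disk)
- cephnode-02, 192.168.168.137: osd (20 GB disk)
- cephnode-03, 192.168.168.132: osd (20 GB disk)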
System initialization
Note: Unless otherwise specified, all operations in this section must be performed on every node.
Configuring host names
# Run the matching command on its own node
hostnamectl set-hostname cephnode-01    # on the first node
hostnamectl set-hostname cephnode-02    # on the second node
hostnamectl set-hostname cephnode-03    # on the third node
Add the hostname-to-IP mappings to the /etc/hosts file on each machine:
cat >> /etc/hosts <<EOF
# Ceph Cluster Network
192.168.168.138 cephnode-01
192.168.168.137 cephnode-02
192.168.168.132 cephnode-03
# Ceph Public Network
192.168.168.138 cephnode-01
192.168.168.137 cephnode-02
192.168.168.132 cephnode-03
EOF

Then log out and log back in as root; you will see that the new hostname has taken effect.
Adding SSH trust between nodes
ssh-keygen -t rsa
ssh-copy-id root@cephnode-01
ssh-copy-id root@cephnode-02
ssh-copy-id root@cephnode-03
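To confirm that passwordless login works, a quick check from the admin node (a loop over the hostnames configured above):

for node in cephnode-01 cephnode-02 cephnode-03; do
    ssh root@$node hostname
done

Each iteration should print the remote hostname without prompting for a password.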
Disabling the firewall
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
Note: This stops the firewall, clears all firewall rules, and sets the default FORWARD policy to ACCEPT.
Disabling the swap partition
# Turn off swap immediately
swapoff -a
# Comment out swap entries in /etc/fstab so the change persists across reboots
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
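To verify that swap is now off, for example:

free -h | grep -i swap

The Swap line should report 0B for total, used, and free.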
Disabling SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
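Note that setenforce 0 only switches SELinux to permissive mode for the current boot; the sed command makes the change permanent. To check the current state:

getenforce

This prints Permissive now, and Disabled after the reboot performed at the end of this section.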
Configuring the EPEL source
Configure the yum sources. Given the network environment, point them at the Aliyun mirrors in China to speed up RPM installation. Three sources are needed: the CentOS base source, the EPEL source, and the Ceph source.
# Delete the default yum sources and configure the Aliyun source
rm -f /etc/yum.repos.d/*.repo
wget http://mirrors.aliyun.com/repo/Centos-7.repo -P /etc/yum.repos.d/
# Install the EPEL source
wget http://mirrors.aliyun.com/repo/epel-7.repo -P /etc/yum.repos.d/
Configuring the Ceph source
cat > /etc/yum.repos.d/ceph.repo <<EOF
[noarch]
name=Ceph noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0
[x86_64]
name=Ceph x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
EOF
Installing dependency packages
Install the ceph-deploy deployment tool. The EPEL source provides ceph-deploy 1.5 by default; make sure the installed version is 2.0.1 or later, otherwise you will run into many problems during installation.
# Install dependency packages
yum install -y chrony conntrack ipset jq iptables curl
# Install ceph-deploy
yum install ceph-deploy -y
# Verify that the version is 2.0.1
ceph-deploy --version
Setting the system time
timedatectl set-timezone Asia/Shanghai
Configuring clock synchronization
timedatectl status
Note: "System clock synchronized: yes" indicates that the clock is synchronized; "NTP service: active" indicates that the clock synchronization service is running.
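If the clock is not yet synchronized, one likely fix (assuming the chrony package installed earlier) is to enable and restart chronyd, then inspect its time sources:

systemctl enable chronyd
systemctl restart chronyd
# List the NTP sources chrony is using
chronyc sources -v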
Writing the hardware clock
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
Closing irrelevant services
systemctl stop postfix && systemctl disable postfix
After completing all the operations above, we can restart all the hosts:
sync
reboot
Deploy the Ceph cluster
During ceph-deploy deployment, cluster initialization configuration files and keys are generated that will be needed for later operations such as capacity expansion. You are therefore advised to create a dedicated directory on the admin-node and run all subsequent commands from it. Here we use ceph-admin as an example:

# Create a ceph-admin directory under /root
mkdir -p ceph-admin
# All subsequent operations must be performed in the ceph-admin directory
cd ceph-admin
Installing the required Ceph packages
The ceph-deploy install command automatically installs the EPEL source and resets the Ceph source. Since we have already pointed the Ceph source at the domestic mirror, install the Ceph packages manually instead. To ensure the subsequent deployment succeeds, install the packages on all three nodes:

yum install ceph-mon ceph-radosgw ceph-mds ceph-mgr ceph-osd ceph-common -y
Create a Ceph cluster
When creating the cluster, you need to specify both the cluster network and the public network:

ceph-deploy new --cluster-network 192.168.168.0/24 --public-network 192.168.168.0/24 cephnode-01
As you can see from the output above, initializing the cluster with new generates an SSH key, the ceph.conf configuration file, and the ceph.mon.keyring authentication key, and records the cluster network and public network. Viewing the files in the directory shows the following:
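For reference, the working directory typically ends up looking something like this (the exact log file name may vary):

ls
# ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring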
Initialize the Monitor node
ceph-deploy mon create-initial
Possible exceptions
If the command ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cephnode-01.asok mon_status reports that /var/run/ceph/ceph-mon.cephnode-01.asok cannot be found, check whether the hostname matches the name configured in /etc/hosts.
After initialization, the key files are generated automatically in the working directory. Next, push the configuration file and admin key to all nodes so the ceph CLI can be used on each of them:

ceph-deploy --overwrite-conf admin cephnode-01 cephnode-02 cephnode-03
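As an illustrative spot check, the admin keyring should now be present on every node:

ls /etc/ceph
# ceph.client.admin.keyring  ceph.conf  rbdmap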
Viewing cluster status
At this point the Ceph cluster has been established and contains a single monitor node. You can check the current cluster status with ceph -s, but there are no OSD nodes yet.
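The output will look roughly like the following (the fsid and exact health details will differ; with no mgr or OSDs deployed yet, a HEALTH_WARN is expected):

ceph -s
#  cluster:
#    id:     <fsid>
#    health: HEALTH_WARN
#  services:
#    mon: 1 daemons, quorum cephnode-01
#    mgr: no daemons active
#    osd: 0 osds: 0 up, 0 in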
As the output above shows, there are no OSDs in the cluster, so no data can be stored yet. The next step is to add OSD nodes to the cluster.
The next article will explain how to add disks to the cluster as OSDs, along with other follow-up operations.