1 Introduction
In an OpenStack environment, data storage can be classified into temporary (ephemeral) storage and permanent storage.
Temporary storage: provided by the local file system and used for Nova VM system disks, ephemeral data disks, and system images uploaded by Glance.
Permanent storage: consists of block storage provided by Cinder and object storage provided by Swift. Block storage provided by Cinder is the most widely used; it is usually attached to VMs as cloud disks.
In OpenStack, the three main projects that consume data storage are Nova (VM disk files), Glance (shared template images), and Cinder (block storage).
The following figure shows the logical diagram of Cinder, Glance, and Nova accessing the Ceph cluster:
The integration between Ceph and OpenStack mainly uses Ceph's RBD service. The bottom layer of Ceph is the RADOS storage cluster, and access to RADOS goes through the librados library.
OpenStack clients call librbd, and librbd in turn calls librados to access the underlying RADOS. In practice, Nova goes through its libvirt driver, so librbd is invoked via libvirt and QEMU, while Cinder and Glance call librbd directly.
Data written to the Ceph cluster is sliced into objects. Objects are mapped to placement groups (PGs) by a hash function, and the PGs are in turn mapped to the physical storage devices, the OSDs (an OSD is a storage device backed by a local file system such as XFS), by the CRUSH algorithm.
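The object-to-PG-to-OSD mapping described above can be observed directly on a running cluster; a quick sketch (the pool and object names here are just examples, and the IDs in the output will differ):
# Ask Ceph where an arbitrary object name would be placed
ceph osd map rbd_storage test-object
# -> osdmap eNNN pool 'rbd_storage' (3) object 'test-object' -> pg 3.xxxxxxxx (3.x) -> up ([...], pN) acting ([...], pN)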
2 Operations on the OpenStack cluster
Install the Ceph Nautilus repository on all OpenStack controller and compute nodes. CentOS 8 provides a default repo package, but the client packages must match the version of the Ceph cluster you are connecting to! The default way to install the repo package is as follows:
yum install centos-release-ceph-nautilus.noarch
# Since the Ceph cluster here runs Nautilus 14.2.10, a matching repo file is used instead:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el8/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el8/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el8/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
# python3-rbd must be installed on every node running the glance-api service. Here glance-api runs on the three controller nodes, so install it on all three
yum install python3-rbd -y
# ceph-common must be installed on the nodes running the cinder-volume and nova-compute services. Here both services run on the two compute (storage) nodes
yum install ceph-common -y
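After installing the packages, it is worth confirming that the client version actually matches the Ceph cluster (Nautilus 14.2.10 here); a quick check on any node where ceph-common was installed:
ceph --version                     # should report ceph version 14.2.10 (...) nautilus (stable)
rpm -q ceph-common python3-rbd     # confirm the installed package versions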
3 Operations on the Ceph cluster
Add host entries for the Ceph cluster and the OpenStack cluster to /etc/hosts on all nodes of both clusters:
172.16.1.131 ceph131
172.16.1.132 ceph132
172.16.1.133 ceph133
172.16.1.160 controller160
172.16.1.161 controller161
172.16.1.162 controller162
172.16.1.168 controller168
172.16.1.163 compute163
172.16.1.164 compute164
3.1 Creating the Pools Used by the OpenStack Cluster
By default, Ceph stores data in pools. A pool is a logical grouping of PGs for organization and management; the objects in those PGs are mapped to different OSD nodes, so a pool is distributed across the entire cluster. Different kinds of data can be stored in a single pool, but that makes it hard to keep client data apart, so a separate pool is normally created for each client. Create pools for Cinder, Nova and Glance, named volumes, vms and images respectively.
# volumes is the permanent (volume) storage, vms is the ephemeral back-end storage, images is the image storage
# The number of PGs can also be computed with the official PG calculator. A pool's PG count can be calculated as: total PGs = (number of OSDs x 100) / maximum number of replicas / number of pools, with the result rounded to a power of 2 (see the small sketch after the pool listing below)
# Create pools in the ceph cluster
[root@ceph131 ~]# ceph osd pool create volumes 16 16 replicated
pool 'volumes' created
[root@ceph131 ~]# ceph osd pool create vms 16 16 replicated
pool 'vms' created
[root@ceph131 ~]# ceph osd pool create images 16 16 replicated
pool 'images' created
[root@ceph131 ~]# ceph osd lspools
1 cephfs_data
2 cephfs_metadata
3 rbd_storage
4 .rgw.root
5 default.rgw.control
6 default.rgw.meta
7 default.rgw.log
10 volumes
11 vms
12 images
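For reference, the PG formula mentioned above can be evaluated with a tiny shell sketch, using this lab's numbers (3 OSDs, 3 replicas, 3 OpenStack pools) as an assumption:
# total PGs = (OSDs * 100) / replicas / pools, then round to a power of 2
osds=3; replicas=3; pools=3
echo $(( osds * 100 / replicas / pools ))   # -> 33; round to a power of 2 (32), or use a smaller value such as 16 for a tiny test cluster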
3.2 Ceph Authorization Settings
3.2.1 Creating a User
Cephx authentication is enabled in Ceph by default, so new users have to be created and authorized for the Nova/Cinder and Glance clients.
# On the management node, create the client.cinder and client.glance users for the nodes running the cinder-volume and glance-api services respectively, and set their permissions. Permissions are granted per pool, and the pool names match the pools created above
[root@ceph131 ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
key = AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==
[root@ceph131 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==
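The users and the capabilities granted above can be reviewed on the Ceph side at any time, which is handy when debugging permission errors later:
[root@ceph131 ~]# ceph auth get client.cinder    # shows the key plus the mon/osd caps set above
[root@ceph131 ~]# ceph auth get client.glance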
3.2.2 Pushing the client.glance and client.cinder Keyrings
# Configure passwordless SSH from the Ceph management node to the OpenStack nodes
[cephdeploy@ceph131 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:CsvXYKm8mRzasMFwgWVLx5LvvfnPrRc5S1wSb6kPytM root@ceph131
The key's randomart image is:
+---[RSA 2048]----+
| +o.             |
| =oo. .          |
|. oo o .         |
| .. ... =        |
|... + S . *      |
| + o.=.+ O       |
| + * oo.. + *    |
| B *o .+.E .     |
| o * ... ++.     |
+----[SHA256]-----+
# Push the key to all OpenStack cluster nodes
ssh-copy-id root@controller160
ssh-copy-id root@controller161
ssh-copy-id root@controller162
ssh-copy-id root@compute163
ssh-copy-id root@compute164
# In this deployment nova-compute and cinder-volume run on the same nodes.
# Push the keyring generated for the client.glance user to the nodes running the glance-api service
[root@ceph131 ~]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==
[root@ceph131 ~]# ceph auth get-or-create client.glance | ssh root@controller160 tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==
[root@ceph131 ~]# ceph auth get-or-create client.glance | ssh root@controller161 tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==
[root@ceph131 ~]# ceph auth get-or-create client.glance | ssh root@controller162 tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==
Change both the owner and user group of the key file
#chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller160 chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller161 chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller162 chown glance:glance /etc/ceph/ceph.client.glance.keyring
# Push the keyring generated for the client.cinder user to the nodes running the cinder-volume service
[root@ceph131 ceph]# ceph auth get-or-create client.cinder | ssh root@compute163 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
key = AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==
[root@ceph131 ceph]# ceph auth get-or-create client.cinder | ssh root@compute164 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
key = AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==
Change both the owner and user group of the key file
ssh root@compute163 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ssh root@compute164 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
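Once the keyrings are pushed and owned correctly, access can be tested from the OpenStack side before touching any OpenStack configuration (this assumes ceph.conf has already been copied to the node, which is done in the step below):
# On a compute node: list the volumes and vms pools as the cinder user
rbd -p volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring ls
rbd -p vms --id cinder -k /etc/ceph/ceph.client.cinder.keyring ls
# An empty listing with no error means cephx authentication works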
3.2.3 libvirt secret key
# The nova-compute nodes need the client.cinder key stored in libvirt; when attaching a Ceph-backed Cinder volume to a VM instance, libvirt uses this key to access the Ceph cluster.
# The management node pushes the client.cinder key file to the compute (storage) nodes. The generated file is only temporary
[root@ceph131 ceph]# ceph auth get-key client.cinder | ssh root@compute164 tee /etc/ceph/client.cinder.key
AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==
[root@ceph131 ceph]# ceph auth get-key client.cinder | ssh root@compute163 tee /etc/ceph/client.cinder.key
AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==
# Add the libvirt secret on the compute (storage) nodes, compute163 as an example.
# Generate one UUID; all compute (storage) nodes can share this UUID (the other nodes do not need to run uuidgen).
# The UUID will be used again when configuring nova.conf (and as rbd_secret_uuid in cinder.conf)
[root@compute163 ~]# uuidgen
e9776771-b980-481d-9e99-3ddfdbf53d1e
[root@compute163 ~]# cd /etc/ceph/
[root@compute163 ceph]# touch secret.xml
[root@compute163 ceph]# vim secret.xml
<secret ephemeral='no' private='no'>
<uuid>cb26bb6c-2a84-45c2-8187-fa94b81dd53d</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
[root@compute163 ceph]# virsh secret-define --file secret.xml
Secret cb26bb6c-2a84-45c2-8187-fa94b81dd53d created
[root@compute163 ceph]# virsh secret-set-value --secret cb26bb6c-2a84-45c2-8187-fa94b81dd53d --base64 $(cat /etc/ceph/client.cinder.key)
Secret value set
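A quick way to confirm that libvirt really stored the secret (the UUID must match the one in secret.xml and the one used later in nova.conf/cinder.conf):
virsh secret-list                                                       # should list cb26bb6c-2a84-45c2-8187-fa94b81dd53d (ceph client.cinder secret)
virsh secret-get-value --secret cb26bb6c-2a84-45c2-8187-fa94b81dd53d    # should print the client.cinder key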
# Push ceph.conf to all OpenStack nodes
[root@ceph131 ceph]# scp ceph.conf root@controller160:/etc/ceph/
ceph.conf                         100%  514   407.7KB/s   00:00
[root@ceph131 ceph]# scp ceph.conf root@controller161:/etc/ceph/
ceph.conf                         100%  514   631.5KB/s   00:00
[root@ceph131 ceph]# scp ceph.conf root@controller162:/etc/ceph/
ceph.conf                         100%  514   218.3KB/s   00:00
[root@ceph131 ceph]# scp ceph.conf root@compute163:/etc/ceph/
ceph.conf                         100%  514     2.3KB/s   00:00
[root@ceph131 ceph]# scp ceph.conf root@compute164:/etc/ceph/
ceph.conf                         100%  514     3.6KB/s   00:00
4 Glance integrates with Ceph
4.1 Configure glance-api.conf
Modify glance-api.conf on the nodes where the glance-api service runs, i.e. the three controller nodes. #vim /etc/glance/glance-api.conf
[DEFAULT]
# Enable copy-on-write
show_image_direct_url = True
[glance_store]
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
#stores = file,http
#default_store = file
#filesystem_store_datadir = /var/lib/glance/images/
Change the configuration file and restart the service
systemctl restart openstack-glance-api.service
# Upload cirros images
[root@controller160 ~]# glance image-create --name "rbd_cirros-05" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility=public
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2020-07-06T15:25:40Z |
| direct_url | rbd://76235629-6feb-4f0c-a106-4be33d485535/images/f6da37cd-449a-436c-b321-e0c1c0 |
| | 6761d8/snap |
| disk_format | qcow2 |
| id | f6da37cd-449a-436c-b321-e0c1c06761d8 |
| min_disk | 0 |
| min_ram | 0 |
| name | rbd_cirros-05 |
| os_hash_algo | sha512 |
| os_hash_value | 6513f21e44aa3da349f248188a44bc304a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e |
| | 2161b5b5186106570c17a9e58b64dd39390617cd5a350f78 |
| os_hidden | False |
| owner | d3dda47e8c354d86b17085f9e382948b |
| protected | False |
| size | 12716032 |
| status | active |
| tags | [] |
| updated_at | 2020-07-06T15:26:06Z |
| virtual_size | Not available |
| visibility | public |
+------------------+----------------------------------------------------------------------------------+
To be able to use the rbd command on a controller node, ceph-common must be installed on it
[root@controller161 ~]# rbd -p images --id glance -k /etc/ceph/ceph.client.glance.keyring ls
f6da37cd-449a-436c-b321-e0c1c06761d8
[root@ceph131 ceph]# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED        RAW USED     %RAW USED
    hdd       96 GiB     93 GiB     169 MiB     3.2 GiB      3.30
    TOTAL     96 GiB     93 GiB     169 MiB     3.2 GiB      3.30
POOLS:
    POOL                    ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data             1      0 B         0           0 B         0         29 GiB
    cephfs_metadata         2      8.9 KiB     22          1.5 MiB     0         29 GiB
    rbd_storage             3      33 MiB      20          100 MiB     0.11      29 GiB
    .rgw.root               4      1.2 KiB     4           768 KiB     0         29 GiB
    default.rgw.control     5      0 B         8           0 B         0         29 GiB
    default.rgw.meta        6      369 B       2           384 KiB     0         29 GiB
    default.rgw.log         7      0 B         207         0 B         0         29 GiB
    volumes                 10     0 B         0           0 B         0         29 GiB
    vms                     11     0 B         0           0 B         0         29 GiB
    images                  12     12 MiB      8           37 MiB      0.04      29 GiB
[root@ceph131 ceph]# rbd ls images
f6da37cd-449a-436c-b321-e0c1c06761d8
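Because show_image_direct_url is enabled, Glance stores each image as an RBD image in the images pool together with a snapshot named snap (visible in the direct_url above), which Cinder and Nova later clone copy-on-write; this can be inspected on the Ceph side:
rbd info images/f6da37cd-449a-436c-b321-e0c1c06761d8      # the format-2 RBD image holding the uploaded image
rbd snap ls images/f6da37cd-449a-436c-b321-e0c1c06761d8   # should show the 'snap' snapshot used for cloning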
# The ceph cluster now reports HEALTH_WARN because no application type has been set on the new pools; the type can be 'cephfs', 'rbd', 'rgw', etc.
[root@ceph131 ceph]# ceph -s
cluster:
id: 76235629-6feb-4f0c-a106-4be33d485535
health: HEALTH_WARN
application not enabled on 1 pool(s)
services:
mon: 3 daemons, quorum ceph131,ceph132,ceph133 (age 3d)
mgr: ceph131(active, since 4d), standbys: ceph132, ceph133
mds: cephfs_storage:1 {0=ceph132=up:active}
osd: 3 osds: 3 up (since 3d), 3 in (since 4d)
rgw: 1 daemon active (ceph131)
task status:
scrub status:
mds.ceph132: idle
data:
pools: 10 pools, 224 pgs
objects: 271 objects, 51 MiB
usage: 3.2 GiB used, 93 GiB / 96 GiB avail
pgs: 224 active+clean
io:
client: 4.1 KiB/s rd, 0 B/s wr, 4 op/s rd, 2 op/s wr
[root@ceph131 ceph]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool 'images'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Set the application type of the pools to rbd
[root@ceph131 ceph]# ceph osd pool application enable images rbd
enabled application 'rbd' on pool 'images'
[root@ceph131 ceph]# ceph osd pool application enable volumes rbd
enabled application 'rbd' on pool 'volumes'
[root@ceph131 ceph]# ceph osd pool application enable vms rbd
enabled application 'rbd' on pool 'vms'
# Verify as follows:
[root@ceph131 ceph]# ceph health detail
HEALTH_OK
[root@ceph131 ceph]# ceph osd pool application get images
{
    "rbd": {}
}
5 Cinder integrates with Ceph
5.1 Configure cinder.conf
Configure the Ceph RBD driver in cinder.conf on the nodes where cinder-volume runs, compute163 as an example. #vim /etc/cinder/cinder.conf
# The backend uses Ceph storage
[DEFAULT]
#enabled_backends = LVM #
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
# Replace the UUID
rbd_secret_uuid = cb26bb6c-2a84-45c2-8187-fa94b81dd53d
volume_backend_name = ceph
Change the configuration file and restart the service
[root@compute163 ceph]# systemctl restart openstack-cinder-volume.service
# validation
[root@controller160 ~]# openstack volume service list
+------------------+-----------------+------+---------+-------+----------------------------+
| Binary           | Host            | Zone | Status  | State | Updated At                 |
+------------------+-----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller160   | nova | enabled | up    | 2020-07-06T16:…            |
| cinder-scheduler | controller162   | nova | enabled | up    | 2020-07-06T16:…            |
| cinder-scheduler | controller161   | nova | enabled | up    | 2020-07-06T16:…            |
| cinder-volume    | compute163@LVM  | nova | enabled | down  | 2020-07-06T16:07:09.000000 |
| cinder-volume    | compute164@LVM  | nova | enabled | down  | 2020-07-06T16:07:04.000000 |
| cinder-volume    | compute164@ceph | nova | enabled | up    | 2020-07-06T16:…            |
| cinder-volume    | compute163@ceph | nova | enabled | up    | 2020-07-06T16:…            |
+------------------+-----------------+------+---------+-------+----------------------------+
5.2 Creating a Volume
Set the volume type: on a controller node, create a volume type for Cinder's Ceph back end. When multiple back-end storage devices are configured, the volume type is used to tell them apart. It can be listed with the cinder type-list command
[root@controller160 ~]# cinder type-create ceph
+--------------------------------------+------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| bc90d094-a76b-409f-affa-f8329d2b54d5 | ceph | - | True |
+--------------------------------------+------+-------------+-----------+
Bind the ceph volume type to the back end by setting its volume_backend_name key to ceph, matching volume_backend_name in cinder.conf
[root@controller160 ~]# cinder type-key ceph set volume_backend_name=ceph
[root@controller160 ~]# cinder extra-specs-list
+--------------------------------------+-------------+---------------------------------+
| ID | Name | extra_specs |
+--------------------------------------+-------------+---------------------------------+
| 0aacd847-535a-447e-914c-895289bf1a19 | __DEFAULT__ | {} |
| bc90d094-a76b-409f-affa-f8329d2b54d5 | ceph | {'volume_backend_name': 'ceph'} |
+--------------------------------------+-------------+---------------------------------+
Create a volume
[root@controller160 ~]# cinder create --volume-type ceph --name ceph-volume 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2020-07-06T16:… |
| description | None |
| encrypted | False |
| group_id | None |
| id | 63cb956c-2e4f-434e-a21a-9280530f737e |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | ceph-volume |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | d3dda47e8c354d86b17085f9e382948b |
| provider_id | None |
| replication_status | None |
| service_uuid | None |
| shared_targets | True |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | None |
| user_id | ec8c820dba1046f6a9d940201cf8cb06 |
| volume_type | ceph |
+--------------------------------+--------------------------------------+
# validation
[root@controller160 ~]# openstack volume list
+--------------------------------------+-------------+-----------+------+-------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+-------------+-----------+------+-------------+
| 63cb956c-2e4f-434e-a21a-9280530f737e | ceph-volume | available | 1 | |
| 9575c54a-d44e-46dd-9187-0c464c512c01 | test1 | available | 2 | |
+--------------------------------------+-------------+-----------+------+-------------+
[root@ceph131 ceph]# rbd ls volumes
volume-63cb956c-2e4f-434e-a21a-9280530f737e
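The same volume can be inspected directly in Ceph to confirm that it really lives in the volumes pool with the requested size:
rbd info volumes/volume-63cb956c-2e4f-434e-a21a-9280530f737e   # should report a 1 GiB, format-2 RBD image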
6 Nova integrates with Ceph
6.1 Configure ceph.conf
To boot VMs from Ceph RBD, Ceph must be configured as Nova's ephemeral back end. Enabling the RBD cache in the compute node configuration is recommended, as is setting the admin socket parameter so that every VM using Ceph RBD gets its own socket, which helps with performance analysis and troubleshooting. Only the [client] and [client.cinder] sections of /etc/ceph/ceph.conf on the compute nodes need to change; compute163 is used as the example
[root@compute163 ~]# vim /etc/ceph/ceph.conf
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
# Create the socket and log directories referenced in ceph.conf and change their owner
[root@compute163 ~]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@compute163 ~]# chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/
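Once a Ceph-backed VM is running on the node, the admin sockets configured above show up under /var/run/ceph/guests/ and can be queried for live client statistics; a sketch (the exact socket file name depends on the pid/cctid of the qemu process):
ls /var/run/ceph/guests/                                                                    # e.g. ceph-client.cinder.<pid>.<cctid>.asok
ceph --admin-daemon /var/run/ceph/guests/ceph-client.cinder.*.asok config get rbd_cache     # confirm the cache setting took effect
ceph --admin-daemon /var/run/ceph/guests/ceph-client.cinder.*.asok perf dump | head         # live librbd/librados counters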
6.2 Configure nova.conf
Configure Nova on all compute nodes to use the vms pool of the Ceph cluster as its back end
[root@compute01 ~]# vim /etc/nova/nova.conf
[DEFAULT]
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = cb26bb6c-2a84-45c2-8187-fa94b81dd53d   # must match the libvirt secret UUID
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
# Disable file injection
inject_password = false
inject_key = false
inject_partition = -2
# Discard (trim) support for the VM's ephemeral disks; with "unmap", space is released as soon as it is freed (effective for SCSI-attached disks)
hw_disk_discard = unmap
Change the configuration file and restart the computing service
[root@compute163 ~]# systemctl restart libvirtd.service openstack-nova-compute.service
[root@compute163 ~]# systemctl status libvirtd.service openstack-nova-compute.service
6.3 Configuring Live Migration
6.3.1 Modify /etc/libvirt/libvirtd.conf
# Operate on all compute nodes, compute163 as an example; the numbers below are the line numbers in libvirtd.conf that need to change
[root@compute163 ~]# egrep -vn "^$|^#" /etc/libvirt/libvirtd.conf
Uncomment the following three lines
22:listen_tls = 0
33:listen_tcp = 1
45:tcp_port = "16509"
Uncomment and set the listen address
55:listen_addr = "172.16.1.163"
# Uncomment and deauthenticate
158:auth_tcp = "none"
6.3.2 Modify /etc/sysconfig/libvirtd
# Operate on all compute nodes, compute163 as an example; the number is the line in /etc/sysconfig/libvirtd that changes
[root@compute163 ~]# egrep -vn "^$|^#" /etc/sysconfig/libvirtd
# Uncomment
9:LIBVIRTD_ARGS="--listen"
6.3.3 Configuring Passwordless SSH for the nova User Between Compute Nodes
Passwordless SSH for the nova user must be configured on all compute nodes.
[root@compute163 ~]# usermod -s /bin/bash nova
[root@compute163 ~]# passwd nova
Changing password for user nova.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
[root@compute163 ~]# su - nova
[nova@compute163 ~]$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /var/lib/nova/.ssh/id_rsa.
Your public key has been saved in /var/lib/nova/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:bnGCcG6eRvSG3Bb58eu+sXwEAnb72hUHmlNja2bQLBU nova@compute163
The key's randomart image is:
+---[RSA 3072]----+
| +E.             |
| o.. o B         |
| . o.oo.. B +    |
| * = ooo= * .    |
| B S oo.* o      |
| + = + .. o      |
| + o ooo         |
| . . .o.o.       |
| .*o             |
+----[SHA256]-----+
[nova@compute163 ~]$ ssh-copy-id nova@compute164
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/nova/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
nova@compute164's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'nova@compute164'"
[nova@compute163 ~]$ ssh nova@compute164
Activate the web console with: systemctl enable --now cockpit.socket
Last failed login: Wed Jul  8 00:24:02 CST 2020 from 172.16.1.163 on ssh:notty
There were 5 failed login attempts since the last successful login.
[nova@compute164 ~]$
6.3.4 Setting iptables (iptables is disabled in this environment, so this step is not required here)
# Live migration connects between nodes with virsh -c qemu+tcp://{node_ip or node_name}/system;
# during a migration, the instance being moved temporarily uses TCP ports 49152 to 49161 on the source and destination compute nodes;
# do not restart the iptables service, because rules for the running VMs are already loaded; insert the new rules live and persist them with iptables-save instead of editing the configuration file by hand.
# Operate on all compute nodes, compute163 as an example
[root@compute163 ~]# iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 16509 -j ACCEPT
[root@compute163 ~]# iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49161 -j ACCEPT
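If iptables were actually in use, the rules inserted above would also need to be persisted so they survive a reboot; a minimal sketch (assuming the iptables-services package manages the rules):
iptables-save > /etc/sysconfig/iptables   # standard persistent rules file on CentOS with iptables-services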
6.3.5 Restarting the Service
systemctl mask libvirtd.socket libvirtd-ro.socket \
libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
service libvirtd restart
systemctl restart openstack-nova-compute.service
# validation
[root@compute163 ~]# netstat -lantp | grep libvirtd
tcp   0   0 172.16.1.163:16509   0.0.0.0:*   LISTEN   582582/libvirtd
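The listener can also be tested end to end from the peer compute node, using exactly the connection URI that live migration will use:
virsh -c qemu+tcp://compute163/system list --all
# A domain listing (even an empty one) means libvirtd accepts unauthenticated TCP connections from this node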
6.4 Verifying the Integration
6.4.1 Creating a Bootable Volume Based on Ceph Storage
When Nova boots an instance from RBD, the image must be in raw format; otherwise glance-api and cinder report errors during VM startup.
# Format conversion: first convert the *.img file into a *.raw file
# cirros-0.4.0-x86_64-disk.img can be downloaded from the CirrOS website
[root@controller160 ~]# qemu-img convert -f qcow2 -O raw ~/cirros-0.4.0-x86_64-disk.img ~/cirros-0.4.0-x86_64-disk.raw
# Generate raw image
[root@controller160 ~]# openstack image create "cirros-raw" \
>   --file ~/cirros-0.4.0-x86_64-disk.raw \
>   --disk-format raw --container-format bare \
>   --public
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ba3cd24377dde5dfdd58728894004abb                     |
| container_format | bare                                                 |
| created_at       | 2020-07-07T02:13:06Z                                 |
| disk_format      | raw                                                  |
| file             | /v2/images/459f5ddd-c094-4b0f-86e5-f55baa33595c/file |
| id               | 459f5ddd-c094-4b0f-86e5-f55baa33595c                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros-raw                                           |
| owner            | d3dda47e8c354d86b17085f9e382948b                     |
| properties       | direct_url='rbd://76235629-6feb-4f0c-a106-4be33d485535/images/459f5ddd-c094-4b0f-86e5-f55baa33595c/snap', os_hash_algo='sha512', os_hash_value='b795f047a1b10ba0b7c95b43b2a481a59289dc4cf2e49845e60b194a911819d3ada03767bbba4143b44c93fd7f66c96c5a621e28dff51d1196dae64974ce240e', os_hidden='False', owner_specified.openstack.md5='ba3cd24377dde5dfdd58728894004abb', owner_specified.openstack.object='images/cirros-raw', owner_specified.openstack.sha256='87ddf8eea6504b5eb849e418a568c4985d3cea59b5a5d069e1dc644de676b4ec', self='/v2/images/459f5ddd-c094-4b0f-86e5-f55baa33595c' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 46137344                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2020-07-07T02:13:42Z                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
# Create a bootable volume from the new image
[root@controller160 ~]# cinder create --image-id 459f5ddd-c094-4b0f-86e5-f55baa33595c --volume-type ceph --name ceph-boot 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2020-07-07T02:… |
| description | None |
| encrypted | False |
| group_id | None |
| id | 46a45564-e148-4f85-911b-a4542bdbd4f0 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | ceph-boot |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | d3dda47e8c354d86b17085f9e382948b |
| provider_id | None |
| replication_status | None |
| service_uuid | None |
| shared_targets | True |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | None |
| user_id | ec8c820dba1046f6a9d940201cf8cb06 |
| volume_type | ceph |
+--------------------------------+--------------------------------------+
# Check the newly created bootable volume; creating it takes some time
[root@controller160 ~]# cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 62a21adb-0a22-4439-bf0b-121442790515 | available | ceph-boot | 1 | ceph | true | |
| 63cb956c-2e4f-434e-a21a-9280530f737e | available | ceph-volume | 1 | ceph | false | |
| 9575c54a-d44e-46dd-9187-0c464c512c01 | available | test1 | 2 | __DEFAULT__ | false | |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
Create an instance from a volume on the Ceph back end; "--boot-volume" specifies a volume that has the bootable attribute, so after startup the VM's disk runs on that Ceph volume.
# Create a flavor
[root@controller160 ~]# openstack flavor create --id 1 --vcpus 1 --ram 256 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 1       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 256     |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
# Security group rules
[root@controller160 ~]# openstack security group rule create --proto icmp default
[root@controller160 ~]# openstack security group rule create --proto tcp --dst-port 22 'default'
Create virtual network
openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider-eth1
# Create a subnet and modify it as required
openstack subnet create --network provider-eth1 \
  --allocation-pool start=172.16.2.220,end=172.16.2.229 \
  --dns-nameserver 114.114.114.114 --gateway 172.16.2.254 \
  --subnet-range 172.16.2.0/24 172.16.2.0/24
# Available flavors
openstack flavor list
# Available images
openstack image list
# Available security groups
openstack security group list
# Available networks
openstack network list
# Create an instance, which can also be created on the Web
[root@controller160 ~]# nova boot --flavor m1.nano \
--boot-volume d3770a82-068c-49ad-a9b7-ef863bb61a5b \
--nic net-id=53b98327-3a47-4316-be56-cba37e8f20f2 \
--security-group default \
ceph-boot02
[root@controller160 ~]# nova show c592ca1a-dbce-443a-9222-c7e47e245725
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | compute163                                               |
| OS-EXT-SRV-ATTR:hostname             | ceph-boot02                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute163                                               |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000025                                        |
| OS-EXT-SRV-ATTR:kernel_id            |                                                          |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                        |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                          |
| OS-EXT-SRV-ATTR:reservation_id       | r-ty5v9w8n                                               |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                                                 |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2020-07-07T10:55:39.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2020-07-07T10:53:30Z                                     |
| description                          | -                                                        |
| flavor:disk                          | 1                                                        |
| flavor:ephemeral                     | 0                                                        |
| flavor:extra_specs                   | {}                                                       |
| flavor:original_name                 | m1.nano                                                  |
| flavor:ram                           | 256                                                      |
| flavor:swap                          | 0                                                        |
| flavor:vcpus                         | 1                                                        |
| hostId                               | 308132ea4792b277acfae8d3c5d88439d3d5d6ba43d8b06395581d77 |
| host_status                          | UP                                                       |
| id                                   | c592ca1a-dbce-443a-9222-c7e47e245725                     |
| image                                | Attempt to boot from volume - no image supplied          |
| key_name                             | -                                                        |
| locked                               | False                                                    |
| locked_reason                        | -                                                        |
| metadata                             | {}                                                       |
| name                                 | ceph-boot02                                              |
| os-extended-volumes:volumes_attached | [{"id": "d3770a82-068c-49ad-a9b7-ef863bb61a5b", "delete_on_termination": false}] |
| progress                             | 0                                                        |
| provider-eth1 network                | 172.16.2.228                                             |
| security_groups                      | default                                                  |
| server_groups                        | []                                                       |
| status                               | ACTIVE                                                   |
| tags                                 | []                                                       |
| tenant_id                            | d3dda47e8c354d86b17085f9e382948b                         |
| trusted_image_certificates           | -                                                        |
| updated                              | 2020-07-07T10:55:40Z                                     |
| user_id                              | ec8c820dba1046f6a9d940201cf8cb06                         |
+--------------------------------------+----------------------------------------------------------+
6.4.2 Starting a VM from Ceph RBD
# --nic net-id=<id>: the network ID
# cirros-cephrbd-instance1: the instance name
[root@controller160 ~]# openstack server create --flavor m1.nano --image cirros-raw --nic net-id=53b98327-3a47-4316-be56-cba37e8f20f2 --security-group default cirros-cephrbd-instance1
+-------------------------------------+---------------------------------------------------+
| Field | Value |
+-------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | CRiNuZoK6ftt |
| config_drive | |
| created | 2020-07-07T15:13:08Z |
| flavor | m1.nano (1) |
| hostId | |
| id | 6ea79ec0-1ec6-47ff-b185-233c565b1fab |
| image | cirros-raw (459f5ddd-c094-4b0f-86e5-f55baa33595c) |
| key_name | None |
| name | cirros-cephrbd-instance1 |
| progress | 0 |
| project_id | d3dda47e8c354d86b17085f9e382948b |
| properties | |
| security_groups | name='eea8a6b4-2b6d-4f11-bfe8-12b56bafe36c' |
| status | BUILD |
| updated | 2020-07-07T15:13:10Z |
| user_id | ec8c820dba1046f6a9d940201cf8cb06 |
| volumes_attached | |
+-------------------------------------+---------------------------------------------------+
# Query the generated instance
[root@controller160 ~]# nova list
+--------------------------------------+--------------------------+--------+------------+-------------+----------------------------+
| ID                                   | Name                     | Status | Task State | Power State | Networks                   |
+--------------------------------------+--------------------------+--------+------------+-------------+----------------------------+
| c592ca1a-dbce-443a-9222-c7e47e245725 | ceph-boot02              | ACTIVE | -          | Running     | provider-eth1=172.16.2.228 |
| 6ea79ec0-1ec6-47ff-b185-233c565b1fab | cirros-cephrbd-instance1 | ACTIVE | -          | Running     | provider-eth1=172.16.2.226 |
+--------------------------------------+--------------------------+--------+------------+-------------+----------------------------+
6.4.3 Live-Migrating a VM Started from RBD
# Before migrating, confirm which node the instance is on: run "nova show 6ea79ec0-1ec6-47ff-b185-233c565b1fab" and check the hypervisor_hostname field, or list the instances hosted on a node:
nova hypervisor-servers compute163
[root@controller01 ~]# nova live-migration cirros-cephrbd-instance1 compute164
# Check the status during migration
[root@controller160 ~]# nova list
+--------------------------------------+--------------------------+-----------+------------+-------------+----------------------------+
| ID                                   | Name                     | Status    | Task State | Power State | Networks                   |
+--------------------------------------+--------------------------+-----------+------------+-------------+----------------------------+
| c592ca1a-dbce-443a-9222-c7e47e245725 | ceph-boot02              | ACTIVE    | -          | Running     | provider-eth1=172.16.2.228 |
| 6ea79ec0-1ec6-47ff-b185-233c565b1fab | cirros-cephrbd-instance1 | MIGRATING | migrating  | Running     | provider-eth1=172.16.2.226 |
+--------------------------------------+--------------------------+-----------+------------+-------------+----------------------------+
# After the migration is complete, check which node the instance is on;
# "nova show 6ea79ec0-1ec6-47ff-b185-233c565b1fab" should now report a different hypervisor_hostname, or compare:
[root@controller01 ~]# nova hypervisor-servers compute163
[root@controller01 ~]# nova hypervisor-servers compute164
X. Problems encountered in the process
eg1. 2020-07-04 00:39:56.394 671959 ERROR glance.common.wsgi rados.ObjectNotFound: [errno 2] error calling conf_read_file
eg2. 2020-07-04 01:01:27.736 1882718 ERROR glance_store._drivers.rbd [req-fd768a6d-e7e2-476b-b1d3-d405d7a560f2 ec8c820dba1046f6a9d940201cf8cb06 d3dda47e8c354d86b17085f9e382948b - default default] Error connecting to ceph cluster.: rados.ObjectNotFound: [errno 2] error connecting to the cluster
Both typically mean that /etc/ceph/ceph.conf (or the client keyring) is missing or unreadable on the glance-api node; make sure they were pushed as described above.
eg3. libvirtd[580770]: --listen parameter not permitted with systemd activation sockets, see 'man libvirtd' for further guidance
libvirt uses systemd socket activation by default. To return to the traditional mode, all the sockets must be masked:
systemctl mask libvirtd.socket libvirtd-ro.socket \
  libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
Then restart with: service libvirtd restart
eg4. AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.cinder.596406.94105140863224.asok': (13) Permission denied
The socket is created by the qemu process, but /var/run/ceph is owned by ceph:ceph with mode 770. Change the permissions of /var/run/ceph (and /var/log/qemu) so qemu can write to them, for example chmod 777, or chown them as in section 6.1.
eg5. Error on AMQP connection <0.9284.0> (172.16.1.162:55008 -> 172.16.1.162:5672, state: starting): AMQPLAIN login refused: user 'guest' can only connect via localhost
Reason: starting from RabbitMQ 3.x the guest user may not log in remotely. Solution: add the following line to /etc/rabbitmq/rabbitmq.config and restart RabbitMQ:
[{rabbit, [{loopback_users, []}]}].
eg6. failed to allocate network(s): nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Add the following to /etc/nova/nova.conf on the compute node and restart nova-compute:
vif_plugging_is_fatal = False
vif_plugging_timeout = 0
eg7. Resize error: Not able to execute SSH command: Unexpected error while running command
OpenStack migration/resize works over SSH, so passwordless SSH must be configured between the compute nodes; see 6.3.3.