This series of articles will teach you how to build an OpenStack environment from scratch across multiple OpenStack releases. The release used in the current tutorial is version 20, Train (version T for short). Release notes: Train, originally released 16 October 2019; Ussuri, originally released 13 May 2020; Victoria, originally released 14 October 2020.

Juejin community


OpenStack Ussuri Offline Deployment
OpenStack Train Offline Deployment

• OpenStack Train Offline Deployment | 0 Local Offline yum Repository
• OpenStack Train Offline Deployment | 1 Controller Node - Environment Preparation
• OpenStack Train Offline Deployment | 2 Compute Node - Environment Preparation
• OpenStack Train Offline Deployment | 3 Controller Node - Keystone Authentication Service Component
• OpenStack Train Offline Deployment | 4 Controller Node - Glance Image Service Component
• OpenStack Train Offline Deployment | 5 Controller Node - Placement Service Component
• OpenStack Train Offline Deployment | 6.1 Controller Node - Nova Compute Service Component
• OpenStack Train Offline Deployment | 6.2 Compute Node - Nova Compute Service Component
• OpenStack Train Offline Deployment | 6.3 Controller Node - Nova Compute Service Component
• OpenStack Train Offline Deployment | 7.1 Controller Node - Neutron Network Service Component
• OpenStack Train Offline Deployment | 7.2 Compute Node - Neutron Network Service Component
• OpenStack Train Offline Deployment | 7.3 Controller Node - Neutron Network Service Component
• OpenStack Train Offline Deployment | 8 Controller Node - Horizon Service Component
• OpenStack Train Offline Deployment | 9 Start an OpenStack Instance
• OpenStack Train Offline Deployment | 10 Controller Node - Heat Service Component
• OpenStack Train Offline Deployment | 11.1 Controller Node - Cinder Storage Service Component
• OpenStack Train Offline Deployment | 11.2 Storage Node - Cinder Storage Service Component
• OpenStack Train Offline Deployment | 11.3 Controller Node - Cinder Storage Service Component
• OpenStack Train Offline Deployment | 11.4 Compute Node - Cinder Storage Service Component
• OpenStack Train Offline Deployment | 11.5 Instance Use - Cinder Storage Service Component


Juejin community: Customizing OpenStack Images
• Customizing OpenStack Images | Environment Preparation
• Customizing OpenStack Images | Windows 7
• Customizing OpenStack Images | Windows 10
• Customizing OpenStack Images | Linux
• Customizing OpenStack Images | Windows Server 2019


CSDN

• OpenStack Ussuri Offline Installation and Deployment Series (full)
• OpenStack Train Offline Installation and Deployment Series (full)

Looking forward to making progress together with you.


OpenStack Train Offline Deployment | 11.4 Compute Node - Cinder Storage Service Component

On each compute node, configure the Compute service (Nova) to use the Block Storage (Cinder) service.

Official references:
• docs.openstack.org/install-gui…
• docs.openstack.org/train/insta…
• docs.openstack.org/cinder/trai…
• docs.openstack.org/cinder/trai…
• docs.openstack.org/cinder/trai…

Blogs:
• yinwucheng.com/?p=491
• www.cnblogs.com/tssc/p/9877…

Compute node: configure the Compute service to use Block Storage

[root@compute1 ~]# vim /etc/nova/nova.conf

[cinder]
os_region_name = RegionOne

Or use the quick configuration:

# For quick configuration
yum install openstack-utils -y

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
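After either method, restart the Compute service so that the new [cinder] setting takes effect. A minimal sketch, assuming the standard openstack-nova-compute systemd unit used on CentOS/RHEL installs:

# Restart nova-compute so it picks up the [cinder] os_region_name setting
systemctl restart openstack-nova-compute.service

# Confirm the service came back up cleanly
systemctl status openstack-nova-compute.service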

Configure an LVM filter so that the compute node's LVM tools scan only the host's own devices and not the LVM metadata inside instance block storage volumes; otherwise volume scanning can cause system errors.

[root@compute1 ~]# vgdisplay
  --- Volume group ---
  VG Name               centos
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <63.00 GiB
  PE Size               4.00 MiB
  Total PE              16127
  Alloc PE / Size       16126 / 62.99 GiB
  Free  PE / Size       1 / 4.00 MiB
  VG UUID               Ws4wQS-K0VZ-vp9q-TB9Y-sLP9-OKyb-SZbTd8
[root@compute1 ~]#
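Before writing the LVM filter, it helps to confirm which physical device actually backs the centos volume group, since that device must remain accepted by the filter. A quick check with standard LVM and util-linux tools (the /dev/sda device name below is the example used in this tutorial; adjust to your own layout):

# Show which physical volume (e.g. a partition on /dev/sda) backs each volume group
pvs

# Show the overall disk / partition / LVM layout
lsblk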
  • Note:
  • If the operating system disk /dev/sda of the storage node also uses an LVM volume group, you need to add that device to the filter as well; edit the configuration file /etc/lvm/lvm.conf as follows:
devices {
  ......
  filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
  ......
}
  • Likewise, if the operating system disk /dev/sda of the compute node uses an LVM volume group, add that device to the filter by editing the configuration file /etc/lvm/lvm.conf as follows:
devices {
  ......
  filter = [ "a/sda/", "r/.*/" ]
  ......
}
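After saving the filter, it is worth a quick sanity check that the host's own volume group is still visible (i.e. /dev/sda is still accepted) and that no unexpected devices are scanned. A minimal check using standard LVM commands:

# The centos volume group should still be listed once the filter is in place
vgs

# Only accepted devices (here /dev/sda and its partitions) should appear as physical volumes
pvs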