This series of articles walks you through building an OpenStack development environment from scratch, across multiple OpenStack releases. The release used in this tutorial is the 20th release, Train (T for short). Release notes: Train, originally released 16 October 2019; Ussuri, originally released 13 May 2020; Victoria, originally released 14 October 2020.

Nuggets (Juejin) community


Series on the Nuggets community: OpenStack Train Offline Deployment (this series); OpenStack Ussuri Offline Deployment

  • OpenStack Train Offline Deployment | 0 Local offline yum repository
  • OpenStack Train Offline Deployment | 1 Controller node – Environment preparation
  • OpenStack Train Offline Deployment | 2 Compute node – Environment preparation
  • OpenStack Train Offline Deployment | 3 Controller node – Keystone authentication service component
  • OpenStack Train Offline Deployment | 4 Controller node – Glance image service component
  • OpenStack Train Offline Deployment | 5 Controller node – Placement service component
  • OpenStack Train Offline Deployment | 6.1 Controller node – Nova compute service component
  • OpenStack Train Offline Deployment | 6.2 Compute node – Nova compute service component
  • OpenStack Train Offline Deployment | 6.3 Controller node – Nova compute service component
  • OpenStack Train Offline Deployment | 7.1 Controller node – Neutron network service component
  • OpenStack Train Offline Deployment | 7.2 Compute node – Neutron network service component
  • OpenStack Train Offline Deployment | 7.3 Controller node – Neutron network service component
  • OpenStack Train Offline Deployment | 8 Controller node – Horizon service component
  • OpenStack Train Offline Deployment | 9 Start an OpenStack instance
  • OpenStack Train Offline Deployment | 10 Controller node – Heat service component
  • OpenStack Train Offline Deployment | 11.1 Controller node – Cinder storage service component
  • OpenStack Train Offline Deployment | 11.2 Storage node – Cinder storage service component
  • OpenStack Train Offline Deployment | 11.3 Controller node – Cinder storage service component
  • OpenStack Train Offline Deployment | 11.4 Compute node – Cinder storage service component
  • OpenStack Train Offline Deployment | 11.5 Instance use of the Cinder storage service component


Also on the Nuggets community, the Customizing OpenStack Images series:
  • Customizing OpenStack images | Environment preparation
  • Customizing OpenStack images | Windows 7
  • Customizing OpenStack images | Windows 10
  • Customizing OpenStack images | Linux
  • Customizing OpenStack images | Windows Server 2019


CSDN

CSDN: OpenStack Ussuri Offline Installation and Deployment series (complete); OpenStack Train Offline Installation and Deployment series (complete). Looking forward to making progress together with you.


OpenStack Train Offline deployment | 1 Controller node – Environment preparation

I. Deployment planning

Official references: OpenStack Installation Guide: Environment; OpenStack Installation Guide: Service Components. CSDN blog reference: CentOS 7 Installing OpenStack (Rocky release) - 01. Prepare the system environment for the controller node

1. System description

(1) Server OS: CentOS 7; image: CentOS-7-x86_64-Minimal-1908

(2) Development environment

VMware Workstation Pro

(3) OpenStack: Train

2. Deployment planning

(1) Node description

Controller node: controller, hosting the control-plane service components

Compute node: compute1, hosting the compute service components

Optional nodes: a block storage (Block) node and others, hosting additional service components as needed

(2) Physical network. Before deployment, plan the network layout; for details, see Environment - Networking in the official OpenStack installation guide. ① Choose an IP address range for the management network; using the example range from the official documents is recommended. ② Physical NIC 1: the first NIC, ens33, is used as the management NIC by default. ③ Physical NIC 2: used for the provider network. Prepare at least two NICs; three or more are recommended.

Management network: 10.0.0.0/24; provider network: 192.168.2.0/24 (which NIC and which subnet to use can be decided according to your actual environment).

(3) Virtual network

  • Provider network: 192.168.2.0/24 on ens34, adjusted to your environment; see the official guide: launch-instance - networks - provider

  • Private network 1: 172.16.0.0/24, the instances' private LAN; see the official guide: launch-instance - networks - selfservice

(4) Planning overview

Host name   Management network   NIC 1   Provider network   NIC 2   Configuration
controller 10.0.0.11 ens33 192.168.2.11 ens34 4C8G64G
compute1 10.0.0.31 ens33 192.168.2.31 ens34 4C8G64G
compute2 10.0.0.32 ens33 192.168.2.32 ens34 4C8G64G
compute3 10.0.0.33 ens33 192.168.2.33 ens34 4C8G64G

Note: more nodes of each type can be added in sequence, but it is best to plan the addressing for every network type in advance. (In the configuration column, 4C8G64G means 4 vCPUs, 8 GB RAM, and a 64 GB disk.)

The addresses actually used when testing this tutorial are listed below; subsequent articles use these addresses.

Host name   Management network   NIC 1   Provider network   NIC 2   Configuration
controller 192.168.232.101 ens33 192.168.2.101 ens34 4C8G64G
compute1 192.168.232.111 ens33 192.168.2.111 ens34 4C8G64G
compute2 192.168.232.112 ens33 192.168.2.112 ens34 4C8G64G
compute3 192.168.232.113 ens33 192.168.2.113 ens34 4C8G64G

II. Environment preparation

1. Basic network

(1) NIC configuration. Blog references: CentOS 7 setting a static IP address; CentOS 7 setting a static IP address with Internet access. Configuration file: /etc/sysconfig/network-scripts/ifcfg-ens33

[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
# BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
# IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ea1c227b-1fad-48f0-942f-968d183b3523"
DEVICE="ens33"
ONBOOT="yes"

# add follow
BOOTPROTO="static"
IPADDR="192.168.232.101"
NETMASK="255.255.255.0"
GATEWAY="192.168.232.2"
DNS1="1.1.1.1"

(2) Gateway configuration. Configuration file: /etc/networks

[root@controller ~]# cat /etc/networks
default 0.0.0.0
loopback 127.0.0.0
link-local 169.254.0.0

# add follow
NETWORKING=yes
GATEWAY=192.168.232.2

2. Host name

(1) File modification

Configuration file: /etc/hosts. Controller node:

hostnamectl set-hostname controller
exec bash
[root@controller ~]# hostnamectl set-hostname controller
echo 192.168.232.101 controller >> /etc/hosts
echo 192.168.232.111 compute1 >> /etc/hosts
echo 192.168.232.112 compute2 >> /etc/hosts
echo 192.168.232.113 compute3 >> /etc/hosts

Compute Node 1

hostnamectl set-hostname compute1
exec bash
[root@compute1 ~]# hostnamectl set-hostname compute1
echo 192.168.232.101 controller >> /etc/hosts
echo 192.168.232.111 compute1 >> /etc/hosts
echo 192.168.232.112 compute2 >> /etc/hosts
echo 192.168.232.113 compute3 >> /etc/hosts

Compute Node 2

hostnamectl set-hostname compute2
exec bash
[root@compute2 ~]# hostnamectl set-hostname compute2
echo 192.168.232.101 controller >> /etc/hosts
echo 192.168.232.111 compute1 >> /etc/hosts
echo 192.168.232.112 compute2 >> /etc/hosts
echo 192.168.232.113 compute3 >> /etc/hosts

Compute Node 3

hostnamectl set-hostname compute3
exec bash
[root@compute3 ~]# hostnamectl set-hostname compute3
echo 192.168.232.101 controller >> /etc/hosts
echo 192.168.232.111 compute1 >> /etc/hosts
echo 192.168.232.112 compute2 >> /etc/hosts
echo 192.168.232.113 compute3 >> /etc/hosts

(Optional) Storage node: for details, see the official environment-networking-storage-cinder. Notes: ① Some distributions add an unnecessary entry to /etc/hosts that resolves the actual host name to another loopback address such as 127.0.1.1; if present, comment it out or remove it to prevent name-resolution problems. ② Do not delete the 127.0.0.1 entry. ③ Each node must be able to resolve the host names of the other nodes. After changing the host name, exit the terminal and log in to the server again.
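For reference, after the echo commands above the controller's /etc/hosts should look roughly like this (a sketch; the commented 127.0.1.1 line only appears on some distributions and is the kind of entry to comment out):

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# 127.0.1.1   controller        <- comment out or remove an entry like this if present
192.168.232.101 controller
192.168.232.111 compute1
192.168.232.112 compute2
192.168.232.113 compute3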

(2) Verify connectivity

Run the connectivity test on each node; the last test, ping qq.com, checks public Internet access. The management network does not necessarily need to reach the public Internet.

ping -c 4 controller
ping -c 4 compute1
ping -c 4 compute2
ping -c 4 compute3
ping -c 4 qq.com

3. Passwordless login

[optional]

  • Before this, set the root password first
  • Set PermitRootLogin to yes in the SSH server configuration to allow root remote login, then restart the SSH service
  • At a minimum, configure passwordless login between the controller node and the compute nodes (to configure it between all controller and compute nodes, perform the following operations on each node)

On the controller node (repeat the same steps on each compute node):

[root@controller ~]# ssh-copy-id controller   # copy the public key to the target machine
[root@controller ~]# ssh controller           # test login, then exit the test terminal
[root@controller ~]# ssh-copy-id compute1     # copy the public key to compute1 for passwordless login
[root@controller ~]# ssh compute1             # test login, then exit the test terminal
[root@controller ~]# ssh-copy-id compute2     # copy the public key to compute2 for passwordless login
[root@controller ~]# ssh compute2             # test login, then exit the test terminal
...
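The prerequisites listed above (root password, PermitRootLogin, a key pair) are not shown in these commands; a minimal sketch, assuming the default /etc/ssh/sshd_config path:

passwd root                                                        # set the root password
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa                           # generate a key pair if none exists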

4. Disable the firewall

(1) Close iptables

On CentOS 7 the firewall service is firewalld; stop it and disable it from starting at boot:

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service

(2) Disable SELinux

setenforce 0
getenforce
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
grep SELINUX=disabled /etc/sysconfig/selinux

5. Yum source configuration

(1) Create a repo file

Move the system's built-in repo files aside to avoid conflicts with the new configuration file:

cd
mkdir ori_repo-config
mv /etc/yum.repos.d/* ./ori_repo-config/
touch /etc/yum.repos.d/CentOS-PrivateLocal.repo
vim /etc/yum.repos.d/CentOS-PrivateLocal.repo

File content: /etc/yum.repos.d/CentOS-PrivateLocal.repo

[base]
name=CentOS-$releasever - Base
baseurl=http://192.168.2.111/yumrepository/base/
gpgcheck=0
enabled=1

[updates]
name=CentOS-$releasever - Updates
baseurl=http://192.168.2.111/yumrepository/updates/
gpgcheck=0
enabled=1

[extras]
name=CentOS-$releasever - Extras
baseurl=http://192.168.2.111/yumrepository/extras/
gpgcheck=0
enabled=1

[centos-openstack-train]
name=CentOS-7 - OpenStack train
baseurl=http://192.168.2.111/yumrepository/centos-openstack-train/
gpgcheck=0
enabled=1

[centos-qemu-ev]
name=CentOS-$releasever - QEMU EV
baseurl=http://192.168.2.111/yumrepository/centos-qemu-ev/
gpgcheck=0
enabled=1

[centos-ceph-nautilus]
name=CentOS-7 - Ceph Nautilus
baseurl=http://192.168.2.111/yumrepository/centos-ceph-nautilus/
gpgcheck=0
enabled=1

[centos-nfs-ganesha28]
name=CentOS-7 - NFS Ganesha 2.8
baseurl=http://192.168.2.111/yumrepository/centos-nfs-ganesha28/
gpgcheck=0
enabled=1

(2) Update the software source

[root@controller ~]# yum clean all
[root@controller ~]# yum makecache
[root@controller ~]# yum repolist
Failed to set locale, defaulting to C
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
repo id                      repo name                        status
base                         CentOS-7 - Base                  10097
centos-ceph-nautilus         CentOS-7 - Ceph Nautilus           224
centos-nfs-ganesha28         CentOS-7 - NFS Ganesha 2.8         140
centos-openstack-train       CentOS-7 - OpenStack train        2323
centos-qemu-ev               CentOS-7 - QEMU EV                  87
extras                       CentOS-7 - Extras                  341
updates                      CentOS-7 - Updates                1787
repolist: 14999
[root@controller ~]#

III. Basic system software installation

1. Linux tools

yum install -y lsof vim net-tools wget git 

2. NTP time synchronization

Official reference: OpenStack documentation: Environment - NTP. Cnblogs (Blog Garden) reference: CentOS 7 Installing OpenStack (Rocky release) - 01. Prepare the system environment for the controller node

yum  -y install chrony
vim /etc/chrony.conf

(1) Edit the chrony.conf file and add the following content.

# add follow
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
allow 192.168.232.2/24

Note: ① Replace NTP_SERVER with the host name or IP address of a suitable, more accurate (lower-stratum) NTP server. ② The configuration supports multiple server keys. By default the controller node synchronizes time through a public server pool, but you can also configure alternative servers, such as those provided by your organization. ③ To allow other nodes to connect to the chrony daemon on the controller node, add an allow key to the same chrony.conf file as above, e.g. allow 10.0.0.0/24, replacing 10.0.0.0/24 with your subnet if necessary.

(2) Restart the NTP service and configure startup upon startup:

systemctl restart chronyd.service
systemctl status chronyd.service
systemctl enable chronyd.service
systemctl list-unit-files |grep chronyd.service

(3) Set the time zone and synchronize the time

timedatectl set-timezone Asia/Shanghai
chronyc sources
timedatectl status
[root@controller ~]# timedatectl set-timezone Asia/Shanghai
[root@controller ~]# chronyc sources
210 Number of sources = 6
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 139.199.214.202               2   9   377   424  +2409us[+2409us] +/-   33ms
^- 4-53-160-75.rev.nuso.clo>     2   9   173    57    -19ms[  -19ms] +/-  165ms
^- hydra.spiderspace.co.uk       2   9    15   799  +2178us[+2178us] +/-  139ms
^- srcf-ntp.stanford.edu         2   9   137     3    +15ms[  +15ms] +/-  118ms
^* 120.25.115.20                 2   9   377   779  +1182us[+1340us] +/-   10ms
^- 203.107.6.88                  2  10   377   403  -5063us[-5063us] +/-   37ms
[root@controller ~]# timedatectl status
      Local time: Wed 2020-04-22 14:33:50 CST
  Universal time: Wed 2020-04-22 06:33:50 UTC
        RTC time: Wed 2020-04-22 06:26:21
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a
[root@controller ~]#

IV. Install basic OpenStack software

1. (Omitted) OpenStack repository installation

[All nodes] The relevant yum repositories were already configured during the local repository setup, so this step can be omitted. Official OpenStack document: environment-packages-rdo

  • On CentOS, the extras repository provides the RPM that enables the OpenStack repository.
  • CentOS includes the extras repository by default, so you can simply install the package to enable the OpenStack repository.
yum clean all
yum makecache
yum repolist
yum update -y                                  # optional
yum install centos-release-openstack-train -y
yum clean all
yum makecache

2.OpenStack client software

yum install python-openstackclient openstack-selinux -y
yum install openstack-utils -y 

openstack-utils is used to quickly edit OpenStack configuration files.
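For instance, the openstack-config helper it provides can set and read values in INI-style OpenStack configuration files without opening an editor; a minimal sketch (the file path and option below are only illustrative):

# set a value: <file> <section> <key> <value>
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.232.101
# read it back
openstack-config --get /etc/nova/nova.conf DEFAULT my_ip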

3. SQL Database

(1) Install the mariadb software package

[root@controller ~]# yum install mariadb mariadb-server MySQL-python python2-PyMySQL -y

(2) Create the openstack database configuration file /etc/my.cnf.d/mariadb_openstack.cnf

[root@controller ~]# vim /etc/my.cnf.d/mariadb_openstack.cnf

Add the following configuration to [mysqld]

[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
init-connect = 'SET NAMES utf8'

Configuration description:

# Default storage engine
default-storage-engine = innodb
# Use file-per-table tablespaces: each table gets its own tablespace and index file
# instead of everything sharing one system tablespace, which is hard to shrink or
# repair if damaged (e.g. a large Zabbix database is hard to optimize without it)
innodb_file_per_table
# Default character set and collation (UTF-8), also applied to each new connection
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

(3) Start the database, initialize the database and set the boot

systemctl restart mariadb.service
systemctl status mariadb.service 

systemctl enable mariadb.service 
systemctl list-unit-files |grep mariadb.service

Set the password of user root of the database. The default password is blank

/usr/bin/mysql_secure_installation
# Press Enter at the first prompt (the current root password is blank), set the root password (root is used here), then answer Y (Enter) to the remaining prompts until the script finishes.
systemctl restart mariadb.service
systemctl status mariadb.service

Note: in a production environment you can generate database passwords with a tool such as pwgen, or with openssl as below:

openssl rand -hex 10

(4) Test the database; the databases for each service are created separately later, when needed

mysql -proot
-----------------------------------
flush privileges;
show databases;
select user,host from mysql.user;
exit
-----------------------------------
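As an illustration of what that later, per-service creation looks like (keystone is used here purely as an example, and KEYSTONE_DBPASS is a placeholder password):

mysql -proot
-----------------------------------
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
flush privileges;
exit
-----------------------------------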

4. Message queue

Message Queue (MQ) is a method of application-to-application communication. Applications communicate by writing messages (application-specific data) to a queue and reading them from it, without needing a dedicated connection between them. Messaging means programs communicate by sending data in messages rather than calling each other directly, a pattern commonly used for techniques such as remote procedure calls. Queuing means applications communicate through queues, which removes the requirement that the sending and receiving applications run at the same time. RabbitMQ is a complete, reusable enterprise messaging system based on AMQP, released under the Mozilla Public License.

(1) Install the RabbitMQ server

yum install rabbitmq-server -y

(2) Start RabbitMQ and enable it at boot (ports 5672 and 15672 are useful to know for troubleshooting)

systemctl start rabbitmq-server.service
systemctl status rabbitmq-server.service

systemctl enable rabbitmq-server.service
systemctl list-unit-files |grep rabbitmq-server.service

(3) Create an openstack account in the message queue: add the openstack user with a password and grant it configure, write, and read permissions.

rabbitmqctl add_user openstack RABBIT_PASS

Replace RABBIT_PASS with a suitably secure password.

#Error may be reported. After the host name is changed, you need to exit the current terminal and log in again.
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"

(4) Enable the rabbitmq_management plugin for web-based management

#View supported plug-ins
rabbitmq-plugins list
#To enable the Web management plug-in, restart the service for it to take effect
rabbitmq-plugins enable rabbitmq_management
systemctl restart rabbitmq-server.service
systemctl status rabbitmq-server.service
rabbitmq-plugins list

lsof -i:15672

(5) Test access to RabbitMQ from a browser at http://192.168.232.101:15672. The default username and password are both guest; the web UI can manage users, permissions, and more. If the page cannot be reached, check the firewall state on the controller node. Open the Admin tab, click the openstack user, set its password and permissions, and update the user; then log out of guest and log back in as openstack with that username and password. At this point, RabbitMQ is configured.
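If you prefer to verify from the command line instead of the browser, the same user setup and a quick API check can be sketched as below (the administrator tag mirrors what the web UI steps grant; adjust the password to whatever you set):

# give the openstack user management access (equivalent to tagging it in the web UI)
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users
rabbitmqctl list_permissions
# query the management HTTP API with the configured credentials
curl -u openstack:openstack http://192.168.232.101:15672/api/overview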

5. Memcached service

The Identity (authentication) service uses Memcached to cache tokens. The Memcached cache service runs on the controller node. For production deployments, it is recommended to enable a combination of firewall rules, authentication, and encryption to secure it. (1) Install Memcached to cache tokens

yum install memcached python-memcached -y

(2) Modify the memcached configuration file

# If IPv6 is not enabled, delete the ::1 address from the binding
vim /etc/sysconfig/memcached
----------------------------------
OPTIONS="-l 127.0.0.1,controller"
----------------------------------
# Memcached parameters:
# -m  amount of memory allocated to memcached, in MB
# -u  user that runs memcached
# -l  IP address(es) the memcached server listens on (comma-separated for multiple addresses)
# -p  TCP port memcached listens on (default 11211)
# -c  maximum number of concurrent connections (default 1024)
# -P  pid file in which the memcached pid is saved
# -vv start in very verbose mode, printing debug information and errors to the console

(3) Start memcached and set it to automatically start upon startup

systemctl start memcached.service
systemctl status memcached.service
netstat -anptl|grep memcached

systemctl enable memcached.service
systemctl list-unit-files |grep memcached.service

At this point, memcached is configured
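A quick functional check is to query the stats of the running daemon; a sketch, assuming nc (from the nmap-ncat package) is installed:

echo stats | nc -w 1 127.0.0.1 11211 | head -n 5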

6. Service discovery and Etcd registration

The etcd service is a new addition, used for automated configuration (a distributed, reliable key-value store). (1) Install the etcd service

yum install etcd -y

(2) Modify the etcd configuration file

# Note: use the IP address here; it cannot be replaced with the host name controller (it may fail to resolve)
vim /etc/etcd/etcd.conf
-----------------------------------
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.232.101:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.232.101:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.232.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.232.101:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.232.101:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
------------------------------------

(3) Start the ETCD and set it to start automatically upon startup

systemctl start etcd.service
systemctl status etcd.service
netstat -anptl|grep etcd

systemctl enable etcd.service
systemctl list-unit-files |grep etcd.service
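A quick sanity check of the client endpoint can be sketched as follows (adjust the IP to your controller's management address):

curl http://192.168.232.101:2379/health
etcdctl --endpoints=http://192.168.232.101:2379 member list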

V. Controller node environment ready

At this point, the basic environment on the controller node is configured and the OpenStack components can be installed. The VMware VMs can now be shut down to take snapshots.