3.3.1 Implementation of key technologies for running the Libvirt VM

3.3.1.1 Parsing and loading driver configurations

Compile the nomad-driver-libvirt driver plug-in project and place the resulting binary in the Nomad Client's plugin directory. As shown in Figure 3.8, the libvirt plug-in configuration is added to and parsed from the Nomad Client configuration. The libvirt public API delegates its implementation to one or more internal drivers, depending on the connection URI passed when the library is initialized, such as qemu:///system. If the libvirt daemon is available and a hypervisor is active, a network driver and a storage driver will usually be active as well.

Figure 3.8 Nomad-driver-libvirt driver configuration
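As a concrete illustration, a driver plug-in's configuration block can be declared through Nomad's hclspec package. The sketch below assumes a single connection-URI option named uri; the option name and default are illustrative rather than the driver's actual option set.

```go
package libvirt

import (
	"github.com/hashicorp/nomad/plugins/shared/hclspec"
)

// configSpec describes the plug-in block accepted in the Nomad Client
// configuration. The option name "uri" is illustrative; the real driver
// may expose different or additional options.
var configSpec = hclspec.NewObject(map[string]*hclspec.Spec{
	// Libvirt connection URI; defaults to the local system hypervisor.
	"uri": hclspec.NewDefault(
		hclspec.NewAttr("uri", "string", false),
		hclspec.NewLiteral(`"qemu:///system"`),
	),
})

// Config is the Go structure the HCL plug-in block is decoded into.
type Config struct {
	URI string `codec:"uri"`
}
```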

As shown in Figure 3.9, the Nomad Client log shows that the driver plug-in has been loaded and that the Fingerprint health check reports a healthy state.

Figure 3.9 Nomad-driver-libvirt driver loading and Fingerprint
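The health status in the log comes from the Fingerprint method of Nomad's task driver plug-in interface. The following is a simplified sketch of such an implementation, with a placeholder Driver type; the real driver also verifies that the libvirt daemon is reachable and reports additional attributes such as the libvirt version.

```go
package libvirt

import (
	"context"
	"time"

	"github.com/hashicorp/nomad/plugins/drivers"
	pstructs "github.com/hashicorp/nomad/plugins/shared/structs"
)

// Driver is a placeholder for the plug-in's driver type.
type Driver struct{}

// Fingerprint periodically reports driver health to the Nomad Client.
// This sketch always reports healthy; the real driver would first check
// the libvirt connection before doing so.
func (d *Driver) Fingerprint(ctx context.Context) (<-chan *drivers.Fingerprint, error) {
	ch := make(chan *drivers.Fingerprint)
	go func() {
		defer close(ch)
		ticker := time.NewTicker(30 * time.Second)
		defer ticker.Stop()
		for {
			ch <- &drivers.Fingerprint{
				Health:            drivers.HealthStateHealthy,
				HealthDescription: "healthy",
				Attributes: map[string]*pstructs.Attribute{
					"driver.libvirt": pstructs.NewBoolAttribute(true),
				},
			}
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
			}
		}
	}()
	return ch, nil
}
```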

3.3.1.2 VM Life Cycle Management

For VM workloads, the job contains a VM task group. Managing the life cycle of the tasks in this task group requires implementing the Nomad task driver interface, which in turn encapsulates the life cycle of the underlying virtual machine.

Libvirt controls the entire life cycle of a domain, i.e., a virtual machine. A domain can transition between several states throughout its life cycle (a sketch showing how these states map onto the Go libvirt bindings follows the list):

(1) Undefined is the initial state: the domain has not yet been defined or created.

(2) Defined is the state of a domain that has been defined but is not running. This state is also described as stopped.

(3) Running is the state of a domain that has been defined and is executing on a hypervisor.

(4) Paused is the state of a domain whose execution has been suspended from the running state. Its memory image is retained, and the domain can be resumed to the running state.

(5) Saved is the state of a domain whose memory image has been persisted to disk; the domain can later be restored from this image.

Figure 3.10 State diagram of the domain lifecycle
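As an illustration of how these states can be recovered through the Go libvirt bindings (libvirt.org/go/libvirt), the helper below groups libvirt's DomainState values into the five states listed above. The helper and its grouping are a sketch, not the driver's actual code.

```go
package vm

import (
	libvirt "libvirt.org/go/libvirt"
)

// LifecycleState mirrors the five states described above.
type LifecycleState string

const (
	Undefined LifecycleState = "undefined"
	Defined   LifecycleState = "defined" // stopped
	Running   LifecycleState = "running"
	Paused    LifecycleState = "paused"
	Saved     LifecycleState = "saved"
)

// lifecycleState derives the coarse-grained state of an existing domain.
// A domain that cannot be looked up at all is considered Undefined.
func lifecycleState(dom *libvirt.Domain) (LifecycleState, error) {
	// A managed save image means the domain was saved and can be restored.
	if saved, err := dom.HasManagedSaveImage(0); err == nil && saved {
		return Saved, nil
	}
	state, _, err := dom.GetState()
	if err != nil {
		return Undefined, err
	}
	switch state {
	case libvirt.DOMAIN_RUNNING, libvirt.DOMAIN_BLOCKED:
		return Running, nil
	case libvirt.DOMAIN_PAUSED:
		return Paused, nil
	default:
		// Defined but not running (shut off, crashed, etc.).
		return Defined, nil
	}
}
```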

The full domain life cycle is more fine-grained than the driver requires. In the wrapper layer built on top of Libvirt, this paper collapses some of these states and provides interface abstractions such as creating a virtual machine from a configuration file, stopping a virtual machine, deleting a virtual machine, querying virtual machine status, and listening for events and collecting data, as shown in Figure 3.11.

Figure 3.11 Virtual machine management abstract interface
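The following Go interface is one way to express such an abstraction. The interface name, method signatures, and the Info and Event types are illustrative rather than the driver's actual definitions.

```go
package vm

import (
	"context"
	"time"
)

// Info captures the status and basic resource statistics of a VM.
// LifecycleState is the type introduced in the previous sketch.
type Info struct {
	State     LifecycleState
	CPUTime   time.Duration
	MemoryKiB uint64
}

// Event represents a life-cycle event emitted by a running domain.
type Event struct {
	DomainName string
	Type       string
	Timestamp  time.Time
}

// Manager is an illustrative abstraction over the libvirt domain API,
// reduced to the operations the driver needs.
type Manager interface {
	// Create defines and starts a VM from a domain configuration.
	Create(ctx context.Context, domainXML string) error
	// Stop gracefully shuts a VM down.
	Stop(ctx context.Context, name string) error
	// Delete removes a VM definition and its resources.
	Delete(ctx context.Context, name string) error
	// Status reports the current state and statistics of a VM.
	Status(ctx context.Context, name string) (*Info, error)
	// Events streams life-cycle events for monitoring and data collection.
	Events(ctx context.Context) (<-chan Event, error)
}
```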

3.3.1.3 Configuring VM Tasks

The VM task configuration is defined and parsed by the Libvirt VM runtime. It is written in HCL and nested in the task section of the standard Nomad job definition. The fields are listed in Table 3.1; for example, task.driver refers to the driver field inside the task section. A Go structure corresponding to these fields is sketched after the table.

Table 3.1 VM task configuration

Field	Description
task.driver	The driver to use; a string, here libvirt
task.config.name	The task (VM) name; a string, for example, centostask
task.config.memory.value	VM memory size; an integer, for example, 1
task.config.memory.unit	Unit of the memory size; a string, one of GiB, MiB, KiB, or b
task.config.vcpu	Number of virtual CPUs for the VM; an integer, for example, 1
task.config.disks	Disk storage configuration; a list
task.config.disks[0].device	Disk device type; a string, one of disk, lun, floppy, or cdrom
task.config.disks[0].type	Disk type; a string; when device is disk, the value is file or block
task.config.disks[0].source	Path of the virtual machine image; a string, for example, /var/lib/libvirt/images/centos.qcow2
task.config.disks[0].target_bus	Target bus; a string, one of virtio, sata, scsi, or fdc
task.config.machine	Virtual hardware machine type; a string, for example, pc-i440fx-2.12
task.config.interfaces	Network interface configuration, including device passthrough; a list
task.config.interfaces[0].name	Name of the network interface inside the VM; a string, for example, eth0
task.config.interfaces[0].model	Interface device model; a string, for example, virtio, e1000, or rtl8139
task.config.interfaces[0].interface_binding_method	Interface binding method; a string, one of virtio, slirp, sriov, bridge, masquerade, or network
task.config.interfaces[0].pci_address	PCI address of the NIC to pass through; a string, for example, 0000:00:1C.3
task.config.interfaces[0].mac_address	MAC address of the interface; a string, for example, 52:54:00:a5:ef:66
task.config.interfaces[0].source_name	Name of the network source; a string, for example, default for the NAT network or br0 for the bridge network
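Inside a Nomad driver, this HCL block is typically decoded into a Go structure whose codec tags match the field names above. The sketch below follows Table 3.1; the actual structure in nomad-driver-libvirt may differ.

```go
package libvirt

// TaskConfig mirrors the task.config block in Table 3.1. The field and
// tag names follow the table; the real driver's struct may differ.
type TaskConfig struct {
	Name       string            `codec:"name"`
	Memory     MemoryConfig      `codec:"memory"`
	VCPU       int               `codec:"vcpu"`
	Machine    string            `codec:"machine"`
	Disks      []DiskConfig      `codec:"disks"`
	Interfaces []InterfaceConfig `codec:"interfaces"`
}

// MemoryConfig is the memory size plus its unit (b, KiB, MiB, GiB).
type MemoryConfig struct {
	Value int    `codec:"value"`
	Unit  string `codec:"unit"`
}

// DiskConfig describes one disk attached to the VM.
type DiskConfig struct {
	Device    string `codec:"device"`     // disk, lun, floppy, cdrom
	Type      string `codec:"type"`       // file or block when device is disk
	Source    string `codec:"source"`     // image path, e.g. a qcow2 file
	TargetBus string `codec:"target_bus"` // virtio, sata, scsi, fdc
}

// InterfaceConfig describes one network interface, including passthrough.
type InterfaceConfig struct {
	Name                   string `codec:"name"`
	Model                  string `codec:"model"`
	InterfaceBindingMethod string `codec:"interface_binding_method"`
	SourceName             string `codec:"source_name"`
	PCIAddress             string `codec:"pci_address"`
	MACAddress             string `codec:"mac_address"`
}
```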

After the job is submitted to the Nomad Server, it is scheduled onto a Nomad Client that runs the Libvirt VM runtime, which executes the job. The runtime parses the HCL task configuration, converts it into a libvirt domain XML definition, and sends it to the libvirt daemon for execution.
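A minimal sketch of this conversion step, using the libvirtxml helper package (libvirt.org/go/libvirtxml) together with the Go libvirt bindings, is shown below. Only the name, memory, and vCPU fields are mapped, and the function name is illustrative; disks and interfaces are handled the same way in practice.

```go
package libvirt

import (
	libvirt "libvirt.org/go/libvirt"
	libvirtxml "libvirt.org/go/libvirtxml"
)

// defineAndStart converts a parsed task configuration (TaskConfig from the
// earlier sketch) into libvirt domain XML and submits it to the daemon.
func defineAndStart(conn *libvirt.Connect, cfg *TaskConfig) error {
	domcfg := &libvirtxml.Domain{
		Type: "kvm",
		Name: cfg.Name,
		Memory: &libvirtxml.DomainMemory{
			Value: uint(cfg.Memory.Value),
			Unit:  cfg.Memory.Unit,
		},
		VCPU: &libvirtxml.DomainVCPU{Value: uint(cfg.VCPU)},
	}
	xml, err := domcfg.Marshal()
	if err != nil {
		return err
	}
	// Define the persistent domain, then start it.
	dom, err := conn.DomainDefineXML(xml)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create()
}
```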

3.3.1.4 Managing the Network When VMs Are Running

The Libvirt VM runtime supports both NAT and bridge network modes.

After KVM is installed, the default network operates in NAT mode. The host has a virbr0 virtual interface (the default virtual network in NAT mode) and a virbr0-nic virtual NIC; virbr0 acts as a virtual switch and bridge that forwards traffic to the VMs. By default, virbr0 handles all traffic for the 192.168.122.0/24 network. virbr0 is not bound directly to a physical network adapter; packets pass through virbr0 for network address translation and are then forwarded out through the physical network device. In NAT mode, a DHCP server must run on the host to assign IP addresses to the internal machines; the dnsmasq tool is used for this purpose.

In the NAT network implementation, as shown in Figure 3.12, there is no direct connection between the virtual interface and the physical interface. The VM can therefore access the outside world only through the virtual network, while hosts on the external network cannot locate or access the VM directly.

Figure 3.12 Nomad-driver-libvirt NAT network model

The following uses VM creation as an example to describe the network creation process (a configuration sketch follows the list):

1. Modify kernel parameters to allow IP packet forwarding.

2. Define the NAT virtual network, including the IP address segment.

3. Start the NAT network and modify the iptables rules. The host system performs SNAT.

4. Provide DNS/DHCP: the host is responsible for address management by starting a dnsmasq instance.

5. The host creates iptables rules and performs SNAT.

6. Create a VM, specify the virtio network device model, and select the NAT network.

7. The VM starts and uses DHCP to request a dynamic IP address configuration from dnsmasq.
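Steps 2 to 4 can also be expressed through the libvirt API instead of manual commands. The sketch below defines and starts a NAT network; the network name, bridge name, and address range are illustrative, and libvirt installs the iptables SNAT rules and launches dnsmasq itself when the network starts.

```go
package libvirt

import (
	libvirt "libvirt.org/go/libvirt"
	libvirtxml "libvirt.org/go/libvirtxml"
)

// ensureNATNetwork defines and starts a NAT virtual network. The name,
// bridge, and address range below are illustrative.
func ensureNATNetwork(conn *libvirt.Connect) error {
	netcfg := &libvirtxml.Network{
		Name:    "nat-example",
		Forward: &libvirtxml.NetworkForward{Mode: "nat"},
		Bridge:  &libvirtxml.NetworkBridge{Name: "virbr10", STP: "on"},
		IPs: []libvirtxml.NetworkIP{{
			Address: "192.168.130.1",
			Netmask: "255.255.255.0",
			DHCP: &libvirtxml.NetworkDHCP{
				Ranges: []libvirtxml.NetworkDHCPRange{{
					Start: "192.168.130.2",
					End:   "192.168.130.254",
				}},
			},
		}},
	}
	xml, err := netcfg.Marshal()
	if err != nil {
		return err
	}
	net, err := conn.NetworkDefineXML(xml)
	if err != nil {
		return err
	}
	defer net.Free()
	// Start the network now and have it start automatically with libvirtd.
	if err := net.Create(); err != nil {
		return err
	}
	return net.SetAutostart(true)
}
```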

 

In the bridge network implementation, as shown in Figure 3.13, a bridge interface br0 is created, bound to the physical network adapter eth0, and assigned an externally reachable address. br0 then acts as the switch device, and each virtual machine's network device is bound to a port on the virtual bridge, so that data passes directly between the physical network adapter and the virtual network interfaces.

Figure 3.13 Nomad-driver-libvirt bridge network model

The following again uses VM creation as an example. The network creation process is similar to the preceding NAT mode, with some differences (an interface configuration sketch follows the list):

1. Edit the network device script file to create the bridge device br0.

2. Enable the spanning tree protocol for the network bridge.

3. Configure br0 to obtain its IP address either dynamically or statically.

4. Change the service startup sequence to enable the bridging virtual NIC to start before the host NIC.

5. Modify the eth0 NIC configuration and disable the NetworkManager service, which does not support bridging.

6. Restart the network service.

7. Create a VM and dynamically configure IP addresses using DHCP.
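For the VM side of the bridge setup, the driver can emit an interface definition that attaches the guest NIC to the host bridge. The sketch below uses libvirtxml; the bridge name br0 and the virtio model follow the examples in this section, and the helper itself is illustrative.

```go
package libvirt

import (
	libvirtxml "libvirt.org/go/libvirtxml"
)

// bridgeInterface builds the <interface type="bridge"> element for a VM
// attached to an existing host bridge such as br0.
func bridgeInterface(bridge, mac string) libvirtxml.DomainInterface {
	return libvirtxml.DomainInterface{
		MAC:   &libvirtxml.DomainInterfaceMAC{Address: mac},
		Model: &libvirtxml.DomainInterfaceModel{Type: "virtio"},
		Source: &libvirtxml.DomainInterfaceSource{
			Bridge: &libvirtxml.DomainInterfaceSourceBridge{Bridge: bridge},
		},
	}
}
```

An interface built this way is appended to the domain's device list before the domain XML is marshalled and submitted to the libvirt daemon.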

 

To dynamically obtain the IP address of a Libvirt VM, the qemu-guest-agent is installed in the VM image, and the QEMU Guest Agent is used to communicate with the guest. Running the guest-network-get-interfaces command returns the guest's IP addresses, MAC addresses, and subnet masks. Once the VM's IP address is known, users can log in to the VM over SSH.
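Through the Go libvirt bindings, the same information can be obtained with the interface-address query backed by the guest agent, which uses the same mechanism as guest-network-get-interfaces. The sketch below is illustrative and requires qemu-guest-agent to be running in the guest.

```go
package libvirt

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// guestAddresses queries the QEMU Guest Agent for the interfaces inside
// the VM and prints their names, MAC addresses, and IP addresses with
// prefix lengths.
func guestAddresses(dom *libvirt.Domain) error {
	ifaces, err := dom.ListAllInterfaceAddresses(
		libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_AGENT)
	if err != nil {
		return err
	}
	for _, iface := range ifaces {
		for _, addr := range iface.Addrs {
			fmt.Printf("%s %s %s/%d\n",
				iface.Name, iface.Hwaddr, addr.Addr, addr.Prefix)
		}
	}
	return nil
}
```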