3.3 Architecture design for running the Libvirt VM
Building on the preceding analysis of the Nomad task driver plug-in mechanism and its driver interface, this section designs and implements a Libvirt virtual machine driver for the Nomad-based edge computing platform, here named nomad-driver-libvirt.
There are two reasons for choosing Libvirt to implement virtual machine operations. First, the edge computing platform in this paper is expected to gradually support multiple virtualization platforms to facilitate the migration of legacy systems. As shown in Figure 3.3, in the NFV reference framework published by ETSI, the Virtualized Infrastructure Manager (VIM) layer manages the hypervisor, also known as the Virtual Machine Monitor (VMM). The drivers and APIs provided by each virtualization technology differ, so different management tools must be invoked for different hypervisors, which makes control and management more difficult. Second, virtualization technology develops rapidly: new virtualization technologies may be better suited to particular application scenarios, and in the future the platform may need to migrate to different hypervisors and extend its functions. This paper therefore adopts a layered design and uses Libvirt to build an abstraction layer between the VIM layer and the Virtual Network Functions Manager (VNFM) layer, providing a unified API for the upper layer to call while the lower layer encapsulates the differences between hypervisors. This simplifies virtual machine management. The Network Functions Virtualization Infrastructure (NFVI), together with the VIM layer, constitutes the IaaS layer. NFVI is the set of resources used to host and connect virtual functions, including servers, hypervisors, operating systems, virtual machines, virtual switches, and network resources. Nomad is the core component of the edge computing platform, acting as the Network Functions Virtualization Orchestrator (NFVO), which performs resource orchestration and network service orchestration, among other functions.
Figure 3.3 ETSI NFV reference framework
The VIM instantiation model design for Nomad is shown in Figure 3.4.
Figure 3.4 VIM instantiation model based on Nomad
Using the plug-in mechanism introduced in Nomad Client 0.9, the compiled nomad-driver-libvirt binary is placed in the Nomad plug-in loading directory. When the Nomad Client starts, it launches the nomad-driver-libvirt process. As shown in Figure 3.5, the driver is a user-space management tool that uses the libvirt Go language bindings to communicate with the libvirtd daemon and manage VMs.
Figure 3.5 Schematic diagram of support for the Libvirt virtual machine
Nomad-driver-libvirt is designed and implemented to manage virtual machines on the edge computing platform. It is intended to facilitate the management of virtual machine tasks and enable the migration of some legacy systems. Unlike existing cloud computing platforms such as oVirt, with their rich and powerful management functions, replicating that breadth would be putting the cart before the horse. The most important requirement is managing multiple VMs on a single node: the driver needs to implement the start, exec, stop, inspect, wait, destroy, and recover operations on virtual machines, as well as functions such as monitoring data collection. In theory it can support Xen, KVM, VMware, VirtualBox, Hyper-V, and other hypervisors; currently it is mainly developed and verified against KVM. KVM is a kernel module that virtualizes the CPU of a virtual machine, while the VM's I/O devices are emulated by the user-space program QEMU, which is itself a complete open source full virtualization solution.
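The per-task operations listed above can be sketched as a small Go interface. This is an illustrative stand-in, not the real Nomad plugin API: the interface, `TaskConfig` fields, and the in-memory `fakeDriver` are all assumptions for the sketch, where the real driver would instead call libvirtd through the libvirt Go bindings.

```go
package main

import "fmt"

// TaskState is a simplified domain state ("running", "shutoff", ...).
type TaskState string

// TaskConfig holds the minimal parameters needed to define a domain.
type TaskConfig struct {
	Name     string // libvirt domain name
	VCPUs    int
	MemoryMB int
	Image    string // path to a QCOW2 image
}

// VMDriver names a subset of the operations the driver must provide.
type VMDriver interface {
	StartTask(cfg TaskConfig) (id string, err error)
	StopTask(id string) error
	InspectTask(id string) (TaskState, error)
	DestroyTask(id string) error
}

// fakeDriver records domains in a map, standing in for libvirtd.
type fakeDriver struct{ domains map[string]TaskState }

func NewFakeDriver() *fakeDriver {
	return &fakeDriver{domains: map[string]TaskState{}}
}

func (d *fakeDriver) StartTask(cfg TaskConfig) (string, error) {
	d.domains[cfg.Name] = "running"
	return cfg.Name, nil
}

func (d *fakeDriver) StopTask(id string) error {
	d.domains[id] = "shutoff"
	return nil
}

func (d *fakeDriver) InspectTask(id string) (TaskState, error) {
	s, ok := d.domains[id]
	if !ok {
		return "", fmt.Errorf("unknown task %q", id)
	}
	return s, nil
}

func (d *fakeDriver) DestroyTask(id string) error {
	delete(d.domains, id)
	return nil
}

func main() {
	var drv VMDriver = NewFakeDriver()
	id, _ := drv.StartTask(TaskConfig{Name: "vm1", VCPUs: 2, MemoryMB: 2048, Image: "/var/lib/images/vm1.qcow2"})
	s, _ := drv.InspectTask(id)
	fmt.Println(id, s) // vm1 running
}
```

The point of the interface split is that the same task lifecycle can later be backed by different hypervisors without changing the callers.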
However, the Nomad task driver interface was designed as an abstraction oriented mainly toward container workloads. Although some non-container workloads were taken into account, parts of the design remain biased toward containers. For example, the commands run inside a VM differ depending on the VM type and guest operating system, and users of an IaaS platform generally prefer to log in to VMs directly. Likewise, lightweight container technology relies on cgroups and namespaces, whereas for traditional operating system virtualization the boundary of resource isolation and limitation of a VM instance is not cgroups: the required resource sizes are specified when the VM is created. In the Nomad driver plug-in mechanism, the taskHandle callback is used to recover a running Libvirt virtual machine after a driver crash, and the taskHandleVersion field allows the task configuration layout to be changed and migrated.
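The recovery path just described amounts to persisting enough state to reattach to a still-running domain. A minimal sketch, assuming a JSON-serialized handle (the struct fields and helper names here are illustrative, not Nomad's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// taskHandleVersion lets a newer driver detect an older serialized
// layout and migrate it, mirroring the taskHandleVersion field in the text.
const taskHandleVersion = 1

// TaskHandleState is the state persisted so the driver can reattach
// to a running libvirt domain after a crash.
type TaskHandleState struct {
	Version    int    `json:"version"`
	DomainName string `json:"domain_name"`
	PID        int    `json:"pid"` // QEMU process, if tracked
}

// EncodeHandle serializes the handle state for storage by the client.
func EncodeHandle(domain string, pid int) ([]byte, error) {
	return json.Marshal(TaskHandleState{
		Version:    taskHandleVersion,
		DomainName: domain,
		PID:        pid,
	})
}

// RecoverHandle restores the handle state after a driver restart.
func RecoverHandle(data []byte) (TaskHandleState, error) {
	var s TaskHandleState
	if err := json.Unmarshal(data, &s); err != nil {
		return s, err
	}
	if s.Version != taskHandleVersion {
		// An older layout would be migrated here instead of rejected.
		return s, fmt.Errorf("unsupported handle version %d", s.Version)
	}
	return s, nil
}

func main() {
	b, _ := EncodeHandle("vm1", 4321)
	s, err := RecoverHandle(b)
	fmt.Println(s.DomainName, err == nil)
}
```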
The initial design and implementation of the Libvirt virtual machine driver for the edge computing platform first supports QEMU/KVM virtual machines and covers compute, storage, and network virtualization. Given the many types and complex configurations of CPU, disk, and network hardware models and operating modes, the driver currently supports common CPUs; raw disks, partitions, and logical volumes as storage devices; NAT, DHCP, and bridged networks; and QCOW2 as the VM image format. Based on this framework, later extension is straightforward.
The following are three considerations for computing, storage, and network when running the Libvirt virtual machine.
On the compute side, CPU virtualization allows a single physical CPU to emulate multiple CPUs in parallel, allowing a single platform to run multiple operating systems simultaneously. Libvirt supports three CPU modes:
(1) host-passthrough: Libvirt instructs KVM to pass the host CPU through to the VM unmodified. The virtual machine can therefore use the full host CPU instruction set, so performance is best. However, live migration requires that the CPU of the destination node be identical to that of the source node.
(2) host-model: Libvirt selects the CPU model from the /usr/share/libvirt/cpu_map.xml configuration file that best matches the current host CPU instruction set. In this mode the VM sees fewer instructions than the host, so performance is somewhat worse than host-passthrough; in return, live migration can succeed even when the destination node's CPU differs from the source node's.
(3) custom: The VM in this mode uses the smallest CPU instruction set and has the lowest performance, but it has the best compatibility across different CPU models during live migration. In addition, the custom mode allows users to add extra instruction sets.
In terms of performance, host-passthrough is better than host-model, which is better than custom; in terms of live migration compatibility, the order is reversed: custom is better than host-model, which is better than host-passthrough.
The Libvirt VM driver defaults to host-model mode, and also supports host-passthrough for better performance. In practice, Intel Xeon E5 series CPUs are used, and this series spans a variety of models, such as Haswell, IvyBridge, and SandyBridge. Even with host-model, live-migrating VMs between these different CPU models may fail. Therefore, from the perspective of live migration, when selecting host-model, fully consider the CPU types of the existing hosts, and take the same issue into account when purchasing and expanding capacity. Do not select host-passthrough unless there is no live migration scenario.
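The three modes above map to the `<cpu>` element of the libvirt domain XML. The helper below is a hypothetical sketch that emits that element for each mode; the element names follow the libvirt domain XML format, but `cpuXML` itself is not part of any real API.

```go
package main

import "fmt"

// cpuXML emits the <cpu> element of a libvirt domain definition for
// the three modes described in the text. Illustrative helper only.
func cpuXML(mode, model string) (string, error) {
	switch mode {
	case "host-passthrough", "host-model":
		// These modes carry no explicit model name.
		return fmt.Sprintf("<cpu mode='%s'/>", mode), nil
	case "custom":
		// custom mode names an explicit model (e.g. SandyBridge),
		// trading performance for migration compatibility.
		return fmt.Sprintf("<cpu mode='custom'><model>%s</model></cpu>", model), nil
	}
	return "", fmt.Errorf("unknown cpu mode %q", mode)
}

func main() {
	x, _ := cpuXML("custom", "SandyBridge")
	fmt.Println(x) // <cpu mode='custom'><model>SandyBridge</model></cpu>
}
```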
On the storage side, virtualization of physical memory allows each application to see itself as having a contiguous address space. In practice, an application's code and data may be split across multiple discontiguous pages or segments in memory, or even swapped out to external storage such as disk or flash memory; even when physical memory is insufficient, the application can still execute. Libvirt likewise supports multiple types of storage devices.
The Libvirt VM driver currently stores VM images in QCOW2 format. Compared with container images, built VM images have little support for metadata injection. For VM image storage, a corresponding VM image repository can also be built.
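A QCOW2 image is identified by a fixed header: the first four bytes are the magic "QFI\xfb" and the next four are a big-endian version number (2 or 3). A driver could use a check like the following to validate an image before defining a domain; the helper name and the idea of validating at this point are assumptions of this sketch.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// isQCOW2 inspects the first 8 bytes of an image file and reports
// whether they form a valid QCOW2 header, returning the version.
func isQCOW2(header []byte) (version uint32, ok bool) {
	if len(header) < 8 {
		return 0, false
	}
	// Magic bytes: 'Q' 'F' 'I' 0xfb.
	if string(header[:4]) != "QFI\xfb" {
		return 0, false
	}
	// Version is a big-endian uint32 at offset 4; 2 and 3 are valid.
	v := binary.BigEndian.Uint32(header[4:8])
	return v, v == 2 || v == 3
}

func main() {
	hdr := []byte{'Q', 'F', 'I', 0xfb, 0, 0, 0, 3}
	v, ok := isQCOW2(hdr)
	fmt.Println(v, ok) // 3 true
}
```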
Libvirt's network types and modes are also complex. The Libvirt user-mode network (the default network mode) is implemented entirely by QEMU, independent of other tools: QEMU uses SLIRP to implement a full TCP/IP stack and builds a virtual NAT network on top of it. This method is simple and self-contained and does not require root privileges, but it performs poorly, does not support some network features such as ICMP, and does not allow the guest to be reached from the host or the external network. In addition, Libvirt's macvtap supports four modes: VEPA, bridge, private, and passthrough. Based on the platform's requirements, only NAT and bridge modes are supported when running the Libvirt VM.
The NAT mode is suitable for desktop host virtualization. The NAT network topology is shown in Figure 3.6. Network address translation (NAT) mode allows the host and VMs to reach each other and allows VMs to access the Internet, but does not allow external hosts to access the VMs.
Figure 3.6 Libvirt NAT network topology
The bridge mode is suitable for server virtualization; Figure 3.7 shows the bridged network topology. Bridge mode connects the VM through a virtual bridge and is more complex to set up than the NAT network, but it makes the VM appear as an independent host with its own IP address on the network, able to communicate with the other machines on its subnet.
Figure 3.7 Libvirt Bridge network topology
In a bridged network, the guest and the host share a physical network device to reach the network. Bridges are used for advanced setups, especially when the host has multiple network interfaces.
Both the NAT network and the bridged network can be created on the host, but a given guest uses one or the other.
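The two supported modes correspond to different `<interface>` elements in the libvirt domain XML: NAT attaches the guest to a libvirt virtual network (such as the "default" NAT network), while bridge mode attaches it to a host bridge such as br0. The helper below is a hypothetical sketch producing these elements; the element shapes follow the libvirt domain XML format, but `interfaceXML`, "default", and "br0" are illustrative.

```go
package main

import "fmt"

// interfaceXML emits the <interface> element of a libvirt domain
// definition for the two network modes supported by the driver.
func interfaceXML(mode, name string) (string, error) {
	switch mode {
	case "nat":
		// NAT guests join a libvirt virtual network by name.
		return fmt.Sprintf("<interface type='network'><source network='%s'/></interface>", name), nil
	case "bridge":
		// Bridged guests attach directly to a host bridge device.
		return fmt.Sprintf("<interface type='bridge'><source bridge='%s'/></interface>", name), nil
	}
	return "", fmt.Errorf("unsupported network mode %q", mode)
}

func main() {
	n, _ := interfaceXML("nat", "default")
	b, _ := interfaceXML("bridge", "br0")
	fmt.Println(n)
	fmt.Println(b)
}
```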
For logging, Libvirt provides logging facilities, but with many limitations. The guest contains a complete operating system: the logs of applications running inside it are stored inside the VM and are destroyed together with the VM image, and they are not easy to migrate, so third-party logging tools are required. These tools also run inside the virtual machine and add overhead. For consistency, the Libvirt VM driver uses Nomad's logging utility classes and logging style.