What is MPI
MPI is a communication protocol for programming parallel computers. Both point-to-point and collective communication are supported. MPI “is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation.”
In short, Wikipedia describes MPI as an industry-standard interprocess communication protocol, defined as a message-passing interface that supports point-to-point and collective communication and is currently used primarily in high-performance computing.
1. Version overview
The first version, MPI-1, was released in 1994, and concrete implementations appeared within a year. It was followed by MPI-2, a superset of MPI-1, and the latest version is MPI-3. For the specific differences between versions, please refer to Wikipedia: en.wikipedia.org/wiki/Messag… .
2. Communication model
As mentioned above, the MPI protocol supports point-to-point and collective communication, both of which are forms of interprocess communication. When a process has a CPU core to itself for computation, its computing power is maximized. Why is that? Here is a brief look at the CPU structure of a computer.
As we all know, the hardware that does the computing is the central processing unit (CPU), and two factors determine its computing power. The first is quality, meaning the clock frequency of the CPU, that is, how fast it calculates; modern computers usually express this in GHz, and the higher the figure, the stronger the computing power. The second is quantity, meaning the number of CPU cores: the more cores there are, the more work can be computed in parallel. Therefore, if multiple processes run on the same CPU core, the computing power is in theory divided among them, and the scheduling between them can even waste resources.
An important concept in MPI is the communicator, which defines a group of processes that can send messages to one another. Within this group, each process is assigned a number, called a rank, and processes address each other explicitly by rank when they communicate.
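To make the communicator and rank concepts concrete, here is a minimal Fortran sketch (the program name and printed text are illustrative, not from the original post; it assumes an MPI library that provides the standard mpi module):

program rank_demo
    use mpi
    implicit none
    integer :: ierr, rank, nprocs
    call MPI_Init(ierr)
    ! Each process asks the default communicator MPI_COMM_WORLD for its own rank
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    ! ... and for the total number of processes in the communicator
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    print *, 'I am rank', rank, 'of', nprocs
    call MPI_Finalize(ierr)
end program rank_demo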
2.1 Point-to-point model
In the point-to-point model, a process sends a message to another process by specifying the receiver's rank and a message label (tag). The receiver can choose to accept only messages with a specific tag, or to accept all messages regardless of tag, and then process the received messages in order. Common interfaces are MPI_Send and MPI_Recv.
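As a hedged sketch of this model, the following Fortran fragment has rank 0 send one integer to rank 1 with a tag, and rank 1 receive it; the tag value and message content are illustrative, and at least two processes are assumed:

program p2p_demo
    use mpi
    implicit none
    integer :: ierr, rank, msg
    integer :: status(MPI_STATUS_SIZE)
    integer, parameter :: tag = 99    ! illustrative tag value
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    if (rank == 0) then
        msg = 42
        ! Rank 0 sends one integer to rank 1, labelled with the tag
        call MPI_Send(msg, 1, MPI_INTEGER, 1, tag, MPI_COMM_WORLD, ierr)
    else if (rank == 1) then
        ! Rank 1 receives the message carrying the matching tag from rank 0
        ! (MPI_ANY_TAG could be used instead to accept any tag)
        call MPI_Recv(msg, 1, MPI_INTEGER, 0, tag, MPI_COMM_WORLD, status, ierr)
        print *, 'rank 1 received', msg
    end if
    call MPI_Finalize(ierr)
end program p2p_demo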
2.2 Collective communication model
In most cases, a process needs to communicate with several other processes, for example when the main process wants to broadcast a message to all processes. Building this out of point-to-point transfers would be very repetitive and could lead to extremely low network utilization. MPI addresses this need by providing interfaces for communication within a whole group of processes. Common interfaces are MPI_Bcast and MPI_Reduce.
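Here is a hedged sketch of the collective style, in which rank 0 broadcasts a value to every process and the partial results are then summed back onto rank 0 (the numbers and program name are illustrative):

program coll_demo
    use mpi
    implicit none
    integer :: ierr, rank, n, local, total
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    if (rank == 0) n = 10
    ! One call distributes n from rank 0 to every process in the communicator
    call MPI_Bcast(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    local = n + rank    ! each rank computes some local partial result
    ! One call sums the partial results from all ranks onto rank 0
    call MPI_Reduce(local, total, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ierr)
    if (rank == 0) print *, 'sum =', total
    call MPI_Finalize(ierr)
end program coll_demo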
3. MPI implementations
There are many MPI implementations available, such as Open MPI, MPICH, and MVAPICH. For personal, non-commercial use, Open MPI is an excellent choice: it is a free, open-source implementation of MPI, compatible with the MPI-1 and MPI-2 standards, developed and maintained by the open-source community, and it supports most types of HPC platforms with high performance. The official website is www.open-mpi.org/.
3.1 Installation Method
On a Linux operating system (CentOS 7 is used here), installation can be done from the command line.
# yum install openmpi-devel
# module load mpi
# mpifort --version
4.8.5 20150623 (Red Hat 4.8.5-36)
Copyright (C) 2015 Free Software Foundation, Inc.
If version information is displayed after you run the mpifort --version command, the installation was successful.
However, CentOS 7 does not provide the module command by default, so module load mpi may fail because the module command cannot be found. Installing environment-modules solves this problem. The commands are as follows:
yum -y install environment-modules
source /usr/share/Modules/init/bash
module avail
3.2 Basic usage
Open MPI provides a set of compiler wrappers covering C/C++, Fortran, and other languages.
For Fortran alone it ships several wrappers, including mpif77, mpif90, and mpifort. A Fortran program can be compiled with the following command:
mpif90 -o hello hello.f90
Once the program compiles successfully, it can be run using the mpirun command.
In a single-machine environment:
# -np specifies the number of processes to run
mpirun -np 5 ./a.out
In a cluster environment:
# -hostfile specifies a file listing the host addresses of the cluster
mpirun -np N -hostfile <filename> <program>
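For reference, an Open MPI hostfile is a plain text file listing one host per line, optionally with a slots count giving how many processes may run on that host. A minimal sketch, with made-up host names:

node1 slots=4
node2 slots=4

With such a file saved as, say, hosts.txt, a command like mpirun -np 8 -hostfile hosts.txt ./a.out would spread the 8 processes across node1 and node2.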
If the program itself has no errors, then it runs successfully.