Without the compass and the square, one cannot form squares and circles. — Mencius, Li Lou I
When it comes to instruction sets and CPU architecture, you probably think of the computer architecture course in a computer science program. I don't want to repeat that boring course material here, so let's start by looking at the definition of a class:
```objectivec
#import <Foundation/Foundation.h>

typedef enum : int {Reg0, Reg1, Reg2, Reg3} RegNum;   // register numbers
typedef enum : int {Int3} Interrupt;  // Int3: device output, print the contents of register Reg0 to the screen
typedef int Addr;       // memory address (type assumed; not shown in the original snippet)
typedef int Instruct;   // instruction number

/**
 Virtual CPU class that simulates the instructions provided by a CPU.
 The virtual CPU consists of four registers and an operation unit.
 The four register numbers are defined in RegNum; the operation unit provides
 instructions for assignment, addition and subtraction, comparison, jumps and system calls.
 */
@interface VCPU : NSObject

// Assign a constant value to the register numbered reg.
-(void)moveFromConst:(int)val toReg:(RegNum)reg;
// Assign the value in register reg1 to register reg2.
-(void)moveFromReg:(RegNum)reg1 toReg:(RegNum)reg2;
// Assign the value in register reg to the memory at address addr.
-(void)moveFromReg:(RegNum)reg toAddr:(Addr)addr;
// Assign the value at memory address addr to register reg.
-(void)moveFromAddr:(Addr)addr toReg:(RegNum)reg;
// Add the value in register reg1 to the value in register reg2 and save the result in reg2.
-(void)addFromReg:(RegNum)reg1 toReg:(RegNum)reg2;
// Subtract the value in register reg1 from register reg2 and save the result in reg2.
-(void)subFromReg:(RegNum)reg1 toReg:(RegNum)reg2;
// Jump to instruct if the contents of the two registers are equal, otherwise do nothing.
-(void)isEqualReg:(RegNum)reg1 withReg:(RegNum)reg2 thenGoto:(Instruct)instruct;
// Jump to instruct.
-(void)jumpTo:(Instruct)instruct;
// Return from the program.
-(void)ret;
// System call; currently only the screen-output call Int3 is supported,
// which outputs the value of register Reg0 to the screen.
-(void)sys:(Interrupt)interrupt;

@end
```
Above is the definition (interface) section of a class named VCPU, written in OC, which simulates the functions of a CPU. Let's look at another code snippet that uses this class:
```objectivec
-(void)main:(VCPU*)cpu memory:(VMemory*)memory {
    VINSTRUCT_BEGIN
    VINSTRUCT(0,  [cpu moveFromConst:10 toReg:Reg0])      // Save the constant 10 to CPU register Reg0.
    VINSTRUCT(1,  [cpu moveFromConst:15 toReg:Reg1])      // Save the constant 15 to CPU register Reg1.
    VINSTRUCT(2,  [cpu addFromReg:Reg0 toReg:Reg1])       // Add the value in Reg0 to the value in Reg1 and save the result to Reg1.
    VINSTRUCT(3,  [cpu moveFromReg:Reg1 toAddr:0x1000])   // Save the sum stored in Reg1 to memory at address 0x1000.
    VINSTRUCT(4,  [cpu moveFromAddr:0x1000 toReg:Reg0])   // Load the value at memory address 0x1000 into Reg0.
    VINSTRUCT(5,  [cpu moveFromConst:25 toReg:Reg1])      // Save the constant 25 to CPU register Reg1.
    VINSTRUCT(6,  [cpu isEqualReg:Reg0 withReg:Reg1 thenGoto:9])  // If the value in Reg0 equals the value in Reg1, jump to instruction 9 (print).
    VINSTRUCT(7,  [cpu moveFromReg:Reg1 toAddr:0x1000])   // Save the value in Reg1 to memory address 0x1000.
    VINSTRUCT(8,  [cpu jumpTo:10])                        // Jump to instruction 10.
    VINSTRUCT(9,  [cpu sys:Int3])                         // System call that outputs the value stored in Reg0.
    VINSTRUCT(10, [cpu ret])                              // End of the procedure.
    VINSTRUCT_END
}
```
Can you tell what the code above does (see VirtualSystem on my GitHub site to view and run the full code)? Did you catch a glimpse of concepts such as CPU, memory and process in it? It actually implements the following simple function:
```objectivec
-(void)main {
    int a = 10;
    int b = 15;
    a = a + b;
    if (a == 25) {
        NSLog(@"output:%d", a);
    }
}
```
If you look back at the VCPU class, you'll see that it provides some very basic operations: addition and subtraction, data movement, comparison, address jumps, system calls and so on. Callers can combine the methods provided by this class to write and execute a particular function. The class also provides several temporary storage spaces internally, whose values can be read and written through the RegNum numbers. A VCPU is in fact a simple simulation of the capabilities of a real CPU. Let's take a look at the components of a CPU:
As the CPU structure picture above shows, a CPU is divided into a storage unit (SU), an arithmetic/logic unit (ALU) and a control unit (CU). If you map these components onto the VCPU class, you will find that the storage unit corresponds to its data members, while the arithmetic and control units correspond to its instance methods: the arithmetic unit provides the implementations of the CPU instructions, just as the VCPU class provides its method implementations.
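To make this mapping concrete, here is a minimal sketch of what the implementation side of VCPU might look like. The instance-variable array and the method bodies below are my own assumptions for illustration, not the actual code from the VirtualSystem repository:

```objectivec
// A partial sketch of a possible VCPU implementation (assumed, not the real
// VirtualSystem code): the storage unit maps to instance variables, while the
// arithmetic/control units map to the method implementations.
@implementation VCPU {
    int _registers[4];   // storage unit: one slot per RegNum (Reg0..Reg3)
}

// data movement: constant -> register
- (void)moveFromConst:(int)val toReg:(RegNum)reg {
    _registers[reg] = val;
}

// arithmetic unit: the "add" instruction
- (void)addFromReg:(RegNum)reg1 toReg:(RegNum)reg2 {
    _registers[reg2] = _registers[reg2] + _registers[reg1];
}

@end
```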
The set of instructions provided by a CPU is called an instruction set.
We can implement a VCPU class in OC, a Swift version of the VCPU class in Swift, or a Java version in Java. The implementations will naturally differ between languages; for example, OC provides ways to manipulate memory addresses directly, while Java does not. Even within OC we could implement the methods of the VCPU class in many different ways.
CPUs from different manufacturers, built with different processes and levels of technology and targeted at different devices, also differ in the architectures and functions they provide: for example CPUs with the ARM instruction architecture, the x86 instruction architecture or the PowerPC instruction architecture. Because their architectures are completely different, these CPUs provide completely different instructions and storage units; it is impossible to execute ARM instructions directly on an x86 CPU (just as methods written in OC cannot run in Java). CPUs within the same architecture can be compatible with each other to a certain degree, because CPUs of the same architecture share the same instruction set (analogous to the same interface with different internal implementations). For example, the instruction sets provided by Intel x86 CPUs and AMD x86 CPUs are similar and compatible; the difference between them lies only in their internal implementations.
The CPU instruction set defines the set of basic functions that a CPU should provide; it is a standard, an interface, a protocol. Software development has the same concepts of protocols and interface definitions: both consumers and providers program and interact against the standard. The provider must implement the functions of the interface, but how it implements them is an internal matter that is not exposed to the outside; consumers do not need to know the implementation details and simply combine the functions offered by the interface to accomplish their goals. This kind of design thinking applies to hardware systems as well. CPU instruction sets are typically defined by the companies that design or produce CPUs, or by standards organizations. So what are the mainstream CPU instruction sets, or CPU architectures, on the market?
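The analogy can be written down directly in code. Here is a hedged sketch (the protocol and class names below are my own, not part of the VirtualSystem code) showing how an instruction set plays the same role for CPUs that a protocol plays for classes:

```objectivec
// An instruction set plays the same role as a protocol: it fixes *what*
// operations exist, while each vendor/class decides *how* to implement them.
@protocol VCPUInstructionSet <NSObject>
- (void)moveFromConst:(int)val toReg:(RegNum)reg;
- (void)addFromReg:(RegNum)reg1 toReg:(RegNum)reg2;
- (void)jumpTo:(Instruct)instruct;
@end

// Two "vendors" conforming to the same instruction set; callers depend only
// on the protocol, never on the concrete implementation behind it.
@interface IntelLikeCPU : NSObject <VCPUInstructionSet> @end
@interface AMDLikeCPU   : NSObject <VCPUInstructionSet> @end
```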
- X86 / x64 instruction set
Reference: baike.baidu.com/item/Intel…
The x86 architecture first appeared in the Intel 8086 central processing unit released by Intel in 1978, and has supported 32-bit operation since the Intel 80386. x86 is now almost the standard platform for personal computers and has become the most successful CPU architecture of all time. Other companies have also made x86 processors: AMD, Cyrix, NEC, IBM, IDT, Transmeta and others.
64-bit architecture
By 2002 the x86 architecture was beginning to hit design limits because of its 32-bit word length, which made it difficult to handle amounts of memory larger than 4 GB. Intel originally decided to abandon x86 compatibility entirely in the 64-bit era, introducing a new architecture called IA-64 as the basis for its Itanium processor line. IA-64 is inherently incompatible with x86 software; it runs x86 software through various forms of emulation, but the emulation is inefficient and can interfere with other programs. AMD instead took the initiative to extend 32-bit x86 (IA-32) to 64 bits. The result was an architecture called AMD64 (known as x86-64 before the renaming), and the first products based on it were the single-core Opteron and Athlon 64 processor families. Because AMD's 64-bit processors came to market first, and Microsoft was unwilling to develop two different 64-bit operating systems for Intel and AMD, Intel was forced to adopt the AMD64 instruction set, adding some extensions of its own and naming it the EM64T architecture (apparently they did not want to acknowledge that the instruction set came from their main rival). EM64T was later officially renamed Intel 64 (that is, the x64 instruction set).
For an iOS program to run on the simulator, the machine instructions generated from the code need to target either the i386 or the x64 instruction set, because current Macs are based on x86/x64 CPUs.
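As a small illustration (a sketch assuming a standard Xcode toolchain and an Intel-based Mac, as in the article; the helper function name is mine), the macros in <TargetConditionals.h> let code detect at compile time whether it is being built for the simulator or for a device:

```objectivec
#import <Foundation/Foundation.h>
#import <TargetConditionals.h>

void logBuildTarget(void) {
#if TARGET_OS_SIMULATOR
    // This slice is compiled to x86/x64 instructions and runs on the Mac's CPU.
    NSLog(@"Built for the iOS simulator (x86/x64 instructions)");
#else
    // This slice is compiled to ARM instructions and runs on the device's CPU.
    NSLog(@"Built for a real device (ARM instructions)");
#endif
}
```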
- ARM instruction set
Reference: baike.baidu.com/item/ARM/75…
The ARM processor was the first low-power, low-cost RISC microprocessor, designed by the British company Acorn. The full name is Advanced RISC Machine. The ARM processor itself is a 32-bit design but also comes with a 16-bit instruction set. On December 5, 1978, the physicist Hermann Hauser and the engineer Chris Curry founded CPU Ltd (Cambridge Processor Unit) in Cambridge, England, whose main business was supplying electronic equipment to the local market. In 1979 CPU Ltd was renamed Acorn.
At first Acorn tried to use Motorola's 16-bit chips, but found them too slow and too expensive: "a £500 machine can't use a £100 CPU!" They then asked Intel for the design of the 80286 chip but were turned down, and so were forced to develop their own.
In 1985 Roger Wilson and Steve Furber designed their own first-generation 32-bit, 6 MHz processor and used it to build a reduced instruction set computer, called ARM (Acorn RISC Machine). That is where the name ARM came from.
At present, the CPUs of mainstream smartphones and other mobile devices on the market all adopt the ARM architecture. The machine instructions that iOS applications are compiled into are ARM instructions, so you need to specify the armv7 or arm64 instruction set at compile time.
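Relatedly, the compiler predefines architecture macros for each slice it builds. The sketch below is my own illustration (not code from the article) of how a binary can report which instruction-set slice it was compiled for:

```objectivec
#import <Foundation/Foundation.h>

// Each architecture slice of a fat binary is compiled separately, so the
// predefined clang macros tell us which instruction set this slice targets.
NSString *currentInstructionSet(void) {
#if defined(__arm64__)
    return @"arm64";
#elif defined(__arm__)
    return @"armv7 (32-bit ARM)";
#elif defined(__x86_64__)
    return @"x86_64";
#elif defined(__i386__)
    return @"i386";
#else
    return @"unknown";
#endif
}
```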
- MIPS architecture
Reference: baike.baidu.com/item/MIPS architecture…
The MIPS architecture (MIPS is short for Microprocessor without Interlocked Pipeline Stages) is a reduced instruction set (RISC) processor architecture that originated in 1981 and is developed and licensed by MIPS Technologies, Inc. It is widely used in many electronic products, networking devices, personal entertainment devices and business devices. The original MIPS architecture was 32-bit; the latest versions are 64-bit.
At present, China's Loongson CPUs use the MIPS instruction set.
- PowerPC
Reference: baike.baidu.com/item/POWER…
PowerPC is a family of high-performance 32-bit and 64-bit RISC microprocessors developed jointly by the Apple-IBM-Motorola alliance to compete with the Intel microprocessors and Microsoft software that dominated the PC market. The PowerPC microprocessor was introduced in 1994.
IBM once competed with Intel in the desktop processor market, but failed to make much money because of poor marketing strategy and decided to exit the desktop market. The processor used in Apple computers was just one member of the POWER family, said to be a simplified version IBM made for Apple. Apple's distinctive business philosophy made Apple computers incompatible with other PCs, so the Power family of processors cannot be used in ordinary desktop PCs. Apple has since moved to Intel processors because PowerPC processors no longer suited its needs.
Have you ever noticed PowerPC macro definitions in the header files of many iOS libraries? Early Macs used PowerPC CPUs, but today's Macs have switched to x64 CPUs.
Classification of CPU systems
Above we described several CPU architectures and instruction sets. Different architectures have their own advantages and disadvantages, and we can classify CPUs from several different perspectives:
By word length
Word length is the maximum number of bits a CPU instruction can process in one cycle, and it also determines the maximum range of memory addresses the CPU can address. By word length, CPUs can be classified as follows:
- 8-bit (e.g. the Intel 8080, and chips for small appliances)
- 16-bit (e.g. the Intel 8086/80286)
- 32-bit (e.g. Intel x86, ARM armv7/armv7s)
- 64-bit (e.g. Intel x64, ARM arm64)
In general, CPUs with a larger word length are compatible with the instruction sets of smaller word lengths; for example, a 32-bit application can run on a 64-bit CPU. A CPU with a smaller word length, however, cannot directly provide the capabilities of a larger word length instruction set; such support is usually achieved through emulation. For example, a 64-bit data read instruction can be emulated on a 32-bit CPU by performing two 32-bit reads. Some CPUs offer this kind of instruction emulation, so some 64-bit applications can still run on 32-bit CPUs, but with a significant loss of performance and speed.
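As a concrete illustration of the emulation idea above, here is a minimal sketch (my own simplification, assuming a little-endian memory layout, not how any particular CPU actually implements it) of how a 64-bit read can be assembled from two 32-bit reads:

```objectivec
#include <stdint.h>

// Emulate a 64-bit read using two 32-bit reads (little-endian layout assumed).
uint64_t read64_via_two_32bit_reads(const uint32_t *addr) {
    uint32_t low  = addr[0];              // first 32-bit read: low half
    uint32_t high = addr[1];              // second 32-bit read: high half
    return ((uint64_t)high << 32) | low;  // combine into one 64-bit value
}
```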
By instruction complexity
Reference: wenku.baidu.com/view/b5a138…
Instruction complexity refers to the number of instructions provided by a CPU instruction set, the instruction addressing modes and parameters, the complexity of the CPU's internal architecture, and the number of bytes each instruction occupies. There are generally two categories:
- CISC instruction set. CISC stands for Complex Instruction Set Computer. Since the birth of the computer, the CISC approach has been in use; early desktop software was designed around CISC and remains so to this day. The x86 architecture that is popular on desktop computers uses CISC. In a CISC microprocessor, the instructions of a program are executed sequentially, and the operations within each instruction are also executed sequentially. Sequential execution has the advantage of simple control, but the utilization of each part of the computer is low and execution is relatively slow. CISC-based servers use the x86/x64 architecture and are mostly mid-to-low-range machines.
- RISC instruction set. RISC stands for Reduced Instruction Set Computer: a microprocessor that executes a smaller number of instruction types. The idea originated in the 1980s, with MIPS machines among the earliest examples; microprocessors built this way are called RISC processors. Because its instructions are simpler, a RISC processor can execute them at a much faster rate (more millions of instructions per second, or MIPS). Today the CPUs of almost all smart mobile devices use RISC instruction sets, represented by the ARM and PowerPC instruction sets.
The following table illustrates the differences between CISC and RISC architectures:
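Besides the table, the contrast can also be sketched in code using the VCPU-style instructions from earlier. Note that the CISC-style method below is hypothetical and added only for illustration; it is not part of the VCPU class:

```objectivec
// Hypothetical CISC-style extension of VCPU, for illustration only: one
// complex instruction that reads memory twice, adds, and writes back.
@interface VCPU (CISCStyle)
- (void)addFromAddr:(Addr)src toAddr:(Addr)dst;
@end

static void addTwoNumbersRISCStyle(VCPU *cpu) {
    // RISC style: compute only on registers; memory access is explicit.
    [cpu moveFromAddr:0x1000 toReg:Reg0];   // load first operand
    [cpu moveFromAddr:0x1004 toReg:Reg1];   // load second operand
    [cpu addFromReg:Reg0 toReg:Reg1];       // add in registers
    [cpu moveFromReg:Reg1 toAddr:0x1000];   // store the result back
}

static void addTwoNumbersCISCStyle(VCPU *cpu) {
    // CISC style: a single complex instruction does the whole job.
    [cpu addFromAddr:0x1004 toAddr:0x1000];
}
```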
By instruction flow and data flow
Reference: blog.csdn.net/conowen/art…
This classification is based on how many instruction streams and how many data streams a CPU can process simultaneously in one clock cycle. Accordingly, CPUs can be divided into the following four types:
- Single instruction stream, single data stream (SISD). An SISD machine is a traditional serial computer: its hardware does not support any form of parallel computing, all instructions are executed serially, and the CPU can process only one data stream in a given clock cycle. Early computers were SISD machines, for example the classic von Neumann machines such as the IBM PC, early mainframes and many 8-bit home computers.
- Single instruction stream, multiple data streams (SIMD). SIMD uses one instruction stream to process multiple data streams, which is very effective for digital signal processing, image processing and multimedia processing. The MMX, SSE (Streaming SIMD Extensions), SSE2 and SSE3 extended instruction sets implemented by Intel processors can process multiple data units within a single clock cycle. In other words, the single-core computers we use today are essentially SIMD machines (personally, I would put GPUs in this category as well); see the small intrinsics sketch after this list.
- Multiple instruction streams, single data stream (MISD). MISD uses multiple instruction streams to process a single data stream. In practice it is more effective to use multiple instruction streams to process multiple data streams, so MISD exists only as a theoretical model and has not been put to practical use.
- Multiple instruction streams, multiple data streams (MIMD). An MIMD machine can execute multiple instruction streams simultaneously, each operating on a different data stream. The latest multi-core computing platforms belong to the MIMD category; for example, Intel and AMD multi-core processors are MIMD.
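Here is the intrinsics sketch promised in the SIMD item above: a minimal example (assuming an x86 CPU with SSE and a toolchain providing <xmmintrin.h>, which is my assumption rather than code from the article) in which a single SIMD instruction adds four floats at once:

```objectivec
#include <xmmintrin.h>   // SSE intrinsics (x86 only)

// SIMD: one instruction (_mm_add_ps, i.e. ADDPS) adds four float lanes at once.
void addFourFloatsSIMD(const float a[4], const float b[4], float out[4]) {
    __m128 va = _mm_loadu_ps(a);     // load 4 floats from a
    __m128 vb = _mm_loadu_ps(b);     // load 4 floats from b
    __m128 vs = _mm_add_ps(va, vb);  // single SIMD add across all 4 lanes
    _mm_storeu_ps(out, vs);          // store 4 results
}
```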
Virtual environments
Finally, let's return to the VCPU class, which is a simple simulated implementation of a CPU. We know that VMware software can simulate the hardware environment an operating system needs and provide virtual devices. Microsoft announced in 2017 that Visual Studio 2017 would be able to develop and run iOS applications and copy code seamlessly to Xcode to compile and run. The reason this works is that Visual Studio 2017 itself provides an OC compiler as well as a mock implementation of the Cocoa UI framework, so it can run iOS applications.
From the examples above we can see that calls between the various levels of a system are always made through agreed rules or defined interfaces; the caller does not know, and does not need to know, how the provider implements these capabilities. Everything, always, is an interface:
It is precisely because these interfaces are defined and standardized that we can replace a real implementation with another, virtual implementation; that is the essence of virtualization. Virtualization can happen at any level, and can be global or partial. We can simulate CPU instructions and hardware interfaces to build a VMware-like virtual machine that runs any operating system; we can simulate the APIs provided by an operating system to build a virtual Windows runtime environment like Wine; we can simulate the file system or storage system provided by an operating system to build application containers like Docker; and we can emulate the Cocoa framework to provide an environment like Visual Studio 2017 that can write and run OC applications (Microsoft has open-sourced this framework: Microsoft's OC implementation support).
Virtualization starts from the definition of an interface standard and then supplies its own implementation behind someone else's interface. Today's systems are layered from upper-level software down to the underlying hardware, with every layer calling the next through interface protocols, so virtualization capabilities can be implemented at any level.
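To tie this back to the VCPU idea: the heart of any such virtual CPU or emulator is a fetch-decode-execute loop. The sketch below is my own minimal illustration; the instruction encoding and opcode names are assumptions, not the VirtualSystem implementation:

```objectivec
#include <stdint.h>

// A toy fetch-decode-execute loop: the core of any CPU emulator.
// Instruction encoding (assumed for illustration): {opcode, operand a, operand b}.
typedef enum { OP_MOVE_CONST, OP_ADD, OP_HALT } Opcode;
typedef struct { Opcode op; int a; int b; } Ins;

static void runProgram(const Ins *program) {
    int regs[4] = {0};                   // the virtual registers
    int pc = 0;                          // program counter
    for (;;) {
        Ins ins = program[pc++];         // fetch
        switch (ins.op) {                // decode
            case OP_MOVE_CONST: regs[ins.b] = ins.a;         break;  // execute
            case OP_ADD:        regs[ins.b] += regs[ins.a];  break;
            case OP_HALT:       return;
        }
    }
}
```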
👉 [Back to directory]
Welcome to visit my GitHub address