One of the most talked-about computers in tech circles these days is Apple's trio of machines built on its new M1 chip: the MacBook Air, the 13-inch MacBook Pro, and the Mac mini. The buzz has lasted, too: more than ten days after the launch event, review videos and articles are still pouring out. Search for the M1 Macs on any platform and the verdict is remarkably uniform: Apple nailed it, performance soars, Intel is in trouble. Watching some of the recent teardown videos, you can see that the chassis of the new Macs has barely changed from previous years; the real highlight is the in-house ARM chip, the M1.

The arrival of the M1 chip has also brought another term back into the spotlight: the ARM architecture. And it revives a long-debated question: will ARM chips really surpass x86 chips?

This article will not dig into the design of the M1 chip itself. First, let's get clear on two terms that are often heard but not always well understood: ARM and x86.

ARM and x86

With the spread of information technology, if you ask anyone these days whether they know what a CPU is, the answer will almost certainly be yes. But ask whether they know the difference between a CPU's x86 architecture and ARM architecture, and even some computer science students may not be sure. So before talking about Apple's M1 chip, let's talk about ARM and x86.

Nowadays the term "worker" is very popular, and the CPU is the hardest-working, most central worker in the whole computer. x86 and ARM are two different types of CPU worker, and one big difference between the two architectures is the instruction set.

Architecture? Instruction set?

So you may be wondering: what is an architecture? What is an instruction set? Don't worry; let's keep the worker analogy and put the CPU in that role.

What a CPU does is actually very simple at its core: it receives instructions and performs operations. Like any of tens of millions of workers, the CPU first needs basic working ability (execution and computing capability), then enough logic to understand the order in which things should be done, and finally the ability to understand what it is told (that is, the instruction set). Together, these make up an "architecture". You can think of an architecture as a set of "tools", "methods", and "specifications".

Different architectures may use different tools, different methods, and different conventions, which makes them incompatible with one another; after all, no matter how powerful an official is, he cannot behead an official of his own dynasty with a sword from the previous one.

Types of instruction sets

The instruction set is essentially the language the CPU can understand. Since the CPU was invented, many architectures have appeared, from the familiar x86 and ARM to the less familiar MIPS and IA-64, and the gaps between them are wide. In terms of basic design logic, however, they fall into two categories: the "complex instruction set" and the "reduced instruction set". So to understand x86 and ARM, you first need to understand the reduced instruction set (RISC) and the complex instruction set (CISC).

Take workers again as an example: there are two types. The first does exactly one thing for each instruction the boss gives; this is the "reduced instruction set". The other is the "complex instruction set" worker: the boss doesn't need to spell everything out, but simply issues one command, and the worker automatically completes the whole job.

For example, when the boss says "deliver this document to Manager Wang", the first worker might need to ask which Manager Wang, when to deliver it, and where his office is. The second worker would simply take the document and figure out on his own who Manager Wang is and where to find him.

This is the logical difference between a "complex instruction set" and a "reduced instruction set". In simple terms, a complex instruction set bundles many operations into a single instruction, which is smarter but also more power-hungry (it takes brains to guess a boss's mind, after all), whereas a reduced instruction set is the opposite. The biggest difference between the two, therefore, is the way their designers think about the problem.
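The contrast can be sketched in code. The following is a toy illustration (not real machine instructions, and the names are invented for this example): a CISC-style "instruction" does a memory-to-memory add in one step, while the RISC-style version splits the same job into simple load/add/store steps through registers.

```python
# Toy model: adding the value at address "b" into address "a".
memory = {"a": 5, "b": 7}

def cisc_add(mem, dst, src):
    """CISC style: one 'instruction' fetches both operands from memory,
    adds them, and writes the result back, all in a single step."""
    mem[dst] = mem[dst] + mem[src]

def risc_add(mem, regs, dst, src):
    """RISC style: the same job as a sequence of simple instructions,
    each doing only one thing (load, add, or store)."""
    regs["r1"] = mem[dst]                  # LOAD  r1, [dst]
    regs["r2"] = mem[src]                  # LOAD  r2, [src]
    regs["r1"] = regs["r1"] + regs["r2"]   # ADD   r1, r2
    mem[dst] = regs["r1"]                  # STORE [dst], r1

cisc_add(memory, "a", "b")
print(memory["a"])  # 12: one instruction did everything

registers = {}
risc_add(memory, registers, "a", "b")
print(memory["a"])  # 19: four simple instructions, same kind of result
```

One CISC instruction here corresponds to four RISC steps; the RISC hardware can be simpler and more frugal precisely because each step is trivial.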

The x86 architecture is the representative of the complex instruction set (CISC), and the ARM architecture is the representative of the reduced instruction set (RISC). Even the name ARM declares its technology: Advanced RISC Machine.

Pros and cons

Some readers might conclude at this point that the complex instruction set is obviously better: for the same number of instructions it performs more operations, so a complicated task might take only one complex instruction where a reduced instruction set would need several.

In fact, everything has two sides; where there are advantages, there will be disadvantages. So it is hard to say which one is simply better.

And it is precisely this big difference in design philosophy that splits the two into separate application scenarios. The complex instruction set focuses on high performance at the cost of high power consumption: large servers, high-performance laptops, and, most familiar of all, the desktop processors of Intel and AMD.

The reduced instruction set focuses on small size and low power consumption, in areas such as smartphones, watches, and tablets, with chips from Qualcomm, Samsung, MediaTek, Huawei, Apple, and others.

Complex instruction sets have the edge in high-density computing tasks, and reduced instruction sets have the edge in simple, repetitive tasks, so judging their merits without considering the usage scenario is simply unfair.

x86

The x86 architecture, born in 1978, is an old-timer compared with the ARM architecture, born in 1991. Given that, it is not hard to see why x86 chose a complex instruction set: at a time when computing resources were relatively scarce, the goal was to accomplish as many computing tasks as possible with as few machine instructions as possible. Was power consumption really that important in that context? Obviously not.

So why is it called the x86 architecture?

It all started on June 8, 1978, when Intel released a new microprocessor, the 8086. The processor attracted little attention at first, but IBM later used the 8086 to build the famous IBM PC, which propelled Intel to become the world's leading chip giant and made the 8086's architecture an industry standard.

Later came the 80286, 80386, 80486, and 80586, all inheriting the original 80x86 architecture while continually optimizing it, extending its features, and improving its performance. x86 processors from other manufacturers, such as AMD and VIA, are also compatible with the x86 architecture.

It came to be called x86 because the family was so widely used that the name became synonymous with the architecture.

What is x64?

Besides x86, many people have also seen "x64" and assumed that x86 means 32-bit and x64 means 64-bit. That is not quite accurate, but before we get to it, let's talk about what the "bits" in 32-bit and 64-bit actually are.

We can think of the CPU, simply, as a device made up of many transistors. A transistor is a miniature electronic switch, and each switch has two states: ON and OFF, corresponding to the transistor conducting or not conducting. These two states map exactly onto the binary digits "0" and "1". Different combinations of 0s and 1s in different positions form different instructions and data, creating endless possibilities! (This is also where the name of my public account, 01 Binary, comes from.)

Back to the CPU: it contains an area called the general-purpose registers, which hold the data being operated on. If a general-purpose register is 64 bits wide (you can simply picture it as containing 64 transistors), the processor can fetch 64 bits, or 8 bytes, of data at a time. That is twice as much as a 32-bit register (4 bytes at a time), which in theory doubles the throughput.
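The arithmetic behind those bit widths is easy to check. This small sketch computes how many distinct values an n-bit register can represent and how many bytes it moves per fetch:

```python
# Capacity of an n-bit register: each bit is a 0/1 switch,
# so n bits give 2**n distinct combinations.

def register_capacity(bits):
    return 2 ** bits            # number of distinct values

def max_unsigned(bits):
    return 2 ** bits - 1        # largest representable unsigned integer

print(max_unsigned(32))   # 4294967295 (about 4.29 billion)
print(max_unsigned(64))   # 18446744073709551615

# A 64-bit register moves 8 bytes per fetch, twice a 32-bit register's 4.
print(64 // 8, 32 // 8)   # 8 4
```

The jump from 32 to 64 bits doesn't just double the range; it squares the number of representable values, which is why 64-bit machines can also address far more memory.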

Back to x64 and x86: x86 was indeed a 32-bit instruction set developed by Intel, but as hardware advanced, CPUs moved toward 64-bit. Intel actually chose not to stay compatible with x86; instead it reinvented the wheel and designed a new instruction set called IA-64. But IA-64 was incompatible with x86, the market reaction was lukewarm, and it was covered by multiple patents that kept other manufacturers from imitating it, so it never reached much scale. As a result AMD, another chip maker, was the first to ship a commercial 64-bit CPU compatible with the x86 architecture. AMD called it AMD64, and it was widely praised after release. Intel eventually had to abandon IA-64 and support the AMD64 instruction set, though to save face it renamed its version Intel 64; at its core it is almost identical to AMD64.

Later, Apple and the RPM package maintainers referred to this 64-bit architecture as "x86-64" or "x86_64"; Oracle and Microsoft call it "x64"; the BSD family and some Linux distributions use "AMD64", with the 32-bit version called "i386" (or i486/i586/i686); and Arch Linux calls it x86_64. Since then the name x64 has become popular.
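You can see which of these names your own system uses. Python's standard `platform` module reports the local machine's architecture string (the exact value depends on the OS and hardware, so the comment below only lists typical examples):

```python
# Query the architecture name the local OS reports for this machine.
import platform

arch = platform.machine()
print(arch)  # e.g. "x86_64" on Linux/macOS Intel machines,
             # "AMD64" on Windows, "arm64" on Apple Silicon
```

Same silicon, different labels: the string is whatever naming convention the operating system vendor adopted, exactly as described above.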

Scalability

With that said, think back to a common situation in daily life. Isn't it easy to expand your computer by adding memory or a solid-state drive? You can just buy a RAM stick and install it yourself. But if you want to expand your phone's storage to 512 GB, it's a hassle: you not only have to go to a specialist phone shop but also spend a lot of money. Have you ever wondered why?

This, too, comes down to CPU architecture design.

Computers built on the x86 architecture connect to expansion devices (such as hard disks and memory) through a "bridge". Moreover, x86 computers have been around for nearly 30 years, so a wide range of compatible expansion devices exists at relatively low prices. As a result, x86 machines can easily scale up their capabilities, for example by adding memory or disks.

Computers built on the ARM architecture connect the CPU to storage devices through dedicated data interfaces, so expanding an ARM system's storage or memory is difficult (memory and storage capacity are generally fixed at product design time), and ARM systems are generally not designed with expansion in mind: your phone has exactly as much memory as you bought. The guiding principle is "enough is enough". This will come up again in the next article on the M1 chip.

Licensing

You have probably only heard of Intel and AMD making x86 chips, so why are there so many manufacturers of ARM chips?

The reason is actually very simple: ARM does not manufacture chips; it only sells chip designs.

ARM neither makes nor sells any chips. It designs its own IP, including the instruction set architecture, microprocessor cores, graphics cores, and interconnect architectures, and then licenses them to whoever wants them. Most companies that make ARM chips, such as Samsung, Apple, and Qualcomm, hold architecture-level licenses from ARM, which allow them to create their own core designs based on the ARM instruction set.

It is fair to say that ARM, a chip company that produces no chips, underpins countless embedded devices, smartphones, and tablets around the world. It is this licensing mechanism that lets companies customize their own chip designs, makes the ARM ecosystem stand out, and for a while even gave it the momentum to overtake x86.

By contrast, x86 licensing is much less flexible. Intel and AMD, the two x86 companies, represent two different models:

The first is Intel, which handles the design and manufacture of its architecture and chips from beginning to end. Doing this requires enormous, all-round strength: money, people, and technology. Of course, the benefits are equally obvious. Intel not only has full control over its own destiny, but the profits are extremely handsome; it can command very high margins on almost any product and charge what it likes.

The other model is fabless. This is how NVIDIA operates, and what AMD turned to when it could no longer afford its own fabs. Such firms only design chips and hand manufacturing over to foundries such as TSMC, UMC, GlobalFoundries, and Samsung Electronics. The advantage is obvious: the burden is light. You only design; you don't have to spend a fortune building wafer fabs and developing new processes. But the disadvantage is just as prominent: whether your design can actually be built, and how well, is not up to you; it depends on your foundry partner's capability.

In the past two years Intel has been stuck at 14nm++, earning the nickname "toothpaste factory" for squeezing out tiny incremental upgrades. Meanwhile, riding TSMC's improving 5nm process, AMD has risen again. Say it loud: AMD, YES!

Wrapping up

That's all for this article. We said we would introduce the M1 chip yet spent all these words on x86 and ARM, but this background will be a great help in understanding the next article.

In my next post, I'll cover the M1 chip based on the information available so far. If you found this helpful, please give it a like and a follow. Your support is my biggest motivation to keep updating.