First let’s look at the following code:

What does compiler optimization actually optimize? It optimizes the underlying execution logic of the code so that the program runs more efficiently. Assembly is the representation closest to the machine, so let's look at what the compiler's optimizations do at the assembly level. Set a breakpoint at line 15 and run the program to see the following assembly code:

The assembly above was produced in Debug mode; in it we can see the assignment of the variables a and b and the addition. If we search for "optimization" in Build Settings, we can see the Optimization Level: in Debug mode it is None (no optimization), while in Release mode it is Fastest, Smallest. We can change this setting as needed.

Set the Debug optimization level to Fastest, Smallest and run again to get the following assembly code:

As you can see, the optimized code is about 15 lines shorter and faster: instead of assigning values to the variables a and b, it places the result 0x1e (30) directly in register w8.

When reading the Objective-C runtime source code, we see fastpath(x) and slowpath(x) everywhere; both make use of a compiler optimization. Let's look at the two macro definitions:

#define fastpath(x) (__builtin_expect(bool(x), 1))
#define slowpath(x) (__builtin_expect(bool(x), 0))

__builtin_expect is a built-in introduced by GCC that lets the programmer tell the compiler which branch is most likely to be taken. It is written as __builtin_expect(EXP, N).

It means: the probability that EXP == N is high.

First we need to be clear:

if (fastpath(x)) is equivalent to if (x), and if (slowpath(x)) is also equivalent to if (x).

__builtin_expect() is provided by GCC (version >= 2.96) so that programmers can give the compiler "branch" information, letting it optimize the code layout and reduce the performance cost of instruction jumps. __builtin_expect((x), 1) indicates that x is more likely to be true; __builtin_expect((x), 0) indicates that x is more likely to be false. In other words, with fastpath() the statements after the if are more likely to execute, and with slowpath() the statements after the else are more likely to execute. The compiler places the more likely code immediately after the branch, which reduces the performance penalty of instruction jumps.

Such as:

int x, y;
if (slowpath(x > 0))
    y = 1;
else
    y = -1;

In the code above, GCC arranges the instructions so that y = -1 is the fall-through path, which suits the case where x is unlikely to be greater than 0. If x is greater than 0 most of the time, fastpath(x > 0) should be used instead, so that y = 1 becomes the fall-through path. This reduces the number of instruction jumps at run time.

So when we read the Objective-C runtime source code later, the slowpath branches can usually be skimmed, since they mostly contain fault-tolerance code; skipping them can improve our source-reading efficiency.