Recently, Uber’s development team gave a conference talk about rewriting their client app in Swift 3. In the talk they introduced a clever trick that can dramatically speed up compilation. I tried it myself and it really works, though it has some drawbacks, so I’d like to share it here.

Uber’s team stumbled upon the fact that compile time dropped from 1 min 35 s to 17 s when all of their model files were compiled as a single file. This suggests that merging all source files into one file could greatly speed up compilation.

WMO (Whole Module Optimization) also merges files before compiling them. In practice, however, while WMO builds are faster than plain builds, they come nowhere near the speedup of simply merging the files. The main reason is that in addition to merging files, WMO also does the following during precompilation:

  1. Detects methods and types that are never called and removes them at precompilation time
  2. Adds `final` to classes and methods that are never subclassed or overridden, giving the compiler more information so those calls can be devirtualized to static dispatch or inlined
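The second point can be sketched in a few lines of Swift. This is not Uber’s code, just an illustration of what the compiler can infer when it sees the whole module:

```swift
// 1. Dead-code elimination: with visibility into the whole module,
//    the compiler can see this function is never called and drop it.
private func neverCalled() {
    print("unreachable")
}

// 2. `final` inference: if the whole module shows that Greeter is
//    never subclassed, greet() can be devirtualized to a static call
//    or inlined, as if it had been declared `final`:
class Greeter {
    func greet(_ name: String) -> String {
        return "Hello, \(name)"
    }
}

// Writing `final` explicitly gives the compiler the same guarantee
// even without whole-module analysis:
final class FinalGreeter {
    func greet(_ name: String) -> String {
        return "Hello, \(name)"
    }
}
```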

These optimizations can greatly improve a program’s runtime efficiency, but they require loading much more context at compile time. Every time a file is merged, all files are traversed for the checks above, so compile time grows super-linearly with the number of files.

Uber’s team found that it was possible to merge files without running the optimizations by adding a user-defined build setting. In the project file, go to Build Settings -> Add User-Defined Setting, set the key to SWIFT_WHOLE_MODULE_OPTIMIZATION and the value to YES, then set the optimization level to None.
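The same pair of settings can be written in an `.xcconfig` file. The setting names below are the standard Xcode ones; there is nothing specific to Uber’s setup here:

```
// Compile the whole module as one unit (merge all files)...
SWIFT_WHOLE_MODULE_OPTIMIZATION = YES

// ...but skip the expensive optimization passes.
SWIFT_OPTIMIZATION_LEVEL = -Onone
```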

So why doesn’t Swift’s compiler do this by default?

The answer is simple: this trick increases the granularity of incremental compilation from the file level to the module level.

Normally, each file is compiled separately and the results are linked together, so the build cache holds a per-file product for each source file. Once all files are merged, the cache holds only the single product of the merged file.
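The difference can be illustrated with direct `swiftc` invocations (an illustrative sketch, not how Xcode drives the compiler internally):

```
# Per-file compilation: each source file gets its own object file,
# so an unchanged file can be skipped on the next build.
swiftc -c A.swift B.swift C.swift

# Whole-module compilation: the module is compiled as one unit,
# so a change to any file invalidates the single cached product.
swiftc -wmo -c A.swift B.swift C.swift
```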

Whenever we modify any file in the project and rebuild for debugging, the merged file has to be regenerated and compiled from scratch; the compiler can no longer read the cache and skip the unmodified files.

(Pod libraries, storyboards, and XIB files are not affected by this. But once we modify a source file, the entire module must be compiled from scratch; in this respect it is no different from any other module.)

Therefore, this optimization is less useful than it sounds: release packaging is generally done on CI rather than locally, and day-to-day debugging relies on incremental compilation anyway. Only at the scale of a team like Uber’s, where every feature branch is packaged and tested on CI, does this optimization become truly practical.

(For the average developer, a CI service like flow.ci that charges by the hour can still save a bit of money, since the performance requirements for internal test builds are not that high. The cost of one build can now cover three or four, and the savings grow as the project does.)


If you like my articles, you can follow my blog.