Preface
The two apps currently developed by Beitchat are the Beitchat Parent version and the Beitchat Teacher version. Recently, with the rapid iteration of new features, the project has grown quickly: roughly 230,000 lines of business code per app, 60,000 lines of private libraries, 150,000 lines of third-party libraries, and about 600,000 lines in total for a single client. A full build now takes 11 to 12 minutes. That is nowhere near Facebook's 40 minutes, but during private beta we often shipped internal builds two or three times a day, and CPU usage during packaging sits at essentially 100%. Since we don't have a dedicated CI machine, the colleague in charge of packaging (me, in fact) loses a lot of working time to it, so I have recently been looking for a way to speed up packaging.
Current project structure
Our project uses CocoaPods to manage third-party and private library dependencies, which should be standard for most projects. For now it’s a pure Objective-C project, without Swift.
Solutions investigated
Here are some of the mainstream solutions I researched and why I didn't adopt them. Each has its limitations, but they also gave me ideas; the thought process is just as valuable as the final solution.
cocoapods-packager
cocoapods-packager can package any pod into a static library, which reduces compilation time, but it has drawbacks:
- The optimization is incomplete: it only speeds up compilation of third-party and private Pods and does nothing for the business code, which changes far more often
- Subsequent updates to private and third-party libraries are cumbersome: whenever the source changes, the library has to be repackaged and uploaded to the internal Git repository
- Too many binaries slow down Git operations (we haven't deployed Git LFS yet)
- Debugging the source code becomes difficult
Carthage
This solution is similar to cocoapods-packager, with similar advantages and disadvantages, though Carthage makes it easier to debug the source code. Since we already use CocoaPods extensively, switching to Carthage for package management would require a lot of migration work, so we did not consider this option.
Buck
Buck is a general-purpose build system open-sourced by Facebook. Its biggest feature is intelligent incremental compilation, which can greatly improve build speed. When we first heard about Buck it only supported Android, but it is now available for iOS as well.
The main reason it builds faster is that it caches compilation results: it constantly monitors file changes in the project directory and, on each build, compiles only the files that have changed. Another feature that inspired me is its HTTP cache server, which stores compilation results on a shared cache server so that once one member of the team has compiled a file, everyone else can download the result instead of compiling it again.
Buck is a fairly complete solution that big companies such as Uber already use. I spent a lot of time evaluating it, and in the end decided it doesn't suit our project and team right now, mainly for the following reasons:
- Buck discards Xcode's project files and instead requires hand-written configuration files describing the build rules, which would mean significant changes to the existing project. We're still iterating on new features at a rapid pace, and we don't have the time or the people for that.
- The development and debugging workflow would change a lot. Since Buck takes over the build, you can no longer simply press ⌘+R in Xcode to debug; you first have to ask Buck to generate the Xcode project files. Uber's engineers even recommend using Nuclide as the development environment instead of Xcode. It works in principle, but it takes time for a team to get used to, and a short-term drop in productivity is inevitable.
- Debugging in Xcode does not benefit from the faster compilation. It is possible to launch the app with the buck command and attach LLDB on the command line, but Xcode's debugging tools such as View Debugging and the Memory Graph Debugger are then unavailable.
Bazel
Bazel is similar to Buck, open-sourced by Google, with similar advantages and disadvantages, so I won't go into details.
Distcc distributed compilation
The idea is to send part of the files that need to be compiled to other machines, which send the compilation products back when they are done. I tried the well-known distcc; it was easy to set up, and I managed to distribute the build across several servers on the intranet. However, CPU usage on the other compilation servers stayed low, only around 20%: dispatching tasks couldn't keep up with how fast the servers could compile, and distributing tasks and then shipping the artifacts back took more time than simply compiling locally. After a lot of trial and error the compile time didn't improve at all, and even got slightly worse. Distributed compilation is probably not appropriate at our project's current size.
Final solution: CCache
Let's start with my requirements for a solution:
- It must significantly improve compilation speed, cutting compile time by at least 50%
- It must not require major changes to the project
- It must not require changing the development toolchain
CCache is a tool that caches compilation intermediates. It has long been used in other areas, but rarely in the iOS world. In my experience it satisfies all three requirements above. The earliest article I found on the topic is: Using ccache for Fun and Profit | Inside PSPDFKit
If you don't use CocoaPods, the article above is all you need. CocoaPods requires some extra adjustments, so below I'll walk through how to use CCache in an iOS project that uses CocoaPods for package management.
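As a minimal illustration of what CCache does, independent of Xcode, compiling the same file twice on the command line shows the cache at work. This is just a sketch; hello.m is a throwaway example file, not part of the project:

ccache clang -c hello.m -o hello.o   # first compile: cache miss, the result is stored
ccache clang -c hello.m -o hello.o   # identical second compile: served from the cache
ccache -s                            # the hit/miss counters should reflect the two compiles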
Installation steps:
Note: the project path must not contain Chinese characters, otherwise CCache will not work properly.
Install CCache
First you need to install Homebrew on your computer, which should be standard for macOS programmers, so I’ll skip it.
Install CCache from Homebrew by running $ brew install ccache on the command line. The installation is complete once the command finishes.
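A quick way to verify the installation from the command line; these are standard ccache invocations, and the install location is only typical, not guaranteed:

which ccache       # Homebrew typically installs it under /usr/local/bin
ccache --version   # prints the installed CCache version
ccache -s          # a fresh install shows all-zero statistics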
Create CCache compilation script
To get CCache involved in the build, we use it as the project's C compiler. When CCache can't find a cached result, it passes the compile command on to the real compiler, Clang.
Create a new file named ccache-clang with the following contents and place it in the project root directory:
ccache-clang
#!/bin/sh
if type -p ccache >/dev/null 2>&1; then
  export CCACHE_MAXSIZE=10G
  export CCACHE_CPP2=true
  export CCACHE_HARDLINK=true
  export CCACHE_SLOPPINESS=file_macro,time_macros,include_file_mtime,include_file_ctime,file_stat_matches
  # Log file, useful for checking integration problems; delete this line once the integration succeeds
  export CCACHE_LOGFILE=~/Desktop/ccache.log
  exec ccache /usr/bin/clang "$@"
else
  exec clang "$@"
fi
On the command line, go to the directory containing the ccache-clang file and make it executable: $ chmod 777 ccache-clang
If your code or any third-party library uses C++, make a copy of ccache-clang and rename it ccache-clang++. Inside it, the call to clang must also be changed to clang++, otherwise CCache will not be applied to C++ code.
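A quick sanity check that the wrapper is wired up correctly, assuming you run it from the project root:

./ccache-clang --version   # should print the Apple clang version info, routed through CCache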
ccache-clang++
#!/bin/sh
if type -p ccache >/dev/null 2>&1; then
  export CCACHE_MAXSIZE=10G
  export CCACHE_CPP2=true
  export CCACHE_HARDLINK=true
  export CCACHE_SLOPPINESS=file_macro,time_macros,include_file_mtime,include_file_ctime,file_stat_matches
  # Log file, useful for checking integration problems; delete this line once the integration succeeds
  export CCACHE_LOGFILE=~/Desktop/ccache.log
  exec ccache /usr/bin/clang++ "$@"
else
  exec clang++ "$@"
fi
When you are done, both files should be present in the project root directory.

Xcode project adjustments
In your project's Build Settings, add a user-defined setting named CC whose value is the path to the ccache-clang script, for example $(SRCROOT)/ccache-clang if the script sits in the project root. Xcode then uses the executable at that path as the C compiler at compile time.
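As a side note, if you ever build from the command line rather than through the Archive menu, the same override can in principle be passed straight to xcodebuild; the workspace and scheme names below are placeholders:

xcodebuild -workspace MyApp.xcworkspace -scheme MyApp -configuration Release \
  CC="$PWD/ccache-clang" build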
Turn off Clang Modules
Since CCache does not support Clang Modules, Enable Modules needs to be turned off. How to handle this for CocoaPods is covered later.
Because Enable Modules is turned off, you must remove all @import statements and replace them with #import syntax, for example @import UIKit; becomes #import <UIKit/UIKit.h>. Also, if you use other system frameworks such as AVFoundation or CoreLocation, Xcode no longer links them for you automatically; you need to add them manually in the project target's Build Phases -> Link Binary With Libraries.
The test results
Try compiling, then run ccache -s on the command line to see CCache statistics like the following:
cache directory                     /Users/mac/.ccache
primary config                      /Users/mac/.ccache/ccache.conf
secondary config      (readonly)    /usr/local/Cellar/ccache/3.3.4_1/etc/ccache.conf
cache hit (direct)                  14378
cache hit (preprocessed)             1029
cache miss                           7875
cache hit rate                      66.18 %
called for link                        61
called for preprocessing               48
compile failed                          2
preprocessor error                      4
can't use precompiled header           70
unsupported compiler option          2332
no input file                          11
cleanups performed                      0
files in cache                      35495
cache size                            1.3 GB
max cache size                        5.0 GB
If the integration worked, you will see a non-zero cache miss count. Since there is no cache on the first build, everything is a miss. Compile a second time, and if the cache hit count starts to climb sharply, congratulations, the integration succeeded.
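If you want a cleaner before-and-after comparison, the statistics can be reset between builds with standard CCache commands; a quick sketch:

ccache -z    # zero the statistics counters (the cache itself is kept)
ccache -s    # after the next build, this shows hit/miss counts for just that build
ccache -C    # optional: wipe the entire cache to measure a cold build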
The processing of CocoaPods
If your project does not use CocoaPods for package management, the integration is now complete and you can skip the following steps.
Because CocoaPods builds third-party libraries into a separate static library (or dynamic frameworks, if use_frameworks! is specified), the Enable Modules option also has to be turned off for the Pods project that CocoaPods generates. However, CocoaPods regenerates the Pods project every time pod install or pod update runs, so if you change Enable Modules directly in Xcode, the change will be overwritten the next time. Instead, add the following code to the Podfile so that the generated project has Enable Modules turned off and the CC setting added; otherwise the Pods cannot be accelerated by CCache at compile time:
post_install do |installer_representation|
  installer_representation.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      # Turn off Enable Modules
      config.build_settings['CLANG_ENABLE_MODULES'] = 'NO'
      # Point the C compiler at the ccache-clang script in the project root
      config.build_settings['CC'] = '$(PODS_ROOT)/../ccache-clang'
    end
  end
end
Note that if a Pod references a system framework, for example AFNetworking, which references SystemConfiguration, you need to link that framework in your project's Build Phases -> Link Binary With Libraries, otherwise you may get an error such as "Undefined symbols for architecture xxx" when compiling. It feels a bit like going back to primitive times, but it's an acceptable price to pay for the huge improvement in compilation speed.
Integration Troubleshooting
Pay attention to the log file and the output of ccache -s. If you see "unsupported compiler option -fmodules" in the log, Enable Modules has not been turned off somewhere; go back through the previous steps and check. For other problems, see the Troubleshooting section of the official documentation.
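When digging into the log, something along these lines can help narrow down which invocations are falling back to the real compiler; the log path comes from the ccache-clang script above, and the exact log wording may vary between CCache versions:

grep -i "fmodules" ~/Desktop/ccache.log | head   # confirm whether modules flags are the culprit
ccache -s   # the "unsupported compiler option" counter should drop once modules are off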
Further optimization
Remove Precompiled Header File
The contents of the PCH are effectively included in every source file, and CCache looks up its cache using an MD4 digest of each file's contents. So if you change the PCH, or any header it imports, the entire cache is invalidated and everything has to be recompiled. Because the cache has to be populated, a fully uncached build with CCache takes longer than a normal build, almost twice as long for Beitchat's project. Therefore, if the PCH or the headers it imports change frequently, the cache will miss frequently, and in that case you are better off not using CCache at all.
To avoid this, I recommend importing as few headers as possible in the PCH, keeping only rarely changed system framework and third-party headers. Better still, remove the PCH entirely: Apple no longer recommends using one, and new Xcode projects don't include a PCH by default.
Share the cache folder within the team
I tried this optimization, but the results were not good, so I didn't adopt it. The official CCache documentation includes a section on sharing the cache folder, which describes how to modify CCache's configuration so the build cache can be shared between multiple computers. In theory, a file compiled by one person can be downloaded directly by everyone else, saving the whole team time. Since Buck has a similar mechanism, I thought it was worth trying, so I set up an OwnCloud share on the company intranet and had everyone sync the CCache cache directory on their machines. The experiment technically worked, but the practical effect was poor: keeping a multi-gigabyte cache directory in sync across several computers requires a lot of background file comparison and transfer, and running that alongside compilation consumes so many resources that it actually slows the build down. After removing the PCH, the cache hit rate was already quite good and there was no need to share the cache to push it higher, so I eventually gave up on the idea. If you're still not satisfied with your cache hit rate, this direction may be worth exploring.
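For reference, the setup I experimented with essentially boils down to pointing each machine's cache directory at the synced folder; the path below is made up, and the export would go into the ccache-clang wrapper so that Xcode builds pick it up:

# Hypothetical shared cache location inside the synced OwnCloud folder
export CCACHE_DIR="$HOME/ownCloud/shared-ccache"
ccache -M 10G   # run once per machine to give the shared cache a generous size limit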
Conclusion
By integrating CCache, our project's packaging time in Xcode (Product -> Archive from the menu) has dropped from 11-12 minutes to about 130 seconds, roughly a fivefold improvement, and the results are impressive. The integration itself was very simple; it took me about two hours from start to finish. If you're also troubled by long compile times, give it a try.