Understand what AOP is

AOP is aspect-oriented programming. Through compile-time preprocessing or runtime dynamic proxying, it applies unified handling of a cross-cutting concern across modules without intruding on each module's business code.

You can think of OOP as handling logic vertically, along the subclass-superclass axis, whereas AOP handles it horizontally: the targets need not be a series of classes related by inheritance, as long as a cut point can be found. The core of the concept is not intruding on each module's business code; otherwise, adding unified logic to a base class could also count as an aspect, but that is generally considered an inherited feature of OOP rather than AOP.

AOP is an idea, not tied to any language or framework; anything that meets the concept above counts as AOP, such as the familiar JDK dynamic proxy: Proxy.newProxyInstance + InvocationHandler.
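As a minimal pure-JVM sketch of that pattern (the Greeter interface and helper below are invented for the example), one InvocationHandler applies the same logic to every call on an interface without the interface knowing about it:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

class ProxyDemo {
    // A hypothetical business interface; the aspect below never mentions it by name
    public interface Greeter {
        String greet(String name);
    }

    // Wraps any Greeter so every call is logged -- the "aspect"
    public static Greeter withCallLog(Greeter target, List<String> log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.add("before " + method.getName());
            Object result = method.invoke(target, args);
            log.add("after " + method.getName());
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                handler);
    }
}
```

The business implementation stays untouched; the cross-cutting logging lives entirely in the handler.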

// Centralize the default image display for all XML defined imageViews in an Activity
fun replaceAllImage(context: Context) {
    try {
        val layoutInflater = LayoutInflater.from(context)
        val mFactory2: Field = LayoutInflater::class.java.getDeclaredField("mFactory2")
        mFactory2.isAccessible = true
        val oldField = mFactory2.get(layoutInflater)
        val hookFactory2 = Proxy.newProxyInstance(context.javaClass.classLoader,
                arrayOf<Class<*>>(LayoutInflater.Factory2::class.java)) { _, method, args ->
            val result = method.invoke(oldField, *args)
            if (result is ImageView) {
                result.setImageResource(R.drawable.immersive)
            }
            return@newProxyInstance result
        }
        mFactory2.set(layoutInflater, hookFactory2)
    } catch (exception: Exception) {
        ToastUtils.showShortToast("Hook failure: $exception")
    }
}

Another example is ActivityLifecycleCallbacks registered in the Application:

// Centrally listen for all Activity creation
registerActivityLifecycleCallbacks(object : ActivityLifecycleCallbacks {
    override fun onActivityCreated(activity: Activity, savedInstanceState: Bundle?) {
        Log.d("LifecycleCallbacks", "${activity.componentName}.onActivityCreated")
        replaceAllImage(activity)
    }
    // remaining ActivityLifecycleCallbacks overrides omitted for brevity
})

The two snippets above add a default image to every ImageView in every Activity layout without touching any Activity code. Although they rely only on APIs built into the SDK rather than a sophisticated framework, they still fall under the AOP umbrella.

Common AOP tools

There are two kinds of AOP

  1. Compile-time preprocessing, which handles code during compilation, before it ever runs:
    • APT
    • Transform
    • AspectJ
  2. Runtime processing, such as dynamic proxies, various hooks at runtime, and so on:
    • Cglib + DexMaker
    • Dexposed
    • Xposed
    • ADocker

Let's take a look at how they work; for the sake of space, we'll only cover the three compile-time approaches.

APT

APT (Annotation Processing Tool) is used to find all classes/methods/parameters annotated with a given annotation and perform some operation on them, typically generating code into a package that the business code then reaches via reflection.

  1. Strictly speaking, APT is not true AOP by the definition above, because it intrudes into business code: annotations must be added manually at every location;
  2. More loosely, it can still be considered AOP: adding an annotation to a business class can be read as manually declaring an extra aspect.
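For reference, the Router annotation consumed by the processor below might be declared roughly like this (a sketch; the original article does not show the declaration):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// SOURCE retention is enough here: the processor reads the annotation at
// compile time, so it does not need to survive into the .class file.
// (If runtime reflection also needed it, RUNTIME would be required instead.)
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.TYPE)
@interface Router {
}
```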

A simple example

  1. Inherit AbstractProcessor and override the process method to find and cache all classes annotated with a particular annotation;
  2. Output a class file through processingEnv.filer.
@AutoService(Processor::class)
class RouterProcessor : AbstractProcessor() {
    private var generateContent = ""

    override fun process(typeElementSet: MutableSet<out TypeElement>, roundEnvironment: RoundEnvironment): Boolean {
        if (roundEnvironment.processingOver()) {
            // Step 2: generate the file
            generateFile()
        } else {
            // Step 1: iterate over the annotated classes and cache them
            for (typeElement in typeElementSet) {
                val elements = roundEnvironment.getElementsAnnotatedWith(typeElement) ?: continue
                for (element in elements) {
                    if (element is Symbol.ClassSymbol) {
                        generateContent += element.fullname.toString() + "|"
                    }
                }
            }
        }
        return false
    }

    private fun generateFile() {
        try {
            val source = processingEnv.filer.createSourceFile(ROUTER_CLASS_NAME)
            val writer: Writer = source.openWriter()
            writer.write(
                """
                package com.youcii.advanced;

                /**
                 * Created by APT on ${Date()}.
                 */
                public class RouteList {
                    public static final String $ROUTER_FIELD_NAME = "$generateContent";
                }
                """
            )
            writer.flush()
            writer.close()
        } catch (ignore: IOException) {
            print("Write failed: $ignore")
        }
    }

    override fun getSupportedSourceVersion(): SourceVersion {
        return SourceVersion.latestSupported()
    }

    override fun getSupportedAnnotationTypes(): MutableSet<String> {
        return LinkedHashSet<String>().apply {
            add(Router::class.java.canonicalName)
        }
    }

    companion object {
        /** Fully qualified name of the generated class */
        const val ROUTER_CLASS_NAME = "com.youcii.advanced.RouteList"

        /** Name of the data-storage field inside the generated class */
        const val ROUTER_FIELD_NAME = "list"
    }
}

Alternatively, the JavaPoet library can be used to generate class files, for example if you want to generate the class:

package com.youcii.advanced;

import java.lang.String;

/**
 * Created by APT on Fri Feb 19 14:58:56 CST 2021.
 */
public class RouteList {
  /**
   * Stores the Router list, separated by |
   */
  public static final String list = "xxx";
}

Using the JavaPoet library is written as:

    private fun generateFileWithJavaPoet() {
        val listField = FieldSpec.builder(String::class.java, ROUTER_FIELD_NAME)
            .addModifiers(Modifier.PUBLIC, Modifier.STATIC, Modifier.FINAL)
            .addJavadoc("Stores the Router list, separated by |\n")
            .initializer("\"$generateContent\"")
            .build()
        val resultClass = TypeSpec.classBuilder("RouteList")
            .addModifiers(Modifier.PUBLIC)
            .addJavadoc("Created by APT on ${Date()}.\n")
            .addField(listField)
            .build()
        val javaFile = JavaFile.builder("com.youcii.advanced", resultClass)
            .build()

        try {
            val source = processingEnv.filer.createSourceFile(ROUTER_CLASS_NAME)
            val writer: Writer = source.openWriter()
            javaFile.writeTo(writer)
            writer.flush()
            writer.close()
        } catch (ignore: IOException) {
            print("Write failed: $ignore")
        }
    }

Personally, I find JavaPoet tedious to write and hard to read; I would rather write the file once by hand and reuse it as a template. It can also only generate Java files, not Kotlin.

How it works

Why is it enough to define a Processor and some configuration for Gradle to pick it up automatically?

APT is an application of SPI (Service Provider Interface), a mechanism for dynamically providing service implementations. Its core class is ServiceLoader, which is available in any Java project. The flow is: at compile time, javac calls ServiceLoader.load(XXX.class), which searches the resources/META-INF/services directory for a file named after XXX's fully qualified name, reflectively instantiates every implementation class declared inside that file, and then invokes the interface methods of XXX on each of them.
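The ServiceLoader half of this is plain JDK and easy to poke at; with no META-INF/services entry on the classpath for an interface, the loader simply yields no providers (a sketch with a made-up Plugin interface):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

class SpiDemo {
    // A made-up service interface with no registered providers
    public interface Plugin {
        String name();
    }

    public static List<String> loadPluginNames() {
        List<String> names = new ArrayList<>();
        // Scans the classpath for META-INF/services/<Plugin's fully qualified name>;
        // each line of that file names an implementation class to instantiate
        for (Plugin p : ServiceLoader.load(Plugin.class)) {
            names.add(p.name());
        }
        return names;
    }
}
```

Adding a provider is purely a matter of dropping the registration file onto the classpath; no code in SpiDemo changes.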

For APT, as a specific application of SPI, these steps are performed in a Gradle task named kaptKotlin.

Multi-round processing mechanism

The annotation-processing tool runs the process method multiple times on the same Processor object, with the RoundEnvironment specifying the elements to handle in each round. For example, the first round passes in all elements to be examined in the module, while the last round passes in nothing and signals completion through processingOver().

AutoService

This library provides the @AutoService(Processor::class) annotation, which avoids configuring the resources/META-INF/services file by hand.

It is itself implemented with APT: it iterates over all classes annotated with @AutoService, automatically generates the appropriately named file under build/resources/main/META-INF/services/, and writes the annotated classes into it.
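Concretely, for the RouterProcessor shown earlier, the registration AutoService generates is a one-line file (path and content sketched here, assuming the processor's package name):

```text
# build/resources/main/META-INF/services/javax.annotation.processing.Processor
com.youcii.advanced.RouterProcessor
```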



Why is AutoService's own processing already done by the time our custom processor needs to be discovered? Is APT executed multiple times? In fact, Gradle compiles in dependency order: AutoService, as a third-party dependency, is compiled first, and the Processor it registers is then invoked during the kapt task of the project's own module.

Gradle Transform API

Transform is an API provided by the Android Gradle plugin (android-build-tools). It is used to modify .class files before they are compiled into dex; it runs between the compile task and the D8 task.

  1. Transform is not strictly required: you could find the task that compiles class files to dex and insert a custom task before it, but using Transform is easier.
  2. A Transform can also rewrite the bytecode stream directly, so class-modification tools such as ASM and Javassist are not mandatory, but they make the job far easier. Since ASM is much more powerful than Javassist and can meet almost any requirement, ASM is generally used.

A simple example

  1. Write a custom Gradle plugin: create the transform in buildSrc and register it.
class TransformPlugin : Plugin<Project> {
    override fun apply(target: Project) {
        val baseExtension = target.extensions.findByType(BaseExtension::class.java)
        baseExtension?.registerTransform(TestNonIncrementTransform())
    }
}
  2. Create a Transform subclass, e.g. abstract class BaseTransform : Transform(), and override the following methods:
    /** The name of the current Transform, as shown in the task list */
    override fun getName(): String {
        return javaClass.name
    }

    /** Incremental compilation is not supported */
    override fun isIncremental(): Boolean {
        return false
    }

    /**
     * Filter dimension 1: the input types.
     * CLASSES   -- code
     * RESOURCES -- neither code nor res resources of the Android project, but resources in the asset directory
     * Only these two types are exposed to us; the others (DEX, NATIVE_LIBS, CLASSES_ENHANCED,
     * DATA_BINDING, DEX_ARCHIVE, DATA_BINDING_BASE_CLASS_LOG) are available only inside the Android plugin.
     */
    override fun getInputTypes(): MutableSet<QualifiedContent.ContentType> {
        return TransformManager.CONTENT_CLASS
    }

    /**
     * Filter dimension 2: the scope of .class files to process.
     * For read-only access, return an empty set here and specify the scopes to read
     * in getReferencedScopes instead. The public scopes are:
     * PROJECT, SUB_PROJECTS, EXTERNAL_LIBRARIES, TESTED_CODE (code being tested by the
     * current variant, including dependencies) and PROVIDED_ONLY (provided-only local or
     * remote dependencies). InternalScope additionally defines MAIN_SPLIT (classes.dex
     * of the main split APK in InstantRun mode), LOCAL_DEPS (local jars, library plugin
     * only) and FEATURES (dynamic-feature modules).
     */
    override fun getScopes(): MutableSet<in QualifiedContent.Scope> {
        return TransformManager.SCOPE_FULL_PROJECT
    }

The key method: transform

    /**
     * 1. If input from getInputs() is consumed, the transform must output it for the next stage.
     * 2. If you don't want to make any changes, specify the objects to read in
     *    getReferencedScopes and return an empty set from getScopes.
     * 3. Whether the build is incremental is determined by transformInvocation.isIncremental();
     *    if isIncremental == false, Input#getStatus() may be inaccurate.
     */
    @Throws(TransformException::class, InterruptedException::class, IOException::class)
    final override fun transform(transformInvocation: TransformInvocation) {
        super.transform(transformInvocation)
        // In a non-incremental build all previous output must be cleared first,
        // otherwise transformDexArchiveWithDexMergerForDebug fails
        if (!transformInvocation.isIncremental) {
            transformInvocation.outputProvider.deleteAll()
        }
        val outputProvider = transformInvocation.outputProvider
        transformInvocation.inputs.forEach { input ->
            input.jarInputs.forEach { jarInput ->
                handleJarInput(jarInput)
                val dest = outputProvider.getContentLocation(jarInput.file.absolutePath, jarInput.contentTypes, jarInput.scopes, Format.JAR)
                FileUtils.copyFile(jarInput.file, dest)
            }
            input.directoryInputs.forEach { directoryInput ->
                handleDirectoryInput(directoryInput.file)
                val dest = outputProvider.getContentLocation(directoryInput.name, directoryInput.contentTypes, directoryInput.scopes, Format.DIRECTORY)
                FileUtils.copyDirectory(directoryInput.file, dest)
            }
        }
    }
  3. Traverse the classes in directories and jar packages and hand them to handleFileBytes for processing.
    /**
     * Two approaches:
     * 1. Unzip, modify, and re-zip
     * 2. Iterate the JarFile directly, write to a new file, then replace the original jar
     */
    final override fun handleJarInput(jarInput: JarInput) {
        val oldPath = jarInput.file.absolutePath
        val oldJarFile = JarFile(jarInput.file)

        val newPath = oldPath.substring(0, oldPath.lastIndexOf(".")) + ".bak"
        val newFile = File(newPath)
        val newJarOutputStream = JarOutputStream(FileOutputStream(newFile))

        oldJarFile.entries().iterator().forEach {
            newJarOutputStream.putNextEntry(ZipEntry(it.name))
            val inputStream = oldJarFile.getInputStream(it)
            // Modify the logic
            if (it.name.startsWith("com")) {
                val oldBytes = IOUtils.readBytes(inputStream)
                newJarOutputStream.write(handleFileBytes(oldBytes))
            }
            // Copy other entries through unchanged
            else {
                IOUtils.copy(inputStream, newJarOutputStream)
            }
            newJarOutputStream.closeEntry()
            inputStream.close()
        }

        newJarOutputStream.close()
        oldJarFile.close()

        jarInput.file.delete()
        newFile.renameTo(jarInput.file)
    }

    /**
     * For directory classes the new bytes can be written straight back to the original file.
     * Note: recurse down to files; do not process a directory path directly.
     */
    final override fun handleDirectoryInput(inputFile: File) {
        if (inputFile.isDirectory) {
            inputFile.listFiles()?.forEach { handleDirectoryInput(it) }
        } else if (inputFile.absolutePath.contains("com/youcii")) {
            val inputStream = FileInputStream(inputFile)
            val oldBytes = IOUtils.readBytes(inputStream)
            inputStream.close()

            val newBytes = handleFileBytes(oldBytes)
            // Note!!! Instantiating FileOutputStream clears the original file's content!!!
            val outputStream = FileOutputStream(inputFile)
            outputStream.write(newBytes)
            outputStream.close()
        }
    }
  4. Use ASM to process the class bytes:
    fun handleFileBytes(oldBytes: ByteArray): ByteArray {
        return try {
            val classReader = ClassReader(oldBytes)
            val classWriter = ClassWriter(ClassWriter.COMPUTE_MAXS)
            val classVisitor = getClassVisitor(classWriter)
            // The second parameter of accept takes parsing flags, not an ASM api version
            classReader.accept(classVisitor, ClassReader.EXPAND_FRAMES)
            classWriter.toByteArray()
        } catch (e: ArrayIndexOutOfBoundsException) {
            oldBytes
        } catch (e: IllegalArgumentException) {
            oldBytes
        }
    }

    abstract fun getClassVisitor(classWriter: ClassWriter): ClassVisitor

ASM

ASM is a bytecode-manipulation tool that processes all elements within a class file in visitor style. To learn more, see: The most accessible ASM tutorial ever
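ASM itself is a third-party dependency, so the sketch below uses no ASM types, but it mirrors the delegation chain its visitors follow: each visitor wraps the next, and a rewriting visitor changes events before forwarding them to the writer at the end of the chain (all names here are invented for the illustration):

```java
import java.util.ArrayList;
import java.util.List;

class VisitorChainDemo {
    // Mirrors the shape of ASM's ClassVisitor: every event is forwarded to a delegate
    public static class MethodNameVisitor {
        protected final MethodNameVisitor next;
        public MethodNameVisitor(MethodNameVisitor next) { this.next = next; }
        public void visitMethod(String name) {
            if (next != null) next.visitMethod(name);
        }
    }

    // A "writer" at the end of the chain that records what it receives
    public static class Collector extends MethodNameVisitor {
        public final List<String> seen = new ArrayList<>();
        public Collector() { super(null); }
        @Override public void visitMethod(String name) { seen.add(name); }
    }

    // An aspect-like visitor that rewrites events before forwarding them
    public static class PrefixingVisitor extends MethodNameVisitor {
        public PrefixingVisitor(MethodNameVisitor next) { super(next); }
        @Override public void visitMethod(String name) {
            super.visitMethod("traced$" + name);
        }
    }
}
```

In real ASM, ClassReader plays the event source, ClassWriter plays the Collector, and our ClassVisitor subclass sits in the middle exactly like PrefixingVisitor.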

ASM Bytecode Outline plug-in

Using this plugin, you can inspect bytecode and generate the corresponding ASM code directly.

How is ASM introduced?

Like the Transform API, ASM is brought in transitively by com.android.tools.build:gradle, so it does not need to be added separately. For example, with Android Gradle plugin 3.3.2 the bundled ASM version is 6.0. To pin a specific version, exclude the transitive one:

implementation 'org.ow2.asm:asm:7.0'
compileOnly('com.android.tools.build:gradle:3.3.2') {
    exclude group: 'org.ow2.asm'
}

It is important to note that each ASM version supports class files only up to a certain JDK version; with Java 8 in heavy use, the minimum is ASM 5.0. Otherwise ASM throws ArrayIndexOutOfBoundsException or IllegalArgumentException when instantiating ClassReader.

ASM version    Highest supported JDK version
5.0 - 5.2      8
6.0            9
6.1            10
6.2            11
6.2.1 - 7.0    12
7.1            13
7.2            14
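The compatibility limit behind these errors is the class-file major version stored in the header of every .class file (52 = Java 8, 53 = Java 9, and so on), which ASM checks when parsing; reading it needs only the stdlib (a sketch):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

class ClassFileVersion {
    /** Reads the major version from a class-file header (52 = Java 8, 53 = Java 9, ...) */
    public static int majorVersion(InputStream classBytes) {
        try (DataInputStream in = new DataInputStream(classBytes)) {
            // Every class file starts with the magic number 0xCAFEBABE
            if (in.readInt() != 0xCAFEBABE) {
                throw new IllegalArgumentException("not a class file");
            }
            in.readUnsignedShort(); // minor version
            return in.readUnsignedShort(); // major version
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A too-old ASM release rejects class files whose major version exceeds what it knows, which is exactly the exception pattern described above.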

Three optimization ideas

The example above is the simplest possible template and leaves room for further optimization. Transform optimization generally takes one of the following three directions.

Narrow the transform scope
  1. Use getInputTypes, getScopes and getReferencedScopes to control precisely what you care about;
  2. Pass a focus class/method list as configuration before the transform to narrow the processing scope further.

This kind of optimization is highly business-specific and not generalizable.
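That said, the mechanical part of a focus list is tiny; a sketch of a prefix-based filter (the FocusFilter class and the prefixes shown are invented for the example):

```java
import java.util.Arrays;
import java.util.List;

class FocusFilter {
    private final List<String> prefixes;

    public FocusFilter(String... prefixes) {
        this.prefixes = Arrays.asList(prefixes);
    }

    /** Only class files under one of the configured prefixes are worth transforming */
    public boolean shouldProcess(String classPath) {
        return classPath.endsWith(".class")
                && prefixes.stream().anyMatch(classPath::startsWith);
    }
}
```

Calling shouldProcess before reading any bytes lets the transform skip third-party classes entirely.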

Concurrent compiling

Each input in transformInvocation.inputs (its jarInputs and directoryInputs) can be processed concurrently in a thread pool, reducing overall execution time. The plugin SDK already provides a WaitableExecutor class that offers not only basic thread-pool functionality but also control over task-completion order.

    /** Thread pool for concurrent processing */
    private val waitableExecutor = WaitableExecutor.useGlobalSharedThreadPool()

    final override fun transform(transformInvocation: TransformInvocation) {
        super.transform(transformInvocation)
        transformInvocation.inputs.forEach { input ->
            input.jarInputs.forEach { jarInput ->
                waitableExecutor.execute { ... }
            }
            // Optionally wait here for the jar tasks to finish before starting the
            // directory tasks; skip this if jarInputs and directoryInputs are independent
            // (waitForTasksWithQuickFail can also be used here)
            // waitableExecutor.waitForAllTasks()
            input.directoryInputs.forEach { directoryInput ->
                waitableExecutor.execute { ... }
            }
        }
        // Make sure all tasks finish before the transform returns; passing true
        // means that if one task throws, the remaining tasks are terminated
        waitableExecutor.waitForTasksWithQuickFail<Any>(true)
    }
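WaitableExecutor is internal to the Android Gradle plugin and not available on a plain JVM classpath, so as a stdlib analogue, ExecutorService.invokeAll gives the same wait-for-all shape (a simplified sketch: unlike waitForTasksWithQuickFail(true), it does not cancel the remaining tasks on failure):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelTransformDemo {
    /** Runs all tasks in parallel and blocks until every one has finished */
    public static int runAll(List<Callable<Integer>> tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            int sum = 0;
            // invokeAll blocks until all tasks complete;
            // Future.get rethrows the first task exception encountered
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                sum += f.get();
            }
            return sum;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```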
Incremental compilation

Incremental compilation can significantly cut build time by skipping most unchanged jar and directory files. The core is to check each inputFile's modification status and handle each status differently.

  • NOTCHANGED: no processing or copying needed, since the cached output is still valid;
  • ADDED: process and copy normally;
  • REMOVED: the cached file under the outputProvider must be deleted;
  • CHANGED: REMOVED + ADDED: delete the cached file, then process and copy normally.
    final override fun transform(transformInvocation: TransformInvocation) {
        super.transform(transformInvocation)
        // In a non-incremental build all previous output must be cleared first,
        // otherwise transformDexArchiveWithDexMergerForDebug fails
        if (!transformInvocation.isIncremental) {
            transformInvocation.outputProvider.deleteAll()
        }
        val outputProvider = transformInvocation.outputProvider
        transformInvocation.inputs.forEach { input ->
            input.jarInputs.forEach { jarInput ->
                val dest = outputProvider.getContentLocation(jarInput.file.absolutePath, jarInput.contentTypes, jarInput.scopes, Format.JAR)
                // Determine whether to increment
                if (transformInvocation.isIncremental) {
                    handleIncrementalJarInput(jarInput, dest)
                } else {
                    handleNonIncrementalJarInput(jarInput, dest)
                }
            }
            input.directoryInputs.forEach { directoryInput ->
                val dest = outputProvider.getContentLocation(directoryInput.name, directoryInput.contentTypes, directoryInput.scopes, Format.DIRECTORY)
                // Determine whether to increment
                if (transformInvocation.isIncremental) {
                    handleIncrementalDirectoryInput(directoryInput, dest)
                } else {
                    handleNonIncrementalDirectoryInput(directoryInput.file)
                    FileUtils.copyDirectory(directoryInput.file, dest)
                }
            }
        }
    }
    
    /** Incrementally process a JarInput */
    private fun handleIncrementalJarInput(jarInput: JarInput, dest: File) {
        when (jarInput.status) {
            Status.NOTCHANGED -> {
            }
            Status.ADDED -> {
                handleNonIncrementalJarInput(jarInput, dest)
            }
            Status.REMOVED -> {
                if (dest.exists()) {
                    FileUtils.forceDelete(dest)
                }
            }
            Status.CHANGED -> {
                if (dest.exists()) {
                    FileUtils.forceDelete(dest)
                }
                handleNonIncrementalJarInput(jarInput, dest)
            }
        }
    }
    
    /** Incrementally process directory class changes */
    private fun handleIncrementalDirectoryInput(directoryInput: DirectoryInput, dest: File) {
        val srcDirPath = directoryInput.file.absolutePath
        val destDirPath = dest.absolutePath
        directoryInput.changedFiles.forEach { (inputFile, status) ->
            val destFilePath = inputFile.absolutePath.replace(srcDirPath, destDirPath)
            val destFile = File(destFilePath)
            when (status) {
                Status.NOTCHANGED -> {
                }
                Status.ADDED -> {
                    handleNonIncrementalDirectoryInput(inputFile)
                    FileUtils.copyFile(inputFile, destFile)
                }
                Status.REMOVED -> {
                    if (destFile.exists()) {
                        FileUtils.forceDelete(destFile)
                    }
                }
                Status.CHANGED -> {
                    if (destFile.exists()) {
                        FileUtils.forceDelete(destFile)
                    }
                    handleNonIncrementalDirectoryInput(inputFile)
                    FileUtils.copyFile(inputFile, destFile)
                }
            }
        }
    }

Test results show a speed-up, and it is most pronounced on the second build, when the incremental path actually runs:

  1. Non-incremental transform:
    • The first compile time after clean is 1m 35s
    • The second compilation time is 1m 13s
  2. Incremental transform:
    • The first compile time after clean is 1m 57s
    • The second compilation time is 56s

Pitfalls

I hit a lot of pitfalls while writing the demo; most of them are probably quite basic, but they still took a while to figure out.

Gradle plugin registration is strongly tied to the order in which the plugins are applied
  1. If our own plugin is applied after the Android application plugin, that is
    apply plugin: 'com.android.application'
    apply plugin: 'xxx'

    then the plugin can be written as follows, because by the time xxx is applied, com.android.application is already present:

    override fun apply(target: Project) {
        val baseExtension = target.extensions.findByType(BaseExtension::class.java)
        baseExtension?.registerTransform(TestNonIncrementTransform())
    }
  2. If our plugin is applied first, that is
    apply plugin: 'xxx'
    apply plugin: 'com.android.application'

    then the plugin must be written as follows; otherwise the BaseExtension lookup returns null, because com.android.application had not yet been applied when xxx was applied:

    override fun apply(target: Project) {
        target.afterEvaluate {
            val baseExtension = it.extensions.findByType(BaseExtension::class.java)
            baseExtension?.registerTransform(TestNonIncrementTransform())
        }
    }
  3. What if our plugin is applied in the project-level build.gradle? afterEvaluate is also required, because in that case the xxx plugin is applied before the application plugin, just as when xxx is applied directly in the module before application.

In short, how the plugin must be written depends on the order in which the plugins are applied; any other combination fails.

Compilation fails: transformDexArchiveWithDexMergerForDebug

This happens when old output is not cleared: in a non-incremental build, all previous output must be deleted first.

    final override fun transform(transformInvocation: TransformInvocation) {
        super.transform(transformInvocation)
        if (!transformInvocation.isIncremental) {
            transformInvocation.outputProvider.deleteAll()
        }
        ...
    }
Failed to compile: Invalid empty classfile

This error means the new class has no content, caused by using the inputStream and outputStream in the wrong order when modifying the original file: instantiating a FileOutputStream clears the original file's content, so the data must be read before the stream is created.

// 1. Incorrect
val inputStream = FileInputStream(inputFile)
val outputStream = FileOutputStream(inputFile)
...

// 2. Correct
val inputStream = FileInputStream(inputFile)
val oldBytes = IOUtils.readBytes(inputStream)
...
val outputStream = FileOutputStream(inputFile)
outputStream.write(handleFileBytes(oldBytes))
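The truncation behavior itself is easy to verify with the stdlib alone: merely constructing a FileOutputStream empties the file before anything is written to it:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class TruncateDemo {
    /** Writes some bytes, then opens a FileOutputStream and returns the new length */
    public static long lengthAfterOpen() {
        try {
            Path file = Files.createTempFile("truncate", ".bin");
            Files.write(file, "old class bytes".getBytes());
            // The constructor alone truncates the file to zero length
            try (FileOutputStream ignored = new FileOutputStream(file.toFile())) {
                return Files.size(file);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```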

How to select?

Based on the above introduction, their core scenarios can be summarized as follows:

APT

APT's core capability is traversing all annotated elements and dynamically generating new classes for runtime use. That gives it the following limitations:

  1. It cannot modify existing code; it can only traverse elements and add new classes;
  2. The elements to traverse must either be newly written with annotations, or require manually adding annotations to historical code.
Transform

Transform can traverse all code and resources in the project and modify or add to them dynamically. It is all-purpose and can also cover APT's use cases.

The difficulty lies in two things:

  1. Finding the aspect, i.e. the common characteristics of the elements to process;
  2. ASM has a steep learning curve and demands solid knowledge of bytecode.

For the complete source code, see: github.com/YouCii/Adva…