preface
To be a good Android developer, you need a complete knowledge system. Let us grow together into the developers we aspire to be.
An awesome collection of advanced Android interview questions and answers (continuously updated…)
Summarized from dozens of top interview repositories and more than 300 high-quality interview write-ups into a comprehensive, systematic set of advanced Android interview questions.
Welcome to the 2020 senior Android big-company interview guide, escorting you through the peak hiring season and straight into an advanced Android position.
Source code of excellent third-party Android libraries
1. Which open source libraries are used in your project? How do they work?
I. Low-level networking framework: OkHttp implementation principles
What does this library do?
It is a network-layer library: a request client built on top of the HTTP protocol. Although it can also spawn threads, it is mainly responsible for making the actual request, with the same responsibilities as HttpClient and HttpUrlConnection. It encapsulates low-level network operations such as GET and POST requests.
Why use this library in your project?
- OkHttp provides support for the latest HTTP protocol versions HTTP/2 and SPDY, which enables all requests to the same host to share the same socket connection.
- If HTTP/2 and SPDY are not available, OkHttp uses connection pooling to reuse connections for efficiency.
- OkHttp provides default support for GZIP to reduce the size of the transferred content.
- OkHttp also provides a caching mechanism for HTTP responses to avoid unnecessary network requests.
- OkHttp automatically retries multiple IP addresses of a host when there is a network problem.
What are the uses of this library? What are the application scenarios?
GET and POST requests, uploading files, submitting forms, and so on.
What are the strengths and weaknesses of this library, compared to similar types of libraries?
- Pros: as listed above.
- Cons: you still need to add a wrapper layer of your own when using it.
What are the core implementation principles of this library? If you were asked to implement some of the library’s core features, how would you do it?
Request flow within OkHttp: when using OkHttp, each request first initializes a Call instance, then executes its execute() or enqueue() method; internally, both eventually reach getResponseWithInterceptorChain(). In that method, the response is obtained through a chain of responsibility composed of interceptors and delivered to the user, passing in order through the user-defined application interceptors, the retry/redirect interceptor, the bridge interceptor, the cache interceptor, the connect interceptor, the user-defined network interceptors, and the call-server interceptor. Besides the internal request flow, caching and connections are the other two important parts of OkHttp; understand these three points and you understand OkHttp.
The role of each interceptor:
- Interceptors: user-defined interceptors
- Retry and redirect (RetryAndFollowUpInterceptor): responsible for retrying failed requests and following redirects
- BridgeInterceptor: adds the necessary headers when sending the request and removes them from the response (for example, transparently handling gzip decompression)
- CacheInterceptor: responsible for returning the cache directly (based on the request and the cached response) and for updating the cache
- ConnectInterceptor: Establishes a connection with the server
ConnectionPool:
1. Checks whether the current connection is usable; if not, obtains a connection from the ConnectionPool.
2. Internally it is a Deque; connections are added via add(), and a thread pool periodically cleans up expired connections.
3. Connection reuse eliminates the need for repeated TCP/TLS handshakes.
- NetworkInterceptors: user-defined networkInterceptors
- CallServerInterceptor: Sends request data to the server and reads response data from the server
What valuable or useful design lessons have you learned from this library?
The chain-of-responsibility pattern is used to implement the layered design of interceptors; each interceptor corresponds to one function, which achieves full decoupling of functions and easy maintenance.
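The interceptor chain described above can be sketched in a few lines of plain Java. The Interceptor/Chain shape mirrors OkHttp's API, but the types (plain Strings standing in for Request/Response) and the two sample interceptors are simplified illustrations, not OkHttp code:

```java
import java.util.List;

public class InterceptorChainDemo {

    interface Interceptor {
        String intercept(Chain chain);
    }

    // Each Chain instance knows the current index; proceed() invokes the next
    // interceptor with a chain pointing one step further down the list.
    static class Chain {
        private final List<Interceptor> interceptors;
        private final int index;
        private final String request;

        Chain(List<Interceptor> interceptors, int index, String request) {
            this.interceptors = interceptors;
            this.index = index;
            this.request = request;
        }

        String request() { return request; }

        String proceed(String request) {
            if (index >= interceptors.size()) throw new AssertionError("no terminal interceptor");
            Chain next = new Chain(interceptors, index + 1, request);
            return interceptors.get(index).intercept(next);
        }
    }

    public static String execute(String request, List<Interceptor> interceptors) {
        return new Chain(interceptors, 0, request).proceed(request);
    }

    public static void main(String[] args) {
        // Plays the role of BridgeInterceptor: decorate the request on the way
        // in, post-process the response on the way out.
        Interceptor bridge = chain -> {
            String response = chain.proceed(chain.request() + "+headers");
            return response + "+unzipped";
        };
        // Terminal interceptor: plays the role of CallServerInterceptor.
        Interceptor callServer = chain -> "response(" + chain.request() + ")";

        System.out.println(execute("GET /", List.of(bridge, callServer)));
        // prints: response(GET /+headers)+unzipped
    }
}
```

Note how each interceptor can act both before and after `proceed()`, which is exactly what lets OkHttp layer retry, caching, and connection logic independently.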
Hand-write an interceptor?
What optimizations does OkHttp make at the network layer?
How does OkHttp handle network caching?
What is the relationship between HttpUrlConnection and OkHttp?
Volley vs. OkHttp:
Volley: supports HTTPS, caching, and asynchronous requests, but not synchronous requests. Protocol support is HTTP/1.0 and HTTP/1.1; networking is built on HttpUrlConnection/HttpClient, and data is read and written with blocking IO. OkHttp: supports HTTPS, caching, and both asynchronous and synchronous requests. Protocol support covers HTTP/1.0, HTTP/1.1, SPDY, HTTP/2, and WebSocket. Network transmission uses its own socket encapsulation, and data reading and writing use NIO (Okio). The SPDY protocol is similar to HTTP but is designed to reduce page load time and improve security; it does so through compression, multiplexing, and prioritization.
The subsystem hierarchy diagram of OkHttp is shown below:
- Network configuration layer: uses the Builder pattern to configure parameters such as timeouts and interceptors; OkHttp distributes these parameters to the subsystems that need them.
- Redirection layer: responsible for redirects.
- Header splicing layer: converts a user-constructed request into a request sent to the server, and converts the server's response into a user-friendly response.
- HTTP cache layer: reads and updates the cache.
- Connection layer: a relatively complex layer that implements network protocols, internal interceptors, security authentication, connections, and the connection pool. This layer does not yet initiate the real connection; it only processes the connection's parameters.
- Data response layer: responsible for reading the response data from the server.
There are several key roles to understand in OkHttp's overall design:
- OkHttpClient: the communication client, which uniformly manages initiating requests and parsing responses.
- Call: an interface that is an abstract description of an HTTP request; the concrete implementation is RealCall, created by a CallFactory.
- Request: encapsulates the concrete request information, such as the URL and headers.
- RequestBody: the body of a request, e.g. a form.
- Response: the response information of an HTTP request, such as the response headers.
- ResponseBody: the response body of an HTTP request; it is closed after being read once, which is why calling responseBody.string() repeatedly to obtain the result returns an error.
- Interceptor: a request interceptor that intercepts and processes requests. Network requests, caching, transparent compression, and other functions are each implemented as an Interceptor: a typical chain-of-responsibility implementation.
- StreamAllocation: controls the allocation and release of resources for Connections and Streams.
- RouteSelector: selects routes and supports automatic reconnection.
- RouteDatabase: records the blacklist of routes that failed to connect.
How would you design your own network request framework?
Load a 10M image from the network. What do you care about?
How does HTTP know if a file is too large to transfer a response?
What is your understanding of WebSocket?
What is the difference between WebSocket and socket?
II. Network wrapper framework: Retrofit implementation principles
What does this library do?
Retrofit is a wrapper framework for RESTful HTTP web requests. Since version 2.0 it has had OkHttp built in: the former focuses on encapsulating the interface, the latter on efficient network requests.
Why use this library in your project?
1. Powerful functions:
- Support synchronous and asynchronous
- Supports multiple data parsing & serialization formats
- Support RxJava
2. Simple and easy to use:
- Configure network request parameters through annotations
- Use a number of design patterns to simplify use
3. Good scalability:
- Function modules are highly encapsulated
- Thorough decoupling, e.g. custom Converters
What are the uses of this library? What are the application scenarios?
Any networking scenario should prefer it, especially when the backend API follows the RESTful design style and RxJava is used in the project.
What are the strengths and weaknesses of this library, compared to similar types of libraries?
- Pros: as listed above.
- Cons: limited flexibility is the inevitable result of high encapsulation; if the server cannot provide a uniform API format, it becomes difficult to handle.
What are the core implementation principles of this library? If you were asked to implement some of the library’s core features, how would you do it?
Retrofit implements the interface methods in its create() method using the dynamic proxy pattern (the target object is accessed indirectly through a proxy object). This process builds a ServiceMethod object that derives the request URL from the method annotations, parameter types, and parameter annotations. Once everything is ready, the data is added to Retrofit's RequestBuilder. When we then initiate a network request, OkHttp is called to perform it; the OkHttp configuration, including the request method and URL, is assembled in the build() method of Retrofit's RequestBuilder, and the real network request is made.
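The create() flow above can be sketched with the JDK's own java.lang.reflect.Proxy. The @GET annotation and ApiService interface below are illustrative stand-ins for Retrofit's real types; the invocation handler simply builds the URL that a real implementation would hand to OkHttp:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Proxy;

public class RetrofitProxyDemo {

    // Hypothetical stand-in for Retrofit's @GET annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @interface GET { String value(); }

    // Hypothetical user-defined API interface.
    interface ApiService {
        @GET("/users") String listUsers();
    }

    @SuppressWarnings("unchecked")
    static <T> T create(Class<T> service, String baseUrl) {
        return (T) Proxy.newProxyInstance(
                service.getClassLoader(),
                new Class<?>[]{service},
                (proxy, method, args) -> {
                    // Like ServiceMethod: read the annotation to build the
                    // request URL from the method's metadata.
                    GET get = method.getAnnotation(GET.class);
                    String url = baseUrl + get.value();
                    // A real implementation would hand this to OkHttp; here we
                    // just return the URL that would be requested.
                    return "GET " + url;
                });
    }

    public static void main(String[] args) {
        ApiService api = create(ApiService.class, "https://example.com");
        System.out.println(api.listUsers()); // GET https://example.com/users
    }
}
```

The key point is that no class ever implements ApiService by hand: the proxy intercepts every call and turns annotations plus arguments into a request description.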
What valuable or useful design lessons have you learned from this library?
It internally uses an excellent architectural design and a large number of design patterns. After analyzing the latest version of the Retrofit source code and a number of good source-code analyses, I found that to truly understand Retrofit's core flow and design ideas, you first need some understanding of the nine design patterns Retrofit uses. Briefly:
1. Create a Retrofit instance:
- A Retrofit instance is created from the internal Builder class using the Builder pattern.
- The network request factory uses the factory method pattern.
2. Create a network request interface instance:
- First, the methods that create the network request interface instance and the network request parameter configuration are uniformly invoked using the facade pattern.
- Dynamic proxies are then used to dynamically create network request interface instances.
- Next, the serviceMethod object is created using the Builder pattern & singleton pattern.
- Furthermore, the policy pattern is used to configure network request parameters for serviceMethod objects, that is, to obtain the corresponding network URL, network request executor, network request adapter and data converter from Retrofit objects by parsing the parameters, return values and annotation types of network request interface methods.
- Finally, the decorator pattern (ExecutorCallbackCall) is used to add thread switching to the serviceMethod object, so that after data is received on a background thread the callback is switched to the main thread via a Handler.
3. Send network request:
- In an asynchronous request, each parameter in the network request interface method is resolved by its corresponding ParameterHandler (a form of static proxy/delegation).
4. Parse the data.
5. Thread switching:
- The adapter pattern is used to detect different platforms using different callback actuators, and then switch threads using the callback actuators, again using the decorator pattern.
6. Process the results.
Android: A comparison of major Open Source libraries for Web Requests (Android-Async-HTTP, Volley, OkHttp, Retrofit)
www.jianshu.com/p/050c6db5a…
III. Reactive programming framework: RxJava implementation principles
RxJava transformation operators: map, flatMap, concatMap, buffer?
- map: [data type conversion] converts each event emitted by the Observable into an event of another type.
- flatMap: [flattens nested loops and nested interface calls] splits and transforms the sequence of events emitted by the Observable into new sub-sequences, merges them, and finally emits the result.
- concatMap: [ordered] differs from flatMap in that the sequence produced by splitting and re-merging keeps the same order as the original sequence emitted by the Observable.
- buffer: periodically collects a fixed number of events emitted by the Observable into a buffer, then packages and emits the batch.
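Since RxJava itself is a third-party dependency, the map versus flatMap semantics can be illustrated with java.util.stream, whose operators have the same transformation meaning (what a synchronous stream cannot show is that RxJava's flatMap merges asynchronous inner Observables without ordering guarantees, while concatMap preserves order):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MapFlatMapDemo {

    // map: one event in, one event out (Integer -> String).
    static List<String> mapDemo(List<Integer> source) {
        return source.stream()
                .map(i -> "event-" + i)
                .collect(Collectors.toList());
    }

    // flatMap: one event in, a sub-sequence out, then flattened into one sequence.
    static List<Integer> flatMapDemo(List<Integer> source) {
        return source.stream()
                .flatMap(i -> Stream.of(i, i * 10))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(mapDemo(List.of(1, 2, 3)));     // [event-1, event-2, event-3]
        System.out.println(flatMapDemo(List.of(1, 2, 3))); // [1, 10, 2, 20, 3, 30]
    }
}
```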
RxJava map and flatMap operator differences and underlying implementation
Hand-write RxJava code to traverse an array.
How do you think RxJava's thread pool differs from a task-management framework you would implement yourself?
IV. Image loading framework: Glide implementation principles
What does this library do?
Glide is an image loading library for Android.
Why use this library in your project?
1. Diversified media loading: it not only caches images but also supports GIF, WebP, thumbnails, and even video.
2. Lifecycle binding: the lifecycle of image loads can be managed dynamically.
3. Efficient cache strategy: memory and disk caches. Picasso only caches the original-size image, while Glide caches multiple sizes, i.e. the cached size is based on your ImageView's size.
4. Low memory overhead: the default Bitmap format is RGB_565, while Picasso's default is ARGB_8888, which uses twice the memory.
What are the uses of this library? What are the application scenarios?
1. Chained calls: Glide.with(this).load(imageUrl).override(800, 800).placeholder().error().animate().into().
2. Various media loading: asBitmap, asGif.
3. Lifecycle integration.
4. Configurable disk cache strategies: ALL, NONE, SOURCE, and RESULT.
What are the strengths and weaknesses of this library, compared to similar types of libraries?
- Pros: as above.
- Cons: the library is relatively large and the source code implementation is complex.
What are the core implementation principles of this library? If you were asked to implement some of the library’s core features, how would you do it?
- Glide#with:
1. Initializes configuration (caches, request thread pools, sizes, image formats, etc.) and the Glide object.
2. Binds the Glide request to the lifecycle of the Activity/SupportFragment by attaching an invisible fragment.
- Glide#load:
Sets the request URL and records that the URL has been set.
- Glide#into:
1. Builds the corresponding target: BitmapImageViewTarget or DrawableImageViewTarget.
2. Recursively builds thumbnail requests; if there is no thumbnail request, issues the normal request directly.
3. If no width/height are specified, they are computed from the ImageView's dimensions, and finally Engine#load() is executed in onSizeReady().
Engine is a class that loads and manages cached resources
- Normal level 3 cache flow: strong reference -> soft reference -> hard disk cache
When the app wants to load an image, it first looks it up in the LruCache and uses it if found. If it is not in the LruCache, it looks in the SoftReference cache (SoftReferences suit caches because they are reclaimed only when memory is tight, whereas WeakReferences are reclaimed on every System.gc(); when LruCache storage is tight, it moves the least recently used entries into SoftReferences). If found there, the image is used and put back into the LruCache. If it is not in the SoftReference cache either, it is looked up in the disk cache; a disk hit is also added to the LruCache. If nothing is found anywhere, the image is downloaded, saved to the disk cache, and then placed in the LruCache.
- Glide’s three-tier caching mechanism:
Glide cache mechanism is roughly divided into three layers: memory cache, weak reference cache, and disk cache.
The order of fetching is: memory, weak reference, disk.
The order of storage is: weak reference, memory, disk.
The three-tier storage mechanism is implemented in Engine. So what is Engine? The Engine layer is responsible for managing the memory cache at load time. It holds the MemoryCache and a Map&lt;Key, WeakReference&lt;EngineResource&lt;?&gt;&gt;&gt;. Images are loaded through load(), which runs the memory-cache logic before and after the load. If the resource is not in the memory cache, the EngineJob layer fetches the disk or network resource asynchronously; an EngineJob is like an asynchronous thread or an Observable. Engine is globally unique and is obtained via Glide.getEngine().
When an image resource is requested: if it is in the LruCache, it is removed from the LruCache, put into activeResources, and returned. The activeResources map holds the resources currently in use, as weak references, and each resource keeps an internal reference count. When a resource has no remaining references, it is moved back into the LruCache and cleared from activeResources. If it is not in the LruCache, it is looked up in activeResources and its reference count is incremented. If it is in neither, an asynchronous resource request (network/DiskLruCache) is made; once the request succeeds, the resource is placed in the DiskLruCache and activeResources.
The core idea of Glide source code mechanism:
Use a weak-reference map, activeResources, to hold the resources currently in use; the LruCache does not contain resources in use. Each resource has an internal counter indicating whether it is still referenced. What is the advantage of separating in-use resources from unused ones? When the LruCache needs to evict an entry, it calls the resource's recycle() method, and the comment on that method says it may only be called when no consumer references the resource. Why? What recycle(), defined on Glide's Resource, does is put the unused resource (say a bitmap or drawable) into a BitmapPool, a library for bitmap reuse: bitmaps from the pool are reused during transforms, which avoids recreating bitmaps and reduces memory overhead. Since pooled bitmaps are reused, you must guarantee that when a resource is recycled (i.e. when recycle() is called) there really are no external references to it. This is why Glide spends so much logic keeping resources in the LruCache free of external references.
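The interplay between activeResources and the LruCache can be sketched as follows. The class and field names mirror Glide's concepts, but the Resource type and the load/release logic are bare illustrations of the reference-counting idea, not Glide source:

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class EngineCacheDemo {

    static class Resource {
        final String data;
        int acquired; // internal reference count
        Resource(String data) { this.data = data; }
    }

    // Resources in use: weak references, so they never block GC by themselves.
    private final Map<String, WeakReference<Resource>> activeResources = new HashMap<>();
    // Unused resources only; access-ordered map stands in for the LruCache.
    private final LinkedHashMap<String, Resource> lruCache = new LinkedHashMap<>(16, 0.75f, true);

    Resource load(String key) {
        // 1. Check active resources first and bump the reference count.
        WeakReference<Resource> ref = activeResources.get(key);
        Resource active = ref == null ? null : ref.get();
        if (active != null) { active.acquired++; return active; }
        // 2. On an LruCache hit, promote the resource into activeResources.
        Resource cached = lruCache.remove(key);
        if (cached != null) {
            cached.acquired = 1;
            activeResources.put(key, new WeakReference<>(cached));
            return cached;
        }
        // 3. Miss: decode/fetch (stand-in), then track as active.
        Resource fresh = new Resource("decoded:" + key);
        fresh.acquired = 1;
        activeResources.put(key, new WeakReference<>(fresh));
        return fresh;
    }

    // When the last consumer releases the resource, demote it to the LruCache.
    void release(String key, Resource res) {
        if (--res.acquired == 0) {
            activeResources.remove(key);
            lruCache.put(key, res);
        }
    }

    boolean isActive(String key) {
        WeakReference<Resource> r = activeResources.get(key);
        return r != null && r.get() != null;
    }

    boolean isInLru(String key) { return lruCache.containsKey(key); }
}
```

The invariant this maintains is exactly the one described above: a resource is in the LruCache only while nothing references it, so evicting (and recycling into a pool) is always safe.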
What valuable or useful design lessons have you learned from this library?
Glide’s efficient three-tier caching mechanism is shown above.
How does Glide make sure the image is loaded?
What cache does Glide use?
How is Glide's memory cache size controlled?
Calculate the size of a picture
Image memory is calculated as: image width × image height × bytes per pixel. Therefore, when calculating an image's memory footprint, you must consider the drawable directory the image is placed in and the density of the device: these two factors affect the decoded width and height, because Android scales the image up or down accordingly.
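A worked example of the formula, assuming the usual density scaling (targetDensity / inDensity) applied to width and height, and ARGB_8888 at 4 bytes per pixel; the density values below are illustrative:

```java
public class BitmapMemoryDemo {

    // memory = scaledWidth * scaledHeight * bytesPerPixel, where the scaling
    // factor comes from the drawable directory's density vs the device density.
    static long bitmapBytes(int width, int height, int bytesPerPixel,
                            int inDensity, int targetDensity) {
        long scaledW = Math.round(width * (double) targetDensity / inDensity);
        long scaledH = Math.round(height * (double) targetDensity / inDensity);
        return scaledW * scaledH * bytesPerPixel;
    }

    public static void main(String[] args) {
        // 1080x1920 image in drawable-xxhdpi (480 dpi) on a 480 dpi device,
        // decoded as ARGB_8888 (4 bytes per pixel):
        System.out.println(bitmapBytes(1080, 1920, 4, 480, 480)); // 8294400 (~7.9 MB)
        // Same image placed in drawable-hdpi (240 dpi): both dimensions double,
        // so memory is quadrupled:
        System.out.println(bitmapBytes(1080, 1920, 4, 240, 480)); // 33177600
    }
}
```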
Loading bitmap process (how to ensure no memory overflow)
Since Android imposes memory limits, loading large images of several megabytes can run out of memory. A Bitmap loads all of the image's pixels (width × height) into memory; if the resolution is too large this directly causes an OOM. The fix is to configure BitmapFactory.Options when decoding so that fewer pixels are loaded.
BitmapFactory.Options:
(1) Set Options.inPreferredConfig to reduce memory consumption.
For example, changing the default ARGB_8888 to RGB_565 saves half the memory.
(2) Set Options.inSampleSize to scale down large images.
(3) Set Options.inPurgeable and inInputShareable so the system can reclaim memory in time.
A: inPurgeable: if true, the pixel memory can be reclaimed when system memory is insufficient; if false, it cannot. B: inInputShareable: whether to make a deep copy; it is used together with inPurgeable, and is ignored if inPurgeable is false.
(4) Use decodeStream instead of decodeResource, etc.
The application scenarios of soft and weak references in Android.
Java reference type classification:
In Android development, to prevent memory overflow, apply soft references and weak references where possible when handling objects that occupy a lot of memory and have long lifecycles.
- 1. Soft/weak references can be used together with a ReferenceQueue. If the object referenced by a soft/weak reference is collected by the garbage collector, the Java virtual machine adds the reference to its associated ReferenceQueue. Using this queue, you can obtain the soft/weak references whose objects have been reclaimed and clear those invalid references out of the cache.
- 2. If you only want to avoid OOM exceptions, you can use soft references. If you are more concerned about the performance of your application and want to reclaim some memory-consuming objects as quickly as possible, you can use weak references.
- 3. You can choose soft or weak references based on how often the object is used. If the object is likely to be used frequently, use soft references. If it is more likely that the object will not be used, use weak references.
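The ReferenceQueue mechanism from point 1 can be demonstrated with plain JDK classes. Since GC timing is nondeterministic, this sketch clears and enqueues the reference by hand to show the cleanup loop a cache would run:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class ReferenceQueueDemo {

    // Returns true if the cleared reference shows up on its queue.
    static boolean drainOnce() {
        ReferenceQueue<byte[]> queue = new ReferenceQueue<>();
        WeakReference<byte[]> ref = new WeakReference<>(new byte[1024], queue);

        // Simulate what the collector does: clear the referent and enqueue the
        // reference object onto the registered queue.
        ref.clear();
        ref.enqueue();

        // Cache-cleanup loop: drain the queue and drop the dead entries.
        Reference<? extends byte[]> dead = queue.poll();
        return dead == ref && ref.get() == null;
    }

    public static void main(String[] args) {
        System.out.println(drainOnce()); // true
    }
}
```

In a real cache the drain loop runs periodically (or on each put), removing map entries whose references appear on the queue.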
How to implement memory cache and disk cache in Android.
The memory cache is implemented with LruCache and the disk cache with DiskLruCache. Both classes are based on the LRU algorithm and LinkedHashMap.
The LRU algorithm can be described in one sentence, as follows:
LRU is the abbreviation of Least Recently Used. As the name suggests, its core principle is: if a piece of data has not been used recently, the probability that it will be accessed in the future is very low, so such entries are evicted first.
LruCache principle
Before LruCache, in-memory caching was commonly done with soft or weak references. Starting with Android 2.3 (API level 9), however, the garbage collector became more aggressive about reclaiming softly/weakly referenced objects, making them unreliable for caching.
In fact, an LRU cache behaves like a special stack: accessed elements are placed at the top (if the element is already on the stack, it is moved to the top); if the number of elements exceeds the limit, the bottom element (the least recently used one) is removed.
It contains a LinkedHashMap and a maxSize; recently used objects are stored in the LinkedHashMap with strong references, and put and get methods are provided. On every put, the total size of all cached entries is recalculated and compared with maxSize: if it exceeds maxSize, the oldest entries are removed; otherwise the new entry is added.
The principle of LruCache is to use a LinkedHashMap to hold strong references to objects and evict them according to the LRU algorithm. Specifically, assume we insert at the tail of the list and delete from the head: an accessed item that already exists in the list is moved to the tail, otherwise a new item is created at the tail. When the list exceeds capacity, the entries at the head are removed.
LruCache maintains a LinkedHashMap sorted in access order. When put() is called, the element is added to the collection and trimToSize() is called to check whether the cache is full; if so, the LinkedHashMap's iterator is used to remove the head element, i.e. the least recently accessed one. When get() is called to access a cached object, the LinkedHashMap's get() is called to retrieve the element and move it to the tail of the queue.
Core logic of the LruCache put method
After a cache object is added, the trimToSize() method is called to determine whether the cache is full, and if so, the least recently used objects are removed. trimToSize() continuously removes elements from the head of the LinkedHashMap (the least recently accessed ones) until the cache size is below the maximum (maxSize).
Core logic of the LruCache get method
When LruCache's get() method is called to retrieve a cached object, it counts as an access to that element, and the queue is updated so that it stays sorted in access order.
Why choose LinkedHashMap?
This is due to the nature of LinkedHashMap. The constructor of LinkedHashMap has a Boolean parameter accessOrder. When this is true, LinkedHashMap sorts elements in the order they were accessed, otherwise they are sorted in the order they were inserted.
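The described LinkedHashMap behavior is enough to build a minimal LRU cache: accessOrder = true keeps entries in access order, and overriding removeEldestEntry evicts the head (least recently used) entry when capacity is exceeded. This is a sketch of the idea; Android's real LruCache adds size accounting (sizeOf) and synchronization on top:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLruCache(int maxEntries) {
        // accessOrder = true: iteration order is least- to most-recently accessed.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    // Called by LinkedHashMap after every put; returning true evicts the head.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        SimpleLruCache<String, String> cache = new SimpleLruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a" so "b" becomes least recently used
        cache.put("c", "3"); // exceeds capacity: evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```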
Principle of LinkedHashMap
LinkedHashMap is almost identical to HashMap; technically, the difference is that it defines a header Entry&lt;K,V&gt; that is not stored in the table but kept separately. LinkedHashMap implements insertion-order or access-order sorting by extending HashMap's Entry&lt;K,V&gt; with two fields, before and after, which together with the header form a doubly linked list.
DiskLruCache principle
DiskLruCache is similar to LruCache except that a journal file is added to manage disk files, as shown below:
libcore.io.DiskLruCache
1
1
1
DIRTY 1517126350519
CLEAN 1517126350519 5325928
REMOVE 1517126350519
Note: the cache directory is the application cache directory /data/data/packagename/cache. On a non-rooted phone you can enter that directory, or copy the whole directory out, with the following commands:
// enter the /data/data/packagename/cache directory
adb shell run-as com.your.packagename
// or back up the whole app data directory
adb backup -noapk com.your.packagename
Let’s analyze the contents of this file:
Line 1: libcore.io.DiskLruCache, a fixed magic string. Line 2: 1, the version number of the DiskLruCache source. Line 3: 1, the version number of the app, passed in through the open() method. Line 4: 1, how many files each key corresponds to, generally 1. Line 5: a blank line. Line 6 and subsequent lines: cache operation records. There are three things to know about the operation records:
DIRTY indicates that an entry is being written. A write has two outcomes: if it succeeds, a CLEAN record follows; if it fails, a REMOVE record is added. Note that an entry with only a lone DIRTY record is invalid. A REMOVE record is also written when the remove(key) method is called manually. READ records a read. CLEAN additionally records the length of the file; since each key may correspond to multiple files, there may be multiple numbers.
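A small sketch of interpreting these operation records. The line format (STATE key [lengths]) follows the example journal above; the describe() helper is illustrative, not DiskLruCache code:

```java
public class JournalLineDemo {

    // Translate one journal operation record into a human-readable description.
    static String describe(String line) {
        String[] parts = line.split(" ");
        switch (parts[0]) {
            case "DIRTY":  return "entry " + parts[1] + " is being written";
            case "CLEAN":  return "entry " + parts[1] + " committed, size " + parts[2];
            case "REMOVE": return "entry " + parts[1] + " deleted";
            case "READ":   return "entry " + parts[1] + " was read";
            default:       return "unknown record";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("DIRTY 1517126350519"));
        System.out.println(describe("CLEAN 1517126350519 5325928"));
        System.out.println(describe("REMOVE 1517126350519"));
    }
}
```

Replaying the journal this way is how DiskLruCache rebuilds its in-memory entry table on open: DIRTY without a following CLEAN marks an invalid entry to discard.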
Bitmap compression policy
How to load Bitmap:
The four decode methods of BitmapFactory:
- decodeFile (from the file system)
- decodeResource (from resources)
- decodeStream (from an input stream)
- decodeByteArray (from a byte array)
BitmapFactory.Options parameters:
- inSampleSize: the sampling rate, which scales down the image's width and height (usually by a power of 2). The scaling ratios are computed from the image's actual width/height versus the required width/height; the smaller of the two ratios should be used, to avoid the scaled image being too small to fill the target view and having to be stretched, which causes blur.
- inJustDecodeBounds: obtains the image's width and height so that inSampleSize can be chosen. Loading the image with inJustDecodeBounds = true parses only the width and height without actually loading the pixels, so the operation is lightweight. After obtaining the dimensions and computing the scaling ratio, reload the image with inJustDecodeBounds = false to load the scaled image.
The process for loading bitmaps efficiently:
- 1. Set BitmapFactory.Options.inJustDecodeBounds to true and load the image.
- 2. Read the image's original width and height from BitmapFactory.Options (the outWidth and outHeight fields).
- 3. Calculate the sampling rate inSampleSize according to the sampling rule and the size of the target view.
- 4. Set BitmapFactory.Options.inJustDecodeBounds to false and reload the image.
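Step 3 above reduces to a pure calculation. The halving loop below keeps inSampleSize a power of two and matches the "smallest ratio" rule described earlier; the BitmapFactory calls themselves are omitted since they require Android:

```java
public class SampleSizeDemo {

    // outWidth/outHeight come from the inJustDecodeBounds = true pass;
    // reqWidth/reqHeight are the target view's dimensions.
    static int calculateInSampleSize(int outWidth, int outHeight,
                                     int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (outWidth > reqWidth || outHeight > reqHeight) {
            int halfW = outWidth / 2;
            int halfH = outHeight / 2;
            // Double the sample size while both halved dimensions still cover
            // the requested size, so the result never undershoots the view.
            while ((halfW / inSampleSize) >= reqWidth
                    && (halfH / inSampleSize) >= reqHeight) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // 2048x1536 source decoded for a 512x384 view -> sample size 4
        System.out.println(calculateInSampleSize(2048, 1536, 512, 384)); // 4
        // Source already smaller than the view -> no scaling
        System.out.println(calculateInSampleSize(400, 300, 512, 384));   // 1
    }
}
```

With inSampleSize = 4, the decoded bitmap is 512x384, i.e. 1/16 of the original pixel count and memory.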
Bitmap processing:
BitmapFactory.Options is used to compress the image; an inSampleSize of n scales both width and height by 1/n (values are typically rounded to a power of 2).
BitMap caching:
1. Use LruCache to cache memory.
2. Use DiskLruCache to cache disks.
Implement an ImageLoader process
Synchronous asynchronous loading, image compression, memory disk cache, network pull
- 1. Synchronous loading creates only one thread and loads images in sequence.
- 2. Asynchronous loading uses a thread pool, so that concurrent load tasks run on different threads.
- 3. To avoid starting too many asynchronous tasks, image loading is enabled only while the list is still.
Specifically:
- 1. ImageLoader, as a singleton, provides a method to load an image into a given view: fetch the Bitmap directly from the memory cache, or if unavailable, use a ThreadPoolExecutor to run a Runnable task that loads it. When creating the ThreadPoolExecutor, specify the core pool size (CPU count + 1), the maximum pool size (CPU count × 2 + 1), and the idle keep-alive time (10 seconds); you can also pass a ThreadFactory parameter to create custom threads.
- 2. ImageLoader's loadBitmap first checks the LruCache; if that is empty, it loads from the disk cache; if the disk cache also misses, it fetches from the network, obtaining the Bitmap object directly from the network input stream via BitmapFactory's decodeStream.
- 3. LruCache is compatible back to version 2.2 via the support library. LruCache uses a LinkedHashMap to store the cached objects; to create one, simply provide the cache capacity and override the sizeOf method, which computes the size of each cached object. Sometimes the entryRemoved method also needs to be overridden to release resources.
- 4. Create the DiskLruCache with the open method, setting the cache path and capacity. Adding to the cache creates an output stream through an Editor object, downloads the resource into that stream, then commits (or aborts on failure), and finally flushes the disk cache. A cache lookup uses a Snapshot object to get the input stream, obtains a FileDescriptor, and parses the Bitmap object from that FileDescriptor.
- 5. When the list needs to load images: do not load while the list is being flung; start loading when scrolling stops.
How is Bitmap memory reused during decode, and when is it released?
Image loading library comparison
Stackoverflow.com/questions/2…
www.trinea.cn/android/and…
Fresco vs. Glide:
Glide: relatively lightweight, simple and elegant to use, supports GIF animations; suitable for apps that do not depend heavily on images. Fresco: uses anonymous shared memory (the native heap) to store images, which effectively avoids OOM. Fresco is powerful but large, suitable for apps that rely heavily on images.
The overall Fresco architecture is shown below:
- DraweeView: inherits from ImageView; it simply reads some attribute values from the XML file and does some initialization, while layer management and layer data acquisition are delegated to the Hierarchy.
- DraweeHierarchy: consists of multiple layers of Drawables, each providing some function (for example: scaling, rounded corners).
- DraweeController: controls data acquisition and image loading; it sends requests to the pipeline, receives the corresponding events and controls the Hierarchy accordingly, and receives user events from the DraweeView to perform operations such as cancelling network requests and releasing resources.
- DraweeHolder: coordinates the management of the Hierarchy and the Controller.
- ImagePipeline: the core Fresco module for fetching images in various ways (memory, disk, network, etc.).
- Producer/Consumer: there are many kinds of Producer, used to fetch network data, fetch cached data, decode images, and so on; the results a Producer produces are consumed by Consumers.
- IO/Data: the data layer, which implements memory caching, disk caching, networking, and other IO-related functions.
Throughout the Fresco architecture, DraweeView is the facade that interacts with the user, DraweeHierarchy is the view hierarchy that manages the layers, and DraweeController is the controller that manages the data. Together they form the troika of the Fresco framework, with our behind-the-scenes hero Producer doing all the heavy lifting. A great design 👍
Alongside the overall Fresco architecture, it helps to understand the key classes that play important roles in it, as follows:
- Supplier: provides an object of a specific type; many classes in Fresco that end in Supplier implement this interface.
- SimpleDraweeView: the familiar entry point; it takes a URL and calls the Controller to load the image. It inherits from GenericDraweeView, which in turn inherits from DraweeView, the top-level Fresco view class.
- PipelineDraweeController: responsible for image data acquisition and loading; it inherits from AbstractDraweeController and is constructed by PipelineDraweeControllerBuilder. AbstractDraweeController implements the DraweeController interface and is Fresco's data manager.
- GenericDraweeHierarchy: responsible for layer management on SimpleDraweeView. It consists of multiple layers of Drawables, each providing some function (e.g. scaling, rounded corners). It is built by GenericDraweeHierarchyBuilder; attributes such as placeholderImage, retryImage, failureImage, progressBarImage, background, overlays, and pressedStateOverlay, set in XML or in Java code, are passed to GenericDraweeHierarchy for processing.
- DraweeHolder: the Holder class associated with SimpleDraweeView; SimpleDraweeView is managed through it, and it also provides unified management of the Hierarchy and the Controller.
- DataSource: similar to a Future in Java, it represents the source of data; unlike a Future, it can deliver multiple results.
- DataSubscriber: receives the results returned by a DataSource.
- ImagePipeline: the interface for retrieving images.
- Producer: loads and processes images; it has many implementations, such as NetworkFetcherProducer, LocalAssetFetcherProducer, and LocalFileFetchProducer, whose names tell us what they do. Producers are built by the ProducerFactory class, and all Producers are nested like Java I/O streams, each producing only one result. This is a neat design 👍
- Consumer: receives the results produced by a Producer; together with Producer it forms the producer-consumer model.

Note: the class names in the Fresco source code are long, but they follow consistent naming patterns. For example, classes ending in Supplier implement the Supplier interface and provide an object of a certain type (a Factory, Generator, Builder, Closure, etc.), and a class ending in Builder is, of course, a class that creates objects in the builder pattern.
How do you load a large image (say, a 30MB image) with Bitmap, and how do you prevent OOM?
Blog.csdn.net/guolin_blog…
Blog.csdn.net/lmj62356579…
Use BitmapRegionDecoder to dynamically load the display area of an image.
Understanding Bitmap objects.
Understanding inBitmap.
If you were asked to implement an image-loading library, how would you do it? (Open for extension, closed for modification, while keeping modules independent; you can refer to the image-loading library case study in Android Source Code Design Patterns: Analysis and Practice.)
Design an image browser and talk through your thinking.
Five, EventBus framework: implementation principle of EventBus
Six, Memory leak detection framework: LeakCanary implementation principle
What does this library do?
Memory leak detection framework.
Why use this library in your project?
- Fully automated memory leak detection for the Android Activity component; in the latest version, the android.app.Fragment component also has automated leak detection.
- Easy to integrate and low cost to use.
- Friendly interface display and notification.
What are the uses of this library? What are the application scenarios?
Retrieve the global RefWatcher object from the Application and call refWatcher.watch(this) in a Fragment's (or other component's) destruction callback to check whether the object leaks.
What are the strengths and weaknesses of this library, compared to similar types of libraries?
The detection results are not perfectly accurate, because whether memory is released depends on the object's life cycle as well as on GC scheduling.
What are the core implementation principles of this library? If you were asked to implement some of the library’s core features, how would you do it?
It is mainly divided into 7 steps as follows:
- 1. RefWatcher.watch() creates a KeyedWeakReference to the watched object.
- 2. Then, on a background thread, it checks whether the reference has been cleared, triggering a GC if it has not.
- 3. If the reference is still not cleared, it dumps the heap into a .hprof file on the file system.
- 4. HeapAnalyzerService is started in a separate process, and HeapAnalyzer uses the HAHA open source library to parse the heap dump (a heap snapshot file).
- 5. From the heap dump, HeapAnalyzer finds the KeyedWeakReference by its unique reference key and locates the leaked reference.
- 6. To determine whether there is a leak, HeapAnalyzer computes the shortest strong-reference path to the GC roots and builds the reference chain that causes the leak.
- 7. The result is passed back to DisplayLeakService in the app process, and a leak notification is displayed.
To put it simply:
After an Activity executes onDestroy(), it is wrapped in a WeakReference that is associated with a ReferenceQueue. The ReferenceQueue is then used to check whether the object has been enqueued: if not, a GC is triggered and the check is repeated; if the object still has not been enqueued, a memory leak is assumed. Finally, the HAHA open source library is used to analyze the dumped heap memory (essentially, a HprofParser parses the heap snapshot file into a Snapshot).
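The detection core described above can be sketched in plain Java. The toy KeyedWeakReference below and the GC-then-recheck loop are simplified stand-ins for LeakCanary's real classes, not its actual implementation:

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class LeakSketch {

    // Simplified stand-in for LeakCanary's KeyedWeakReference:
    // a WeakReference tagged with a unique key.
    static class KeyedWeakReference extends WeakReference<Object> {
        final String key;
        KeyedWeakReference(Object referent, String key, ReferenceQueue<Object> queue) {
            super(referent, queue);
            this.key = key;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object watched = new Object();   // pretend this is a destroyed Activity
        KeyedWeakReference ref = new KeyedWeakReference(watched, "uuid-1", queue);

        watched = null;                  // drop the last strong reference

        // GC-then-recheck loop: if the referent is collected, the JVM
        // enqueues the weak reference on the associated ReferenceQueue.
        boolean collected = false;
        for (int i = 0; i < 10 && !collected; i++) {
            Runtime.getRuntime().gc();
            System.runFinalization();
            Thread.sleep(50);
            collected = queue.poll() != null || ref.get() == null;
        }
        System.out.println(collected ? "no leak" : "possible leak");
    }
}
```

If the object were still strongly referenced somewhere, the loop would end without the reference ever being enqueued, which is exactly the "possible leak" signal that triggers the heap dump.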
Flow chart:
Some core analysis points in source code analysis:
AndroidExcludedRefs: an enum class that declares known memory-leak cases in the Android SDK and in vendor-customized SDKs. As the name suggests, these cases are filtered out by LeakCanary.
buildAndInstall() (the install method): this method should be called only once.
DebuggerControl: checks whether the app is in debug mode; in debug mode, memory leak detection is skipped. Why? Because during debugging a reference may be retained by the debugger, which would produce false leak reports.
WatchExecutor: Thread controller that performs memory leak detection after onDestroy() and when the main thread is idle.
GcTrigger: used to trigger GC. The WatchExecutor first detects a possible memory leak, then triggers a GC; after the GC it checks again, and only if the object is still not released is it judged to be a memory leak.
GcTrigger's runGc() method: System.gc() is not used here, because System.gc() is not guaranteed to run every time. Instead, a piece of GC-triggering code copied from AOSP is used, which collects garbage more reliably than System.gc():

```java
Runtime.getRuntime().gc();
enqueueReferences();        // sleeps briefly on the thread so cleared references can be enqueued
System.runFinalization();
```
install ultimately calls the Application's registerActivityLifecycleCallbacks() method, which makes it possible to monitor the lifecycle events of each Activity.
A random UUID is used in RefWatcher#watch() to ensure that the key for each watched object is unique.
In KeyedWeakReference, key and name are used to identify a watched WeakReference object. In its constructor, the weak reference is associated with a ReferenceQueue: if the object held by the weak reference is reclaimed by the GC, the JVM adds the weak reference to its associated ReferenceQueue. That is, if the Activity object held by the KeyedWeakReference is collected by the GC, the reference is added to the referenceQueue.
The Android SDK API Debug.dumpHprofData() is used to generate the hprof file.
The heap analysis runs in the runAnalysis() method of HeapAnalyzerService (an IntentService run as a foreground service); HeapAnalyzerService is set up in a separate process to avoid slowing down the app process or taking up its memory.
What valuable or useful design lessons have you learned from this library?
How does LeakCanary determine whether an object has been reclaimed? How do you trigger a GC manually? How is it implemented at the C layer?
BlockCanary principle:
This component takes advantage of the main thread's message-queue processing mechanism: if the application freezes, there must be a time-consuming operation inside dispatchMessage. We set a Printer on the main thread's Looper and measure the execution time of the dispatchMessage method; if it exceeds a threshold, a jank has occurred, and various information is dumped for the developer to analyze the performance bottleneck.
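The Printer-timing idea can be sketched in pure Java. The Printer interface and the log strings below mimic android.util.Printer and Looper's real output, and the 50ms threshold is an arbitrary choice for illustration:

```java
public class BlockSketch {

    // Minimal stand-in for android.util.Printer.
    interface Printer { void println(String x); }

    // BlockCanary's core trick: Looper prints ">>>>> Dispatching ..."
    // before dispatchMessage and "<<<<< Finished ..." after it, so the
    // gap between the two prints is the dispatch cost.
    static class BlockDetector implements Printer {
        static final long THRESHOLD_MS = 50;   // illustrative threshold
        private long startMs;
        private boolean started;

        @Override public void println(String x) {
            if (!started) {
                started = true;
                startMs = System.currentTimeMillis();
            } else {
                started = false;
                long cost = System.currentTimeMillis() - startMs;
                if (cost > THRESHOLD_MS) {
                    System.out.println("main thread blocked for " + cost + "ms");
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockDetector detector = new BlockDetector();
        detector.println(">>>>> Dispatching to Handler { ... }");
        Thread.sleep(80);   // simulate a slow dispatchMessage
        detector.println("<<<<< Finished to Handler { ... }");
    }
}
```

In the real component the detector is installed with Looper.getMainLooper().setMessageLogging(printer), and on timeout it dumps the main thread's stack instead of just printing.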
Seven, Dependency injection framework: ButterKnife implementation principle
ButterKnife has little impact on performance because instead of reflection it uses the Annotation Processing Tool (APT), an annotation processor: a javac tool for scanning and processing Java annotations at compile time. It runs at compile time by reading Java source code, parsing the annotations, and generating new Java code; the newly generated Java code is then compiled into Java bytecode. An annotation processor cannot change the Java classes it reads in, for example it cannot add or delete Java methods.
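A rough sketch of what such generated code does is below. FakeActivity and the id value are hypothetical stand-ins; real ButterKnife emits a companion binding class containing plain findViewById calls along these lines, which is why no reflection happens at run time:

```java
public class AptSketch {

    // Hypothetical stand-in for an Activity with a view field to bind.
    static class FakeActivity {
        String titleView;
        String findViewById(int id) { return "view#" + id; }
    }

    // Sketch of what an APT-generated binding class does: the processor
    // emits direct findViewById calls at compile time, so binding costs
    // no reflection at run time.
    static void bind(FakeActivity target) {
        target.titleView = target.findViewById(42); // 42: illustrative view id
    }

    public static void main(String[] args) {
        FakeActivity activity = new FakeActivity();
        bind(activity);
        System.out.println(activity.titleView); // → view#42
    }
}
```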
The benefits of AOP and IoC and their application in Android development
Eight, Global dependency management framework: how Dagger2 is implemented
Nine, Database framework: GreenDao implementation principle
Database framework comparison?
Optimization of database
Database data migration problems
The data structure of a database index
Balanced binary trees
- 1. A non-leaf node may have at most two child nodes.
- 2. For each node, the values in the left subtree are smaller than the node's value and the values in the right subtree are larger (the value here is determined by the tree's own rules, e.g. a hash value).
- 3. The height difference between the left and right subtrees is at most 1.
Using a balanced binary tree guarantees that the height difference between the two subtrees never exceeds 1. This prevents the tree from degenerating into a linear linked list as items are inserted and deleted, and keeps lookup speed close to binary search as long as the data stays balanced.
At present, most database systems and file systems use B-tree or its variant B+Tree as the index structure.
B-Tree
A B-tree differs slightly from a balanced binary tree in that it is a multi-way tree, also known as a balanced multi-path search tree.
- 1. Sorting: all keys within a node are arranged in ascending order, following the smaller-on-the-left, larger-on-the-right principle.
- 2. Number of children: a non-leaf node has more than 1 and at most M children, with M >= 2, except in an empty tree (note: the order M is the maximum number of children of a node; when M = 2 it is a 2-way tree, and when M = 3 a 3-way tree).
- 3. Keys: the number of keys in a branch node is at least ceil(M/2) - 1 and at most M - 1 (note: ceil() rounds up toward infinity, e.g. ceil(1.1) is 2).
- 4. All leaf nodes are on the same level. Besides keys and pointers to key records, leaf nodes also carry child pointers, but those pointers are all null (corresponding to the empty children of the bottom-level nodes in the figure below).
Compared with a balanced binary tree, each B-tree node contains more keys; with more keys per node, the tree has fewer levels than the binary tree, which reduces the number of steps and the complexity of a lookup.
B+Tree
Rules:
- 1. Unlike a B-tree, the non-leaf nodes of a B+ tree do not store pointers to key records; they only index the data.
- 2. The leaf nodes of a B+ tree store pointers to all the key records, and every data address must be obtained from a leaf node, so every lookup takes the same number of steps.
- 3. The keys in the leaf nodes of a B+ tree are arranged in ascending order, and the last entry of each leaf holds a pointer to the first entry of the leaf on its right.
- 4. For non-leaf nodes, the number of children equals the number of keys (per Baidu Baike; according to other sources there is a second variant in which the number of keys equals the number of children minus 1, per Wikipedia. The two data layouts differ, but the principle is the same, and MySQL's B+ tree uses the first form).
Features:
1. Fewer levels: compared with a B-tree, each non-leaf node of a B+ tree stores more keys, so the tree has fewer levels and data queries are faster.
2. More stable query speed: all data addresses of a B+ tree live in the leaf nodes, so every lookup takes the same number of steps, making query speed more stable than in a B-tree.
3. Natural sorting: all the leaf nodes of a B+ tree form an ordered linked list, which makes range queries convenient; data density is higher and the cache hit ratio better than in a B-tree.
4. Faster full-tree traversal: traversing the whole B+ tree only requires walking the leaf nodes, rather than visiting every level as in a B-tree, which helps the database do full-table scans.
Compared with a B+ tree, the advantage of a B-tree is that when frequently accessed data is close to the root, retrieval can be faster, since B-tree non-leaf nodes carry the keys together with their data addresses.
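The leaf-level linked list is what makes B+ tree range queries cheap. A toy sketch (the Leaf class is a deliberately simplified stand-in that ignores internal nodes):

```java
import java.util.ArrayList;
import java.util.List;

public class BPlusLeafSketch {

    // Toy leaf node: sorted keys plus a pointer to the next leaf,
    // mirroring how B+ tree leaves form an ordered linked list.
    static class Leaf {
        final int[] keys;
        Leaf next;
        Leaf(int... keys) { this.keys = keys; }
    }

    // Range query: in a real B+ tree you descend once to the first
    // matching leaf, then simply walk the sibling pointers.
    static List<Integer> range(Leaf start, int lo, int hi) {
        List<Integer> out = new ArrayList<>();
        for (Leaf leaf = start; leaf != null; leaf = leaf.next) {
            for (int k : leaf.keys) {
                if (k > hi) return out;
                if (k >= lo) out.add(k);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Leaf l1 = new Leaf(1, 3, 5);
        Leaf l2 = new Leaf(7, 9, 11);
        l1.next = l2;
        System.out.println(range(l1, 4, 9)); // → [5, 7, 9]
    }
}
```

A B-tree has no such sibling chain, so the same range query would have to move up and down through internal nodes.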
B*Tree
B* trees are a variant of B+ trees, with the following differences:
- 1. The key-count limit differs: B+ tree nodes split at ceil(M/2) occupancy, while B* tree nodes split at ceil(2M/3).
- 2. When a B+ tree node is full, it splits; when a B* tree node is full, it first checks its sibling (each node has a pointer to its sibling). If the sibling is not full, some keys are moved over to it; if the sibling is also full, one third of the data from the current node and its sibling is taken out to create a new node.
Building on the B+ tree, the B* tree's higher initial occupancy gives better node-space utilization, and the sibling pointers and key-transfer mechanism mean that B* trees split less often.
Conclusion:
- 1. Same idea and strategy: from the balanced binary tree through the B, B+, and B* trees, all implement the same idea, using ordering plus a data-balancing strategy to speed up data lookup.
- 2. Different use of disk space: they evolved step by step around how disks read data (IO), each evolution making node space usage more reasonable and reducing tree height to achieve fast data lookup.
Balanced binary trees, B-trees, B+ trees, B* trees: understand one of them thoroughly and you will understand them all.
Hot repair, plug-in, modular, componentalization, Gradle, compilation and piling technology
1. Hot repair and plug-in
Android ClassLoader types & features
- BootStrap ClassLoader (Java) :
Used to load the Android Framework layer class file.
- PathClassLoader (Java App ClassLoader) :
Used to load class files in APKs installed on the system.
- DexClassLoader (Java Custom ClassLoader) :
Used to load a class file in a specified directory.
- BaseDexClassLoader:
The parent class of PathClassLoader and DexClassLoader.
How is hot patch technology implemented, and how is it different from plug-in?
Plug-in: Dynamic loading mainly solves three technical problems:
- 1. Use ClassLoader to load classes.
- 2. Resource access.
- 3. Lifecycle management.
Plug-ins embody the separation of features: a feature is extracted, developed, and tested independently, then plugged into the main application, reducing the main application's size.
Hot repair:
Cause: the DVM uses a short to store method IDs, so the number of methods in a dex file cannot exceed 65536.
Code hotfix principle:
- The compiled class file is split and packaged into two dex, bypassing the limit on the number of dex methods and the check during installation, and then dynamically loading the second dex file at run time.
- Hot fixes are embodied in bug fixes, which realize that known bugs can be fixed without the need to re-issue and re-install.
- Use PathClassLoader and DexClassLoader to load a class with the same name as the buggy class so that it replaces the buggy class, thereby fixing the bug. The principle is to prevent the class from being marked CLASS_ISPREVERIFIED when the app is packaged, and then, during hot repair, to dynamically modify the dexElements array indirectly referenced by the BaseDexClassLoader object so that the new class replaces the old one.
Similarities:
Both use ClassLoader to load new function classes, and both can use PathClassLoader and DexClassLoader.
Difference:
A hot fix replaces a buggy class with a new one, so the new class must be loaded instead of the buggy class. This requires two extra steps: while packaging the original APP, prevent the relevant classes from being marked CLASS_ISPREVERIFIED; and during hot repair, dynamically modify the dexElements array indirectly referenced by the BaseDexClassLoader object so that the new classes are found first and the system never loads the old buggy classes. Plug-ins only add new feature classes or resource files, so they do not need to load a new class ahead of an old one, and they avoid both the CLASS_ISPREVERIFIED handling and the dynamic modification of dexElements.
Therefore, plug-in development is easier than hot repair; hot repair is, on top of plug-in techniques, the replacement of old buggy classes.
Hot repair principles:
Resource repair:
Many hot repair frameworks use the principles of Instant Run for resource repair.
The traditional compilation and deployment process is as follows:
The process for compiling and deploying Instant Run is as follows:
- Hot Swap: Hot Swap is used when modifying code in an existing method.
- Warm Swap: A Warm Swap is used when modifying or deleting an existing resource file.
- Cold Swap: There are many situations, such as adding, removing, or modifying a field and method, adding a class, and so on.
Resource hot repair process in Instant Run:
- 1. Create a new AssetManager and call its addAssetPath method via reflection to load the external resources, so that the newly created AssetManager contains them.
- 2. Replace all references to AssetManager-typed mAssets fields with the newly created AssetManager.
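The reflective call in step 1 follows a standard pattern. FakeAssetManager below is a hypothetical stand-in for android.content.res.AssetManager, whose addAssetPath method is hidden in the SDK and must be invoked via reflection:

```java
import java.lang.reflect.Method;

public class AddAssetPathSketch {

    // Hypothetical stand-in for android.content.res.AssetManager,
    // whose addAssetPath method is hidden in the SDK.
    static class FakeAssetManager {
        String loadedPath;
        int addAssetPath(String path) { loadedPath = path; return 1; }
    }

    public static void main(String[] args) throws Exception {
        // Step 1 of the resource fix: create a fresh AssetManager and
        // invoke addAssetPath reflectively to load the external resources.
        Object assetManager = new FakeAssetManager();
        Method addAssetPath = assetManager.getClass()
                .getDeclaredMethod("addAssetPath", String.class);
        addAssetPath.setAccessible(true);
        int cookie = (int) addAssetPath.invoke(assetManager, "/sdcard/patch.apk");
        System.out.println(cookie > 0 ? "external resources loaded" : "load failed");
    }
}
```

addAssetPath returns a non-zero "cookie" on success, which is why fix frameworks check the return value before swapping in the new AssetManager.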
Code fixes:
1. Class loading scheme:
65536 restrictions:
The main reason for the 65536 limit is the DVM bytecode: the invoke-kind method-invocation instructions of the DVM instruction set use a 16-bit method index, so at most 65536 methods can be referenced.
LinearAlloc restrictions:
- The LinearAlloc in DVM is a fixed cache and an error is reported when the number of methods exceeds the size of the cache.
The dex subpackaging scheme splits the application code into multiple dex files at packaging time: the classes that must be used at startup, along with the classes they directly reference, go into the primary dex, while other code goes into secondary dex files. When the application starts, the primary dex is loaded first, and the secondary dex files are loaded dynamically afterwards, which alleviates both the primary dex's 65536 limit and the LinearAlloc limit.
Loading process:
- Following the dex file search process, we fix the buggy class Key.class, package it into a patch package patch.jar containing patch.dex, and place that dex in the first Element of the dexElements array. The fixed Key.class in patch.dex is then found first and replaces the buggy one; since a class that has already been loaded will not be loaded again, the buggy Key.class in the original dex is never loaded.
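The element-prepending step can be sketched without Android classes. In a real fix the merged array is written back into DexPathList's dexElements field by reflection; the mergeElements helper here is illustrative, with strings standing in for the real Element objects:

```java
import java.lang.reflect.Array;
import java.util.Arrays;

public class DexPatchSketch {

    // Mimics how class-loading fixes prepend the patch dex: the merged
    // array puts patch elements first, so findClass hits them first.
    static Object[] mergeElements(Object[] patchElements, Object[] oldElements) {
        Object[] merged = (Object[]) Array.newInstance(
                oldElements.getClass().getComponentType(),
                patchElements.length + oldElements.length);
        System.arraycopy(patchElements, 0, merged, 0, patchElements.length);
        System.arraycopy(oldElements, 0, merged, patchElements.length, oldElements.length);
        return merged;
    }

    public static void main(String[] args) {
        // Strings stand in for the real dalvik.system.DexPathList$Element objects.
        String[] patch = {"patch.dex"};
        String[] old = {"classes.dex", "classes2.dex"};
        System.out.println(Arrays.toString(mergeElements(patch, old)));
        // → [patch.dex, classes.dex, classes2.dex]
    }
}
```

Because the class loader scans this array front to back, whichever dex holds the patched class first wins.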
The class loading solution requires the App to restart and then the ClassLoader to reload the new class. Why do you need to restart?
- This is because classes cannot be uninstalled, and reloading a new class requires a restart of the App, so a hot fix framework using class loading is not immediately effective.
The implementation details of each hot repair framework differ:
- Qzone’s super patch and Nuwa are prioritised by placing the patch pack in the first Element of the Element array as described above.
- WeChat's Tinker diffs the old and new APKs to get patch.dex, then merges patch.dex with the classes.dex of the APK on the phone to generate a new classes.dex, which is then placed, via reflection at run time, in the first element of the dexElements array.
- Ele. me Amigo takes Elements from each dex in the patch pack, then forms a new Element array, and at runtime replaces the existing Elements array with the new Elements array by reflection.
2. Low-level replacement scheme:
When we want to invoke the show method of Key via reflection, we call Key.class.getDeclaredMethod("show").invoke(Key.class.newInstance()). In the native layer, the Java method passed in corresponds to an ArtMethod pointer in the ART virtual machine; the ArtMethod structure contains all the information about the Java method, including the execution entry point, access permissions, declaring class, and code execution address.
Replace the fields in the ArtMethod structure or replace the entire ArtMethod structure, that’s the underlying substitution.
AndFix replaces fields in the ArtMethod structure, which can cause compatibility problems because vendors may modify the ArtMethod structure, causing method replacements to fail.
Sophix replaces the entire ArtMethod structure so there are no compatibility issues.
The underlying replacement replaces the method directly and takes effect immediately without a restart. The underlying replacement scheme is mainly ali system, including AndFix, Dexposed, Alibaichuan, Sophix.
3. Instant Run Scheme:
What is ASM?
ASM is a Java bytecode manipulation framework that can dynamically generate classes or enhance the functionality of existing classes. ASM can either generate class files directly or dynamically change the behavior of classes before they are loaded into the virtual machine.
When Instant Run first builds the APK, it uses ASM to inject logic of the following shape into every method: if the class's $change field is not null, call its accessDispatch method with the concrete method name and method parameters, and return. When the onCreate method of MainActivity is modified, a replacement class MainActivity$override implementing the IncrementalChange interface is generated, together with an AppPatchesLoaderImpl class whose getPatchedClasses method returns the list of modified classes (including MainActivity). Based on that list, MainActivity's $change is set to MainActivity$override, so the injected check finds $change non-null and executes MainActivity$override's accessDispatch method, in which the modified onCreate runs; the modification to onCreate thus takes effect.
Robust and Aceso are the hot repair frameworks that refer to the principle of Instant Run.
Dynamic link library fixes:
Reload so.
The load and loadLibrary methods of the System class are used to load a so; both ultimately call the nativeLoad method, which calls JavaVMExt's LoadNativeLibrary function to load the so.
So repair mainly has two schemes:
- 1. Insert the so patch at the front of the NativeLibraryElement array, so that the patch's path is returned and loaded first.
- 2. Call System.load to take over the loading entry of the so.
Why plug-in?
In traditional Android development, once application code is packaged into an APK and uploaded to the app markets, we can no longer modify the application's source code; we can only control reserved branch code in the application via the server. But most of the time we cannot predict requirements and unexpected situations, so we cannot reserve branch code in advance. This calls for dynamic loading technology: while the program is running, dynamically load executable files that are not part of the program and run the code logic in them. Executable files include dynamic link libraries (so) and dex-related files (dex and JAR/APK files containing dex). As application development technology and business gradually evolved, dynamic loading gave rise to two technologies: hot repair and plug-in. Hot repair is mainly used to fix bugs, while plug-in mainly addresses ever-growing application size and the decoupling of feature modules. Specifically, it addresses the following situations:
- 1. Complex business and module coupling: as the business grows more complex, the application gains more and more projects and feature modules; an application may be developed by dozens or even hundreds of people, with each project maintained by a different team. If the feature modules are tightly coupled, modifying one module affects the others, which greatly increases communication costs.
- 2. Access between applications: when one application needs to access others, for example Taobao directing traffic to other Taobao-family apps such as Fliggy Travel and Koubei Takeout, conventional techniques raise two problems: either multiple versions must be maintained, or a single application's size becomes very large.
- 3. The 65536 limit and large memory footprint.
The idea of plug-ins:
An installed application can be thought of as being made up of plug-ins, which can be plugged in and removed freely.
Definition of plug-in:
Plug-ins generally refer to processed files such as APK, so, and dex files. Plug-ins can be loaded by a host, and some plug-ins can also run independently as an APK.
The process of restructuring an application into plug-ins is called plug-in development.
Advantages of plug-in:
- Low coupling
- Access and maintenance between applications is easier, and each application team only needs to take care of its own part.
- The volume of the application and the main DEX will be correspondingly smaller, indirectly avoiding the 65536 limit.
- Only the Taobao client itself is loaded into memory at first; other plug-ins are loaded into memory only when they are used, reducing memory usage.
Comparison of plug-in frameworks:
- The earliest plug-in framework: Tu Yimin of Dianping launched AndroidDynamicLoader framework in 2012.
- At present, the mainstream plug-in schemes include Didi Ren Yugang’s VirtualApk, 360’s DroidPlugin, RePlugin and Wequick’s Small framework.
- RePlugin is recommended for loading a plug-in that needs no coupling or communication with the host (such as loading a third-party App); VirtualApk is recommended for other cases. Since VirtualApk is the preferred framework for loading coupled plug-ins, its source code is worth studying.
Principle of plug-in:
Activity plug-in:
There are three main implementations:
- Reflection: This has an impact on performance and is not used by mainstream plug-in frameworks.
- Interface: dynamic-load-APK used.
- Hook: Mainstream.
There are two Hook implementations: hooking IActivityManager and hooking Instrumentation. Both mainly use a stub Activity registered in AndroidManifest.xml as a placeholder; the stub passes AMS verification and is then swapped for the plug-in Activity at the right time.
Hook IActivityManager:
1. Use the stub to pass the check:
In the Android 7.0 and 8.0 source code, IActivityManager is obtained through the Singleton class, and this Singleton is static, so IActivityManager is a good hook point.
Next, define a proxy class IActivityManagerProxy to replace IActivityManager; since IActivityManager is an interface, a dynamic proxy is the natural choice here.
- Intercepts the startActivity method to retrieve the Intent object stored in the args parameter, which is the Intent intended to launch the plug-in TargetActivity.
- Create a new subIntent to start StubActivity and store the previously obtained TargetActivity Intent in the subIntent for later restoration of TargetActivity.
- Finally, assign a subIntent value to the args parameter, so that the target of the launch becomes StubActivity, which is used to pass the AMS verification.
Then replace the IActivityManager with the proxy class IActivityManagerProxy.
- On versions greater than or equal to 26, use reflection to obtain ActivityManager's IActivityManagerSingleton field; on lower versions, obtain ActivityManagerNative's gDefault field.
- Then retrieve the corresponding Singleton instance by reflection, and from the two fields above obtain the corresponding IActivityManager.
- Finally, dynamically create the proxy class IActivityManagerProxy with Proxy.newProxyInstance(), replacing IActivityManager with IActivityManagerProxy.
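The dynamic-proxy interception in these steps can be sketched with a minimal stand-in interface. ActivityStarter is hypothetical; the real hook proxies IActivityManager and rewrites the Intent inside the args array the same way:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class HookSketch {

    // Hypothetical minimal stand-in for the IActivityManager interface.
    interface ActivityStarter {
        String startActivity(String target);
    }

    public static void main(String[] args) {
        ActivityStarter real = target -> "started " + target;

        // Dynamic proxy: intercept startActivity and swap the plug-in
        // TargetActivity for the registered stub so "AMS" verification passes.
        ActivityStarter proxy = (ActivityStarter) Proxy.newProxyInstance(
                ActivityStarter.class.getClassLoader(),
                new Class<?>[]{ActivityStarter.class},
                (Object p, Method m, Object[] a) -> {
                    if ("startActivity".equals(m.getName())) {
                        a[0] = "StubActivity";   // the stub registered in the manifest
                    }
                    return m.invoke(real, a);
                });

        System.out.println(proxy.startActivity("TargetActivity"));
        // → started StubActivity
    }
}
```

In the real implementation the original TargetActivity Intent is stashed inside the stub Intent's extras so it can be restored later.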
2. Restore the plugin Activity:
- The stub Activity passed AMS verification, but what we need to start is the plug-in TargetActivity, so the stub SubActivity must be replaced with the plug-in TargetActivity; the replacement happens after step 2 in the figure.
- The handleMessage method overridden in ActivityThread's H class handles messages of type LAUNCH_ACTIVITY and eventually calls the Activity's onCreate method. If the Handler's mCallback is not null, the Callback's handleMessage method is executed first, so mCallback can serve as a hook point: we replace it with a custom Callback.
The custom Callback implements Handler.Callback and overrides handleMessage; on receiving a LAUNCH_ACTIVITY message, it replaces the Intent that starts SubActivity with the Intent that starts TargetActivity. Reflection is then used to replace the Handler's mCallback with the custom Callback. In use, the hook is installed in Application's attachBaseContext method.
3. Plug-in Activity lifecycle
- Communication between AMS and ActivityThread uses a token to identify an Activity, and subsequent Activity lifecycle handling also identifies the Activity by token. Because we swap the stub SubActivity for the plug-in TargetActivity before performLaunchActivity runs, the r.token in performLaunchActivity already refers to TargetActivity, so TargetActivity gets a normal lifecycle.
Hook Instrumentation:
The Hook Instrumentation implementation also needs a stub Activity. It differs from the IActivityManager hook in where the stub Activity replaces the plug-in Activity and where the plug-in Activity is restored.
Analysis: before an Activity passes AMS verification, the Activity's startActivityForResult method is called, which calls Instrumentation's execStartActivity method to start the Activity lifecycle. And ActivityThread's performLaunchActivity uses mInstrumentation's newActivity method, which internally creates the Activity instance with a class loader.
Solution: use the stub SubActivity to pass AMS verification in Instrumentation's execStartActivity method, and restore TargetActivity in Instrumentation's newActivity method. Both operations belong to Instrumentation, so we can replace mInstrumentation with a custom Instrumentation (an InstrumentationProxy). Specifically:
- First check whether TargetActivity is registered, and if not, save TargetActivity’s ClassName for later restoration. Then replace the TargetActivity you want to launch with StubActivity, and finally call the execStartActivity method through reflection so that you can use StubActivity to pass the AMS validation.
- Restore TargetActivity by creating the previously saved TargetActivity in the newActivity method. Finally use reflection to replace the mInstumentation with the InstrumentationProxy.
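The swap-and-restore pattern behind those two steps can be sketched in plain Java. `execStart` and `newActivity` below are illustrative stand-ins for the real `Instrumentation` methods (the names, the `?token=` encoding, and the class names are assumptions for the demo), showing how the real target class name is parked before AMS validation and recovered before instantiation:

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal sketch of the InstrumentationProxy idea; not the Android API. */
public class InstrumentationProxyDemo {
    static final String STUB = "com.host.StubActivity";   // registered stub (hypothetical name)
    private final Map<String, String> saved = new HashMap<>();
    private int requestId = 0;

    /** Mirrors execStartActivity: replace the unregistered target with the stub. */
    public String execStart(String targetClassName) {
        String key = "req-" + (requestId++);
        saved.put(key, targetClassName);                  // remember the real plug-in Activity
        return STUB + "?token=" + key;                    // what AMS gets to validate
    }

    /** Mirrors newActivity: restore the real class name before instantiation. */
    public String newActivity(String launchedName) {
        String key = launchedName.substring(launchedName.indexOf('=') + 1);
        return saved.getOrDefault(key, launchedName);
    }
}
```

The real proxy performs the same two-phase substitution on the Intent’s component and on the class name handed to the class loader.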
Resource plug-in:
AssetManager is the key to both resource plug-in loading and hotfix-style resource repair.
There are two main plug-in schemes for resources:
- 1. Merged-resources scheme: add all of the plug-in’s resources to the host’s Resources. With this scheme, the plug-in can access the host’s resources.
- 2. Standalone-resources scheme: each plug-in constructs its own Resources object. With this scheme, the plug-in cannot access the host’s resources.
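The visibility difference between the two schemes can be modeled with a tiny plain-Java sketch, where a `Map` stands in for a resource table (the real mechanism reflectively calls `AssetManager.addAssetPath` on the plug-in APK; the maps here are purely illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class PluginResourcesDemo {
    /** Scheme 1: merge the plug-in's entries into the host's table,
     *  so plug-in code can also resolve host resources. */
    static Map<String, String> merge(Map<String, String> host, Map<String, String> plugin) {
        Map<String, String> merged = new HashMap<>(host);
        merged.putAll(plugin);      // analogous to addAssetPath on the host AssetManager
        return merged;
    }

    /** Scheme 2: the plug-in builds its own standalone table; host entries are invisible. */
    static Map<String, String> standalone(Map<String, String> plugin) {
        return new HashMap<>(plugin);
    }
}
```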
Plug-in of SO:
The plug-in solution for SO libraries is similar to the first SO hotfix scheme: insert the plug-in’s SO into the NativeLibraryElement array and add the directory storing the plug-in SO to the nativeLibraryDirectories collection.
Plug-in loading mechanism scheme:
- 1. Hook ClassLoader
- 2. Delegate loading to the system’s ClassLoader.
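The delegation scheme relies on the JVM’s parent-delegation model: a custom loader that does not override `loadClass` lets its parent (the system ClassLoader) resolve everything it already knows, and only classes the parent cannot find fall through to `findClass`, where a plug-in framework would read them from the plug-in dex. A runnable sketch (the class name is illustrative; `findClass` is stubbed out):

```java
public class DelegatingLoaderDemo extends ClassLoader {
    public DelegatingLoaderDemo(ClassLoader parent) {
        super(parent);   // parent-first delegation is inherited from ClassLoader.loadClass
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // A real plug-in framework would define the class from the plug-in dex here.
        throw new ClassNotFoundException(name);
    }
}
```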
2. Modularization and componentization
Benefits of modularity
www.jianshu.com/p/376ea8a19…
Analyze the existing componentization schemes:
The componentization scheme of many large companies is based on a multi-project + multi-module structure (super apps such as WeChat and Meituan use a three-level structure of multi-project + multi-module + multi-page-project, a code-isolation mode with the page as the unit). Git submodules are used to create multiple sub-repositories to manage each module’s code; each module is packaged as an AAR and uploaded to a private Maven repository, and remote version-number dependencies isolate the code between modules.
Benefits of componentized development:
- Avoid reinventing the wheel, saving development and maintenance costs.
- Components and modules allow manpower to be allocated rationally along business lines, improving development efficiency.
- Different projects can share a single component or module, ensuring consistency of the overall technical solution.
- Lay the groundwork for future plug-ins that share the same underlying model.
Communication across components:
Cross-component communication scenarios:
- The first is page jumping between components (Activity to Activity, Fragment to Fragment, Activity to Fragment, Fragment to Activity) and the data passed during the jump (basic data types and serializable custom types).
- The second is the invocation of custom classes and custom methods between components that provide services externally.
Analysis of cross-component communication scheme:
- The first type, page jumps between components, needs little explanation: it is ARouter’s most basic capability, the API is straightforward, and corresponding APIs are provided for passing different data types during the jump.
- The second type of invocation of custom classes and custom methods between components is a bit more complicated and requires ARouter to work with the CommonService implementation in the architecture:
Business modules that provide services:
Declare a Service interface (containing the custom methods to be invoked) in CommonService, implement that interface in the module itself, and expose the implementation class through the ARouter API.
Business modules that use services:
Use ARouter’s API to retrieve the Service interface (held polymorphically; what is actually held is the implementation class), then invoke the custom methods declared on the interface, achieving inter-module interaction. Separately, AndroidEventBus’s unique Tag makes it easier to locate the code that sends and receives events during development; using the component name as the Tag prefix also makes each component’s events easier to manage and inspect. Overusing EventBus is still not recommended.
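The provider/consumer split above can be sketched in plain Java. `ServiceHub` is a hypothetical stand-in for ARouter’s service lookup (not the real API); the key point is that the consumer component compiles only against the interface declared in the base module:

```java
import java.util.HashMap;
import java.util.Map;

public class ServiceHubDemo {
    /** Declared in the base/Common module, visible to every component. */
    public interface UserService { String userName(); }

    /** Implemented inside the "user" component and exposed through the hub. */
    public static class UserServiceImpl implements UserService {
        public String userName() { return "alice"; }      // demo value
    }

    /** Hypothetical stand-in for ARouter's service registry and lookup. */
    public static class ServiceHub {
        private static final Map<Class<?>, Object> IMPLS = new HashMap<>();
        public static <T> void expose(Class<T> api, T impl) { IMPLS.put(api, impl); }
        @SuppressWarnings("unchecked")
        public static <T> T find(Class<T> api) { return (T) IMPLS.get(api); }
    }
}
```

In real ARouter the implementation class is annotated with `@Route` and discovered via annotation processing, but the dependency direction is the same: callers hold the interface, never the implementation.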
How do I manage excessive routing tables?
RouterHub lives in the base library and can be viewed as a communication protocol that all components follow. It contains not only routing-address constants but also the various keys used when passing data across components, with appropriate comments. Developers of different components rely on this protocol and know how to work together without prior coordination, which raises efficiency and lowers the risk of mistakes; an explicit contract beats verbal agreement.
Tip: if writing every routing address into the base library’s RouterHub is too cumbersome, create a private RouterHub inside each component and keep the addresses that never cross components there; only addresses that do cross components go into the shared RouterHub in the base library. This is the recommended approach if you don’t need to centrally manage every routing address.
ARouter routing principle:
ARouter maintains a routing table (Warehouse) that stores the jump relationships of all modules. ARouter ultimately still calls startActivity and relies on the native Framework mechanism, but the jump rules are generated from APT annotations, and jumps can be intercepted and made conditional.
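The principle can be condensed into a few lines of plain Java: a table maps a path string to a target class, and interceptors may veto the jump before it reaches `startActivity`. `MiniRouter` and its method names are illustrative, not ARouter’s actual API (ARouter populates its table from APT-generated classes rather than manual `register` calls):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

/** Minimal sketch of ARouter's routing-table idea; not the real API. */
public class MiniRouter {
    private final Map<String, String> table = new HashMap<>();       // path -> target class name
    private final List<Predicate<String>> interceptors = new ArrayList<>();

    public void register(String path, String target) { table.put(path, target); }
    public void addInterceptor(Predicate<String> i) { interceptors.add(i); }

    /** Returns the class to hand to startActivity, or null if intercepted/unknown. */
    public String navigate(String path) {
        for (Predicate<String> i : interceptors) {
            if (!i.test(path)) return null;                          // jump condition failed
        }
        return table.get(path);
    }
}
```

In ARouter the generated route groups are loaded lazily into Warehouse on first access to a group, which keeps startup cost low even with large tables.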