In a job interview you will be asked these questions (lots of them!), and interviewers hiring senior Android developers will go deeper, asking one or two follow-ups on each. Below I first list a few key points — the basics an interviewer is almost certain to ask — so please make sure you understand them!
- Basics – Four components (lifecycle, usage scenarios, how to start them)
- Java basics – Data structures, threads, the MVC pattern
- Communication – Network connections (HttpClient, HttpURLConnection), Socket
- Data persistence – SQLite, SharedPreferences, ContentProvider
- Performance optimization – Layout optimization, memory optimization, battery optimization
- Security – Data encryption, code obfuscation, WebView/JS calls, HTTPS
- UI – Animation
- Others – JNI, AIDL, Handler, Intent, etc.
- Open source frameworks – Volley, Glide, RxJava, etc. (if your resume says you know it, you should have used it)
- Extensions – Android 6.0/7.0/8.0/9.0 features, the Kotlin language, Google I/O conference topics
If you’re rushing to send out resumes and get to interviews, take a day or two to go over these again. To secure an offer, it is important to understand the implementation principles and the usage scenarios. Don’t just memorize — understand! The interviewer hears the same canned answers all day and would much rather you offer some insight.
Java reference type differences, specific usage scenarios
In Java there are four reference types: strong, soft, weak, and phantom.
Strong references: a strong reference is created with `new`; the garbage collector will not reclaim the object it points to even when memory runs out.
Soft references: implemented via SoftReference, with a shorter lifetime than strong references. The garbage collector reclaims softly referenced objects before memory runs out and an OOM is thrown. A common use for soft references is a memory-sensitive cache whose entries are reclaimed when memory runs low.
Weak references: implemented via WeakReference, with a shorter lifetime than soft references; a weakly referenced object is reclaimed as soon as the GC scans it. Weak references are also commonly used for memory-sensitive caches.
Phantom references: implemented via PhantomReference, with the shortest lifetime; they can be reclaimed at any time. If an object is referenced only by a phantom reference, we cannot access any of its fields or methods through the reference; it only ensures the object can do something after finalization. A common use for phantom references is tracking when objects are garbage collected: a notification is delivered (through the associated ReferenceQueue) when an object tied to a phantom reference is collected by the garbage collector.
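A minimal sketch of the four reference types in plain Java (the class name ReferenceDemo is just for illustration). While the referent is still strongly reachable, soft and weak references return it from get(), whereas a phantom reference never exposes its referent:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    // Probes each reference kind while the referent is strongly reachable.
    public static String probe() {
        Object referent = new Object();                          // strong reference
        SoftReference<Object> soft = new SoftReference<>(referent);
        WeakReference<Object> weak = new WeakReference<>(referent);
        // Phantom references require a ReferenceQueue; get() always returns null.
        PhantomReference<Object> phantom =
                new PhantomReference<>(referent, new ReferenceQueue<>());
        return (soft.get() == referent) + ","
                + (weak.get() == referent) + ","
                + (phantom.get() == null);
    }
}
```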
The difference between Exception and Error
Both exceptions and errors are derived from Throwable. In Java, only Throwable objects can be thrown or caught. Throwable is the basic Exception handling type.
Exception and Error represent Java’s classification of different Exception situations. An Exception is an unexpected condition that can and should be caught during normal operation of a program and handled accordingly.
Error refers to the situation that is unlikely to occur under normal circumstances, and most of the errors will make the program in an abnormal and irreparable state. The common OutOfMemoryError is a subclass of Error, since it is not normal and therefore not convenient or necessary to catch.
Exceptions are further classified as checked and unchecked. Checked exceptions must be explicitly caught or declared in code as part of the compiler’s checks. Unchecked exceptions are runtime exceptions, such as NullPointerException and ArrayIndexOutOfBoundsException; these are generally avoidable logic errors, and the compiler does not force you to handle them.
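A small illustrative example of the checked/unchecked distinction (class and method names are hypothetical): the compiler forces us to handle the IOException, while the NullPointerException compiles with no handling at all:

```java
public class ExceptionDemo {
    // Checked exception: callers must catch it or declare it to compile.
    static String readConfig() throws java.io.IOException {
        throw new java.io.IOException("config missing");
    }

    public static String classify() {
        String result = "";
        try {
            readConfig();                 // won't compile without this try/catch
        } catch (java.io.IOException e) {
            result += "checked:" + e.getMessage();
        }
        try {
            Object o = null;
            o.toString();                 // unchecked: compiles fine, fails at runtime
        } catch (NullPointerException e) {
            result += ";unchecked";
        }
        return result;
    }
}
```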
volatile
In general, the memory model cannot be discussed without mentioning volatile. As we know, while a program runs every instruction is executed by the CPU, and executing instructions inevitably involves reading and writing data. The problem is that the CPU executes much faster than main memory can be read or written, so reading and writing main memory directly would drag down CPU efficiency. To solve this we have CPU caches: each CPU has a cache that reads data from main memory ahead of time, and flushes results back to main memory at an appropriate time after the CPU finishes its computation.
This works fine on a single thread but causes cache consistency problems across multiple threads. A simple example: run `i = i + 1` on two threads, with `i` initially 0. We expect the result to be 2 after both threads run. But this can happen: both threads read `i` from main memory into their own caches, so `i` is 0 in both. Thread 1 executes, gets `i = 1`, and flushes it to main memory; then thread 2 executes. Since `i` in thread 2’s cache is still 0, the value thread 2 flushes to main memory is still 1.
This is the cache consistency problem for shared variables. To solve it, a cache coherence protocol is used: when a CPU writes to a shared variable, it tells the other CPUs to mark their cached copy of that variable invalid. When another CPU then reads the shared variable from its cache and finds it invalid, it reads the latest value from main memory.
In Java multithreaded development there are three important concepts: atomicity, visibility, and ordering. Atomicity: one or more operations either all execute without interruption or none do. Visibility: changes one thread makes to a shared variable (a member field or static field of a class) are immediately visible to other threads. Ordering: the program executes in the order the code is written. Declaring a variable volatile guarantees visibility and ordering. Visibility is as described above and is essential in multithreaded development. As for ordering: for execution efficiency, instruction reordering sometimes occurs. On a single thread the reordered result is consistent with the code’s logical output, but problems can occur across threads, and volatile partly prevents such reordering.
Volatile works by adding a lock prefix to the generated assembly code. The prefix acts as a memory barrier that serves three purposes:
- Ensure that instructions before the barrier are not reordered after it, and instructions after it are not reordered before it.
- Flush the modified shared variable from the cache to main memory immediately.
- Invalidate the corresponding cache lines in other CPUs when a write is performed.
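The classic minimal demonstration of the visibility guarantee is a volatile stop flag. In this sketch (class name is ours), the worker thread spins until the main thread’s write to the volatile field becomes visible to it:

```java
public class VolatileStop {
    // Without volatile, the worker could cache `running` and never
    // observe the write from the main thread.
    private static volatile boolean running = true;

    public static boolean demo() {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait until the flag change becomes visible
            }
        });
        worker.start();
        try {
            Thread.sleep(50);          // let the worker start spinning
            running = false;           // volatile write: made visible immediately
            worker.join(2000);         // with volatile, the worker exits promptly
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        return !worker.isAlive();      // true: the worker saw the write and stopped
    }
}
```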
Internet related interview questions
The HTTP status code
What’s the difference between HTTP and HTTPS? How does HTTPS work?
HTTP is the hypertext transfer protocol, while HTTPS can be roughly understood as secure HTTP. HTTPS adds SSL/TLS on top of HTTP to encrypt data and ensure its security. HTTPS serves two functions: establishing a secure channel for information transmission, and verifying the authenticity of the website.
The differences between HTTP and HTTPS are as follows:
- HTTPS requires applying to a CA for a certificate, which is rarely free, so it costs money
- HTTP transmits in plaintext and has low security; HTTPS encrypts data with SSL/TLS on top of HTTP, giving higher security
- HTTP uses port 80 by default; HTTPS uses port 443
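As a small aside, the JDK itself knows these defaults: java.net.URL reports a scheme’s well-known port when the URL does not specify one explicitly (the helper class and method names here are ours):

```java
import java.net.MalformedURLException;
import java.net.URL;

public class DefaultPorts {
    // Returns the scheme's well-known port for a URL with no explicit port,
    // or -1 if the URL cannot be parsed.
    public static int defaultPortOf(String spec) {
        try {
            return new URL(spec).getDefaultPort();
        } catch (MalformedURLException e) {
            return -1;
        }
    }
}
```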
HTTPS workflow
When it comes to HTTPS, encryption algorithms fall into two categories: symmetric encryption and asymmetric encryption.
Symmetric encryption: encryption and decryption use the same key. The advantage is speed; the disadvantage is lower security. Common symmetric algorithms include DES and AES.
Asymmetric encryption: asymmetric encryption uses a key pair, split into a public key and a private key. Generally the private key is kept secret and the public key is given to the other party. Its advantage is higher security than symmetric encryption; its disadvantage is lower data transmission efficiency. Information encrypted with the public key can be decrypted only with the corresponding private key. A common asymmetric algorithm is RSA.
In real-world scenarios, symmetric and asymmetric encryption are generally used together: asymmetric encryption transfers the symmetric key, and the symmetric key then encrypts and decrypts the data. The combination both ensures security and improves transmission efficiency.
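The hybrid scheme described above can be sketched with the JDK’s built-in crypto classes. This toy round trip (default RSA and AES cipher modes, no IV or padding hardening — a sketch, not production code) shows RSA carrying the AES key and AES carrying the data:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class HybridCrypto {
    public static String roundTrip(String plaintext) {
        try {
            // 1. The server owns an RSA key pair; the public key is shared openly.
            KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
            rsaGen.initialize(2048);
            KeyPair rsa = rsaGen.generateKeyPair();

            // 2. The client generates a one-off symmetric AES key.
            KeyGenerator aesGen = KeyGenerator.getInstance("AES");
            aesGen.init(128);
            SecretKey aesKey = aesGen.generateKey();

            // 3. The client encrypts the AES key with the RSA public key.
            Cipher rsaCipher = Cipher.getInstance("RSA");
            rsaCipher.init(Cipher.ENCRYPT_MODE, rsa.getPublic());
            byte[] wrappedKey = rsaCipher.doFinal(aesKey.getEncoded());

            // 4. The server recovers the AES key with its private key.
            rsaCipher.init(Cipher.DECRYPT_MODE, rsa.getPrivate());
            SecretKey recovered = new SecretKeySpec(rsaCipher.doFinal(wrappedKey), "AES");

            // 5. Bulk data now flows under fast symmetric encryption.
            Cipher aes = Cipher.getInstance("AES");
            aes.init(Cipher.ENCRYPT_MODE, aesKey);
            byte[] ciphertext = aes.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            aes.init(Cipher.DECRYPT_MODE, recovered);
            return new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```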
The HTTPS process is as follows:
- The client (usually a browser) first sends the server a request for encrypted communication, containing:
  - the protocol versions it supports, such as TLS 1.0
  - a client-generated random number, random1, later used to generate the "session key"
  - the encryption methods it supports, such as RSA public-key encryption
  - the compression methods it supports
- The server receives the request and responds with:
  - the confirmed protocol version, such as TLS 1.0 (if the browser's version does not match what the server supports, the server shuts down the encrypted connection)
  - a server-generated random number, random2, later used to generate the "session key"
  - the confirmed encryption method, such as RSA public-key encryption
  - the server's certificate
- The client verifies the certificate after receiving it:
  - first it checks the certificate's validity
  - once verification passes, the client generates a random number, the pre-master secret, encrypts it with the public key in the certificate, and sends it to the server
- The server decrypts the received content with its private key to obtain the pre-master secret. Then, from random1, random2, and the pre-master secret, it derives a symmetric key with a fixed algorithm. The client uses random1, random2, the pre-master secret, and the same algorithm to derive the identical symmetric key.
- The symmetric key generated in the previous step is then used to encrypt and decrypt all subsequent traffic.
TCP three-way handshake process
Android interview questions
How many ways are there to communicate between processes
AIDL, broadcast, file, socket, pipe
The difference between statically and dynamically registered broadcasts
- A dynamically registered broadcast receiver is not resident; it follows the Activity's lifecycle. Note that the receiver must be unregistered before the Activity ends. A statically registered receiver is resident: even when the application is closed, the system will start it to handle a matching broadcast.
- For an ordered broadcast: the highest-priority receiver (static or dynamic) receives it first. Among receivers of the same priority, dynamic takes precedence over static.
- Among receivers of the same priority and same type — static: the one scanned first beats the one scanned later; dynamic: the one registered first beats the one registered later.
- For a normal (default) broadcast: priority is ignored, and dynamic receivers take precedence over static ones. Among receivers of the same priority and same type, the same scanned-first/registered-first rules apply.
Use of Android performance optimization tools (pair this question with the performance optimization questions below)
Common Android performance tools include the Android Profiler that ships with Android Studio, plus LeakCanary and BlockCanary.
The Android Profiler can examine performance in three areas: CPU, memory, and network.
LeakCanary is a third-party memory leak detection library; once integrated into a project, it automatically detects memory leaks while the application runs and reports them to us.
BlockCanary is likewise a third-party library for detecting UI jank; once integrated, it automatically detects UI stalls while the application runs and reports them to us.
Class loaders in Android
PathClassLoader can only load APKs that are already installed on the system. DexClassLoader can load jar, apk, and dex files, including APKs on the SD card that have not been installed.
What are the categories of animation in Android, and what are their characteristics and differences
Android Animation can be roughly divided into three categories: frame Animation, View Animation and Object Animation.
- Frame animation: configure a set of images in XML and play them in sequence. Rarely used.
- View (tween) animation: roughly four operations — rotation, alpha, scale, and translation. Rarely used.
- Object (property) animation: the most commonly used kind, more powerful than tween animation. There are roughly two ways to use property animations: ViewPropertyAnimator and ObjectAnimator. The former suits common animations such as rotation, translation, scaling, and alpha, and is easy to use: call view.animate() and then the corresponding animation methods. The latter suits animating our custom controls. First add the appropriate getXXX() and setXXX() getter and setter methods for the property to the custom View, noting that invalidate() should be called in the setter after changing the property to refresh the View's drawing. Then ObjectAnimator.ofXXX() returns an ObjectAnimator, and calling start() starts the animation.
The difference between tween animation and property animation:
- A tween animation is the parent container repeatedly redrawing the view, which makes it look like it has moved; in reality the view's properties are unchanged and it is still in its original position.
- A property animation really changes the view by continually changing its internal property values.
Handler mechanism
When it comes to Handler, there are several classes that are closely related to it: Message, MessageQueue, and Looper.
- Message. Two member variables of Message are of interest: target and callback. target is the Handler that sent the message, and callback is the Runnable passed in when handler.post(runnable) is called. The essence of post is to create a Message and assign the Runnable we passed in to that Message's callback member variable.
- MessageQueue. As the name implies, this is the message queue. Of interest is its next() method, which returns the next message to be processed.
- Looper. The Looper message poller is the core connecting the Handler to the message queue. To create a Handler in a thread, a Looper is first created with Looper.prepare(), then Looper.loop() is called to start polling. Let's focus on these two methods.
prepare(). This method does two things. First it fetches the Looper for the current thread via threadLocal.get(); if it is not null, it throws a RuntimeException — meaning no thread can create two Loopers. If it is null, it proceeds to the second step: create a Looper and bind it to the current thread via threadLocal.set(looper). Note that the message queue is actually created in the Looper constructor.
loop(). This method starts the polling of the whole event mechanism. Its essence is an endless loop that keeps fetching messages through MessageQueue's next() method. After fetching a message, it calls msg.target.dispatchMessage() to process it. msg.target is simply the Handler that sent the message, so this line essentially calls that Handler's dispatchMessage().
- Handler. With all this groundwork, here comes the most important part. Handler analysis focuses on two parts: sending messages and processing messages.
Sending messages. Besides sendMessage there are other ways to send, such as sendMessageDelayed, post, and postDelayed, but they all ultimately call sendMessageAtTime, which in turn calls enqueueMessage. The enqueueMessage method does two things: it binds the message to the current Handler via msg.target = this, then enqueues the message via queue.enqueueMessage.
Processing messages. The heart of message processing is the dispatchMessage() method. If msg.callback is not null, the Runnable is executed; if it is null, our handleMessage method is executed.
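The whole mechanism can be modeled in a few lines of plain Java. This toy (not the real Android classes) uses a blocking queue as the MessageQueue, a loop() method as the Looper, and Runnables standing in for messages with a callback:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy model of the Handler mechanism: a queue (MessageQueue), a loop that
// drains it (Looper), and per-message callbacks (Handler.dispatchMessage).
public class ToyLooper {
    private static final Runnable QUIT = () -> { };
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public void post(Runnable task) {    // Handler.post analogue: enqueue a message
        queue.offer(task);
    }

    public void quit() {                 // sentinel message that ends the loop
        queue.offer(QUIT);
    }

    public void loop() {                 // Looper.loop analogue
        try {
            while (true) {
                Runnable msg = queue.take();   // MessageQueue.next(): blocks for a message
                if (msg == QUIT) {
                    return;
                }
                msg.run();               // dispatchMessage: run the callback in loop order
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```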
Android Performance Optimization
In my opinion, performance optimization in Android can be divided into the following aspects: memory optimization, layout optimization, network optimization, installation package optimization.
Memory optimization: covered in the next question.
Layout optimization: The essence of layout optimization is to reduce the hierarchy of the View. Common layout optimization schemes are as follows
- Prefer a RelativeLayout over nested LinearLayouts, since it reduces the depth of the View hierarchy
- Extract commonly used layout components with the `<include>` tag
- Load rarely shown layouts lazily via the `<ViewStub>` tag
- Use the `<merge>` tag to reduce layout nesting
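For illustration, a hypothetical layout combining the tags above (the referenced layout files title_bar and error_view are assumed to exist in the project):

```xml
<!-- Hypothetical layout: a reused title bar plus a lazily inflated error view -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- <include> pulls in a shared layout (assumed: res/layout/title_bar.xml) -->
    <include layout="@layout/title_bar" />

    <!-- <ViewStub> costs almost nothing until inflate() is called -->
    <ViewStub
        android:id="@+id/stub_error"
        android:layout="@layout/error_view"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />
</LinearLayout>
```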
Network optimization: Common network optimization schemes are as follows
- Minimize network requests, and merge requests where possible
- Avoid DNS resolution where possible: resolving a domain name may take hundreds of milliseconds, and there is a risk of DNS hijacking. Consider direct IP access with dynamic IP updates based on business needs, falling back to domain-name access when IP access fails
- Load large amounts of data with pagination
- Use GZIP compression for network data transfer
- Cache network data to avoid frequent requests
- Compress images as appropriate before uploading
Installation package optimization: The core of installation package optimization is to reduce the volume of APK. Common solutions are as follows
- Obfuscation can reduce APK size to some extent, but the actual effect is minimal
- Reduce unnecessary resource files, such as images, and compress images as much as possible without hurting the app's appearance; this has some effect
- When using SO libraries, keep only the armeabi-v7a version and delete the others. The reason is that as of 2018 the v7a libraries satisfy most of the market; perhaps phones from eight or nine years ago are not covered, but we don't need to adapt to such old devices. In practice this shrinks the APK quite significantly: if you use many SO libraries and each ABI's set totals, say, 10 MB, keeping only v7a and deleting the armeabi and arm64-v8a sets reduces the total size by 20 MB.
Android Memory Optimization
In my opinion, memory optimization for Android falls into two parts: avoiding memory leaks and expanding memory.
The essence of a memory leak is that a longer-lived object holds a reference to a shorter-lived object, preventing the shorter-lived object from being reclaimed.
Common memory leaks:
- Memory leaks caused by the singleton pattern. The most common example is creating a singleton that takes an Activity Context. Because of the singleton's static nature, its lifecycle runs from the loading of the singleton class to the end of the application; so even after the Activity passed in has finished, the singleton still holds a reference to it, causing a memory leak. The solution is simple: don't use an Activity Context — use the Application Context to avoid the leak.
- Memory leaks caused by static variables. Static variables live in the method area, and their lifetime runs from class loading to the end of the program — very long. The most common example is creating a static variable in an Activity that is passed the Activity's `this`. In that case, even after the Activity calls finish(), it leaks: the static variable lives nearly as long as the whole application and still holds the Activity reference.
- Memory leaks caused by non-static inner classes. A non-static inner class holds a reference to its outer class. The most common example is using Handlers and Threads in an Activity: a Handler or Thread created as a non-static inner class holds a reference to the current Activity while executing a delayed operation, so if the Activity finishes before the delayed work completes, it leaks. There are two solutions: the first is to use a static inner class that holds the Activity through a weak reference; the second is to call handler.removeCallbacksAndMessages(null) in the Activity's onDestroy to cancel the delayed events.
- Memory leaks caused by resources not being closed in time. Common examples: data streams not closed promptly, Bitmaps not recycled promptly, and so on.
- Third-party libraries not unbound in time. Some libraries provide register and unregister calls. The most common is EventBus: as we all know, register in onCreate and unregister in onDestroy. If you don't unregister, EventBus — a singleton — holds references to the Activity forever, leaking it. Also common with RxJava: after delayed work with the timer operator, call disposable.dispose() in onDestroy to cancel it.
- Memory leaks caused by property animations. A common example is exiting the Activity while a property animation is still running; the View object still holds a reference to the Activity, causing a leak. The solution is to call the animation's cancel() method in onDestroy.
- Memory leaks caused by WebView. WebView is special: it can leak even after its destroy method is called. The best way to avoid WebView leaks is to put the WebView's Activity in a separate process and kill that process when the Activity ends. I recall that Alibaba's DingTalk starts its WebView in a separate process, presumably for this reason.
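For the "resources not closed" case, try-with-resources is the idiomatic fix: the stream is closed automatically even if an exception is thrown. A small sketch (class and method names are ours):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class ResourceDemo {
    // try-with-resources closes the reader automatically when the block
    // exits, normally or via exception, avoiding the unclosed-stream leak.
    public static String firstLine(String content) {
        try (BufferedReader reader = new BufferedReader(new StringReader(content))) {
            return reader.readLine();
        } catch (IOException e) {
            return null;
        }
    }
}
```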
Expanding memory: why would we need to? In real development we often have to use many third-party commercial SDKs, and these vary in quality: the big vendors' SDKs may leak less, but the smaller ones are not so reliable. Since we can't change their code, the best response to this situation is to expand our memory.
There are two common ways to expand memory: one is to add the android:largeHeap="true" attribute to the <application> element of the manifest; the other is to run components of the same application in multiple processes to increase its total memory. The second method is actually quite common; for example, a push SDK I have used ran its Service in a separate process.
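A hypothetical manifest fragment showing both options (the package and service names are made up):

```xml
<!-- Sketch: largeHeap plus a service in its own process (names are hypothetical) -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">
    <application
        android:largeHeap="true">
        <!-- android:process=":push" gives this service a second process and heap -->
        <service
            android:name=".PushService"
            android:process=":push" />
    </application>
</manifest>
```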
In short, memory optimization in Android is about increasing the source and reducing the drain: increasing the source means expanding memory, and reducing the drain means avoiding memory leaks.
Binder mechanism
In Linux, processes are independent of each other in order to avoid interference with other processes. There is also user space and kernel space within a process. The isolation here is divided into two parts, inter-process isolation and intra-process isolation.
Where there is isolation, there is also a need for interaction. Communication between processes is IPC; communication between user space and kernel space happens through system calls.
In order to ensure the independence and security of Linux, processes cannot directly access each other. Android is based on Linux, so it also needs to solve the problem of interprocess communication.
Linux offers many ways to communicate between processes, such as pipes and sockets. There are two main reasons Android uses Binder rather than the traditional Linux mechanisms: performance and security.
Performance. Performance requirements on mobile devices are stricter. Traditional Linux IPC such as pipes and sockets copies data twice, while Binder copies it only once, so Binder outperforms traditional process communication.
Security. Traditional Linux IPC involves no authentication of the communicating parties, which can cause security issues. The Binder mechanism adds identity checks, improving security.
Binder is based on a client-server (C/S) architecture and has four main components.
- Client: the client process.
- Server: the server process.
- ServiceManager: provides the ability to register and query services, returning proxy service objects.
- Binder driver: mainly responsible for establishing Binder connections between processes and for the low-level data interaction between them.
The main flow of Binder mechanism is as follows:
- The server registers its service with the ServiceManager through the Binder driver.
- The client queries the ServiceManager for the registered service through the Binder driver.
- The ServiceManager returns a proxy object to the client through the Binder driver.
- Having received the proxy object, the client can use it to communicate with the other process.
The principle of LruCache
The core of LruCache is its use of LinkedHashMap; it holds a LinkedHashMap member variable. Four methods are worth focusing on: the constructor, get, put, and trimToSize.
Constructor: LruCache's constructor does two things — it sets maxSize and creates a LinkedHashMap. Note that LruCache sets the LinkedHashMap's accessOrder to true. accessOrder controls the map's iteration order: true iterates in access order, false in insertion order. Insertion order is the usual case, so accessOrder defaults to false; but LruCache needs access order, so it explicitly sets accessOrder to true.
get method: essentially calls LinkedHashMap's get; because accessOrder is true, each call to get moves the accessed element to the tail of the LinkedHashMap.
put method: essentially calls LinkedHashMap's put. By the nature of LinkedHashMap, each put also places the new element at the tail. After insertion, trimToSize is called to ensure the cached memory does not exceed maxSize.
trimToSize method: inside trimToSize a while(true) loop continually removes elements from the head of the LinkedHashMap until the cached size is below maxSize, then breaks out of the loop.
In fact, we can summarize here, why is this algorithm called least recently used algorithm? The principle is simple: each of our put or get is treated as a visit, and due to the nature of LinkedHashMap, the elements accessed are placed at the end of each visit. When our memory reaches the threshold, the trimToSize method is triggered to remove the element at the head of the LinkedHashMap until the current memory is less than maxSize. The reason for removing the front element is obvious: the most recently accessed elements are placed at the tail, and the elements at the front must be the least recently used elements, so they should be removed first when memory is low.
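The behavior just described can be reproduced in a few lines by leaning directly on LinkedHashMap. This sketch counts entries rather than bytes (the real LruCache measures entry sizes via sizeOf and trims in trimToSize), so it is only a model of the eviction order:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: accessOrder=true moves every accessed entry to
// the tail, and removeEldestEntry evicts the head once capacity is exceeded.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public SimpleLruCache(int maxSize) {
        super(16, 0.75f, true);   // accessOrder=true: iteration order = access order
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;  // trimToSize analogue: drop the head when over capacity
    }
}
```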
DiskLruCache principle
Design an asynchronous image loading framework
To design an image loading framework, we must use the idea of the three-level cache for image loading. The three levels are the memory cache, the local (disk) cache, and the network.
Memory cache: caches Bitmaps in memory; fast to read but small in capacity. Local cache: caches images in files; slower to read but larger in capacity. Network: fetches images over the network; speed depends on the connection.
If we were designing an image loading framework, the process would look something like this:
- After getting the image URL, first look for the Bitmap in memory; if found, load it directly.
- If not found in memory, search the local cache; if found there, load it directly.
- If it is in neither cache, download the image from the network; once downloaded, load it and put it into both the memory cache and the local cache.
Here are some basic concepts. If it is a specific code implementation, there are probably several aspects of the file:
- First we need to settle on the memory cache, which is usually an LruCache.
- Then the local cache, usually a DiskLruCache. Note that the image's cache file name is usually the MD5 hash of its URL, to avoid exposing the URL directly in the file name.
- With the memory and local caches settled, we create a class, say MemoryAndDiskCache (any name will do), containing the LruCache and DiskLruCache just mentioned. In MemoryAndDiskCache we define two methods, getBitmap and putBitmap, for retrieving and caching images. The internal logic is simple: getBitmap checks memory first, then disk; putBitmap writes to memory first, then disk.
- With the cache policy class in place, we create an ImageLoader class. It must contain two methods: one to display an image, displayImage(url, imageView), and one to fetch an image from the network, downloadImage(url, imageView). In the display method, imageView.setTag(url) binds the URL to the ImageView; this avoids the image-mismatch bug caused by view reuse when loading network images in a list. The method then tries MemoryAndDiskCache: if the image is cached, load it directly; if not, call the network fetch method. There are many ways to fetch images from the web; I usually use OkHttp + Retrofit. After fetching an image from the network, first check whether imageView.getTag() equals the image's URL: if so, load the image; if not, don't. This avoids the mismatch problem with asynchronous loading in lists. The fetched image is then cached via MemoryAndDiskCache.
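The MD5-based cache file name mentioned above can be sketched like this (the class name is ours):

```java
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CacheKey {
    // Hash the image URL so the cache file name never exposes the URL itself.
    public static String md5Hex(String url) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(url.getBytes(java.nio.charset.StandardCharsets.UTF_8));
            // Format the 16-byte digest as a 32-character lowercase hex string.
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }
}
```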
Event distribution mechanism in Android
When our finger touches the screen, the event actually goes through a process like Activity -> ViewGroup -> View to the View that finally responds to our touch event.
When it comes to event distribution, the following methods are essential: dispatchTouchEvent(), onInterceptTouchEvent(), onTouchEvent. Next, follow the Activity -> ViewGroup -> View process to outline the event distribution mechanism.
When our finger touches the screen, an Action_Down event is triggered, and the Activity on the current page responds first, going to the Activity’s dispatchTouchEvent() method. The logic behind this method is simply this:
- Call getWindow().superDispatchTouchEvent().
- If the previous step returns true, return true; otherwise return the Activity's own onTouchEvent(). If getWindow().superDispatchTouchEvent() returns true, the current event has been handled and the Activity's own onTouchEvent is not needed. Otherwise no one handled the event, and the Activity must handle it itself by calling its own onTouchEvent.
As we know, getWindow() returns a Window object, and in Android PhoneWindow is the only implementation class of Window, so this essentially calls PhoneWindow's superDispatchTouchEvent().
PhoneWindow's method in turn calls mDecor.superDispatchTouchEvent(event). mDecor is the DecorView, a subclass of FrameLayout, and DecorView's superDispatchTouchEvent() calls super.dispatchTouchEvent(). It should be obvious by now: DecorView is a subclass of FrameLayout, which is a subclass of ViewGroup, so this essentially calls the ViewGroup's dispatchTouchEvent().
Now that our event has been passed from the Activity to the ViewGroup, let’s examine the event handling methods in the ViewGroup.
The logic in dispatchTouchEvent() in ViewGroup looks like this:
-
Use onInterceptTouchEvent() to check whether the current ViewGroup intercepts the event. By default, a ViewGroup does not intercept.
-
If it intercepts, return its own onTouchEvent();
-
If not, judge by the return value of child.dispatchTouchEvent(): if true, return true; otherwise return its own onTouchEvent(). This is how unhandled events are passed upward.
In general, a ViewGroup's onInterceptTouchEvent() returns false, i.e. it does not intercept. The important thing to note here is the event sequence: a Down event, Move events... then an Up event. From Down to Up is one complete event sequence, corresponding to the series of events from finger press to finger lift. If the ViewGroup intercepts the Down event, subsequent events in the sequence are handed to the ViewGroup's onTouchEvent(). If the ViewGroup does not intercept the Down event but intercepts a later event in the sequence, it sends an Action_Cancel event to the child View that handled the Down event, notifying it that the remaining sequence has been taken over by the ViewGroup so that the child View can restore its previous state.
Here is a common example: a RecyclerView contains many buttons. We press a button, slide some distance, and release; the RecyclerView scrolls along with the finger and the button's click event is not triggered. In this example, when we press the button, the button receives the Action_Down event, and normally the rest of the sequence would be handled by the button. But once we swipe a little, the RecyclerView recognizes the gesture as a scroll, intercepts the event sequence, and handles it in its own onTouchEvent() method, which shows on screen as the list scrolling. At the moment of interception the button is still in the pressed state, so the RecyclerView needs to send an Action_Cancel to tell the button to restore its previous state.
Event distribution eventually reaches the View's dispatchTouchEvent(). A View's dispatchTouchEvent() has no onInterceptTouchEvent(), which is easy to understand: a View is not a ViewGroup and contains no child Views, so there is nothing to intercept. Ignoring a few details, a View's dispatchTouchEvent() directly returns its own onTouchEvent(). If onTouchEvent() returns true, the event is consumed; otherwise the unconsumed event is passed upward until some View handles it, or until the Activity's onTouchEvent() ends the process.
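The Activity -> ViewGroup -> View flow described above can be modeled as a toy simulation. This is a deliberately simplified sketch (no event sequences, no hit-testing, no mid-stream interception), with hypothetical FakeView/FakeViewGroup classes standing in for the real framework classes:

```java
import java.util.ArrayList;
import java.util.List;

// A leaf view: dispatch simply returns its own onTouchEvent().
class FakeView {
    boolean handles;                       // whether onTouchEvent consumes the event
    FakeView(boolean handles) { this.handles = handles; }

    boolean onTouchEvent() { return handles; }

    boolean dispatchTouchEvent() { return onTouchEvent(); }
}

// A container: intercept, or offer the event to children before handling it itself.
class FakeViewGroup extends FakeView {
    boolean intercepts;
    final List<FakeView> children = new ArrayList<>();

    FakeViewGroup(boolean handles, boolean intercepts) {
        super(handles);
        this.intercepts = intercepts;
    }

    boolean onInterceptTouchEvent() { return intercepts; }

    @Override
    boolean dispatchTouchEvent() {
        if (onInterceptTouchEvent()) {
            return onTouchEvent();         // intercepted: handle it ourselves
        }
        for (FakeView child : children) {  // offer the event to children first
            if (child.dispatchTouchEvent()) {
                return true;               // a child consumed it
            }
        }
        return onTouchEvent();             // no child consumed it: try ourselves
    }
}
```

The real ViewGroup additionally walks children in reverse drawing order and only dispatches to children whose bounds contain the touch point, but the consume-or-bubble-up shape is the same.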
People here often ask about the difference between onTouch and onTouchEvent. First, both are invoked from the View's dispatchTouchEvent() logic:
-
If the touch listener is not null, the View is enabled, and onTouch returns true, then onTouchEvent() is not called.
-
If any of the above conditions is not met, onTouchEvent() is called. So onTouch runs before onTouchEvent.
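The two bullets above can be sketched as a minimal model of that ordering logic. TouchableViewStub and its listener interface are simplified stand-ins for View and View.OnTouchListener (the real code also passes MotionEvent and checks a few more flags):

```java
// Simplified model of the onTouch-before-onTouchEvent ordering in
// View.dispatchTouchEvent().
class TouchableViewStub {
    interface OnTouchListenerStub { boolean onTouch(); }

    OnTouchListenerStub listener;
    boolean enabled = true;
    boolean onTouchEventCalled = false;

    boolean onTouchEvent() {               // default View handling (click, etc.)
        onTouchEventCalled = true;
        return true;
    }

    boolean dispatchTouchEvent() {
        // onTouch runs first; if it consumes the event, onTouchEvent is skipped
        if (listener != null && enabled && listener.onTouch()) {
            return true;
        }
        return onTouchEvent();
    }
}
```

When the listener returns true the default onTouchEvent() path (and therefore click handling) never runs, which is exactly why setting an OnTouchListener that always returns true swallows clicks.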
View drawing process
The View drawing process starts from ViewRootImpl's performTraversals() method, which calls mView.measure(), mView.layout() and mView.draw() in sequence.
The drawing process of a View is divided into three phases: measurement, layout and drawing, corresponding to the measure(), layout() and draw() methods.
Measurement phase. The measure() method is called by the parent View. After some optimization and preparation, measure() calls onMeasure() to do the actual self-measurement. onMeasure() does different things in a View and in a ViewGroup:
-
View. The onMeasure() method in a View calculates its own size and stores it via setMeasuredDimension().
-
ViewGroup. The onMeasure() method in a ViewGroup calls the measure() method of every child View so that each child measures and saves its own size, then calculates its own size from the children's sizes and positions and saves it.
Layout phase. The layout() method is called by the parent View. layout() saves the size and position passed in by the parent and calls onLayout() for the actual internal layout. onLayout() also does different things in a View and in a ViewGroup:
-
View. Since a View has no child Views, its onLayout() does nothing.
-
ViewGroup. The onLayout() method in a ViewGroup calls the layout() method of every child View, passing each its size and position, letting them complete their own internal layout.
Drawing phase. The draw() method does some scheduling and then calls onDraw() to draw the View itself. The scheduling flow of draw() is roughly:
-
Draw the background. Corresponds to the drawBackground(Canvas) method.
-
Draw the body. Corresponds to the onDraw(Canvas) method.
-
Draw the child View. Corresponds to the dispatchDraw(Canvas) method.
-
Draw scroll-related decorations and the foreground. Corresponds to the onDrawForeground(Canvas) method.
Android source code common design patterns and their own common design patterns in development
How does Android interact with JS
In Android, the interaction between Android and JS has two directions: Android calling methods in JS, and JS calling methods in Android.
Android calling JS. There are two ways for Android to call JS:
-
webView.loadUrl("javascript:jsMethodName()"). The advantage of this approach is that it is very simple; the disadvantage is that there is no return value. If you need the return value of the JS method, JS has to call a method in Android to pass it back.
-
webView.evaluateJavascript("javascript:jsMethodName()", ValueCallback). The advantage over loadUrl() is that the ValueCallback receives the return value of the JS method. The disadvantage is that this method only exists from Android 4.4 onward, so compatibility is worse. However, as of 2018 most apps on the market require a minimum of 4.4, so I don't consider this a real problem.
JS calling Android. There are three ways for JS to call Android:
-
webView.addJavascriptInterface(). This is the official solution for JS to call Android methods. Note that the @JavascriptInterface annotation must be added to the Android methods exposed to JS to avoid security vulnerabilities. The drawback is that before Android 4.2 this approach had security holes, though they have since been fixed. Again, compatibility is not an issue in 2018.
-
Override WebViewClient's shouldOverrideUrlLoading() method to intercept the URL, parse it, and call the Android method if it matches the protocol agreed by both sides. The advantage is that it avoids the pre-4.2 security vulnerability; the disadvantage is also obvious: the return value of the Android method cannot be obtained directly, only by having Android call a JS method back.
-
Override WebChromeClient's onJsPrompt() method. As in the previous approach, after the URL is parsed and matches the agreed protocol, the Android method is called. Finally, if a return value is required, result.confirm("return value of the Android method") passes it back to JS. The advantages of this method are that it has no vulnerabilities, no compatibility restrictions, and the Android method's return value is easy to obtain. Note that besides onJsPrompt there are also onJsAlert and onJsConfirm methods. So why not choose those two? Because onJsAlert has no return value and onJsConfirm can only return true or false, while in front-end development the prompt method is rarely called, so onJsPrompt is used.
Hot fix principle
Activity startup process
SparseArray principle
SparseArray, generally speaking, is a data structure used in Android to replace HashMap. To be precise, it replaces a HashMap whose key type is Integer and whose value type is Object. Note that SparseArray only implements the Cloneable interface, so it cannot be declared as a Map. Internally, SparseArray consists of two arrays: mKeys of type int[], which holds all the keys, and mValues of type Object[], which holds all the values. SparseArray is most often compared with HashMap. Because it consists of just two arrays, SparseArray uses less memory than a HashMap. As we all know, add, delete, update and lookup operations all need to find the corresponding key-value pair first, and SparseArray addresses entries internally by binary search, which is obviously less efficient than a HashMap's near-constant time complexity. Another thing worth mentioning about binary search is that it requires the array to be sorted, and indeed SparseArray keeps mKeys sorted by key. Taken together, SparseArray uses less space than HashMap but is less time-efficient: a typical trade of time for space, suitable for storing small numbers of entries. From a source-code perspective, I think the methods worth noting are SparseArray's remove(), put() and gc().
-
remove(). SparseArray's remove() method does not simply delete the entry and then compact the array. Instead, the value to be deleted is set to the static DELETED sentinel, which is essentially an Object, and SparseArray's mGarbage flag is set to true. This allows the array to be compacted later by calling the internal gc() method at an appropriate time, avoiding wasted space. It also improves efficiency: if a key equal to the deleted key is added later, the new value simply overwrites the DELETED sentinel.
-
gc(). The gc() method in SparseArray has absolutely nothing to do with the JVM's GC. Internally it is just a for loop that compacts the array by moving non-DELETED key-value pairs forward over the DELETED ones, and then sets mGarbage to false to avoid wasting memory.
-
put(). The logic of put() is: if binary search finds the key in mKeys, the value is overwritten. If the key is not found, the insertion index closest to the key is computed. If the value at that index is DELETED, the new value simply overwrites the DELETED sentinel; this avoids moving array elements and thus improves efficiency. If the value is not DELETED, mGarbage is checked, and if true, gc() is called to compact the array. After that, the proper index is found again, the key-value pairs from that index onward are shifted back, and the new pair is inserted, which may trigger array growth.
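The mechanics above can be condensed into a toy version. This MiniSparseArray is a sketch, not the real android.util.SparseArray, but it shows the sorted key array, binary search, the DELETED sentinel on remove(), and gc() compaction:

```java
import java.util.Arrays;

// Toy SparseArray: sorted int keys, parallel value array, lazy deletion.
class MiniSparseArray {
    private static final Object DELETED = new Object(); // sentinel for removed slots
    private int[] keys = new int[8];
    private Object[] values = new Object[8];
    private int size = 0;
    private boolean garbage = false;                    // mGarbage in the real class

    public void put(int key, Object value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) {                       // key exists (possibly DELETED): overwrite
            values[i] = value;
            return;
        }
        i = ~i;                             // insertion point
        if (i < size && values[i] == DELETED) { // reuse a deleted slot in place
            keys[i] = key;
            values[i] = value;
            return;
        }
        if (garbage) {                      // compact, then recompute the index
            gc();
            i = ~Arrays.binarySearch(keys, 0, size, key);
        }
        if (size == keys.length) {          // grow both arrays
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    public Object get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return (i >= 0 && values[i] != DELETED) ? values[i] : null;
    }

    public void remove(int key) {           // just mark; don't compact yet
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0 && values[i] != DELETED) {
            values[i] = DELETED;
            garbage = true;
        }
    }

    private void gc() {                     // shift live entries forward
        int o = 0;
        for (int i = 0; i < size; i++) {
            if (values[i] != DELETED) {
                keys[o] = keys[i];
                values[o] = values[i];
                o++;
            }
        }
        size = o;
        garbage = false;
    }
}
```

Re-putting a removed key overwrites the DELETED sentinel without any array copying, which is exactly the efficiency point made above.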
How to avoid OOM loading images
We know that the size of a Bitmap in memory is calculated as: width in pixels * height in pixels * memory per pixel. There are therefore two ways to avoid OOM: scale down the width and height, or reduce the memory used per pixel.
-
Scale down the width and height. We know that Bitmaps are created through the factory methods of BitmapFactory: decodeFile(), decodeStream(), decodeByteArray(), decodeResource(). Each of these methods takes a parameter of type Options, an inner class of BitmapFactory that carries Bitmap decoding information. Options has a property inSampleSize; by setting inSampleSize we can shrink the image's dimensions and thus reduce the Bitmap's memory footprint. Note that inSampleSize should be a power of 2, and if it is less than 1 the framework forces it to 1.
-
Reduce the memory used per pixel. Options has a property inPreferredConfig, ARGB_8888 by default, which determines the size of each pixel. By changing it to RGB_565 or ARGB_4444, memory use can be halved.
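The two points above can be made concrete with the standard inSampleSize calculation pattern (the widely used pattern from the Android developer docs; in real code you would first read the raw dimensions with inJustDecodeBounds = true), plus the memory formula from the start of this section:

```java
// Sketch of the usual inSampleSize calculation: keep doubling the sample size
// while the half-dimensions still cover the requested size, so the result is
// always a power of 2 and the decoded image is never smaller than requested.
class BitmapSizer {
    static int calculateInSampleSize(int rawWidth, int rawHeight,
                                     int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (rawHeight > reqHeight || rawWidth > reqWidth) {
            int halfH = rawHeight / 2;
            int halfW = rawWidth / 2;
            while ((halfH / inSampleSize) >= reqHeight
                    && (halfW / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    // ARGB_8888 memory estimate: width * height * 4 bytes per pixel.
    // RGB_565 and ARGB_4444 use 2 bytes per pixel, halving this figure.
    static long argb8888Bytes(int width, int height) {
        return (long) width * height * 4;
    }
}
```

For example, decoding a 4096x3072 photo into a 1024x768 slot yields inSampleSize = 4, cutting the decoded memory by a factor of 16.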
Loading large images
Loading a large high-definition image, such as Riverside Scene at Qingming Festival, cannot be shown on screen all at once, and given memory constraints the whole image cannot be loaded into memory at one time. In this case you need region loading, and Android has a class responsible for it: BitmapRegionDecoder. Usage is simple: create an instance via BitmapRegionDecoder.newInstance(), then call decodeRegion(Rect rect, BitmapFactory.Options options). The first argument rect is the region to display; the second is Options, the inner class of BitmapFactory.
Android third-party library source code analysis
Because the source code analyses are too long to include here, I have posted links to my analyses (on Juejin).
OkHttp
OkHttp source code analysis
Retrofit
Retrofit source analysis 1 Retrofit source analysis 2 Retrofit source analysis 3
RxJava
RxJava source code analysis
Glide
Glide source code analysis
EventBus
EventBus source code analysis
The general process is as follows: Register:
-
Gets the subscriber’s Class object
-
Use reflection to find a collection of event handlers in subscribers
-
Iterate through the set of event handling methods, calling subscribe(subscriber, subscriberMethod) for each. The subscribe method then:
-
Gets the event type eventType handled by the subscriberMethod
-
Binds the subscriber and the method subscriberMethod together into a Subscription object
-
Gets the Subscription collection via subscriptionsByEventType.get(eventType)
-
If the Subscription collection is empty, creates a new collection. This step lazily initializes the collection
-
Iterates over the Subscription collection and inserts the new Subscription object at the position determined by comparing event-handling priorities
-
Gets the set of event types via typesBySubscriber.get(subscriber)
-
If the set of event types is empty, creates a new set. This step lazily initializes the set
-
Adds the new event type to the set
-
Checks whether the current event type is sticky
-
If the current event type is not sticky, subscribe(subscriber, subscriberMethod) ends here
-
If it is sticky, checks the eventInheritance property of EventBus. The default is true
-
If eventInheritance is true, iterates over the Map-typed stickyEvents and uses isAssignableFrom to check whether the current event type is a parent of the iterated event's type; if so, sends the event
-
If eventInheritance is false, retrieves the event via stickyEvents.get(eventType) and sends it
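The priority-ordered insertion step above can be sketched in plain Java. SubscriptionList is a simplified stand-in for EventBus's internal Subscription list handling; only the insert-before-first-lower-priority logic is shown:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of EventBus-style priority insertion: new subscriptions go before the
// first existing entry with a strictly lower priority, so higher-priority
// subscribers receive events first and equal priorities keep insertion order.
class SubscriptionList {
    static class Subscription {
        final String subscriber;
        final int priority;
        Subscription(String subscriber, int priority) {
            this.subscriber = subscriber;
            this.priority = priority;
        }
    }

    final List<Subscription> subscriptions = new ArrayList<>();

    void add(Subscription newSub) {
        int size = subscriptions.size();
        for (int i = 0; i <= size; i++) {
            // insert at the end, or before the first lower-priority entry
            if (i == size || newSub.priority > subscriptions.get(i).priority) {
                subscriptions.add(i, newSub);
                return;
            }
        }
    }
}
```

Because the scan stops at the first strictly lower priority, subscribers registered with the same priority are delivered in registration order.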
Post:
-
postSticky
-
Adds the event to the Map-typed stickyEvents collection
-
Calls the post method
-
post
-
Adds the event to the event queue of the current thread
-
A while loop continually pulls events from the queue and sends each by calling postSingleEvent
-
postSingleEvent checks the eventInheritance property. The default is true
-
If eventInheritance is true, finds all parent types of the current event and calls postSingleEventForEventType to send the event for each type
-
If eventInheritance is false, sends only events of the current event type
-
In postSingleEventForEventType, gets the Subscription collection via subscriptionsByEventType.get(eventClass)
-
Iterates over the collection, calling postToSubscription to send the event
-
postToSubscription distinguishes four thread modes:
-
POSTING calls invokeSubscriber(subscription, event) to handle the event, which is essentially method.invoke() reflection
-
MAIN: if already on the main thread, invokeSubscriber handles the event directly; otherwise the event is switched to the main thread via a Handler and invokeSubscriber is called there
-
BACKGROUND: if not on the main thread, invokeSubscriber handles the event directly; otherwise a background thread is started and invokeSubscriber is called in it
-
ASYNC: always starts a thread and calls invokeSubscriber in it to handle the event
Unregister:
-
Deletes all Subscriptions associated with the subscriber in subscriptionsByEventType
-
Deletes all types associated with subscribers in typesBySubscriber
Data structures and algorithms
Handwrite quicksort
Handwrite merge sort
Handwrite a heap and heap sort
Talk about the differences between sorting algorithms (time and space complexity)
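As one example of the handwritten sorts listed above, here is a quicksort of the kind interviewers expect: in-place and recursive, using the last element as pivot (Lomuto partition, one common simple choice; average O(n log n) time, O(log n) stack space, not stable):

```java
// Handwritten quicksort: sort a[lo..hi] in place around a pivot.
class QuickSort {
    static void sort(int[] a) { sort(a, 0, a.length - 1); }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;              // 0 or 1 element: already sorted
        int p = partition(a, lo, hi);
        sort(a, lo, p - 1);                // sort elements left of the pivot
        sort(a, p + 1, hi);                // sort elements right of the pivot
    }

    // Lomuto partition: move everything < pivot left of the pivot's final slot.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) { swap(a, i, j); i++; }
        }
        swap(a, i, hi);                    // put the pivot into its final position
        return i;
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```

Merge sort trades O(n) extra space for stability and a guaranteed O(n log n); heap sort is in-place O(n log n) but with worse cache behavior, which is the usual comparison asked for in the question above.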
1. The Activity startup process (don't just recite the lifecycle)
Blog.csdn.net/luoshengyan…
2. The launch modes of Activity and their usage scenarios
(1) manifest settings, (2) startActivity flags blog.csdn.net/CodeEmperor… Extended here: the difference between a stack (first in, last out) and a queue (first in, first out)
3. The two ways to start a Service
(1) startService(), (2) bindService() www.jianshu.com/p/2fb6eb14f…
4.Broadcast registration modes and differences
(1) static registration (manifest), (2) dynamic registration www.jianshu.com/p/ea5e233d9… Extended here: when to use dynamic registration
5. Differences between HttpClient and HttpUrlConnection
Blog.csdn.net/guolin_blog… Which request method Volley uses (HttpClient before 2.3, HttpUrlConnection after 2.3)
6. The difference between HTTP and HTTPS
Blog.csdn.net/whatday/art… Extended here: HOW HTTPS works
7. Handwritten algorithms (at minimum, be able to write bubble sort)
www.jianshu.com/p/ae97c3cee…
8. Process keep-alive (undying processes)
www.jianshu.com/p/63aafe3c1… Also here: what is process priority (explained in the following article) segmentfault.com/a/119000000…
9. Means of interprocess communication
(1) AIDL, (2) Broadcast, (3) Messenger. AIDL: www.jianshu.com/p/a8e43ad5d… www.jianshu.com/p/0cca211df… Messenger: blog.csdn.net/lmj62356579… Extension: Binder, blog.csdn.net/luoshengyan…
10. Loading large images
PS: one small company (I had exaggerated the project scale on my resume to get past screening) showed me their project directly and asked me to explain the implementation principle. It was one of the most embarrassing interviews I've ever had; I practically ended up writing code for them on the spot. Blog.csdn.net/lmj62356579…
11. Three-level cache (any big image-framework question can be steered to this)
(1) memory cache, (2) local (disk) cache, (3) network. Memory: blog.csdn.net/guolin_blog… Local: blog.csdn.net/guolin_blog…
12.MVP Framework (mandatory)
Blog.csdn.net/lmj62356579… Extension here: handwrite an MVP example, the difference from MVC, the advantages of MVP
13. Explain the Context
Blog.csdn.net/lmj62356579…
14.JNI
www.jianshu.com/p/aba734d5b… Extend here: where JNI is used in the project, e.g. core logic, keys, encryption logic
15. Differences between the Java virtual machine and the Dalvik virtual machine
www.jianshu.com/p/923aebd31…
16. What is the difference between sleep and wait
Blog.csdn.net/liuzhenwen/…
17.View, ViewGroup event distribution
Blog.csdn.net/guolin_blog… Blog.csdn.net/guolin_blog…
18. Save the Activity state
onSaveInstanceState() blog.csdn.net/yuzhiboyi/a…
19.WebView and JS interaction (which apis to call)
Blog.csdn.net/cappuccinol…
20. Memory leak detection and memory performance optimization
Blog.csdn.net/guolin_blog…
21. Layout optimization
Blog.csdn.net/guolin_blog…
22. Customize views and animations
Both of the following explanations are very thorough. Most interviewers will not go too deep in this part, or will give you an effect and ask you to explain how you would implement it. (1) www.gcssloop.com/customview/… (2) blog.csdn.net/yanbober/ar…
conclusion
What problems have you solved at work, and which projects gave you a sense of accomplishment? (This question will definitely come up, so be sure to prepare for it.)
In fact, these answers depend on day-to-day accumulation. For me, the most rewarding thing was developing the KCommon project, which greatly improved my development efficiency.
Refer to www.jianshu.com/p/564b39206…
To read more
As a programmer, besides writing good code, you should learn these!
Technical highlights. What did I do for the first half of the year
Here’s how to create a sliding top visual effect
NDK project in practice – a high-fidelity imitation of 360 Mobile Assistant's uninstall monitoring
If you found this useful, sharing it is great support for me!
It’s not just technology that’s gained here!