Preface

To be a good Android developer, you need a complete knowledge system. Let's build one and grow together here.

An awesome collection of Android expert interview questions and answers (continuously updated…)

A comprehensive, systematic set of advanced Android interview questions, distilled from dozens of top interview repositories and more than 300 high-quality interview write-ups.

Welcome to the 2020 senior Android big-company interview guide, here to see you through the "Golden March, Silver April" hiring season. This is the Android advanced chapter (part 1).

Android Advanced Interview questions (⭐⭐⭐)


I. Performance optimization

1. App stability optimization

1. What stability optimizations did you make?

As the project matured and the user base and DAU kept growing, we ran into many stability problems that challenged the team: users often found the App laggy or certain features simply unavailable. We therefore launched a dedicated stability optimization effort, focused mainly on three items:

  • Crash-specific optimization (=> 2)
  • Performance stability optimization (=> 2)
  • Business stability optimization (=> 3)

Through optimization in these three areas, we built a mobile high-availability platform and took many supporting measures to make the App truly highly available.

2. How did you optimize performance stability?

  • Overall performance optimization: startup speed, memory optimization, rendering optimization
  • Offline problem discovery and optimization
  • Online monitoring
  • Crash optimization

We optimized along multiple dimensions: startup speed, memory, layout loading, lag (jank), APK size, network traffic, power consumption, and so on.

Our optimization work is split into two levels: online and offline. Offline, we focus on finding problems and fixing them directly, aiming to resolve as many issues as possible before release. Online, the primary purpose is monitoring: we monitor every performance dimension so that we receive alerts about abnormal situations as early as possible.

At the same time, for the most serious online performance problem, crashes, we ran a dedicated optimization effort. We not only improved the specific crash metrics but also collected as much detailed crash information as possible, combined with back-end aggregation and alerting, so that problems can be located quickly.

3. How to guarantee business stability?

  • Data collection + alerting
  • Monitor the project's main flows and core paths.
  • Also record how many exceptions occur at each step, so that we know the conversion rate of every business flow and of the corresponding interfaces.
  • Combined with the overall dashboard, raise an alert if a conversion rate drops below a certain threshold.
  • Exception monitoring + single-point tracing
  • Fallback (stop-loss) strategy

Business high availability on the mobile client focuses on whether complete features are usable by the user. It mainly addresses cases where something abnormal happens online that causes no crash and no performance problem, yet a feature is simply unavailable. We instrument (add tracking points to) the project's main flows and core paths, compute the real conversion rate at every step, and also record exactly how many exceptions occur at each step. That way we know the conversion rate of every business flow and of the corresponding interfaces. Combined with the dashboard data, if a conversion rate or the success rate of some monitored step falls below a certain threshold, an online anomaly has very likely occurred; combined with the corresponding alerting, we do not have to wait for user feedback. This is the foundation of business stability assurance.

At the same time, there are special cases. For example, during development a catch block may swallow an exception so that the program does not crash; that is not reasonable, because although the program did not crash, the feature became unavailable at that moment. Those caught exceptions therefore also need to be reported, so we know which problems users are actually hitting. In addition, there are online single-point problems, for example a user who taps login and can never get in. A single-point problem has nothing in common with other issues, so we have to capture its detailed, case-specific information.

Finally, if something abnormal does happen, we take a series of measures to stop the loss quickly. (=> 4)

4. When something abnormal happens, how do you stop the loss quickly?

  • Feature switches
  • A unified navigation (routing) center
  • Dynamic fixes: hotfix, resource bundle updates
  • Self-repair: safe mode

First of all, the App needs some capabilities built in ahead of time. For any new feature about to be launched, we add a feature switch; via the configuration issued by the config center we can decide whether to show the new feature's entry point. If something goes wrong, we simply close the entry to the new feature and keep the App under control.
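
As a small illustration of the feature-switch idea, here is a minimal sketch; ConfigCenter and its getBoolean() method are hypothetical stand-ins for whatever configuration SDK issues the switches in your project.

public void bindNewFeatureEntry(View entryView) {
    // If the switch is turned off from the server side, the entry simply disappears
    // and the (possibly buggy) new feature can no longer be reached.
    boolean enabled = ConfigCenter.getInstance().getBoolean("new_feature_enabled", false);
    entryView.setVisibility(enabled ? View.VISIBLE : View.GONE);
}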

Next, the App needs a routing fallback: all page navigation is dispatched through the router, so if a navigation target matches a buggy new feature, we can refuse to navigate, or redirect to a unified "under maintenance" page. If neither of those two approaches works, dynamic repair via hotfix can be considered; current hotfix solutions are fairly mature and can be added to a project at low cost. It is even better if a feature is implemented with RN or Weex, because it can then be updated dynamically by shipping a new resource bundle. If none of that is possible, you can add a self-repair capability: if the App fails to start several times in a row, clear all cached data and reset the App to its freshly installed state; at the most serious level, the main thread can be blocked and the user only allowed in after, for example, a hotfix has been applied successfully.

For a more comprehensive and in-depth understanding, see In-depth Exploration of Android Stability Optimization.

2. App startup speed optimization

1. How does startup optimization work?

  • Analyze the status quo and identify problems
  • Targeted optimization (summarize first, then go deeper)
  • Maintain long-term optimization effect

After a certain release we found that startup had become very slow and user feedback about it kept increasing, so we started thinking about optimizing the application's startup speed. We first did a code-level review of the startup path, found that the startup process was very complex, and then used a series of tools to determine whether too many time-consuming operations were being executed on the main thread.

After going through the code we found that the application's main thread carried far too many tasks, so we introduced asynchronous initialization to address it. We also found that some initialization code was not high priority at all: it did not have to run in Application onCreate and could be executed later. For that code we used lazy initialization and, combined with IdleHandler, built a better lazy-initialization scheme that runs tasks during the main thread's idle time, reducing the jank caused by startup work. Once that was done, startup became noticeably faster.
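
The following is a minimal sketch of that IdleHandler-based lazy initialization (not our exact implementation); delayedTasks and collectDelayedInitTasks() are hypothetical names for a queue of low-priority init Runnables. It must run on the main thread so that myQueue() is the main message queue.

final Queue<Runnable> delayedTasks = new LinkedList<>(collectDelayedInitTasks()); // hypothetical helper
Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
    @Override
    public boolean queueIdle() {
        Runnable task = delayedTasks.poll();
        if (task != null) {
            task.run();                 // run exactly one task per idle slot
        }
        return !delayedTasks.isEmpty(); // keep the handler while tasks remain
    }
});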

Finally, a brief word on how we keep startup optimized over the long term: we built our own starter framework, combined it with CI checks, and added monitoring of many aspects online. (Leads to => Question 4)

2. How was the asynchronous initialization done, and what problems did you run into?

  • Describe the evolution process
  • Introduce the starter framework

We initially used the most common asynchronous approach: in Application onCreate, new Thread + setting the thread priority to background, to initialize asynchronously. Later we used a thread pool and IntentService. However, as the application evolved we found the code was no longer elegant, and some scenarios were handled very poorly, for example direct dependencies between multiple initialization tasks, or a task that must finish within a particular lifecycle phase. Neither a plain thread pool nor an IntentService can express that, so we started looking for a new solution that would solve these problems properly.

That solution is the starter we use now. In the starter model, every piece of initialization code is abstracted into a Task; the tasks are sorted into a directed acyclic graph according to their dependencies and then executed through an asynchronous queue. The queue is strongly tied to the number of CPU cores, which maximizes the ability of both the main thread and the worker threads to execute tasks in parallel.
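
Below is a much simplified sketch of that idea (not the actual starter framework, and it ignores main-thread tasks): each init step is a Task that waits on a latch counted down by the tasks it depends on, and all tasks are submitted to a pool sized by the CPU core count.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

abstract class Task implements Runnable {
    private final List<Task> successors = new ArrayList<>();
    private CountDownLatch latch = new CountDownLatch(0);

    // Declare that this task must run after 'dependency'.
    void dependsOn(Task dependency) {
        dependency.successors.add(this);
        latch = new CountDownLatch((int) latch.getCount() + 1);
    }

    @Override
    public final void run() {
        try {
            latch.await();                 // wait until every dependency has finished
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
        execute();                         // the actual initialization work
        for (Task next : successors) {     // unblock the tasks that depend on this one
            next.latch.countDown();
        }
    }

    protected abstract void execute();
}

class TaskDispatcher {
    // Pool size tied to the number of CPU cores, as described above.
    private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    void start(List<Task> tasks) {
        for (Task task : tasks) {
            pool.submit(task);
        }
    }
}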

3. What are the overlooked points of startup optimization?

  • CPU time vs. wall time
  • Pay attention to optimizing lazy initialization
  • Some "black tech" tricks

First of all, CPU Profiler and Systrace expose two important metrics, CPU time and wall time, and it is important to understand the difference. Wall time is the elapsed time the code takes to execute; CPU time is the time the code actually spends on the CPU. Lock contention can make the two diverge significantly. CPU time is what we should use as the direction for optimization.

Second, we should not only chase raw startup speed but also pay attention to optimizing lazy initialization. Lazy initialization is typically triggered after the interface has been displayed, but if the user is interacting with the interface at that moment, for example scrolling, serious jank can appear. We therefore use IdleHandler to run the time-consuming tasks during the main thread's idle time, which greatly improves the user experience and avoids jank caused by startup work.

Finally, there are some "black tech" tricks for startup optimization. First, class preloading: we start a thread right after MultiDex.install and preload classes with Class.forName, so that when a class is actually needed, class loading has already been done. Also, from the Systrace traces we can see that on some phones the application does not really get all the CPU cores; for example the device has eight cores but only four are given to us. Some applications therefore apply another trick: aggressively raising the number of online CPU cores and the CPU frequency during startup.
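
A minimal sketch of the class-preloading trick (the class names are purely illustrative; in practice the list comes from startup traces):

new Thread(new Runnable() {
    @Override
    public void run() {
        // Hypothetical class names; preloading triggers class loading + static init ahead of time.
        String[] classesToPreload = {
                "com.example.app.HeavyInitClass",
                "com.example.app.JsonParsers"
        };
        for (String name : classesToPreload) {
            try {
                Class.forName(name);
            } catch (ClassNotFoundException ignored) {
            }
        }
    }
}, "class-preload").start();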

4. Is there a good solution to slow startup caused by version iteration?

  • The starter
  • Combining with CI
  • Improving monitoring

We’ve seen this before, and it’s really hard to solve. However, after repeated thinking and trying, we finally found a better solution.

First, we use the starter to manage every initialization task, and the starter distributes the tasks automatically: we do our best to spread them evenly across all of our threads. This is different from the usual ad-hoc asynchronous approach, and it is a great way to keep the application's startup from slowing down.

Second, we combined this with CI. For example, we now lock down certain classes such as the Application class: if someone changes them, either the code cannot be merged into the branch, or after the change an internal tool sends an email to me, and I then confirm with the author how much time the new code costs. If it cannot be initialized asynchronously, we consider delayed initialization; if the initialization takes too long, we consider whether it can be loaded lazily and only used when actually needed, and so on.

Then, we expose problems as much as possible before going live. Once we really are in the online environment, we rely on improved monitoring: we monitor not only the App's total startup time but also each lifecycle phase, for example the Application's onCreate and attachBaseContext methods and the interval between them. If startup becomes slow in some release, we can tell which link slowed down, compare it with the previous version, and then look at the newly added code.

5. Open question: how would you design a lazy-loading framework or SDK, and what should you pay attention to if you want to improve startup speed?

For a more comprehensive and in-depth understanding, see In-depth Exploration of Android Startup Speed Optimization.

3. App memory optimization

1. What is the process of your memory optimization project?

1. Analyze the status quo and identify problems

We suspected that our APP had serious memory problems. First, our online OOM rate was relatively high; second, we could see a lot of memory jitter in the Android Studio Profiler. That was the preliminary picture. After confirming the situation we did a round of investigation and finally identified the main problems in the project: memory jitter, memory leaks, and very careless Bitmap usage.

2. Targeted optimization

For example, to solve memory jitter: use the Memory Profiler (a jagged, saw-tooth memory graph) -> analyze the specific code problem (log string concatenation inside frequently called methods). You can also talk through memory leak or overflow fixes in the same way.
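
As an illustration of that "log string concatenation in frequently called methods" case, here is a hedged before/after sketch (TAG and pendingTasks are illustrative names):

// Before: every call allocates a StringBuilder plus String objects, which in a
// per-frame or per-touch method produces memory jitter and frequent GC.
void onFrame(long frameTimeNanos) {
    Log.d(TAG, "frame at " + frameTimeNanos + " pending=" + pendingTasks);
}

// After: drop the log from the hot path, or at least gate it so the concatenation
// only happens in debug builds.
void onFrameFixed(long frameTimeNanos) {
    if (BuildConfig.DEBUG) {
        Log.d(TAG, "frame at " + frameTimeNanos);
    }
}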

3. Efficiency improvement

In order not to increase the workload of the feature developers, we used non-invasive tools such as ARTHook, and we taught these techniques to everyone so we could all improve our efficiency together.

Once we were fluent with the Memory Profiler and MAT, we wrote up a series of solutions to different classes of problems and shared them with the team. In this way, the whole team became much more aware of memory optimization.

2. What are your biggest feelings about memory optimization?

1. Sharpening the axe does not delay the chopping of wood

At the beginning we did not dive straight into analyzing the memory problems in the project code. Instead we first studied Google's official documentation, such as how to use the Memory Profiler and MAT. Once we had mastered these tools, whenever a memory problem showed up in the project we could troubleshoot and locate it quickly.

2. Technical optimization must be combined with business code

At first we measured memory for the whole APP at runtime and then added monitoring of memory usage in some key modules, but we found that this monitoring was not closely tied to our business code. For example, after combing through the project we discovered that it used several different image libraries, and their memory caches were of course not shared, which made the memory footprint of the whole project very high. So technical optimization must be combined with the business code.

3, systematic improvement of solutions

In the process of memory optimization we not only optimized the Android side but also reported the data collected on the Android side to our server and forwarded it to our backend. This makes it convenient for our bug-tracking and crash-tracking systems to resolve a whole series of problems.

3. How to detect all unreasonable places?

For example, our initial scheme for detecting overly large images was to extend ImageView and override its onDraw method. But when we promoted it, many developers did not accept it, because a lot of ImageViews had already been written and replacing them all was expensive. So we looked for a solution that required no replacement, and we ended up hooking with ARTHook.

4. How to avoid memory jitter? (Coding considerations)

Memory jitter is caused by a large number of objects being created and discarded in the young generation in a short time. It is accompanied by frequent GC, which takes a large share of the UI thread and CPU resources and makes the whole app lag.

Tips for avoiding memory jitter:

  • Try to avoid creating objects inside loops; move object creation outside the loop.
  • The onDraw() method of a custom View is called frequently, so do not create objects in it.
  • When you need many Bitmaps, try caching them in arrays or containers for reuse.
  • For objects that can be reused, you can also cache them with an object pool (see the sketch below).
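
A small sketch of the object-pool tip, assuming androidx.core.util.Pools is available (a hand-rolled pool works the same way):

import androidx.core.util.Pools;

public class PointPool {
    private static final Pools.SimplePool<float[]> POOL = new Pools.SimplePool<>(16);

    public static float[] obtain() {
        float[] point = POOL.acquire();
        return point != null ? point : new float[2]; // allocate only when the pool is empty
    }

    public static void recycle(float[] point) {
        POOL.release(point);                          // return the object for reuse
    }
}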

For a more comprehensive and in-depth understanding of Android memory optimization, see In-depth Exploration of Android Memory Optimization.

4. App drawing optimization

1. What tools do you use to optimize your layout?

In the process of layout optimization, I used a lot of tools, but each tool has its different use scenarios, different scenarios should use different tools. Next, I will analyze it from two perspectives: online and offline.

For example, if I want to count online FPS, I use the Choreographer class, which has the following features:

  • 1. Get the overall frame rate.
  • 2. It can be brought online for use.
  • 3. The frame rate it gets is almost real-time, which is what we need.
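
A minimal sketch of counting FPS with Choreographer as described above (not our exact online implementation): count frame callbacks over a one-second window.

public class FpsMonitor implements Choreographer.FrameCallback {
    private long startNanos = -1L;
    private int frames = 0;

    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (startNanos < 0) {
            startNanos = frameTimeNanos;
        } else {
            frames++;
            long elapsed = frameTimeNanos - startNanos;
            if (elapsed >= 1_000_000_000L) { // one second of frames collected
                double fps = frames * 1_000_000_000.0 / elapsed;
                Log.d("FpsMonitor", "fps=" + fps); // in practice, report to APM instead
                startNanos = frameTimeNanos;
                frames = 0;
            }
        }
        Choreographer.getInstance().postFrameCallback(this); // keep watching the next frame
    }
}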

Offline, if you want to optimize the time spent loading layouts, you need to measure how long each layout takes. For this I use an AOP approach, which is non-invasive and requires no integration work from other developers, and it easily captures each layout's inflation time. If you need even finer granularity and want the loading time of every control, you need to hook view creation with LayoutInflaterCompat.setFactory2.
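
A hedged sketch of that per-control hook: inside an AppCompatActivity, install the factory before super.onCreate() and time each view the AppCompat delegate creates (the layout id is illustrative).

@Override
protected void onCreate(Bundle savedInstanceState) {
    LayoutInflaterCompat.setFactory2(getLayoutInflater(), new LayoutInflater.Factory2() {
        @Override
        public View onCreateView(View parent, String name, Context context, AttributeSet attrs) {
            long start = System.nanoTime();
            View view = getDelegate().createView(parent, name, context, attrs);
            long costUs = (System.nanoTime() - start) / 1_000;
            Log.d("InflateCost", name + " took " + costUs + "us");
            return view;
        }

        @Override
        public View onCreateView(String name, Context context, AttributeSet attrs) {
            return onCreateView(null, name, context, attrs);
        }
    });
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main); // hypothetical layout
}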

I also use Layout Inspector and Systrace. Systrace makes it easy to see how long each frame takes and what it is actually doing, and Layout Inspector makes it easy to see the view hierarchy of each screen, which helps us flatten the hierarchy.

2. Why does layout stall, and how do you optimize it?

After analyzing the loading process of a layout, we found that there are four possible causes of layout stalling:

  • 1. First, the system will load our Xml file into our memory by IO mapping, and the IO process may cause lag.
  • 2. Second, the layout loading process is a reflection process, and the reflection process may also cause stalling.
  • 3. At the same time, if the layout level is deep, the layout traversal process will be time-consuming.
  • 4. Finally, unreasonable nesting and overuse of RelativeLayout can lead to excessive overdraw.

In this regard, our optimization methods are as follows:

  • 1. To optimize the loading of the XML file, we use asynchronous inflate, i.e. AsyncLayoutInflater. The idea is to inflate the layout on a worker thread and then hand the View back to the main thread through a Handler, so the main thread is not blocked and the inflation cost is paid on the asynchronous thread. This only alleviates the problem from the side; a sketch is given after this list.
  • 2. Later we found a way to solve the pain point at the root: the X2C framework. Its core principle is that during development we still write XML layouts, but at compile time it uses APT to translate the XML layouts into Java code that builds the layout. Writing layouts this way has two advantages: 1) it removes the time-consuming IO step of loading the XML layout; 2) it creates control objects directly with new in Java code, so there is no reflection overhead. This solves the layout-loading problem at its root.
  • 3. Then, we can use ConstraintLayout to reduce the nesting level of the interface layout; the deeper the original layout, the more levels it removes. It also avoids the extra overdraw caused by nested RelativeLayouts.
  • 4. Finally, we can use the AspectJ framework (AOP) and LayoutInflaterCompat.setFactory2 respectively to build an offline monitoring system for global layout loading speed and per-control loading time.
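
For point 1 above, a minimal AsyncLayoutInflater sketch (the layout id and parentView are illustrative; the inflated layout must not depend on a Looper of its own during construction):

new AsyncLayoutInflater(this).inflate(R.layout.heavy_page, parentView,
        new AsyncLayoutInflater.OnInflateFinishedListener() {
            @Override
            public void onInflateFinished(View view, int resid, ViewGroup parent) {
                // Called back on the main thread once inflation on the worker thread finishes.
                parent.addView(view);
            }
        });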

3. What are the results of layout optimization?

  • 1. First, we set up a systematic monitoring approach, again combining online and offline. Offline, using AOP or ARTHook, it is very easy to obtain the loading time of each layout and each control. Online, we collect FPS through Choreographer.getInstance().postFrameCallback(), so we know on which screens users experience dropped frames.
  • 2. Then, for layout monitoring, we set up a series of indicators such as FPS, layout loading time and layout hierarchy.
  • 3. Finally, before each release we do a Review of our core path to make sure that our FPS, layout load times, layout levels, etc. are in a reasonable state.

4. How do you do lag (jank) optimization?

From the project's initial stage, through its growth stage, to its mature stage, each stage handled lag optimization differently. What each phase did is as follows:

  • 1. Locating and solving problems with system tools
  • 2. An automated lag-monitoring scheme and optimization
  • 3. Building online monitoring and offline detection tools

My lag optimization also went through these stages. At first, when lag appeared in one of our modules, I located it with system tools: I used Systrace, watched the CPU usage during the lag window, and, combined with the code, refactored the module and made parts of the code asynchronous or deferred. That is how problems were solved early in the project. As the project grew, however, more and more lag was found offline, and there was also lag reported online that we could not reproduce. So we started looking for an automated lag-monitoring scheme. It is based on Android's message-handling mechanism: any code executed on the main thread goes through Looper.loop, which holds an mLogging (Printer) object that is called before and after every message; that is the hook we use for automated monitoring. At the same stage we also improved online ANR reporting: we monitor ANR information and combine it with ANR-WatchDog as a supplementary scheme on newer Android versions where we have no permission to read the traces file. After finishing this lag-detection scheme, we also built the online monitoring and offline detection tooling, finally ending up with a complete, comprehensive, multi-dimensional solution.

5. How do you automatically capture lag information?

Any code executed on the main thread goes through the Looper.loop method. This method holds an mLogging (Printer) object that is called before and after each message is processed, and the time-consuming code runs inside dispatchMessage. So before a message starts executing, we can postDelayed a task on a watcher thread, with the delay set to our lag threshold. If the main-thread message completes within the threshold, we cancel the task on the watcher thread; if it has not completed within the threshold, the watcher task runs, captures the main thread's current execution stack, and we then know where the lag is.
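
A simplified sketch of that mechanism, in the spirit of BlockCanary (the threshold and tag are illustrative): Looper prints ">>>>> Dispatching" before and "<<<<< Finished" after each main-thread message, so we schedule a delayed stack dump on a watcher thread and cancel it if the message finishes in time.

public class BlockMonitor {
    private static final long THRESHOLD_MS = 300L;             // illustrative lag threshold
    private final HandlerThread watchThread = new HandlerThread("lag-watcher");
    private Handler watchHandler;

    private final Runnable dumpTask = new Runnable() {
        @Override
        public void run() {
            // The message has exceeded the threshold: capture the main thread's stack.
            StringBuilder sb = new StringBuilder();
            for (StackTraceElement e : Looper.getMainLooper().getThread().getStackTrace()) {
                sb.append(e.toString()).append('\n');
            }
            Log.w("BlockMonitor", "main thread blocked:\n" + sb);
        }
    };

    public void install() {
        watchThread.start();
        watchHandler = new Handler(watchThread.getLooper());
        Looper.getMainLooper().setMessageLogging(new Printer() {
            @Override
            public void println(String x) {
                if (x.startsWith(">>>>> Dispatching")) {
                    watchHandler.postDelayed(dumpTask, THRESHOLD_MS);
                } else if (x.startsWith("<<<<< Finished")) {
                    watchHandler.removeCallbacks(dumpTask);
                }
            }
        });
    }
}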

In practice we found that the captured stack is not necessarily accurate: the stack may be taken at a point the main thread happens to be passing through, while the really time-consuming work has already finished. So we optimized the scheme with high-frequency sampling: within one monitoring cycle we sample the main thread's stack multiple times. If a lag occurs, we compress the collected information, report it to the APM backend, and then look for stacks that appear repeatedly; those repeated stacks are, with high probability, where the lag happened. This improves the accuracy of the lag information.

6. How does the overall lag solution work?

First, we combine online and offline tools for lag. Offline, we try to expose problems as early as possible; online, we focus on the comprehensiveness of monitoring, automation, and sensitivity to anomalies.

At the same time, lag has many facets. For example, a piece of code may stay below the lag threshold each time it runs, but run far too often or be executed by mistake many times, and the user will still perceive jank. So offline we hook common time-consuming code via AOP, collect data over a period of time, and analyze it: we then know when these time-consuming calls happen, how many times, and how long they take, and whether that matches our expectations; if not, we can fix the code before release. Lag monitoring also has blind spots that are easy to overlook, for example the interval between lifecycle callbacks. For that particular problem we used compile-time annotation processing to replace all Handler parent classes in the project and monitored two of their methods, so we know the execution time of main-thread Messages and their call stacks.

For online lag, besides conventional indicators such as the App lag rate and ANR rate, we also compute the "page opened within one second" rate, lifecycle execution times, and so on. In addition, at the moment a freeze happens we save as much information about the current scene as possible, which gives us a basis for fixing or reproducing the freeze later.

7. Why can TextView setText be time-consuming, and how well do you understand the TextView drawing source code?

Open problem: Optimize the speed and smoothness of opening a list page.

For a more comprehensive and in-depth understanding, see Drawing Optimization in Android Performance Optimization, In-depth Exploration of Android Layout Optimization (part 1 and part 2), and In-depth Exploration of Android Lag Optimization (part 1 and part 2).

5. App size reduction (APK slimming)

6. Network optimization

1. Key points of network optimization for mobile data fetching

  • 1. Connection reuse: saves connection-establishment time, e.g. keep-alive. On Android, keep-alive is enabled by default for both HttpURLConnection and HttpClient (HttpURLConnection had a connection-pooling bug before Android 2.2).

  • 2. Request merging: that is, multiple requests are combined into one request. The most common one is CSS Image Sprites in web pages. If there are too many requests on a page, you can also consider merging some requests.

  • 3. Reduce the size of request data: for POST requests the body can be gzip-compressed, and headers can be compressed (HTTP/2 only).

The body of the response can also be gzip-compressed, reducing its size to roughly 30% of the original. The keys of returned JSON can be shortened as well, which works especially well when the response format rarely changes (Alipay's chat responses use this approach). A sketch of gzip-compressing request bodies is given after this list.

  • 4. Decide what quality of image to download according to the user's current network quality (common in e-commerce apps).

  • 5. HTTPDNS: DNS supports both TCP and UDP, but standard DNS mostly talks to port 53 of the DNS server over UDP. HTTPDNS instead uses HTTP to talk to port 80 of an HTTPDNS server, which bypasses traditional DNS resolution and the carrier's local DNS, effectively preventing domain hijacking and improving resolution efficiency.
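
For point 3, a hedged sketch of gzip-compressing request bodies with an OkHttp interceptor, based on the well-known GzipRequestInterceptor recipe (it assumes OkHttp 3.x and a server that accepts gzip-encoded request bodies):

import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.MediaType;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;
import okio.BufferedSink;
import okio.GzipSink;
import okio.Okio;

public class GzipRequestInterceptor implements Interceptor {
    @Override
    public Response intercept(Chain chain) throws IOException {
        Request original = chain.request();
        // Only compress requests that have a body and are not already encoded.
        if (original.body() == null || original.header("Content-Encoding") != null) {
            return chain.proceed(original);
        }
        Request compressed = original.newBuilder()
                .header("Content-Encoding", "gzip")
                .method(original.method(), gzip(original.body()))
                .build();
        return chain.proceed(compressed);
    }

    private static RequestBody gzip(final RequestBody body) {
        return new RequestBody() {
            @Override
            public MediaType contentType() {
                return body.contentType();
            }

            @Override
            public long contentLength() {
                return -1; // length is unknown before compression
            }

            @Override
            public void writeTo(BufferedSink sink) throws IOException {
                BufferedSink gzipSink = Okio.buffer(new GzipSink(sink));
                body.writeTo(gzipSink);
                gzipSink.close();
            }
        };
    }
}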

Refer to the article

2. Client network security implementation

3. Design a network optimization scheme for the weak network environment of mobile terminal.

7. App power optimization

8. Security optimization of Android

1. How to improve app security?

2. How is Android app hardening (reinforcement) done?

3. What’s the confusion behind Android?

4. Talk about your understanding of Android app signing.

9. Why does WebView load slowly?

This is because on the client side the WebView must be initialized before it can load an H5 page, and until the WebView is fully initialized the subsequent page-loading process is blocked.

Optimization revolves around the following two points:

  • Preload the WebView.
  • Request H5 page data while loading the WebView.

So the common approach is:

  • Global WebView.
  • Client proxy page request. After the WebView is initialized, it requests data from the client.
  • Store offline packages in assets.

There are a few other optimizations:

  • Script execution is slow, so you can let the script run last without blocking page parsing.
  • DNS connections are slow, so clients can reuse domain names and links.
  • The React framework code executes slowly. You can split this code and parse it in advance.

10. How to optimize a custom View

To speed up your view, minimize unnecessary work in frequently called methods. Start with onDraw(): do not allocate memory there, because allocations can trigger GC and cause jank. Allocate during initialization or between animations, never while an animation is running.
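
A small sketch of that advice: allocate the Paint once in the constructor and keep onDraw() allocation-free (the view itself is just an illustration).

public class BadgeView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG); // allocated once, reused every frame

    public BadgeView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // No new objects here: draw with the pre-allocated Paint only.
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, getWidth() / 4f, paint);
    }
}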

You also want to minimize the number of times onDraw() is called, which mostly means minimizing calls to invalidate(). If possible, call the four-argument invalidate() with a dirty rectangle instead of the no-argument version, which forces the entire view to be redrawn.

Another time-consuming operation is the request layout. Any time requestLayout() is executed, the Android UI will traverse the View hierarchy to calculate the size of each View. If it finds conflicting values, it needs to recalculate several times. Also, try to keep the hierarchy of the View flat, which is very efficient.

If you have a complex UI, you should consider writing a custom ViewGroup to perform its Layout operations. Unlike built-in views, custom views allow applications to measure only this part of the view, eliminating the need to walk through the view hierarchy to calculate the size.

11. When will Force Close (FC) appear?

Error, OOM, StackOverflowError, RuntimeException (for example a NullPointerException), and so on.

Solutions:

  • Pay attention to memory usage and management.
  • Use the Thread.UncaughtExceptionHandler interface (see the sketch below).
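
A minimal sketch of the second point: install a default uncaught-exception handler so the crash can be logged or reported before the process dies, chaining to the previous handler to keep the system's default behavior.

final Thread.UncaughtExceptionHandler previous = Thread.getDefaultUncaughtExceptionHandler();
Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    @Override
    public void uncaughtException(Thread thread, Throwable throwable) {
        Log.e("CrashHandler", "uncaught exception on " + thread.getName(), throwable);
        // ... persist or upload the crash information here ...
        if (previous != null) {
            previous.uncaughtException(thread, throwable); // let the system finish the crash
        }
    }
});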

12. How do you solve performance problems caused by Java multithreading?

13. The implementation principle of TraceView, and an analysis of the sources of error in its data.

14. Have you used SysTrace and understood the principle?

15. Mmap + native log optimization?

Traditional log printing has two performance problems: it repeatedly operates on the file descriptor table, and it repeatedly enters kernel mode. So mmap is used instead, writing the log by reading and writing memory directly.
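
A hedged, Java-level illustration of the mmap idea using MappedByteBuffer (real log libraries such as Mars xlog do this in native code, with extra handling for rotation and flushing):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class MmapLogger {
    private final MappedByteBuffer buffer;

    public MmapLogger(String path, int capacityBytes) throws IOException {
        RandomAccessFile file = new RandomAccessFile(path, "rw");
        file.setLength(capacityBytes);
        // Map the log file once; later writes are plain memory writes, and the kernel
        // flushes the dirty pages back to disk without a write() syscall per log line.
        buffer = file.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, capacityBytes);
    }

    public void log(String line) {
        byte[] bytes = (line + "\n").getBytes(StandardCharsets.UTF_8);
        if (buffer.remaining() >= bytes.length) {
            buffer.put(bytes);
        }
    }
}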

II. Android Framework

1. Android system architecture

Android is an open source software stack based on Linux, created for a wide range of devices and models. Here are the five components of the Android platform:

1. Apps

Android comes with a core set of apps for email, messaging, calendar, Internet browsing and contacts. Apps that come with the platform have no special status, just like apps that users can choose to install. So third-party apps can become users’ default web browser, SMS Messenger, and even their default keyboard (with some exceptions, such as the system’s Settings app).

System applications are available to users’ applications and provide major functions that developers can access from their own applications. For example, if your application wants to send SMS messages, you don’t need to build the feature yourself, you can instead call your installed SMS application to send messages to the recipient you specify.

2. Java API framework

You can use the entire feature set of Android OS through apis written in the Java language. These apis form the building blocks needed to create Android applications. They simplify the reuse of core modular system components and services, including the following:

  • A rich, extensible view system that can be used to build an application’s UI, including lists, grids, text boxes, buttons and even an embeddable Web browser
  • Resource manager for accessing non-code resources, such as localized strings, graphics, and layout files
  • Notifications manager, which allows all applications to display custom alerts in the status bar
  • The Activity manager, which manages the application lifecycle, provides a common navigation back stack
  • Content providers that allow applications to access data from other applications (such as the Contacts application) or share their own data

Developers have full access to the framework apis used by Android applications.

3. System runtime

1) Native C/C++ libraries

Many of the core Android system components and services, such as ART and HAL, are built from native code, requiring native libraries written in C and C++. The Android platform provides Java framework apis to show applications some of the functionality of its native libraries. For example, you can access OpenGL ES through the Android framework’s Java OpenGL API to support drawing and manipulating 2D and 3D graphics in your application. If you are developing an application that requires C or C++ code, you can use the Android NDK to access some native platform libraries directly from native code.

2)Android Runtime

For devices running Android 5.0 (API level 21) or higher, each application runs in its own process and has its own Android Runtime (ART) instance. ART is written to run multiple virtual machines on low-memory devices by executing DEX files, a bytecode format designed for Android that is optimized to use very little memory. A compilation tool chain (such as Jack) compiles Java source code into DEX bytecode, making it available to run on the Android platform.

Some of the main functions of ART include:

  • Ahead-of-time (AOT) and just-in-time (JIT) compilation
  • Optimized garbage Collection (GC)
  • Better debugging support, including dedicated sampling analyzers, detailed diagnostic exception and crash reports, and the ability to set up monitoring points to monitor specific fields

Prior to Android version 5.0 (API level 21), Dalvik is the Android Runtime. If your application works well on ART, it should also work on Dalvik, but not the other way around.

Android also includes a set of core runtime libraries that provide most of the functionality of the Java programming language used by the Java API framework, including some Java 8 language functionality.

4. Hardware Abstraction Layer (HAL)

The Hardware Abstraction Layer (HAL) provides a standard interface that exposes device hardware functionality to a higher-level Java API framework. HAL contains multiple library modules, each of which implements an interface for a specific type of hardware component, such as a camera or Bluetooth module. When the framework API requires access to device hardware, the Android system loads library modules for that hardware component.

5. Linux kernel

The Android platform is based on the Linux kernel. For example, The Android Runtime (ART) relies on the Linux kernel to perform low-level functions, such as threading and low-level memory management. Using the Linux kernel allows Android to take advantage of major security features and allows device manufacturers to develop hardware drivers for well-known kernels.

For Android application development, it is best to sketch the following system architecture diagram:

2. View event distribution mechanism? How to resolve sliding conflicts?

Understand the structure of an Activity

An Activity contains a Window object, implemented by PhoneWindow. PhoneWindow uses a DecorView as the root view of the whole application window, and the DecorView divides the screen into two areas: the TitleView and the ContentView. What we normally write in layouts is displayed inside the ContentView.

The type of touch event

Touch events correspond to the MotionEvent class, and there are three main types of events:

  • ACTION_DOWN
  • ACTION_MOVE(Moves beyond a certain threshold will be considered an ACTION_MOVE action)
  • ACTION_UP

View event distribution is essentially the process of MotionEvent distribution. When a MotionEvent occurs, the system passes the click event to a specific View.

Event Distribution Process

The event distribution process is accomplished in three ways:

dispatchTouchEvent(): returning true means the event is consumed by the current View; returning super.dispatchTouchEvent() means the event continues to be distributed; returning false means handling is handed back to the parent's onTouchEvent.

onInterceptTouchEvent(): returning true intercepts the event and hands it to this View's own onTouchEvent for consumption; returning false means no interception, and the event continues to be passed to the child View. If it returns super.onInterceptTouchEvent(ev), there are two cases for whether the event is intercepted:

  • 1. If the View has child Views and the touch lands on a child View, it does not intercept and continues distributing the event to the child View, which is equivalent to returning false.

  • 2. If the View has no child Views, or has child Views but the touch does not land on any of them (the ViewGroup then behaves like an ordinary View), the View's own onTouchEvent responds, which is equivalent to returning true.

ViewGroups such as LinearLayout, RelativeLayout, and FrameLayout do not intercept by default, whereas ViewGroups such as ScrollView and ListView may intercept by default.

onTouchEvent(): returning true means the current View handles the event; returning false means the current View does not handle it, and the event is passed up to the parent View's onTouchEvent. If it returns super.onTouchEvent(ev), there are two cases:

  • 1. If the View is clickable or longClickable, it returns true, meaning it consumes the event, the same as returning true explicitly.

  • 2. If the View is neither clickable nor longClickable, it returns false, meaning it does not consume the event, and the event is passed up, the same as returning false explicitly.

Note: In Android, there are three classes with event-passing capabilities:

  • Activity: Has both distribution and consumption methods.
  • ViewGroup: has distribute, intercept, and consume methods.
  • View: Has distribution and consumption methods.

The relationship between the three methods is expressed in pseudocode as follows:

public boolean dispatchTouchEvent(MotionEvent ev) {
    boolean consume = false;
    if (onInterceptTouchEvent(ev)) {
        consume = onTouchEvent(ev);
    } else {
        consume = child.dispatchTouchEvent(ev);
    }
    
    return consume;
}

From the pseudocode above we can roughly state the delivery rules for a click event. For the root ViewGroup, the event is delivered to it first and its dispatchTouchEvent is called. If its onInterceptTouchEvent returns true, it intercepts the current event, and the event is handled by this ViewGroup itself: if its mOnTouchListener is set, onTouch is called, otherwise onTouchEvent is called; inside onTouchEvent, onClick is called if mOnClickListener is set. onTouchEvent() returns true whenever the View's CLICKABLE or LONG_CLICKABLE flag is true. If the ViewGroup's onInterceptTouchEvent returns false, it does not intercept, and the event is passed to its child, whose dispatchTouchEvent is then called. This repeats until the event is finally handled.

Some important conclusions:

1. Priority: onTouch > onTouchEvent > OnClickListener.onClick.

2. Normally an event sequence can only be intercepted and consumed by one View, because once an element intercepts the event, all remaining events in the same sequence (the remaining ACTION_MOVE, ACTION_UP, etc.) are handed directly to it, and its interception method is no longer asked whether to intercept. Exception: a View can force the events to be handled by another View by returning false from its overridden onTouchEvent.

3. If the View consumes no events other than ACTION_DOWN, the click event will disappear, the parent element’s onTouchEvent will not be called, and the current View will continue to receive subsequent events, which will eventually be passed to the Activity.

4. ViewGroup does not intercept any events by default (return false).

5. The View’s onTouchEvent will consume the event by default (returning true) unless it is unclickable (both Clickable and longClickable are false). The longClickable property of a View is false by default. The Clickable property of a Button is true by default and that of a TextView is false by default.

6. The Enable property of the View does not affect the default return value of onTouchEvent.

7. Through the requestDisallowInterceptTouchEvent method, a child element can intervene in the parent element's distribution of events, except for ACTION_DOWN events.

Remember the delivery order shown in this diagram, so you can draw it in detail during the interview:

When is ACTION_CANCEL triggered? If you touch a button, slide outside it and lift, is the click event triggered? What about sliding back inside and then lifting?

  • ACTION_CANCEL and ACTION_UP both mark the end of an event sequence for a View. If the parent View intercepts ACTION_UP or ACTION_MOVE, then the first time the parent intercepts, the child View receives an ACTION_CANCEL and will not receive any subsequent events.
  • ACTION_CANCEL also appears if you touch a control and then lift your finger after moving outside the control's area (and the click event is not triggered).

If the click event is intercepted but you still want it to reach the View below, how do you do it?

In the child View, call getParent().requestDisallowInterceptTouchEvent(true) (for example in its dispatchTouchEvent). The parent's onInterceptTouchEvent will then not run, and the click event can be passed down to the View below.

How to resolve View event conflicts? What is an example from development?

Common conflicts in development include the sliding conflict between a ScrollView and a RecyclerView, and a RecyclerView nested in another scrolling view that slides in the same direction.

Handling rules of sliding conflicts:

  • In the case of slide collisions due to inconsistent external and internal slide directions, it is possible to determine who intercepts the event based on the direction of the slide.
  • For sliding conflicts caused by the same external and internal sliding directions, depending on business requirements, you can specify when to let the external View intercept the event and when to let the internal View intercept the event.
  • For the nesting of the above two cases, relatively complex, according to the same needs to find a breakthrough in the business.

Implementation method of sliding conflict:

  • External interception: the click event passes through the parent container first; the parent intercepts it if it needs the event, and otherwise does not intercept. You implement this by overriding the parent container's onInterceptTouchEvent and putting the interception logic inside it (see the sketch after this list).
  • Internal interception: the parent container does not intercept any event; all events are passed to the child. If the child needs the event it consumes it, otherwise it is handed back to the parent to process. This is implemented with requestDisallowInterceptTouchEvent.
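
A minimal sketch of the external-interception template (needIntercept() is a hypothetical method standing in for your business rule, e.g. "horizontal slides belong to the parent"):

@Override
public boolean onInterceptTouchEvent(MotionEvent ev) {
    boolean intercepted = false;
    switch (ev.getAction()) {
        case MotionEvent.ACTION_DOWN:
            // Never intercept DOWN, otherwise the child will not receive the sequence at all.
            intercepted = false;
            break;
        case MotionEvent.ACTION_MOVE:
            intercepted = needIntercept(ev); // parent takes over only when it needs the gesture
            break;
        case MotionEvent.ACTION_UP:
            intercepted = false;
            break;
        default:
            break;
    }
    return intercepted;
}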

For a better understanding, see the linked article.

3. The drawing process of View?

The DecorView is loaded into the Window

  • Starting from the Activity's startActivity, the Activity is eventually created via ActivityThread's handleLaunchActivity method. First performLaunchActivity is called, which internally executes the Activity's onCreate method, completing the creation of the DecorView and the Activity. Then handleResumeActivity is called: it first calls performResumeActivity to run the Activity's onResume() method and returns an ActivityClientRecord object; it then obtains the DecorView through r.window.getDecorView(), obtains the WindowManager through a.getWindowManager(), and finally adds the DecorView by calling the WindowManager's addView() method.
  • The WindowManager implementation class is WindowManagerImpl, which internally delegates the addView logic to WindowManagerGlobal; interface isolation and the delegate pattern are used here to decouple abstraction from implementation. WindowManagerGlobal's addView() method not only adds the DecorView to the Window but also creates a ViewRootImpl object, and it loads the DecorView into the Window by calling root.setView() with the ViewRootImpl and the DecorView. ViewRootImpl is the implementation of ViewRoot and is the link between WindowManager and the DecorView; the View's three major steps (measure, layout, draw) are all driven through ViewRootImpl.

Understand the overall process of drawing

Drawing starts with the performTraversals() method of ViewRootImpl and traverses the view tree from top to bottom. Each View draws itself, and each ViewGroup tells its children to draw.

Understand the MeasureSpec

A MeasureSpec is a 32-bit int: the top 2 bits encode the measurement mode (SpecMode) and the lower 30 bits encode the size under that mode (SpecSize). MeasureSpec is a static inner class of the View class that describes how a View should be measured. It defines three measurement modes, as follows:

  • EXACTLY: exact measurement mode. It applies when the View's width or height is specified as match_parent or as a concrete value, and it means the parent has determined the child's exact size; the View's measured size is the SpecSize.
  • AT_MOST: maximum measurement mode. It applies when the View's width or height is specified as wrap_content; the child's size can be anything that does not exceed the maximum allowed by the parent.
  • UNSPECIFIED: no constraint. The parent places no restriction on the child's size; the child can be as large as it wants. It is used internally by the system and rarely in application development.

MeasureSpec packs SpecMode and SpecSize into a single int to avoid allocating extra objects. For convenience it provides packing and unpacking methods: makeMeasureSpec for packing, and getMode and getSize for unpacking.
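
A tiny example of packing and unpacking with the real View.MeasureSpec API (the size value is illustrative):

int size = 300; // desired size in px
int spec = View.MeasureSpec.makeMeasureSpec(size, View.MeasureSpec.EXACTLY); // pack

int mode = View.MeasureSpec.getMode(spec);     // -> View.MeasureSpec.EXACTLY
int specSize = View.MeasureSpec.getSize(spec); // -> 300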

The rules for creating a MeasureSpec for a normal View are as follows:

For a DecorView, its MeasureSpec is determined by the window size and its own LayoutParams; For a normal View, its MeasureSpec is determined by the parent’s MeasureSpec and its own LayoutParams.

How to implement a custom ViewGroup for a waterfall flow according to MeasureSpec?

The measure step of the View drawing process

  • First, the measureChildren() method in ViewGroup traverses all of the ViewGroup's children and skips measuring any child whose visibility is GONE.
  • Then, when measuring a given child, the child's MeasureSpec is computed from the parent container's MeasureSpec and the child's own LayoutParams.
  • Finally, the computed MeasureSpec is passed into the child's measure method. ViewGroup does not define a concrete measurement process, because ViewGroup is an abstract class and its onMeasure must be implemented by each subclass: different ViewGroup subclasses have different layout characteristics and therefore different measurement details, and a subclass overrides onMeasure when it needs a custom measurement process. (setMeasuredDimension stores the View's measured width and height; if a View does not override onMeasure, getDefaultSize is used by default.)

getSuggestedMinimumWidth analysis

If the View has no background, it returns the value of the android:minWidth attribute, which may be 0; if the View has a background, it returns the larger of android:minWidth and the background's minimum width.

Handle wrap_content manually when customizing a View

Controls that extend View directly need to override onMeasure and set their own size for the wrap_content case; otherwise using wrap_content in a layout behaves the same as match_parent. You can specify internal default width/height values (mWidth and mHeight) for the wrap_content case (which corresponds to MeasureSpec.AT_MOST), as shown below.
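
A minimal sketch of that onMeasure override (the default sizes are illustrative placeholders for mWidth/mHeight):

@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    int widthMode = MeasureSpec.getMode(widthMeasureSpec);
    int widthSize = MeasureSpec.getSize(widthMeasureSpec);
    int heightMode = MeasureSpec.getMode(heightMeasureSpec);
    int heightSize = MeasureSpec.getSize(heightMeasureSpec);

    int defaultWidth = 200;  // illustrative default width in px
    int defaultHeight = 200; // illustrative default height in px

    if (widthMode == MeasureSpec.AT_MOST && heightMode == MeasureSpec.AT_MOST) {
        setMeasuredDimension(defaultWidth, defaultHeight);
    } else if (widthMode == MeasureSpec.AT_MOST) {
        setMeasuredDimension(defaultWidth, heightSize);
    } else if (heightMode == MeasureSpec.AT_MOST) {
        setMeasuredDimension(widthSize, defaultHeight);
    }
}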

LinearLayout onMeasure method implementation analysis

The system traverses the child elements and calls measureChildBeforeLayout on each of them, which in turn calls each child's measure method. The system uses the mTotalLength variable to store the LinearLayout's accumulated height in the vertical direction; each time a child is measured, mTotalLength increases by the child's height plus the child's vertical margins.

Get the width and height of a View in the Activity

Since the View’s measure procedure and the Activity’s lifecycle methods are not executed synchronously, the width/height obtained is 0 if the View has not finished measuring. Therefore, the width and height of a View cannot be correctly obtained in onCreate, onStart and onResume. Solutions are as follows:

  • Activity/View#onWindowFocusChanged: at this point the View has been initialized. It is called when the Activity's window gains focus and again when it loses focus, so if onResume and onPause happen frequently, onWindowFocusChanged is also called frequently.
  • View.post(Runnable): post a Runnable to the end of the message queue; by the time Looper runs it, the View has been laid out (see the sketch after this list).
  • ViewTreeObserver#addOnGlobalLayoutListener: the onGlobalLayout callback is invoked when the state of the view tree changes or the visibility of a View inside the tree changes.
  • View.measure(int widthMeasureSpec, int heightMeasureSpec): for a concrete value, build the MeasureSpec directly with makeMeasureSpec and then call view.measure(); for wrap_content, use AT_MOST mode and construct the MeasureSpec with the maximum size the View can theoretically support (the match_parent case cannot be handled this way, since the parent's remaining space is unknown).
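
A minimal sketch of the View.post() option (the tag is illustrative): the Runnable runs after the layout pass has completed, so getWidth()/getHeight() are valid.

view.post(new Runnable() {
    @Override
    public void run() {
        int width = view.getWidth();
        int height = view.getHeight();
        Log.d("ViewSize", "width=" + width + ", height=" + height);
    }
});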

The layout step of the View drawing process

First, the setFrame method sets the positions of the View's four vertices within its parent container. Then the empty onLayout method is executed; ViewGroup subclasses override this method to lay out all of their child Views.

LinearLayout onLayout method implementation analysis (layoutVertical core source code)

It iterates over the children and calls setChildFrame on each to determine the child's position; childTop keeps growing, so later children are placed lower.

Note: In the default implementation of a View, the measurement width/height and final width/height are the same, except that the measurement width/height is formed in the measure process of the View, while the final width/height is formed in the layout process of the View. In some special cases the two are not equal:

  • Rewrite the View layout method so that the final width is always 100px wider/taller than measured.
  • A View needs several measures to determine its measurement width/height. In the process of the first several measurements, the measurement width/height may be inconsistent with the final width/height, but in the end, the measurement width/height is the same as the final width/height.

View’s Draw process

The basic flow of Draw

Drawing can basically be divided into six steps:

  • First draw the background of the View;
  • If necessary, keep the Canvas layer ready for FADING;
  • Next, draw the contents of the View;
  • Next, draw the child views of the View;
  • If necessary, paint the View’s fading edges and restore the layers;
  • Finally, draw the View’s decorations (such as scroll bars, etc.).
The role of setWillNotDraw

If a View does not need to draw anything, then setting this flag bit to true will be optimized accordingly.

  • By default, the optimization bit is not enabled for a View, but it is enabled for a ViewGroup by default.
  • When our custom control inherits from the ViewGroup and does not have the ability to draw itself, we can turn on this flag bit to facilitate the system for subsequent optimization.
  • When we explicitly know that a ViewGroup needs to draw with onDraw, we need to explicitly turn off the WILL_NOT_DRAW flag.

requestLayout, onLayout, onDraw, drawChild

requestLayout(): triggers the measure() pass and the layout() pass, and decides whether onDraw() is needed based on flag bits.

onLayout(): if the View is a ViewGroup, implement this method to lay out each child View.

onDraw(): draws the View itself (every View that draws content needs to override this method; a plain ViewGroup usually does not).

drawChild(): calls back into each child View's draw() method.

How does invalidate() differ from postInvalidate()?

invalidate() and postInvalidate() are both used to request a redraw of a View. The main difference is that invalidate() must be called on the main thread (from a worker thread you would need a Handler), while postInvalidate() can be called directly from a worker thread.

More details can be found here

Thank you for reading this article; I hope you will share it with your friends or technical groups, it means a lot to me.

I hope we can become friends on GitHub and Juejin and share knowledge with each other.