An App container, in short, is the part of the App that hosts a certain type of application (H5 / RN / Weex / mini programs / Flutter…). For example, page data prefetching shortens the time until a page is usable, WebAR gives H5 the ability to do AR, and Native maps can be composited and rendered interactively together with H5.
(Pictured examples: data prefetching · WebAR capability · Native composited with H5)
This article summarizes the construction work around the H5 container (WebView).
Let’s start with an analogy and take a quick look at Android and iOS from the H5 perspective to make it easier to understand how WebView containers are built.
Content | H5 | Android | iOS |
---|---|---|---|
Window | window | PhoneWindow | UIWindow |
Page controller | html | Activity | UIViewController |
Content area | body | contentView (DecorView) | view |
General UI (container, text, images) | div, text, img | View, TextView, ImageView | UIView, UILabel, UIImageView |
From the comparison above, Native and H5 have many similarities: H5 creates a page with HTML, Android creates a page with an Activity, and iOS creates a page with a UIViewController. The difference is that Native has a mature page stack management mechanism that controls page transitions inside the same runtime, and it can also manage multiple windows and multiple threads/processes (multi-process on Android) to use resources sensibly and keep the main thread/process performant, which is what makes the App experience good. H5 is limited by its runtime environment: it can only live inside a single window and currently lacks a mature page stack management mechanism within the same runtime; the current SPA pattern of switching views to simulate "page transitions" is already the better implementation of the WebApp experience. Therefore, inside the App, H5 wants to take advantage of more Native capabilities.
In the App, H5 pages are accessed through the WebView.
A View that displays web pages. This class is the basis upon which you can roll your own web browser or simply display some online content within your Activity. It uses the WebKit rendering engine to display web pages and includes methods to navigate forward and backward through a history, zoom in and out, perform text searches and more.
In fact, the WebView implementations on both Android and iOS inherit from the platform's View/UIView base class. From Native's point of view, the WebView loads the H5 page, then parses and composites the UI through the Chromium/WebKit kernel to produce a view. The Activity/UIViewController instance adds that web view to its view hierarchy via addView (Android) / addSubview (iOS), the UI is composited, and it is displayed on screen.
We know that the App can use system capabilities, but the WebView (effectively an embedded browser) does not get them by default, for security and other reasons. One of the main purposes of building the container is to open up the App's available capabilities to it. Let's talk about how this capability is provided, that is, the construction of the bridge channel.
Bridge channel
Bridge carries the meaning of connection: it links two sides that are otherwise disconnected, and vividly describes exposing "App capabilities" to the container in the form of "channels". In essence, the question is how to let our H5 code use these capabilities. Let's look at the implementation ideas on each end:
Android
Java-to-JS object mapping is established through the WebView's addJavascriptInterface method. It requires system version 4.2+; mainstream Apps are basically adapted to 5.0+, so compatibility schemes for lower versions are skipped here. As follows:
- Define the Android JavascriptInterface implementation class and its call method, which are mapped into the WebView's JS context (that is, mounted on window)
```java
package ...; // package declaration omitted

import android.webkit.JavascriptInterface;

public class JSBridgeChannel {
    // The JS side serializes the options object with JSON.stringify
    @JavascriptInterface
    public void call(String api, String options) {
        // Handle Native functionality based on the api and the passed parameters.
    }
}
```
- When the WebView calls loadUrl (to ensure the bridge channel is ready before the H5 JS runs), establish the mapping between the Android class instance and a JS object through the WebView's addJavascriptInterface method
```java
@Override
public void loadUrl(String url) {
    // ... constructor parameters omitted; "JSBridge" is the name of the mapped JS object
    mWebView.addJavascriptInterface(new JSBridgeChannel(...), "JSBridge");
    // ...
}
```
- WebView JS can then call it synchronously
```js
window.JSBridge.call("api", JSON.stringify(options));
```
You could also communicate by printing logs from JS and reading the message in Native via onConsoleMessage (an API Android provides for debugging and log output), treating the log content as communication data, but this is not recommended.
iOS
WKWebView provides the MessageHandler mechanism to handle data interaction between JS and Native. Its advantages are synchronous calls, better performance/stability and lower memory consumption. It requires iOS 8+, and mainstream Apps are basically adapted to iOS 9+.
When WKWebView is initialized with [[WKWebView alloc] initWithFrame:frame configuration:config], the configuration parameter is of type WKWebViewConfiguration. WKWebViewConfiguration has a userContentController property of type WKUserContentController, which has an instance method [addScriptMessageHandler:name:](https://developer.apple.com/documentation/webkit/wkusercontentcontroller/1537172-addscriptmessagehandler?language=objc) that establishes a communication channel between JS and Native:
Adding a script message handler with name name causes the JavaScript function window.webkit.messageHandlers.name.postMessage(messageBody) to be defined in all frames in all web views that use the user content controller.
- When initializing the WKWebView in the view controller's init, create a WKWebViewConfiguration object and configure the MessageHandler: use addScriptMessageHandler:name: to add an object implementing the WKScriptMessageHandler protocol, together with the method name that JS will call. Note: remember to remove it with removeScriptMessageHandlerForName: when the view controller is deallocated.
```objc
WKWebViewConfiguration *configuration = [[WKWebViewConfiguration alloc] init];
// Add a ScriptMessageHandler, i.e. the method bridge that H5 can call
[configuration.userContentController addScriptMessageHandler:self name:@"bridge"];
// Create the WKWebView
WKWebView *webView = [[WKWebView alloc] initWithFrame:frame configuration:configuration];
```
- Implement the WKScriptMessageHandler protocol's delegate method; when JS sends a message to Native through window.webkit.messageHandlers, this method is invoked
```objc
- (void)userContentController:(WKUserContentController *)userContentController
      didReceiveScriptMessage:(WKScriptMessage *)message {
    if ([@"bridge" isEqualToString:message.name]) {
        // The API name
        NSString *api = [NSString stringWithString:message.body[@"_api_"]];
        // Call Native functionality based on the api and the passed parameters.
    }
}
```
- WebView JS can then call it synchronously
```js
// Carrying the API name in _api_ is a handy way to avoid registering too many messageHandlers
const options = { _api_: 'apiName', /* ... */ };
window.webkit.messageHandlers['bridge'].postMessage(options);
```
Unified bridge API call mode
As mentioned above, both ends synchronously mount callable object methods into the WebView's JSContext, that is, onto window, but the call styles on the two ends still differ. To give H5 a uniform way to use them, both ends can also inject a compatibility JS wrapper script that unifies the bridge functions on both ends into calls of the form JSBridge.call(api, options, success, failure).
- Android-side JS script injection (injected after mWebView.addJavascriptInterface):
```java
String js = "(_ => { /* JS wrapper code */ })();";
mWebView.evaluateJavascript(js, new ValueCallback<String>() {
    @Override
    public void onReceiveValue(String value) {
        // ...
    }
});
```
- iOS-side JS script injection:
```objc
NSString *js = @"(_ => { /* JS wrapper code */ })();";
// Injection timing can be configured as needed:
//   WKUserScriptInjectionTimeAtDocumentStart -> as soon as the DOM starts being created
//   WKUserScriptInjectionTimeAtDocumentEnd   -> at DOMReady
WKUserScript *script = [[WKUserScript alloc] initWithSource:js
                                              injectionTime:WKUserScriptInjectionTimeAtDocumentStart
                                           forMainFrameOnly:NO];
[_webView.configuration.userContentController addUserScript:script];
```
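The injected wrapper itself is only hinted at above ("JS wrapper code"). As a rough illustration, a minimal sketch might look like the following, assuming the Android object is injected as window.JSBridge and the iOS handler is registered as "bridge" with the _api_ convention shown earlier; callback plumbing, call ids, timeouts and error codes are omitted:

```js
(_ => {
  // Grab whichever native channel exists in the current WebView
  const androidChannel = window.JSBridge; // Android: Java object injected via addJavascriptInterface
  const iosChannel = window.webkit
    && window.webkit.messageHandlers
    && window.webkit.messageHandlers.bridge; // iOS: handler registered as "bridge"

  function call(api, options = {}, success, failure) {
    try {
      if (androidChannel) {
        // Android side expects (api, jsonString)
        androidChannel.call(api, JSON.stringify(options));
      } else if (iosChannel) {
        // iOS side expects a single object carrying the api name in _api_
        iosChannel.postMessage(Object.assign({ _api_: api }, options));
      } else {
        throw new Error('no native bridge channel available');
      }
      // A real wrapper would register success/failure in a callback map keyed by an id,
      // and Native would call back into JS via evaluateJavascript / evaluateJavaScript.
      success && success();
    } catch (e) {
      failure && failure(e);
    }
  }

  // Expose the unified entry point for H5 business code
  window.JSBridge = { call };
})();
```

H5 business code then just calls JSBridge.call('apiName', { /* params */ }, onSuccess, onFailure) regardless of platform.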
If you are interested in the lower level implementation, you can go further:
- For Android, the JVM that runs Java is implemented in C/C++, and V8 is written in C++. V8 provides the Value class and its subclasses for converting between data types, and it lets any C++ application expose its own objects and functions to JavaScript code. JNI does the same for Java.
- For iOS, Objective-C is a strict superset of C and can work with C++; JavaScriptCore is likewise implemented in C++ and provides the JSValue class for converting between data types.
A JSValue instance is a reference to a JavaScript value. … You can also use this class to create JavaScript objects that wrap native objects of custom classes or JavaScript functions whose implementations are provided by native methods or blocks.
- C++ is almost a superset of C, so with that in mind the interaction between JS and Java/OC becomes clear. Take V8 as an example, implementing a function that JS can call:
In V8, there are two Template classes: ObjectTemplate and FunctionTemplate, which define JavaScript objects and JavaScript functions.
```cpp
// Function definition
Handle<Value> fn(const Arguments& args) {
    // do something
}
// Global object template of the JS context
Handle<ObjectTemplate> global = ObjectTemplate::New();
// Bind fn into the JS context so that JS can call fn
Handle<FunctionTemplate> fn_template = FunctionTemplate::New(fn);
global->Set(String::New("fn"), fn_template);
```
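Once a context is created from this global template, JS running in it can call the bound function directly, for example:

```js
// Inside the JS context created from the global template above
fn('hello'); // dispatches into the C++ fn(const Arguments& args)
```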
With bridge support, H5 gains App-level capabilities. Next let's talk a bit about another piece of container construction for performance: network optimization.
Network optimization
The key to Native's smooth experience is that most of the static resources its pages depend on are already bundled into the installation package and arrive on the user's device with the App installation, saving network I/O overhead. Following this line of thinking, before 5G truly becomes commonplace, network I/O is still a worthwhile optimization point for performance. Here is a brief look at the idea.
There are mainly two aspects:
- Static resources, such as HTML, JS, CSS, images, fonts and video, can be handled by taking them offline, preloading, lazy loading, or enabling WebView cache reuse. When the user visits the page and it loads resources, the container intercepts the resource network requests, hits the offline or cached resource files, and serves them. Offline packaging and preloading save the network I/O of the first visit; the other methods save it from the second visit onward.
  - Offline: shipped inside the App installation package, just like Native resources
  - Preloading: after the App starts and before the page is visited, resources are downloaded to the local device in advance
- Interface data prefetching: fetch the interface data before the user visits the page; when the user does visit, the intercepted interface request directly hits the locally cached data and uses it.
This layer of optimization needs a supporting remote control mechanism and a management console. The control mechanism governs the release, version/patch updates and retirement of static files, as well as the matching rules, lifecycle management, and static/dynamic parameter configuration and processing for interface data prefetching.
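As a rough illustration, the rules delivered by such a control platform might look like the sketch below; all field names and URLs here are hypothetical, and real platforms carry more metadata such as file hashes, grayscale flags and expiry:

```js
// Hypothetical rule payload delivered by the remote control platform (illustrative only)
const containerRules = {
  offlinePackages: [
    {
      id: 'trip-home',                               // offline package id
      version: '1.2.0',                              // drives version/patch updates and retirement
      urlPrefix: 'https://h5.example.com/trip/',     // requests under this prefix hit local files
      files: ['index.html', 'index.js', 'index.css']
    }
  ],
  prefetch: [
    {
      pageUrl: 'https://h5.example.com/trip/index.html', // page whose visit should be accelerated
      api: '/api/trip/detail',                       // interface fetched ahead of time
      params: { city: '${dynamic.city}' },           // static values plus dynamic placeholders
      maxAge: 30                                     // cache validity in seconds
    }
  ]
};
```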
We mainly focus here on how static resource network optimization is implemented, briefly introducing the idea on each end; the core is intercepting and handling static resource network requests:
Android
Use the shouldInterceptRequest method to intercept (system requires 4.0+).
Notify the host application of a resource request and allow the application to return the data. If the return value is null, the WebView will continue to load the resource as usual. Otherwise, the return response and data will be used.
```java
@Override
public WebResourceResponse shouldInterceptRequest(WebView view, String url) {
    WebResourceResponse response = null;
    if (/* the url matches an interception rule */) {
        // The container builds the response itself:
        // WebResourceResponse(String mimeType, String encoding, int statusCode,
        //     String reasonPhrase, Map<String, String> responseHeaders, InputStream data)
        response = new WebResourceResponse(...);
    }
    return response;
}
```
iOS
Constrained by "official" (App Store review) factors, the iOS implementation is relatively complex; anyone who has dealt with release review will understand, so I will skip it here…
Points to note
Handle the response headers correctly:
- Handle the intercepted resource's Content-Type correctly
- Handle the intercepted resource's Access-Control-Allow-Origin correctly, to avoid failing cross-origin resource checks
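For example, when the container constructs the response for an intercepted JS file, the headers it returns should look roughly like the sketch below (values are illustrative; on Android the MIME type and encoding are actually passed as separate WebResourceResponse constructor parameters, but the idea is the same):

```js
// Illustrative headers for a JS resource served from the offline package
const responseHeaders = {
  'Content-Type': 'application/javascript; charset=utf-8', // must match the real resource type
  'Access-Control-Allow-Origin': '*'                       // or the page's origin, to pass CORS checks
};
```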
Interface data prefetching is essentially also an interception-and-matching process, but because it involves static/dynamic parameter handling, validity-period control and so on, it is considerably more complex, so I won't expand on it here.
In the following section, let's briefly talk about the implementation of a container enhancement capability: WebAR support.
WebAR
Before talking about WebAR, let's take a quick look at AR. Briefly, AR starts the camera to capture the real environment, passes the real environment as video frames to the recognition module and the rendering module, the recognition module passes its result data to the rendering module (processed into the corresponding virtual content according to the configured business rules), and the rendering module displays the real environment composited with the processed recognition results.
WebAR combines two concepts, Web + AR: providing AR capability on the Web side. The Web itself, relying on WebGL for real-time graphics rendering and WebRTC for real-time video stream processing, can already deliver an AR experience. However, because of inconsistent runtime standards, poor rendering performance and other factors, we intervene on the container side to guarantee its functionality and performance.
On the container side, AR capability is integrated into the App and then exposed to the WebView (recall the bridge channel described above), with the result presented as an H5 page. On Android we use the AR capability provided by UCWebView, which renders with WebGL. On iOS we integrate ARKit for hybrid rendering, placing a WebView with a transparent background on top of a full-screen OpenGL view.
Both ends support configuring the recognition module. At present, recognition falls mainly into two categories: marker-based recognition (commonly used to recognize target/marker images) and location-based recognition (roughly, enhancing immersion through the orientation and position reported by the phone's sensors, e.g. content that is farther away appears smaller and nearer content appears larger).
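From the H5 side, the AR capability is then consumed through the bridge like any other container capability. The sketch below is purely hypothetical: the API name, event name and parameters are illustrative and not the real container interface:

```js
// Hypothetical H5 usage of the container's AR capability (names are illustrative)
document.addEventListener('arMarkerFound', (e) => {
  // The container pushes recognition results back to H5; render the configured
  // virtual content here (e.g. with WebGL) over the camera view.
  console.log('marker matched:', e.detail);
});

JSBridge.call('ar.start', {
  mode: 'marker',                                         // marker-based or location-based recognition
  markers: ['https://cdn.example.com/markers/logo.png']   // target images to recognize
}, () => console.log('AR session started'),
   (err) => console.error('AR not available', err));
```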
Mini programs are currently in their heyday, and one of their capabilities that combines Native and H5 well is "same-layer rendering"; below we also go through it "on paper". We had business demands of our own that drove us to build same-layer rendering, applying Native Video and other Native components inside the WebView to meet experience requirements.
Same-layer rendering
Same-layer rendering here means compositing a Native View into the WebView, so the Native View's style on the page can be controlled with CSS.
The schemes below for the two ends are, as far as the author understands, the best available implementations.
Android
This requires a self-developed WebView extended from the Chromium kernel. Chromium supports a WebPlugin mechanism for recognizing and parsing DOM tags. The idea:
- H5 creates a DOM node and specifies the component type for the container to identify and handle (see the sketch after this list)
- Chromium creates the WebPlugin instance and generates the RenderLayer, which creates a separate layer and returns the corresponding canvas for view drawing
- Android initializes a corresponding native component based on the identified component type
- Android draws the native component’s view onto the SurfaceTexture bound to the RenderLayer (sending the Android UI Toolkit view data to the Texture for openGL drawing).
- Chromium combines this RenderLayer with the View of the Web page for composite rendering
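As a hypothetical example of the H5-side node in the first step (the exact tag and attribute names depend on the self-developed kernel; Chromium's WebPlugin path is historically driven by embed/object tags):

```js
// Hypothetical H5-side declaration of a native component for the custom kernel to recognize
const nativeVideo = document.createElement('embed');
nativeVideo.setAttribute('type', 'native/video');                    // component type mapped to a native View (illustrative)
nativeVideo.setAttribute('src', 'https://cdn.example.com/demo.mp4'); // parameters passed to the native component
nativeVideo.style.cssText = 'width: 100%; height: 210px;';           // CSS keeps controlling layout on the page
document.body.appendChild(nativeVideo);
```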
iOS
This is based on WKWebView. Internally WK renders with a layered approach, and in general multiple DOM nodes are merged into one layer, so DOM nodes and layers do not correspond one to one. However, if a DOM node's CSS is set to overflow: scroll, WKWebView generates a WKChildScrollView for it, and the WebKit kernel has already handled the layer relationship between the WKChildScrollView and the other DOM nodes, so that DOM node now corresponds one-to-one with a native view. Same-layer rendering can therefore be implemented on top of WKChildScrollView:
- H5 creates a div node, sets its CSS to overflow: scroll, and specifies the component type for the container to identify and handle (see the sketch after this list)
- iOS finds the native WKChildScrollView corresponding to the DOM node
- iOS initializes a native component based on the identified component type
- iOS mounts the native component onto the WKChildScrollView node as a child view, so the native component is inserted into the WebView
- WebKit completes the rendering
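A hypothetical example of the H5-side wrapper node for this scheme (the attribute used to mark the component type is illustrative):

```js
// Hypothetical H5-side wrapper node for the WKChildScrollView-based scheme
const wrapper = document.createElement('div');
wrapper.setAttribute('data-native-type', 'video');   // component type for the container to identify (illustrative)
wrapper.style.cssText =
  'width: 100%; height: 210px; overflow: scroll; -webkit-overflow-scrolling: touch;';
// overflow: scroll makes WKWebView back this node with its own WKChildScrollView,
// which the container then locates and fills with the native component's view.
document.body.appendChild(wrapper);
```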
Conclusion
That's all for now 👀
We also have container URL unification (one URL that can be dynamically configured to render as H5, Native, etc.), mini program container construction, Flutter container construction, container snapshot caching, pre-rendering, hybrid rendering and more. Come join us soon 🚀👊. Here you'll find happy friends to grow with. Welcome to follow the Fliggy front-end team's official account and get in touch with us directly through it!