Preface

This article describes how to design a multi-terminal container and its implementation scheme. Note that multi-terminal here does not mean cross-terminal. Multi-terminal containers generally refer to containers that provide unified capabilities so that downstream applications can load and run properly in a variety of environments. The capabilities provided by each end's container may differ, but the downstream does not need to care about the upper-layer implementation and its dependencies; it relies only on abstract interfaces, following the dependency inversion principle.

You should be familiar with the following requirement scenario: the application can run in a web browser, in an embedded WebView, or inside a third-party application's browser (Alipay, WeChat, micro-apps, etc.) in Hybrid mode. Some readers may ask, "Doesn't the browser environment provide the same contextual capabilities?" In reality, some containers restrict what content can be loaded or add extra features, and the end container needs to smooth over these differences: call the community's animation library in an environment with DOM access, call JSBridge to use native capabilities inside a native application, use the capabilities of the third-party ecosystem on other ends such as mini programs, and automatically fall back to a fallback solution when the optimal one is not available.

WeChat H5 and DingTalk micro-apps, for example, ship with a default navigation bar; some apps expose the ability to customize the navigation bar; and when the environment is a plain web browser the application's own navigation bar should be displayed. This is only one of the differences, but when maintaining an application across multiple environments the problem gets magnified.

In summary, multi-terminal containers solve the following problems:

  • Provide a unified multi-terminal API with abstract API conventions, transparent to the downstream.
  • Logical isolation, which simplifies maintenance and management.
  • Dependency inversion and decoupling, which increase substitutability.

Goals

Container capabilities

  1. Common imperative APIs (Toast/Alert/Confirm) and common utilities (clipboard/local storage/image preview/environment detection).
  2. Service invocation, e.g. fetch/JSONP/custom injected gateways.
  3. ComponentLoader, with a preloading function.
  4. Navigation API, for navigation and route caching.
  5. ErrorBoundary, with a customizable error fallback view and automatic reporting.
  6. Lazy, the loading transition animation for ComponentLoader.

Usage

import { useContext, useEffect } from 'react';
import { containerContext, Container } from '@iron-man/container-api';

function App() {
  return (
    <Container {...options}>
      <AnyChild />
    </Container>
  );
}

function AnyChild() {
  const containerAPI = useContext(containerContext);
  useEffect(() => {
    // use container abilities
  }, []);
  return null; // ...
}

Some of you might be wondering: why use Context? Articles on the topic usually weigh two aspects, performance and how predictable the data flow between components is. First, Context is the preferred way to share data across components, and once defined its value cannot be rewritten by consumers, so it does not make cross-hierarchy relationships between components complex or unpredictable. The common criticism is that it can cause performance loss, for example when the Context is not split the way Redux builds a single data tree, forcing components out of React's bailout logic. However, a key part of React performance optimization is reducing unnecessary renders, and that can be mitigated with React.memo.
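For example (a generic illustration, not part of the container code), wrapping a pure child in React.memo keeps it from re-rendering when unrelated state above it changes:

import React, { memo, useState } from 'react';

// Re-renders only when its own props change, even though the parent re-renders on every click.
const ExpensiveChild = memo(function ExpensiveChild({ label }: { label: string }) {
  return <div>{label}</div>;
});

function Parent() {
  const [count, setCount] = useState(0);
  return (
    <>
      <button onClick={() => setCount(c => c + 1)}>clicked {count} times</button>
      <ExpensiveChild label="static content" />
    </>
  );
}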

Layered design

The whole is divided into a basic container layer, a routing layer, an ErrorBoundary layer, and a Suspense layer. Every "layer" here is literally a container; the two terms mean roughly the same thing.

The other layers rely on the basic capabilities provided by the base container. Apart from the base container, each function can be combined and split independently, without coupling and without affecting other capabilities. You are also free to add tracking containers, Profiler containers, animation transition containers, and so on.

  • Base container layer
    • Capabilities 1–3 above lean toward container APIs without other dependencies, so they belong in the base container layer and provide capabilities to the layers downstream.
      • The generic API provides the commonly used imperative tool methods on the context.
      • The service gateway provides service invocation capabilities.
      • ComponentLoader's transition animation depends on Suspense and its fault tolerance depends on ErrorBoundary. Rather than depending directly on the container-level ErrorBoundary layer, it reuses the basic component, which serves only as a building block for the loader.
  • Routing layer
    • Depends on the base container's ComponentLoader. Navigation routing was introduced in a previous article (see the portal link there) and is not repeated in this chapter.
  • ErrorBoundary layer
    • The ErrorBoundary component is reused here to distinguish errors that occur while ComponentLoader is loading from error isolation at runtime.
  • Suspense layer
    • As with ErrorBoundary, it is used to distinguish the transition animation during ComponentLoader loading from the one triggered by React.lazy.

API convention

To ensure that the APIs of all end implementations stay unified, an abstract interface center must be defined. It exposes the abstract API interfaces and serves as the core specification constraint referenced by the real implementation of each end container.

erDiagram
abstract-api ||--|{ web-impl : web
abstract-api ||--|{ app-impl : app
abstract-api ||--|{ xxx-impl : xxx

Finally, based on the detected environment, the matching implementation is injected at the application entry into the downstream application by means of Dependency Injection (DI), achieving multi-terminal compatibility for downstream applications.
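A rough sketch of that environment-based injection is shown below; the detectEnv helper, the EnvTypeEnum values, and the package names other than @iron-man/container-api are illustrative assumptions rather than the actual implementation.

enum EnvTypeEnum {
  Web = 'web',    // plain browser
  App = 'app',    // hybrid WebView inside a native app
  WxH5 = 'wxh5',  // WeChat built-in browser
}

// Simplified environment detection; real detection is usually more thorough.
function detectEnv(): EnvTypeEnum {
  if (typeof (window as any).JSBridge !== 'undefined') return EnvTypeEnum.App; // a native bridge is exposed
  if (/MicroMessenger/i.test(navigator.userAgent)) return EnvTypeEnum.WxH5;
  return EnvTypeEnum.Web;
}

// The downstream always codes against @iron-man/container-api; only the container
// decides which concrete implementation package to load for the current end.
const implByEnv: Record<EnvTypeEnum, string> = {
  [EnvTypeEnum.Web]: '@iron-man/container-web-impl',
  [EnvTypeEnum.App]: '@iron-man/container-app-impl',
  [EnvTypeEnum.WxH5]: '@iron-man/container-wxh5-impl',
};

function resolveImplPackage(): string {
  return implByEnv[detectEnv()];
}
// In the RequireJS-style access mode below, this name would be mapped to a script URL
// via requirejs.config({ paths: ... }); in the npm mode it is simply the package to install.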

Container API: the abstract package is @iron-man/container-api, and each end implementation is named @iron-man/container-${platform}-impl. Mini programs, however, are too restricted: even with a good cross-end framework the scheme does not fully apply, mainly around routing and script loading.

The final product provides two ways of access; which one to use depends on the platform's architectural pattern, and the difference between them is small.

  1. DI, RequireJS-style: every end uniformly imports @iron-man/container-api, and the platform container automatically injects the dependency files of the corresponding platform based on environment detection.
  2. Packaged module, npm-style: each end directly imports the package built for its own end environment.

To avoid publishing test packages to npm, the container abstraction & web-impl part of this article is managed with Lerna, and the demo uses webpack externals + RequireJS to mock the business on each end.

The interface definition

Based on the design layering, the following structure is obtained: the base container layer is subdivided into layers, and the other layers are classified as wrappers. A sketch of the combined ability type follows the list.

  • Layers
    • Basic // Basic layer
    • Schema // Protocol layer
    • Service // Gateway layer
  • Wrappers
    • Navigation // Routing layer
    • ErrorBoundary // Fault tolerant layer
    • Suspense // Load layer
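Put together, the exported ability type looks roughly like the sketch below (the import paths are illustrative; the layer interfaces themselves are defined in the following subsections).

import type { BasicLayerAbility } from './layers/basic';
import type { SchemaLayerAbility } from './layers/schema';
import type { ServiceLayerAbility } from './layers/service';
import type { NavigationAbility } from './wrappers/navigation';

// The single ability object exposed through containerContext; each end's
// implementation must satisfy this whole interface.
export interface ContainerAbility
  extends BasicLayerAbility,   // Basic: tips, storage, clipboard, ...
    SchemaLayerAbility,        // Schema: ComponentLoader / preLoadComponent
    ServiceLayerAbility,       // Service: request / jsonp / getService
    NavigationAbility {}       // Navigation: navAPI consumed by the routing wrapper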

Basic

Some common tool methods

export interface TipUtils {
  alert: () => void;
  confirm: () => void;
  toast: () => void;
}

export interface StorageApi {
  get: (key: string) => Promise<string | null>;
  set: (key: string, value: string) => Promise<void>;
  del: (key: string) => Promise<void>;
  getJSON: <T = unknown>(key: string) => Promise<T | null>;
  setJSON: <T = unknown>(key: string, value: T) => Promise<void>;
}

export interface PracticalUtils {
  /** preview images */
  previewImage: (text: string) => void;
  /** copy to clipboard */
  copyToClipboard: (text: string) => Promise<void>;
}

export interface BasicLayerAbility extends TipUtils {
  // ...
}

Schema

The protocol layer mainly resolves the componentURI into the actual component/script reference address, providing component rendering and preloading.

import { ComponentType } from "react";
import { SuspenseAbility } from '../wrapper/suspense';

export interface ComponentLoaderProps extends SuspenseAbility {
  /** Component protocol URI */
  componentURI: string;
  /** Component inner props */
  props?: Record<string, any>;
}

export type SchemaLayerAbility = {
  ComponentLoader: ComponentType<ComponentLoaderProps>;
  preLoadComponent: (componentName: string) => Promise<any>;
}

Suspense & ErrorBoundary

Provides custom loading and custom error boundaries; the onError and renderError methods passed in from the container side are shared to support dynamic configuration.

export type ErrorBoundaryAbility = {
  /** Error events (including load errors and render errors) */
  onError?: (error: any, type: 'load' | 'render') => void;
  /** Custom failure page */
  renderError?: (error: any, type: 'load' | 'render') => React.ReactNode;
}

// Suspense
export interface SuspenseAbility extends ErrorBoundaryAbility {
  /** Custom loading view */
  renderLoading?: () => ReactElement;
  /** Successfully loaded event */
  onLoad?: (componentClass: any) => void;
}

Navigator

The runtime implementation idea is to use the history API to simulate a PageStack: all navigation methods operate on the PageStack, the popstate event responds to back events, and the route navigation ability is built on ComponentLoader. On top of that it adds single-route multi-copy support and route callbacks, which were described in the previous article (see the portal link there) and are not repeated here.

export interface NavigationAbility {
  navAPI: {
    navigateTo(page: string, params?: Record<string, any>): void;
    back(): void;

    navigateAndWaitBack(page: string, params?: Record<string, any>): Promise<any>;
    backWithResponse(data: any): void;

    replace(page: string, params?: Record<string, any>): void;

    open(page: string, params?: Record<string, any>): void;
    reload(): void;
  }
}

Container Side Configuration

For details about the configuration, see Container-API.

export interface ContainerProps {
  children?: ReactNode;
  /** Environment injection */
  envType?: EnvTypeEnum;
  /** Single-route multi-copy cache rules */
  cacheOptions?: Array<RegExp>;
  /** Custom protocol resolution */
  customResolveModuleRule?: (componentURI: string) => ModuleType | null;
  /** Preset modules */
  modules?: {
    readonly [moduleURI: string]: RemoteModule | LazyModule | LocalModule;
  };
  /** Custom loading animation */
  readonly withSuspense?: boolean | React.ComponentType<any>;
  /** Custom error boundary */
  readonly withErrorBoundary?: boolean | React.ComponentType<any>;
}

The Mock implementation

Besides defining the interfaces, some mock implementations are needed so that container-api can be packaged directly and its declaration file imported in the demo.

import type * as ContainerAPI from '../../context/index';
declare module '@iron-man/container-api' {
  export = ContainerAPI;
}

Web implementation

Initializing the context

The web-side implementation can draw on a wide range of open-web capabilities. First, initialize the Context to give every API a preliminary mock implementation before the real ones are filled in.

import { Context, createContext } from 'react';
import { BasicLayerAbility, ServiceLayerAbility, SchemaLayerAbility, NavigationAbility, ContainerAbility } from '@iron-man/container-api';

import { unimplementedAsyncFunction, unimplementedFunction, unimplementedComponent } from './utils/initialFunction';

const defaultBasicAbility = {
  alert: unimplementedFunction,
  // ...
}
const defaultSchema = {
  componentLoader: unimplementedComponent,
  // ...
}
// other ...
export const containerContext: Context<ContainerAbility> = createContext({
  ...defaultBasicAbility,
  ...defaultSchema,
  ...defaultService,
  ...defaultNavigation,
});
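The unimplemented* placeholders imported above are not listed in the article; a minimal sketch of what ./utils/initialFunction could contain (the bodies here are assumptions):

// utils/initialFunction.ts: placeholders used before a real layer implementation is injected.
import type { FC } from 'react';

export const unimplementedFunction = (..._args: any[]): void => {
  console.warn('[container] this ability is not implemented on the current end');
};

export const unimplementedAsyncFunction = async (..._args: any[]): Promise<never> => {
  throw new Error('[container] async ability not implemented on the current end');
};

// Renders nothing, but warns so a missing implementation is easy to spot.
export const unimplementedComponent: FC<any> = () => {
  console.warn('[container] component ability not implemented on the current end');
  return null;
};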

Basic

The next step is to initialize the base container. createContainer is responsible for integrating the layers inside the base container layer and initializing them iteratively.

Tools layer

// layers
import { ContainerOptions, ContainerAbility } from '@iron-man/container-api';
import BasicLayer from './basic';
import SchemaLayer from './schema/index';
import ServiceLayer from './service/index';
import { LayerIteration } from './interface';

const layers: LayerIteration[] = [BasicLayer, SchemaLayer, ServiceLayer];
// Initialize the underlying context
function createContainer(originContainer: ContainerAbility, options: ContainerOptions) {
  return layers.reduce((preContainer, layer) => layer(preContainer, options), originContainer);
}

import { useContext, useMemo } from 'react';
import { ContainerProps } from '@iron-man/container-api';
import { containerContext as ContainerContext } from '@/containerContext';
/** * base container layer */
export default function BasicContainer(props: ContainerProps) {
  const { children, ...restProps } = props;
  const propsRef = useSafeTrackingRef(restProps); // custom hook that keeps a stable ref to the latest props
  const originContext = useContext(ContainerContext);
  const value = useMemo(() => createContainer(originContext, propsRef.current), []);
  return <ContainerContext.Provider value={value}>{children}</ContainerContext.Provider>;
}
function BasicLayer(origin: ContainerAbility, options: ContainerOptions): ContainerAbility {
  const basic = createBasicLayer(origin, options);
  return { ...origin, ...basic };
}

function createBasicLayer(origin: ContainerAbility, options: ContainerOptions) {
  // AM: the antd-mobile import alias (see below)
  return { alert: AM.alert, toast: AM.toast /* ... */ };
}

antd-mobile is widely used in our mobile project requirements, so its implementation is adopted here; take care to adapt the interface parameter types. Define the other tool APIs the same way and the tools part of the base container layer is complete. A sketch of such an adapter follows.
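A sketch of that adaptation, assuming antd-mobile v2's Toast.info and Modal.alert signatures; the TipUtils interface above is deliberately simplified, so the parameter and return types would need to be aligned with your own convention:

import { Toast, Modal } from 'antd-mobile';

// Adapt antd-mobile's imperative API to the container's tip utilities.
export const createTipUtils = () => ({
  toast: (content: string, duration = 2) => Toast.info(content, duration),
  alert: (title: string, message?: string) =>
    new Promise<void>(resolve => {
      Modal.alert(title, message, [{ text: 'OK', onPress: () => resolve() }]);
    }),
  confirm: (title: string, message?: string) =>
    new Promise<boolean>(resolve => {
      Modal.alert(title, message, [
        { text: 'Cancel', onPress: () => resolve(false) },
        { text: 'OK', onPress: () => resolve(true) },
      ]);
    }),
});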

Service Gateway Layer

Service invocation covers basic request/JSONP. Each company may have its own gateway, or a team collaboration convention customized on top of something like natty-fetch. Injected services allow customization, but their parameters must stay fixed.

function ServiceLayer(origin: ContainerAbility, options: ContainerOptions) {
  return {
    ...origin,
    service: {
      ...createServicePortal(origin, options),       // register the preset service invocations
      ...createCustomServicePortal(origin, options), // register custom services
    },
  };
}

Services are loaded according to the domain and environment injected through the context; here request gets a simple fetch-based implementation, and createCustomServicePortal wraps the custom services.

function createServicePortal(origin: ContainerAbility, options: ContainerOptions): Pick<ServiceLayerAbility['service'], 'jsonp' | 'request' | 'getService'> {
  const Request = createRequestService(options.envType || 'online'); // create the request service
  const Jsonp = createJsonpService(options.envType || 'online');

  const presetServices = {
    request: Request,
    jsonp: Jsonp,
  };
  // Provide unified access to services
  const getService = (name: string): TypeRequest | CustomFetcher | null => {
    if (Object.prototype.hasOwnProperty.call(presetServices, name)) {
      return presetServices[name as keyof typeof presetServices];
    } else if (typeof name === 'string') {
      return origin.service.getCustomService(name);
    } else {
      return null;
    }
  };

  return {
    ...presetServices,
    getService,
  };
}
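createRequestService is not shown above; a minimal fetch-based sketch, assuming envType only switches the gateway domain (the domains and the api/params convention are placeholders):

type EnvType = 'online' | 'pre' | 'daily';

// Hypothetical gateway domains per environment.
const GATEWAY_BY_ENV: Record<EnvType, string> = {
  online: 'https://gateway.example.com',
  pre: 'https://pre-gateway.example.com',
  daily: 'https://daily-gateway.example.com',
};

export function createRequestService(envType: EnvType) {
  const base = GATEWAY_BY_ENV[envType];
  // A deliberately small wrapper: api path + params => parsed JSON.
  return async function request<T = any>(api: string, params?: Record<string, any>): Promise<T> {
    const res = await fetch(`${base}${api}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(params ?? {}),
      credentials: 'include',
    });
    if (!res.ok) {
      throw new Error(`request failed: ${res.status} ${api}`);
    }
    return res.json() as Promise<T>;
  };
}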

Protocol Layer (Component/module loader)

The protocol layer is more involved. It supports three kinds of module import: remote modules, local modules, and lazy-loading modules. Local modules are easy to understand, and lazy-loading modules are roughly similar to remote modules; the difference is that remote modules can bring in AMD, CMD, UMD, and ESM modules.

In addition, there are several issues to consider:

  • Protocol parsing: how to determine, from the parsed protocol, which module is wanted, while allowing custom module resolution
    • Remote module => URL
    • Local module => built-in/preset components
  • Protocol caching and component preloading.
  • Custom loading animations and error boundaries.

Protocol specification

A component of any type carries its own "identity information", which only the loader perceives during loading; externally, a unified load method is exposed for callers. Loading is separated from rendering, and preloading is really just a call to the load method.

To generate the component identity information, ComponentLoader accepts a componentURI parameter, which is resolved into a protocol header, protocol body, and protocol parameters. The default protocol supports the following three formats; the process is shown below.

"group://chat.list#default" // Team repository module
"internal://Loading" // Local/built-in module
"https://xxx.com/group/repo_name/version/index.js#Home" // Remote protocol module
graph TD
  Loader --ComponentRegister--> uri-parse
  uri-parse --GroupRepoModule--> LoadScript
  uri-parse --RemoteModule--> LoadScript
  LoadScript --cache--> Render
  uri-parse --LocalModule--> LazyModule
  LazyModule --Y--> LoadScript
  LazyModule --N--> Render

The protocol is resolved into an address from which the module is obtained. That address may be an external link or a local one, and may even mix local and external resources (lazyModule). From it the component identity information is derived, the script is loaded according to the module type, and the result is rendered wrapped with the loading animation and error boundary. Caching must also be considered along the way to avoid loading the same module repeatedly. The only difference from preloading is that there is no final render.

export const createComponentLoader = (origin: ContainerAbility, options: ContainerOptions): Pick<ContainerAbility, 'ComponentLoader' | 'preLoadComponent'> => {
  const { getComponentInfo } = createComponentRegister(options, INTERNAL_MODULES);
  // Load the component
  const componentLoader = (loaderProps: ComponentLoaderProps) => {
    // ...
    return <ComponentRender componentURI={componentURI} innerProps={innerProps} />;
  };
  // Render the component
  const ComponentRender: React.FC<ComponentRenderProps> = (props) => {
    const { componentURI, innerProps = {} } = props;
    // Get the component identity information
    const componentInfo = useMemo(() => getComponentInfo(componentURI), [componentURI]);
    if (!componentInfo) {
      // Reject data that does not conform to the protocol format
      throw new Error(`unknown componentURI ${componentURI}`);
    }
    // Fragment to avoid TS errors
    return <>{componentInfo.render(innerProps)}</>;
  };
  return {
    ComponentLoader: componentLoader,
    preLoadComponent: (componentURI: string) => {
      const componentInfo = getComponentInfo(componentURI);
      if (!componentInfo) {
        throw new CantGetModuleInfo(componentURI);
      }
      // Preloading just calls the load method
      return componentInfo.load();
    },
  };
};

getComponentInfo(componentURI) does three things:

  1. Parses the protocol and provides a load API for external callers.
  2. Provides protocol resolution and component caching.
  3. Supports a custom protocol resolution process.

const createComponentRegister = (options: ContainerOptions, internalModule: ContainerOptions['modules']) => {
  const registerMap = new Map(); // componentURI -> component info cache
  function getComponentInfo(componentURI: string) {
    // Cache hit
    if (registerMap.has(componentURI)) {
      return registerMap.get(componentURI);
    }
    // Custom resolution rule > built-in modules > protocol resolution
    const info: ModuleType = options.customResolveModuleRule?.call(null, componentURI) ||
      (internalModule && internalModule[componentURI]) ||
      resolveModule(componentURI, options);
    if (!info) {
      throw new CantGetModuleInfo(componentURI);
    }
    return registerComponentFromModuleInfo(componentURI, info);
  }
  return {
    registerMap,
    getComponentInfo,
  };
}
Protocol parsing

Protocol resolution is easier to deal with, and there are only three possible types.

export type RemoteModule = {
  type: 'remote';
  url: string;
  format?: 'AMD' | 'UMD' | 'CMD';
  /** Component export name */
  exportName?: string;
};

export type LocalModule = {
  type: 'local';
  module: unknown;
};
export type LazyModule = {
  type: 'lazy';
  module: () = > Promise<unknown>;
};
export type ModuleType = LocalModule | RemoteModule | LazyModule;

Then parseURI parses the protocol URI to get the following fields

type moduleURIInfo = {
  protocol: string;     // protocol
  path: string;         // file path
  subPath: string;      // module path
  // cacheId: string;   // used by the routing layer for caching; ignore for now
  componentURI: string; // full path
};
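parseURI itself is not listed; a sketch that splits a componentURI into the fields above, with the regex and edge-case handling kept deliberately simple (defaulting subPath to 'default' is an assumption):

export function parseURI(componentURI: string, _options: ContainerOptions): moduleURIInfo {
  // <protocol>://<path>#<subPath>, e.g. "group://chat.list#default"
  const match = /^([a-zA-Z]+):\/\/([^#]+)(?:#(.*))?$/.exec(componentURI);
  if (!match) {
    throw new Error(`invalid componentURI: ${componentURI}`);
  }
  const [, protocol, path, subPath = 'default'] = match;
  return { protocol, path, subPath, componentURI };
}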

Matches the protocol body resolution logic based on the protocol field

enum ProtocolEnum {
  http = 'http',
  https = 'https',
  group = 'group',
  internal = 'internal'
}

export default function resolveModule(componentURI: string, options: ContainerOptions) :ModuleType {
  const uriInfo = parseURI(componentURI, options);

  switch (uriInfo.protocol) {
    case ProtocolEnum.http:
    case ProtocolEnum.https: { // Remote protocol module
      return parseRemoteModule(uriInfo, options);
    }
    case ProtocolEnum.group: { // Team repository module
      return parseGroupModule(uriInfo, options);
    }
    case ProtocolEnum.internal: { // Built-in module
      return Reflect.get(options.modules || {}, uriInfo.path)
    }
    default: {
    default: {
      throw new InvalidComponentURIProtocol(uriInfo.protocol);
    }
  }
}

function parseGroupModule(uriInfo: moduleURIInfo, options: ContainerOptions): ModuleType {
  // custom rule for group ...
  return {
    type: 'remote',
    url: `//xxx.com/${uriInfo.protocol}/${uriInfo.path}.js`,
    exportName: uriInfo.subPath,
    format: 'UMD',
  };
}

The other matching logic is similar. Component version information and protocol header control are also worth refining, for example by injecting a global component-version mapping table into the template, or using more flexible protocol header matching rules such as group => hotline-group…

Protocol loading

Next comes the loading process; global event hooks such as registered, beforeLoad, and loaded can be injected here as needed.

function registerComponentFromModuleInfo(componentURI: string, info: ModuleType) {
  let _component: any = null;
  // The wrapper waits for the injected loading logic
  function createComponentInfo(fetchComponent: Fetcher<any>): ComponentInfo {
    const load = () => {
      const Component = fetchComponent();
      if (!Component) {
        throw new InvalidComponentError(componentURI, getModuleName(info));
      }
      return _component = Component;
    };
    return {
      id: componentURI,
      render: (props: any) => {
        const Component = load();
        return React.createElement(Component, props);
      },
      load,
      get component() {
        return _component;
      },
    };
  }

  if (info.type === 'local') { // Local module
    return register(componentURI, createComponentInfo(() => info.module));
  }

  if (info.type === 'lazy') { // Lazy-loading module
    return register(componentURI, createComponentInfo(createFetcher(info.module)));
  }

  if (info.type === 'remote') { // Remote module
    const fetcher = createFetcher(async () => loadModule({
      name: info.name,
      url: info.url,
      exportName: info.exportName || '',
      format: info.format || 'UMD',
    }));
    return register(componentURI, createComponentInfo(fetcher));
  }

  throw new InvalidModuleURIError(componentURI);
}

As is well known, in a function component everything outside hooks runs on every render; to load a component only once, closures are used. loadModule is a script-loading function that hides the details, such as handling possible sandboxes, RequireJS, and environments where scripts cannot be injected via tags (fetch the content and eval it, minding cross-domain issues). The web side directly uses url-package-loader, which has a built-in AMD compatibility scheme; if your environment does not support AMD, configure the libraryName so it falls back to UMD.

export async function loadModule<T = any> (config: LoadModuleOptions) :Promise<T> {
  const { url, name, format = 'UMD', exportName } = config;
  let module;
	// PackageLoader has built-in AMD judgment
  if (format === 'UMD' || format === 'AMD') {
    module = await new PackageLoader({ name, url }).loadScript();
  }

  if (module && exportName) {
    return get(module, exportName);
  }

  if (module && module.__esModule && !exportName) { // ES module
    return module.default;
  }
  return module;
}
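createFetcher, used above, is the closure that guarantees a module is requested only once; a minimal sketch (how the cached result cooperates with the Suspense throw convention is left out here):

type Fetcher<T> = () => T | Promise<T>;

// Cache the in-flight promise in a closure so repeated calls reuse the same load.
export function createFetcher<T>(loader: () => Promise<T>): Fetcher<T> {
  let cached: Promise<T> | null = null;
  return () => {
    if (!cached) {
      cached = loader().catch(err => {
        cached = null; // allow a retry after a failed load
        throw err;
      });
    }
    return cached;
  };
}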

At this point, the base container is ready to run.

const App: FC<ContainerProps> = (props) => (
  <BasicContainer {...props}>
    <Demo />
  </BasicContainer>
);

const Demo = () => {
  const container = useContext(containerContext);
  const { ComponentLoader } = container;
  useEffect(() => {
    // container.toast('xxx')
  }, []);
  return (
    <div>
      <ComponentLoader
        componentURI="https://0.0.0.0:8082/todoList"
        props={{
          title: 'vegetables',
          itemList: ['🥒', '🥔', '🎃'],
        }}
      />
      <ComponentLoader
        componentURI="group://todoList#default"
        props={{
          title: 'fruit',
          itemList: ['🍌', '🍊', '🍐', '🍉'],
        }}
      />
    </div>
  );
};

Navigator

Although this has been covered in detail in previous articles, the routing layer solves the following problems:

  1. Route switching without a page refresh; it is not coupled with or in conflict with upper-layer routing, so the two can be used together.
  2. Multi-copy route caching, such as multiple chat views.
  3. Silent navigation that does not update the route.
  4. Routing hooks and route callbacks.

Data-controlled routing implements state persistence.
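A very small sketch of the history-simulated PageStack idea described in the API section (state shape and naming are illustrative; the multi-copy cache and route callbacks are covered in the previous article):

type StackEntry = { page: string; params?: Record<string, any> };

// Our own page stack on top of the history API; popstate keeps it in sync on back.
const pageStack: StackEntry[] = [];

export function navigateTo(page: string, params?: Record<string, any>) {
  pageStack.push({ page, params });
  history.pushState({ index: pageStack.length - 1 }, '', location.href); // no URL change needed
  renderTop();
}

export function back() {
  history.back(); // the popstate handler below does the stack bookkeeping
}

window.addEventListener('popstate', (e) => {
  const index = (e.state && e.state.index) ?? 0;
  pageStack.length = index + 1; // drop the entries above the restored position
  renderTop();
});

// renderTop() would hand pageStack[pageStack.length - 1] to ComponentLoader.
declare function renderTop(): void;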

ErrorBoundary

In essence it controls the error boundary so that the whole application does not crash, and it provides a fallback view. Besides improving the user experience, other functions can be added, such as "click to retry" and error display/reporting; on top of this, a custom-boundary interface is provided.

const ErrorBoundaryContainer: React.FC<ContainerProps> = props => {
  const { children, withErrorBoundary } = props;
  if (withErrorBoundary === true || withErrorBoundary === undefined) {
    return <ErrorBoundaryWrapper>{children}</ErrorBoundaryWrapper>;
  }
  if (typeof withErrorBoundary === 'function') {
    return createElement(withErrorBoundary, {}, children);
  }
  return children as ReactElement;
};
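ErrorBoundaryWrapper itself is a classic class-based error boundary; a minimal sketch honoring the onError/renderError convention from ErrorBoundaryAbility. The real wrapper in this design also forwards thrown non-Error values to onError (see the Suspense mock below) and can add extras like "click to retry", both omitted here.

import React from 'react';
import type { ErrorBoundaryAbility } from '@iron-man/container-api';

type Props = React.PropsWithChildren<ErrorBoundaryAbility>;
type State = { error: any };

export class ErrorBoundaryWrapper extends React.Component<Props, State> {
  state: State = { error: null };

  static getDerivedStateFromError(error: any): State {
    return { error };
  }

  componentDidCatch(error: any) {
    // Runtime render errors; load errors are reported by the loader itself.
    this.props.onError?.(error, 'render');
  }

  render() {
    if (this.state.error) {
      return this.props.renderError?.(this.state.error, 'render') ?? <div>Something went wrong.</div>;
    }
    return this.props.children;
  }
}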

Suspense

Its principle is to determine whether loading and rendering have finished by catching the Promise thrown by child components; combined with React.lazy + dynamic import it also yields code splitting. As with ErrorBoundary, an interface for customizing Suspense should be provided. Note the cases where Suspense is not supported, for example when dynamic import is unavailable; there, custom Suspense behavior has to be simulated with ErrorBoundaryWrapper.

const MockSuspense = () => {
  // ... other
  const handleError = useCallback((boundaryError: any) => {
    // A real Error, or not a Promise: keep throwing
    if (boundaryError instanceof Error || !isPromiseAlike(boundaryError)) {
      throw boundaryError;
    }
    const thePromise = boundaryError;
    promiseIdRef.current = Date.now() + Math.random();
    const thePromiseId = promiseIdRef.current;

    // promise only
    thePromise.then(
      () => {
        // success
      },
      (err: any) => {
        if (thePromiseId === promiseIdRef.current) {
          // fail
        }
      },
    );
  }, []);
  return (
    <ErrorBoundaryWrapper onError={handleError} renderError={renderFallback}>
      {children}
    </ErrorBoundaryWrapper>
  );
};

// SuspenseContainer.tsx
const SuspenseContainer: React.FC<ContainerProps> = (props: ContainerProps) => {
  const { children, withSuspense } = props;

  if (withSuspense === true || withSuspense === undefined) {
    return <Suspense fallback={<Loading />}>{children}</Suspense>;
  }

  if (typeof withSuspense === 'function') {
    return createElement(withSuspense, { fallback: <Loading /> }, children);
  }
  return children as ReactElement;
};

This completes the web-impl. It is very simple to use because it is built on the Context API: you only need to wrap the application at the outermost level, much like react-router, and besides the hooks API that comes with Context it also provides a withContext injection method.

Webpack.js.org/configurati…

RequireJS:
requirejs.config({ paths: { "@iron-man/container-web-api": "//0.0.0.0:7105/index" } });

const Container: FC<ContainerProps> = ({ children, ...props }) => (
  <BasicContainer {...props}>
    <NavigatorContainer {...props}>
      <ErrorBoundaryContainer {...props}>
        <SuspenseContainer {...props}>{children}</SuspenseContainer>
      </ErrorBoundaryContainer>
    </NavigatorContainer>
  </BasicContainer>
);

const App = () => (
  <Container {...options}>
    <Demo />
  </Container>
);

const Demo = () => {
  const container = useContext(containerContext);
};
// or
const Demo = withContext<DemoProps>((props) => {
  const { navAPI } = props;
});
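withContext is mentioned but not shown; a sketch of how such an injection HOC could look. Typing is kept loose on purpose, and the merge order (context first, outer props win) is an assumption.

import React, { useContext } from 'react';
import { containerContext } from '@iron-man/container-api';

// Inject the container abilities into the wrapped component's props, similar to react-redux's connect.
export function withContext<P = any>(Component: React.ComponentType<any>): React.FC<Partial<P>> {
  return function WithContainerContext(outerProps) {
    const container = useContext(containerContext);
    return React.createElement(Component, { ...container, ...outerProps });
  };
}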

Most of the other end containers are similar, and only need to be implemented differently for different business systems and end environments.