This article was first published on my WeChat official account, "interesting things in the world". Please credit the source when reprinting; otherwise copyright will be enforced.

Project address

Library dependency: implementation 'com.whensunset:sticker:0.2'

I haven't updated my blog for nearly two months — I feel past my prime, haha. In fact, I am preparing a big move, and it takes a long time to prepare, so stay tuned. This article is a prelude that fills the gap after the long silence. It is not a quick write-up, and there is a lot of substance in it, because it covers what I have been working on for the past few months: the text and sticker control for Story.

Reading Instructions:

  • 1. Text, normal stickers, dynamic stickers, and so on are collectively referred to as elements.
  • 2. Some abbreviations are used: TextureView – TV, RenderThread – RT, ViewGroup – VG, Instagram – Ins, ElementContainerView – ECV, DecorationElementContainerView – DECV, ElementActionListener – EAL, WsElement – WE, RuleLineElementContainerView – RLECV, TrashElementContainerView – TECV
  • 3. Douyin and Tik Tok refer to the same app; both names appear below.

This article is divided into the following chapters, which can be read on demand:

  • 1. Technical analysis of Story products — a look at the features of the Story-capable apps on the market and their likely technical implementations.
  • 2. Android sticker and text control architecture and implementation — how to build a text and sticker feature for Android that combines the strengths of those apps.
  • 3. Copying the Douyin sticker control — based on the core code from chapter 2, a simple re-implementation of the sticker control in the Douyin app.

I. Technical analysis of Story products

First of all, there are many apps that support shooting and publishing Story-like videos. The originator of Story abroad is Instagram. Domestically, WeChat Moments video, Duoshan video shooting, Douyin shooting, and so on all reference Ins's Story. The analysis in this chapter is based on these four apps.

1. Product function analysis

The following table is my conclusion after carefully trying out the well-known Chinese and foreign apps that can publish Story videos. Let's analyze each product's features in detail.

| | Instagram | Douyin | Duoshan | WeChat |
| --- | --- | --- | --- | --- |
| Text | Yes, richest features | Yes, rich features | Yes, fewer features | Yes, fewest features |
| Text zoom | Blurry with emoji, sharp without; zooming is not laggy | No blur, but very laggy | No blur, but very laggy | Slightly blurry; zooming is not laggy |
| Dynamic stickers | Yes, GIF only, follows the finger | Yes, supports video format, does not follow | Yes, supports video format, does not follow | Yes, GIF only, follows the finger |
| Functional stickers | Yes, rich | Yes, average | Yes, average | Yes, geo-location stickers only |
| Ordinary stickers | Yes, follows the finger closely | Yes, does not follow | Yes, does not follow | Yes, follows the finger closely |
| Text and stickers can cover each other | Yes | No | No | Yes |
  • 1. First of all, Instagram is the uncrowned king; after all, it was Instagram that popularized the concept of Story. Ins's features are the most complete and polished, so if we want a benchmark, it is none other than Ins.
  • 2. We can see from the table above that Douyin and Duoshan have very similar features. After all, they come from the same family, so we can analyze them together (hereafter, Doushan). In my experience there was one point, not listed in the table, where Doushan beat Instagram: the smoothness of the text-editing state switch. Doushan uses a smooth transition animation, while on Ins the editor appears and disappears abruptly. There is a reason for this, which I will point out later in the technical analysis.
  • 3. Seen this way, WeChat appears oddly confident: both its features and its experience trail the other three players. Whether that counts as a failing is for the reader to judge.
  • 4. Next, look at whether stickers follow the finger and whether text and stickers can cover each other.
    • 1. We found that if an app supports GIF stickers only, the stickers follow the finger; if it supports video-format stickers, they do not.
    • 2. Similarly, if an app supports only GIF stickers, text and stickers can cover each other; otherwise they cannot.
    • 3. I will explain the reasons behind both observations in the later technical analysis.
  • 5. The last issue is text turning blurry when enlarged. Text on WeChat looks slightly blurry when enlarged, while Doushan's does not (this refers to the video being edited, not the posted video). WeChat uses a clever trick to keep text from getting too fuzzy: it limits how far text can be magnified and does not allow adjusting the font size. Doushan's problem is that text containing many emojis becomes very laggy and flickers when scaled, while WeChat has no such problem: no matter how many emojis, zooming stays smooth. Instagram is a special case: with emoji the text gets blurrier, without emoji it stays sharp and zooming is not laggy. I will explain this in detail in the technical analysis.

2. Technical analysis

A feature is born when product and technology compromise (or quarrel). So in this section I will explain the technical reasons why the four apps analyzed above each fall short of perfection, and lay the groundwork for the implementation that follows.

(1). TextureView (SurfaceView) vs. ViewGroup

Those of you who follow me know that my last post was a full source-code analysis of the SurfaceView family. When I learned I needed to build this control, the first thing I thought of was using a TV: text and stickers could be drawn on its Surface, and the performance didn't seem too bad. But in the end I spent several extra days completely refactoring away the code that used a TV as the base drawing container. In short: ten thousand lines of code, written through the night, and I had to go back and rethink the very first one. Now let me talk about the advantages and disadvantages of TV and VG as base drawing containers:

  • 1. Advantages of TV:
    • 1. The drawing logic is clear and the drawing process can be manually controlled.
    • 2. There seem to be none…
  • 2. Advantages of VG:
    • 1. There are a large number of off-the-shelf controls that can be combined to meet almost all of our needs, which makes it easy to develop functional stickers.
    • 2. A complete event distribution process can be used to facilitate elements to respond to events.
    • 3. In cases where there is already a TV (for example, a video playing on a TV while editing), VG refreshes have little impact on the RT, whereas an extra TV increases the RT's load. The intuitive symptom: when zooming and moving elements, video playback becomes very laggy, because our TV's refreshes steal CPU time from the TV the video is playing on (this is why I finally abandoned the TV).
    • 4. Using VG, we can use a variety of animations to optimize the user experience and make state transitions smooth. An example of this is the animation of a text editing state switch.
    • 5. We don't need to write all kinds of Canvas drawing logic ourselves. When such code is written, only God and I can read it; a few months later, only God can. In Zhihu parlance, this kind of code is a "mountain of shit".
  • 3. We found that most of VG's benefits come from the Android framework layer; if we built on TV we would just be reinventing a flawed wheel. VG beats TV in development time, user experience, code extensibility, and more. Please forgive the stupid choice I made two months ago in picking TV. Dear readers, if you think I have helped you dodge this big pit, go follow my WeChat official account, "interesting things in the world" — more good material awaits.

(2). How to display dynamic stickers

From the previous comparison, we know there is an irreconcilable contradiction between supporting video-format resources and following the finger. Instagram and WeChat chose to follow the finger, while Douyin chose to support video-format resources. Next we analyze the technical principles and trade-offs.

  • 1. The first thing to know: displaying multiple video-player windows at the framework layer just to support multiple video-based dynamic stickers would be silly. Since our background is generally a video player anyway, we can composite the dynamic-sticker video resources into that player at the native layer. This means there is always only one video player, and the dynamic-sticker resources are handed to that player to play. Of course, such a player needs to be customized for the product's own needs even if it starts from an open-source one. Among the four apps, Douyin chose this scheme. We can roughly enumerate what this player must be able to do:
    • 1. Play regular videos (this goes without saying).
    • 2. Apply displacement, zooming, rotation, and similar operations to the video.
    • 3. Add multiple sub-videos while the main video is playing, with the sub-videos also supporting displacement, rotation, zooming, and so on.
    • 4. Change the various properties of the sub-videos in real time while the main video plays; crucially, the performance must not be too bad.
  • 2. Instagram and WeChat chose to follow the finger, so clearly their implementation displays GIF/WebP resources on views at the framework layer, and following the finger comes for free. Which control can display GIF and WebP images? Fresco, of course, which happens to be made by Facebook.
  • 3. Now we know that supporting video-format resources is much harder than supporting GIF. With GIF alone I can build this feature by myself; once video format is involved, I can't handle it alone at present (though I should be able to once our video-editing SDK is finished). So what are the benefits of supporting video-format resources? Let me list them:
    • 1. The display of dynamic stickers can be finely controlled. We cannot control a framework-layer GIF, but for video resources the native layer can control the progress, playback region, and other properties.
    • 2. The video format is more extensible than GIF and can display richer detail.
  • 4. In fact, Douyin's implementation has another drawback: text and stickers cannot cover each other, because the stickers are always rendered inside the video while the text is displayed by a view, so stickers always sit below text on the Z axis.

(3). The dispute over how to display text

If you have digested (1) and (2), then I'm sure you already have a good idea of how the four apps display text. Here is a brief analysis:

  • 1. There is no doubt that all four apps use a VG as the base drawing container. Instagram and WeChat support GIF, so they obviously use views to display GIFs. Although Douyin hands sticker rendering to the player, it still has various functional stickers, and those can only be composed from views — otherwise the code becomes truly unmaintainable, as I have seen first-hand.
  • 2. So the question is: if Douyin, Instagram, and WeChat all use views to display text, why do they end up performing differently? The key is the type of view.
    • 1. The first thing we can confirm is that WeChat takes a view screenshot of the EditText once text editing is finished, and then displays an ImageView-like view on the interface — which explains why enlarged text looks blurry. WeChat also "cleverly" limits how far text can be magnified, so users never end up with text that feels fuzzy.
    • 2. Douyin and Duoshan come from the same company, so they display text the same way: the EditText itself shows the finished text, and the view you zoom and move on the interface is that EditText. The benefit is obvious: text stays perfectly sharp no matter how large. However, as mentioned earlier, this scheme has a drawback: an EditText becomes very laggy to manipulate when it contains many emojis at a large magnification, and sometimes the screen flickers. This is a bug in EditText itself, and unless Google fixes it, it will remain.
    • 3. Ins combines the two approaches: it uses an ImageView to display a screenshot of the text when emoji are present, and the EditText itself when there are none. That is one reason to call Ins the uncrowned king — it sweats every detail of the user experience and tries to give users the best it can. In the end, though, it's up to readers and users to judge whose scheme is best.
    • 4. I mentioned earlier that Doushan switches text-editing states more smoothly than Instagram because it uses animation. WeChat, since it shows an ImageView screenshot, understandably finds this animation hard to do. Instagram, however, should have a way to make this animation; my guess is that it skipped it to keep the experience consistent between the emoji and non-emoji cases.
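The three text-display schemes above can be condensed into a tiny strategy function. This is an illustrative plain-Java sketch of the behavior attributed to each app, not code from any of them; the app and mode names are mine.

```java
// Hedged summary of the text-display schemes described above. The app and
// mode names are illustrative, not real identifiers from any of these apps.
public class TextDisplaySketch {
  public static String mode(String app, boolean containsEmoji) {
    switch (app) {
      case "wechat":
        return "imageview-screenshot"; // always a screenshot: blurry when enlarged
      case "douyin":
        return "edittext";             // always the live EditText: sharp, but laggy with many emojis
      case "instagram":
        // screenshot only when emoji are present, live EditText otherwise
        return containsEmoji ? "imageview-screenshot" : "edittext";
      default:
        throw new IllegalArgumentException("unknown app: " + app);
    }
  }
}
```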

(4). The dispute over view zooming and displacement

We all know there are two ways to change a view's size and position in Android: one is to change the view's real properties via LayoutParams; the other is to set the view's scale and translation. Let's talk about the characteristics of both approaches — our implementation will eventually use both.

  • 1. Change the LayoutParam to change the view’s characteristics:
    • 1. The content in the view is always the size originally defined. For example, if there is text in the view, the font size of the text will not change.
    • 2. The view will be re-laid out; if it is a VG, its children are re-laid out too.
    • 3. It is convenient to distribute events. For example, in my current implementation, accurate event distribution can be carried out in this mode.
  • 2. Use Scale and translation to change the characteristics of the View:
    • 1. Content in view can be directly zoomed in and out, which is suitable for most of our requirement scenarios.
    • 2. The view's measure, layout, and draw will not rerun, so performance seems a bit better than the previous method.
    • 3. Event distribution is also possible, but there seem to be problems: currently I cannot achieve accurate event distribution in this mode — perhaps my implementation is at fault.
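To make the contrast concrete, here is a minimal numeric sketch (plain Java; the names are mine, not Android APIs): enlarging via LayoutParams grows the layout box while content such as text keeps its original size, whereas setScale magnifies box and content together.

```java
// Plain-Java sketch contrasting the two resize routes described above.
// All names are illustrative; this is arithmetic, not the Android API.
public class ResizeSketch {
  // LayoutParams route: the box is re-laid out at the new width,
  // but text inside still renders at its original font size.
  public static int boxWidthAfterLayoutParams(int newWidth) {
    return newWidth;
  }
  public static float fontSizeAfterLayoutParams(float originalSp) {
    return originalSp; // content itself is NOT scaled
  }

  // setScale route: no re-layout; everything drawn is multiplied by the factor.
  public static float boxWidthAfterScale(int originalWidth, float scale) {
    return originalWidth * scale;
  }
  public static float fontSizeAfterScale(float originalSp, float scale) {
    return originalSp * scale; // content is magnified with the box
  }
}
```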

II. Android sticker and text control architecture and implementation

1. Architecture

In this first section, we'll talk about the architecture of the text and sticker control, based on figure 1 below and the code on GitHub. I suggest you clone the code — and don't forget to give it a star.

Let’s start with figure 1 to describe the overall control architecture

  • 1. The overall structure
    • 1. We analyzed in the previous chapter that the drawing container of the whole control should be a VG. So the ElementContainerView in the figure is such a container, and briefly it has these functions:
      • 1. Handle various gesture events, including one finger and two fingers.
      • 2. Add and remove some Views. The view here is used to draw various elements.
      • 3. Provide some API for outsiders to manipulate the View.
      • 4. Provide a listener so that external users can listen to internal processes.
    • 2. With the drawing container in place, we need to add views to it, and each view needs to carry various data as the user manipulates it. So I use WE to encapsulate the view to be displayed; it contains the following:
      • 1. Data required by user operations, such as Scale, rotate, X, and Y.
      • 2. There are ways to update views with data.
      • 3. Provide some API for ECV to manipulate views in WE.
    • 3. ECV and WE can continue to inherit a variety of extension controls.
  • 2. Having covered the whole picture, we can talk about the process in detail
      • 1. Start with the horizontal arrow: external/internal calls. Any external operation on WE through the ECV — add, delete, update, query, and so on — enters through this path, which offers the following operations:
      • 1. AddElement: Adds an element to the ECV.
      • 2. DeleteElement: Deletes an element from the ECV.
        • 3. Update: updates the state of the view in WE based on the current data.
      • 4. FindElementByPosition: Find the topmost WE in the passed coordinates.
      • 5. SelectElement: Select a “WE” and move it to the top layer.
      • 6. UnSelectElement: Unselect a WE.
      • 2. Now for the vertical arrow: the gesture event flow. There is some internal logic here, which we'll see later; the final flow of events triggers the following sequence of actions:
        • 1. The whole process of single-finger movement: once a WE is selected, we can move it. Movement is divided into begin, in-progress, and end phases; each event calls the corresponding method of the WE to update its internal data and then refresh the view.
        • 2. The whole process of two-finger rotation and scaling: once a WE is selected, we can scale and rotate it with two fingers. This is likewise divided into begin, in-progress, and end; the corresponding WE methods update the data and then refresh the view.
        • 3. Clicking a selected element again: once a WE is selected, we can click it again. Because a WE wraps a view, we can hand the event directly to that view to trigger responses inside it. The drawing view of a WE can even be a VG; in that case we hand the click event to the VG, which then distributes it to its child views. Note: because the ECV needs to receive move events itself, only click events can currently be distributed this way.
        • 4. Clicking a blank area: when the tap hits no WE, we can perform some operations, such as clearing the current selection. This behavior can be overridden by subclasses.
        • 5. OnFling: the "throw" gesture, used for playful effects such as letting the element slide a little farther as the finger lifts. This behavior can also be overridden by subclasses.
        • 6. Subclass events: a subclass may feel too few events are exposed, so on down, move, and up the ECV first calls three methods: downSelectTapOtherAction, scrollSelectTapOtherAction, and upSelectTapOtherAction. Subclasses can override these and return true to indicate the event is consumed, in which case the ECV fires nothing further. This lets subclasses extend the gesture set — for example, holding a spot to zoom with one finger.
        • 7. In my diagram, ECV also has a subclass DECV, which simply adds two gestures:
          • 1. One-finger zooming: similar to Douyin, you can drag the lower-right corner of an element to zoom and rotate it.
          • 2. Delete: like Douyin, tapping the upper-left corner of an element deletes it directly.
      • 3. One feature in figure 1 could not actually be drawn: almost all of the ECV behavior in 1 and 2 can be listened to externally. ElementActionListener is the interface responsible for listening, and the ECV holds a set of EALs, so multiple listeners can be added.
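The "subclass hook" mechanism in point 6 above can be sketched in plain Java. The hook name mirrors the article's downSelectTapOtherAction-style methods, but the classes here are simplified stand-ins, not the library's real ones: a subclass returns true from the hook to consume the event, and the base container then fires nothing further.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the consume-first hook chain described above.
public class GestureHookSketch {

  public static class BaseContainer {
    public final List<String> fired = new ArrayList<>();

    // Subclasses override this and return true to consume the down event.
    protected boolean downSelectTapOtherAction(float x, float y) {
      return false;
    }

    public void onDown(float x, float y) {
      if (downSelectTapOtherAction(x, y)) {
        return;               // consumed by the subclass: fire nothing further
      }
      fired.add("base-down"); // default handling, e.g. start of a move gesture
    }
  }

  // A subclass adding a "drag the corner to zoom" style gesture.
  public static class OneFingerZoomContainer extends BaseContainer {
    @Override
    protected boolean downSelectTapOtherAction(float x, float y) {
      if (x > 90 && y > 90) { // pretend the touch landed on a corner handle
        fired.add("one-finger-zoom-start");
        return true;          // consume, so the base skips its own logic
      }
      return false;
    }
  }
}
```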

2. Implementation of technical points

I ran into quite a few tricky implementation problems while developing this control, so this section picks out a few of them, so that readers won't be especially confused when reading the source code.

(1). Define data structure and draw coordinate system

----- code block 1 ----- com.whensunset.sticker.WsElement

  public int mZIndex = -1; // The element's layer; 0 is the top layer

  protected float mMoveX; // X offset relative to the center of mElementContainerView after initialization

  protected float mMoveY; // Y offset relative to the center of mElementContainerView after initialization

  protected float mOriginWidth; // The width of the content at initialization

  protected float mOriginHeight; // The height of the content at initialization

  protected Rect mEditRect; // Drawable area

  protected float mRotate; // The angle the element is rotated clockwise

  protected float mScale = 1.0f; // The zoom factor of the element

  protected float mAlpha = 1.0f; // The element's transparency

  protected boolean mIsSelected; // Whether the element is selected

  @ElementType
  protected int mElementType; // Used to distinguish element types

  // The parent view of mElementShowingView; it contains all elements that need to be displayed
  protected ElementContainerView mElementContainerView;

  protected View mElementShowingView; // The view that displays the content

  protected int mRedundantAreaLeftRight = 0; // Distance extended to the left and right of the content area to enlarge the element's clickable area

  protected int mRedundantAreaTopBottom = 0; // Distance extended above and below the content area to enlarge the element's clickable area

  // Whether the showing view responds to click events after the element is selected
  protected boolean mIsResponseSelectedClick = false;

  // Whether to change the width and height parameters when updating the showing view.
  // In general, scale and rotate alone are used to refresh the view
  protected boolean mIsRealUpdateShowingViewParams = false;

Before the feature moves, the data goes first. The data structure is the core of any framework: a well-defined data structure saves a lot of unnecessary code. So in this section we define the data structure and the view-drawing coordinate system, following code block 1:

  • 1. We take the ECV that a WE lives in as the drawable region of the WE's view. mEditRect in code block 1 is the rectangle of this region, so mEditRect is usually **[0, 0, ECV.getWidth(), ECV.getHeight()], in px**.

  • 2. The origin of our coordinate system is the center of mEditRect, which is also the center of the ECV. mMoveX and mMoveY are the view's offsets from the origin along each axis. Since both default to 0, a view added to the ECV starts out centered in it. Both parameters are in px.

  • 3. Our coordinate system also has a z axis: mZIndex is the z coordinate, representing the stacking order of views. mZIndex == 0 means the view is on the top layer of the ECV. mZIndex defaults to -1, meaning the view has not been added to an ECV. mZIndex is an integer.

  • 4. We define mRotate as the view's clockwise rotation angle, with range [-360, 360].

  • 5. We define mScale as 1 when the view is unscaled, 2 when the view is enlarged 2x, and so on.

  • 6. mOriginWidth and mOriginHeight are the initial size of the view, in px.

  • 7. mAlpha is the view's transparency; it defaults to 1 and lies in [0, 1].

  • 8. The remaining parameters need no further explanation; the code is commented.
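Since (mMoveX, mMoveY) are offsets from the ECV's center while Android lays views out from the top-left, a conversion is needed when positioning the view. The library does this in getRealX/getRealY; the sketch below shows the plausible math under that assumption, not the library's exact code.

```java
// Hedged sketch of converting the center-origin coordinates described above
// into top-left layout coordinates. The real getRealX/getRealY may differ.
public class CoordinateSketch {
  public static float realX(float containerWidth, float viewWidth, float moveX) {
    // container center, shifted by the offset, minus half the view width
    return containerWidth / 2f + moveX - viewWidth / 2f;
  }
  public static float realY(float containerHeight, float viewHeight, float moveY) {
    return containerHeight / 2f + moveY - viewHeight / 2f;
  }
}
```

With a 1000px-wide container and a 200px-wide view, moveX = 0 puts the view's left edge at 400px, i.e. centered.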

(2). How is the View in WE updated

As we know from the earlier analysis, the data in WE is continually updated while the ECV processes gestures. After the data is updated, WE.update() is called to refresh the view's state. Code block 2 shows the two view-refresh modes we support:

----- code block 2 ----- com.whensunset.sticker.WsElement#update

  public void update() {
    if (isRealChangeShowingView()) {
      AbsoluteLayout.LayoutParams showingViewLayoutParams =
          (AbsoluteLayout.LayoutParams) mElementShowingView.getLayoutParams();
      showingViewLayoutParams.width = (int) (mOriginWidth * mScale);
      showingViewLayoutParams.height = (int) (mOriginHeight * mScale);
      if (!limitElementAreaLeftRight()) {
        mMoveX = (mMoveX < 0 ? -1 * getLeftRightLimitLength() : getLeftRightLimitLength());
      }
      showingViewLayoutParams.x = (int) getRealX(mMoveX, mElementShowingView);
      if (!limitElementAreaTopBottom()) {
        mMoveY = (mMoveY < 0 ? -1 * getBottomTopLimitLength() : getBottomTopLimitLength());
      }
      showingViewLayoutParams.y = (int) getRealY(mMoveY, mElementShowingView);
      mElementShowingView.setLayoutParams(showingViewLayoutParams);
    } else {
      mElementShowingView.setScaleX(mScale);
      mElementShowingView.setScaleY(mScale);
      if (!limitElementAreaLeftRight()) {
        mMoveX = (mMoveX < 0 ? -1 * getLeftRightLimitLength() : getLeftRightLimitLength());
      }
      mElementShowingView.setTranslationX(getRealX(mMoveX, mElementShowingView));
      if (!limitElementAreaTopBottom()) {
        mMoveY = (mMoveY < 0 ? -1 * getBottomTopLimitLength() : getBottomTopLimitLength());
      }
      mElementShowingView.setTranslationY(getRealY(mMoveY, mElementShowingView));
    }
    mElementShowingView.setRotation(mRotate);
    mElementShowingView.bringToFront();
  }
  • 1. Updating the view by setting its real layout parameters: in code block 2, the isRealChangeShowingView() flag distinguishes the two update modes. This first mode is very simple: because our ECV inherits from AbsoluteLayout, we just need to get mElementShowingView's LayoutParams and set the corresponding data into them. Two caveats:
    • 1. This mode reruns measure, layout, and draw on every update.
    • 2. In this mode I have successfully implemented event distribution when the view is a VG.
  • 2. Updating the view by setting its canvas-level parameters: the second mode updates the view by setting the parameters of its underlying RenderNode. You can simply think of it as analogous to scale, rotate, and translate on a Canvas. Two caveats:
    • 1. This mode does not rerun measure, layout, or draw, so its performance should be better than the first.
    • 2. At present this mode only responds to events correctly when the view is not a VG; if the view is a VG, the events get confused.
  • 3. The above two view update methods have some similarities:
    • 1. Both limit the view's mMoveX and mMoveY parameters: if the current data exceeds the limit, the parameter is snapped to its upper or lower bound.
    • 2. setRotation is used to rotate the view.
    • 3. bringToFront moves the view to the top layer of the ECV at the end of each update.
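The shared clamp step in point 1 can be isolated as a small pure function, mirroring the `mMoveX = (mMoveX < 0 ? -limit : limit)` branches in code block 2 (the helper name is mine):

```java
// Sketch of the bound-snapping used when a drag pushes mMoveX/mMoveY past
// the editable area. Mirrors the ternary branches in code block 2.
public class LimitSketch {
  public static float clampMove(float move, float limit) {
    if (Math.abs(move) <= limit) {
      return move;                    // still inside the allowed range
    }
    return move < 0 ? -limit : limit; // snap to the nearer signed bound
  }
}
```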

(3). How are events delivered from ECV to sub-VGs for distribution

First, I won't rehash the Android event-distribution system — there is plenty of material on the web. I'll walk through the implementation alongside code block 3:

----- code block 3 ----- com.whensunset.sticker.ElementContainerView

@Override
  public boolean dispatchTouchEvent(MotionEvent ev) {
    if (mSelectedElement != null && mSelectedElement.isShowingViewResponseSelectedClick()) {
      if (ev.getAction() == MotionEvent.ACTION_DOWN) {
        long time = System.currentTimeMillis();
        mUpDownMotionEvent[0] = copyMotionEvent(ev);
        Log.i(DEBUG_TAG, "time:" + (System.currentTimeMillis() - time));
      } else if (ev.getAction() == MotionEvent.ACTION_UP) {
        mUpDownMotionEvent[1] = copyMotionEvent(ev);
      }
    }
    return super.dispatchTouchEvent(ev);
  }
  
  private static MotionEvent copyMotionEvent(MotionEvent motionEvent) {
    Class<?> c = MotionEvent.class;
    Method motionEventMethod = null;
    try {
      motionEventMethod = c.getMethod("copy");
    } catch (NoSuchMethodException e) {
      e.printStackTrace();
    }
    MotionEvent copyMotionEvent = null;
    try {
      copyMotionEvent = (MotionEvent) motionEventMethod.invoke(motionEvent);
    } catch (IllegalAccessException e) {
      e.printStackTrace();
    } catch (InvocationTargetException e) {
      e.printStackTrace();
    }
    return copyMotionEvent;
  }
  
  @Override
  public boolean onInterceptTouchEvent(MotionEvent event) {
    return true;
  }

  /**
   * Click the selected element again
   */
  protected void selectedClick(MotionEvent e) {
    if (mSelectedElement == null) {
      Log.w(DEBUG_TAG, "selectedClick edit text but not select ");
    } else {
      if (mSelectedElement.isShowingViewResponseSelectedClick()) {
        mUpDownMotionEvent[0].setLocation(
            mUpDownMotionEvent[0].getX() - mSelectedElement.mElementShowingView.getLeft(),
            mUpDownMotionEvent[0].getY() - mSelectedElement.mElementShowingView.getTop());
        rotateMotionEvent(mUpDownMotionEvent[0], mSelectedElement);
  
        mUpDownMotionEvent[1].setLocation(
            mUpDownMotionEvent[1].getX() - mSelectedElement.mElementShowingView.getLeft(),
            mUpDownMotionEvent[1].getY() - mSelectedElement.mElementShowingView.getTop());
        rotateMotionEvent(mUpDownMotionEvent[1], mSelectedElement);
        mSelectedElement.mElementShowingView.dispatchTouchEvent(mUpDownMotionEvent[0]);
        mSelectedElement.mElementShowingView.dispatchTouchEvent(mUpDownMotionEvent[1]);
      } else {
        mSelectedElement.selectedClick(e);
      }
      callListener(elementActionListener ->
          elementActionListener.onSelectedClick(mSelectedElement));
    }
  }
  • 1. I've excerpted a few important methods into code block 3; we'll cover them in a moment. Before that, we need a few premises:
    • 1. Why does the ECV hand only click events to child VGs? The reason is that move, long-press, and fling are gestures the ECV must consume itself, and the ECV even has to consume the first click that selects the VG. So, to avoid conflict between the ECV and the sub-VG, the sub-VG can only receive click events.
    • 2. A sub-VG can only receive click events after it is selected. The reason is simple: most operations on a WsElement happen after the WsElement is selected, and the click event is no exception.
    • 3. **If a WsElement is selected, the ECV's move gesture needs the down event, and the sub-VG's click event also needs the down event — isn't that still a conflict?** I'll address this when we parse the code.
  • 2. Without further ado, let’s parse code block 3:
    • 1. onInterceptTouchEvent lets the ECV intercept every gesture that passes through it, giving the ECV the highest priority in gesture handling. Only unwanted gestures are handed to child VGs — namely, as we said earlier, the click event after a WsElement is selected.
    • 2. Next is dispatchTouchEvent, the method the ECV's parent view calls to hand it an event and the entry point of event distribution inside the ECV. We can see that the down and up MotionEvents are cloned and stored for later use. Note that MotionEvent's copy method is public in the source but hidden from apps (it is unclear from which version it was hidden), so we can only clone a MotionEvent via reflection. Since only the down and up events of a down-to-up sequence are cloned, the performance impact is negligible.
    • 3. Finally, the selectedClick method. As mentioned earlier, after a WsElement is selected, both the ECV's move gesture and the sub-VG's click events need the down event. Our solution: the down event is still consumed by the ECV; on the up event we manually call the sub-VG's dispatchTouchEvent, passing in the previously stored down and up MotionEvents in turn. If the element has been rotated, the x and y coordinates in the MotionEvents must also be rotated by the corresponding angle. And as noted earlier, event distribution currently works only when views are updated via LayoutParams.
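The coordinate rotation mentioned in point 3 is ordinary 2D geometry: before forwarding a stored event into a rotated element, the touch point is rotated around the view's center by the negative of the element's rotation, so it lands where the unrotated view expects. A plain-math sketch follows (my own helper, presumably close to what rotateMotionEvent does):

```java
// Rotate the point (x, y) by -degrees around the center (cx, cy), undoing
// an element's rotation before its view handles the event coordinates.
public class RotatePointSketch {
  public static float[] unrotate(float x, float y, float cx, float cy, float degrees) {
    double rad = Math.toRadians(-degrees);
    float dx = x - cx;
    float dy = y - cy;
    float rx = (float) (dx * Math.cos(rad) - dy * Math.sin(rad)) + cx;
    float ry = (float) (dx * Math.sin(rad) + dy * Math.cos(rad)) + cy;
    return new float[]{rx, ry};
  }
}
```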

3. Brief analysis of the source code process

In this section I will walk through the overall flow of the source code using a simple demo, so that readers get a basic picture of how the control operates. This section leans heavily on the source code, so be sure to clone it and follow along.

(1). Add elements

  • 1. I won't go over the simple initialization; let's start from the addTestElement button in MainActivity. Clicking it creates a TestElement, the element I use for testing, which is quite simple. It then calls unSelectElement and addSelectAndUpdateElement in turn. unSelectElement deselects the currently selected element and will be analyzed later; let's look at addSelectAndUpdateElement first.
  • 2. addSelectAndUpdateElement is a composite method that calls addElement, selectElement, and update in turn: add the element, select it, then update it. Let's analyze them one by one.
    • 1. addElement does the following:
      • 1. Validate the data: if the WE to add is null or is already in the ECV, adding fails.
      • 2. The ECV maintains a LinkedList of WEs that stores every WE. Each newly added WE is placed at the top of the list, and the mZIndex of the other WEs is updated accordingly.
      • 3. Call WE.add, which initializes mElementShowingView and adds it to the ECV; I'll cover this in more detail in a later point.
      • 4. Call the corresponding listener methods, as well as the auto-deselect method (whether the ECV auto-deselects can be configured externally).
    • 2. selectElement: after a WE is added, we select it directly. The code mainly does the following:
      • 1. Validate the data: if the WE to select has not been added to the ECV, selection fails.
      • 2. Remove the selected WE from the list and add it back at the top, then update the mZIndex of the other WEs.
      • 3. Mark the WE as selected.
      • 4. Call the listener method.
    • 3. update: once the above is done, the WE needs to be adjusted into its proper state, using one of the two view-update modes we discussed in the previous section.
  • 3. WE.add: if you look at the WE source code, you will see that mElementShowingView is actually initialized and added to the ECV not when the WE is created, but inside ECV.addElement, as stated in point 2. This method does the following:
    • 1. If mElementShowingView has not been initialized, call initView to create the view. initView is an abstract method that subclasses must implement; taking TestElement as an example, its initView creates an ImageView.
    • 2. Add the view to the ECV using the LayoutParams returned by initView. The mElementShowingView in WE starts at the ECV's top-left corner with left and top offsets of 0; its width and height are the mOriginWidth and mOriginHeight set when the WE was created.
    • 3. If mElementShowingView has already been initialized, it is simply updated here.
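The z-order bookkeeping that addElement and selectElement perform on the LinkedList can be sketched as follows: index 0 of the list is the topmost element, and each element's z-index mirrors its list position. All names here are illustrative, not the library's actual API.

```java
import java.util.LinkedList;

// Sketch of the z-order bookkeeping described above: the container keeps
// a LinkedList of elements, index 0 is the topmost element, and every
// element's zIndex mirrors its list position. Illustrative names only.
final class ElementList {
    static final class Element {
        int zIndex;
        final String tag;
        Element(String tag) { this.tag = tag; }
    }

    private final LinkedList<Element> elements = new LinkedList<>();

    // addElement: reject nulls and duplicates, push on top, renumber.
    boolean add(Element e) {
        if (e == null || elements.contains(e)) return false;
        elements.addFirst(e);
        renumber();
        return true;
    }

    // selectElement: move an already-added element to the top, renumber.
    boolean select(Element e) {
        if (e == null || !elements.contains(e)) return false;
        elements.remove(e);
        elements.addFirst(e);
        renumber();
        return true;
    }

    // Keep each element's zIndex in sync with its list position.
    private void renumber() {
        int z = 0;
        for (Element e : elements) e.zIndex = z++;
    }

    Element top() { return elements.peekFirst(); }
}
```

A newly added element therefore always covers the existing ones, and selecting an element brings it to the front, which matches the behavior described in points 1 and 2 above.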

(2). Element single finger gesture

Unlike adding an element, which requires an external call, element gestures are triggered by event distribution, so we can start from the ECV.onTouchEvent method.

  • 1. Looking at ECV.onTouchEvent, let's skip the preceding code and go straight to the last line of the method. A GestureDetector is used here; many readers will have used one, so I won't dwell on the basics. Let's go straight to where the addDetector method is defined.

  • 2. Handling a single-finger element gesture mainly depends on three touch events: down, move, and up. So we look directly at the GestureDetector's three callbacks: onDown, onScroll, and onSingleTapUp.

    • 1. onDown first filters out two-finger gestures and goes straight into singleFingerDown, whose logic is as follows:
      • 1. Use findElementByPosition to find the topmost WE at the down position.
      • 2. If there is a currently selected WE and it is the same as the touched WE, call downSelectTapOtherAction. This method can be overridden by subclasses and returns false by default; that is, a subclass gets the first chance to handle the event, and if it does, we return. If the subclass does not handle it, set mMode to SELECTED_CLICK_OR_MOVE, meaning the final gesture may be either a click on the element or a move of the element; which one is decided at move or up time.
      • 3. If there is a currently selected WE but it differs from the touched WE, there are two cases: if no WE was touched, set mMode to SINGLE_TAP_BLANK_SCREEN, meaning a tap on the ECV's blank area; if a WE was touched, that WE is re-selected.
      • 4. If no WE is currently selected, there are again two cases: if no WE was touched, it is again a tap on the blank area; otherwise, the touched WE is selected.
    • 2. In onScroll, move events are first offered to scrollSelectTapOtherAction, which can also be overridden by subclasses and returns false by default; if a subclass handles the event, we return. Otherwise, a move gesture is triggered when mMode is one of SELECTED_CLICK_OR_MOVE (a selected WE may start moving), SELECT (a just-selected WE starts moving), or MOVE (a WE is already moving). The concrete logic is in singleFingerMove:
      • 1. Invoke singleFingerMoveStart or singleFingerMoveProcess depending on the current mMode. singleFingerMoveStart just calls the corresponding listener and WE methods and contains little logic; singleFingerMoveProcess also calls the corresponding listener and WE methods, and the WE method updates mMoveX and mMoveY.
      • 2. Call update to refresh the view in WE, and set mMode to MOVE to indicate a move is in progress.
    • 3. onSingleTapUp also first filters out two-finger gestures, then calls singleFingerUp:
      • 1. If mMode is SELECTED_CLICK_OR_MOVE, only here can the gesture be confirmed: the user clicked an already-selected element.
      • 2. If mMode is SINGLE_TAP_BLANK_SCREEN, a blank area of the ECV was tapped. onClickBlank can also be overridden by subclasses to implement their own logic.
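The mode transitions above can be sketched as a small state machine. The mode names follow the article; the class and method signatures are illustrative, not the library's actual API.

```java
// Sketch of the single-finger mode state machine described above.
// Mode names follow the article; everything else is illustrative.
final class SingleFingerModes {
    enum Mode { NONE, SELECT, SELECTED_CLICK_OR_MOVE, MOVE, SINGLE_TAP_BLANK_SCREEN }

    private Mode mode = Mode.NONE;

    // onDown: decide the tentative mode from (selected, touched).
    Mode onDown(boolean hasSelected, boolean touchedIsSelected, boolean touchedExists) {
        if (hasSelected && touchedIsSelected) {
            mode = Mode.SELECTED_CLICK_OR_MOVE;   // click or move, decided later
        } else if (!touchedExists) {
            mode = Mode.SINGLE_TAP_BLANK_SCREEN;  // tapped an empty area
        } else {
            mode = Mode.SELECT;                   // (re)select the touched element
        }
        return mode;
    }

    // onScroll: only these three modes may turn into a move.
    boolean onScroll() {
        if (mode == Mode.SELECTED_CLICK_OR_MOVE || mode == Mode.SELECT || mode == Mode.MOVE) {
            mode = Mode.MOVE;
            return true;
        }
        return false;
    }

    // onSingleTapUp: a SELECTED_CLICK_OR_MOVE that never moved is a click.
    boolean isSelectedClickOnUp() {
        return mode == Mode.SELECTED_CLICK_OR_MOVE;
    }
}
```

The key point this makes concrete is that SELECTED_CLICK_OR_MOVE is ambiguous at down time and is only resolved later: a subsequent scroll turns it into MOVE, while reaching up unchanged means a click on the selected element.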

(3). Element two-finger gesture and deletion

I'll leave the rest for readers to explore in the source code and save some energy for the final chapter, where we write the Douyin sticker control. See you in the next chapter.

III. Copy a Douyin sticker control

In this final chapter, I will imitate Douyin's static stickers on top of our control. Of course, not every detail will be reproduced, but I am confident that in some areas our copy does better than Douyin.

The good news is that I packaged the core code from the Github project and uploaded it to JCenter. If you want to use the package, just add it to build.gradle like any normal dependency: implementation 'com.whensunset:becomes:0.2'. This library will be maintained over the long term, so feel free to file issues. Let's start with its features:

1. Features

In this section, let’s talk about the features in our library.

  • 1. Single-finger move, two-finger rotate and zoom, two-finger move: these come directly from ECV and WE; Douyin has them as well.
  • 2. A decorated border when selected, single-finger zoom, and tap-to-delete: these features are added in the DECV and DecorationElement layer; Douyin has them as well.
  • 3. Alignment guide lines: Instagram does this very well, while Douyin's version is quite poor, so I imitated Instagram; RLECV supports this feature.
  • 4. Trash can: both Instagram and Douyin have this. Instagram's experience is better, but with my limited ability I could not reproduce it, so this imitates Douyin's.
  • 5. Animation: this works like Douyin's. AnimationElement is the concrete implementation class for animations. When implementing the DECV I also added an inertial sliding effect in onFling, which is quite fun, so the experience should be even better.

2. Writing the imitation

In fact, most of the core code is already integrated into the library, so we only need a little code to reproduce most features of the Douyin sticker, and in some areas we even do better than Douyin.

Our test code is in the Github project's test module. You can follow the analysis below against that code:

  • 1. As we mentioned, our library contains several ECVs with different functions. From the architecture diagram and the analysis in the previous section, TECV is the lowest class in the inheritance hierarchy and contains all the features listed in the previous section. So in activity_main we can use TECV as the elements' container view.
  • 2. With the layout defined, we can look at MainActivity, which contains one very important line: letdick.initialize(this);. This method must be called before the framework can be used, as it initializes some internals; it is recommended to call it when the App itself is initialized.
  • 3. Adding a TestElement was covered in the previous chapter, so I won't repeat it. Let's look at addStaticElement instead: clicking it triggers adding a StaticStickerElement, the static sticker element.
  • 4. The view of a StaticStickerElement is a SimpleDraweeView, so the main code in StaticStickerElement just constructs an ImageRequest; everything else is already handled. The code is simple, yet StaticStickerElement can display both local and web images. Doesn't the library feel simple to use while still working very well?
  • 5. This post is already over ten thousand words, so the remaining features in the library are left for readers to explore. I will publish a usage document on Github later; stars, forks, and issues are welcome.

IV. The end

Another article of ten thousand words; I hope you enjoy it. I have been busy recently, so blog updates will not be as regular as before. I hope you will forgive me, but however busy I am, my articles will always be carefully prepared technical content; I will not publish filler or anxiety-mongering pieces just to boost exposure. The road is long, so let's make progress together!

Serial articles

  • 1. Write a Douyin app from scratch — start
  • 4. Copied a Douyin App from scratch — log and buried point and preliminary back-end architecture
  • 5. Copied a Douyin App from scratch — App architecture update and network layer customization
  • 6. Write a Douyin App from scratch — start with audio and video
  • 7. Write a Douyin App from scratch — a minimalist video player based on FFmpeg
  • 8. Write a Douyin App from scratch — build a cross-platform video editing SDK project
  • 9. Copied a Douyin App from scratch — fully analyzed the source code of Android drawing mechanism and Surface family

No anxiety peddling, no clickbait; just sharing some interesting things about the world. Topics include, but are not limited to: science fiction, science, technology, the Internet, programmers, and computer programming. Below is my WeChat public account, Interesting things in the world, with plenty of practical content waiting for you.

reference

  • 1. The two-finger gesture code was borrowed from this developer's Github