Front-end “Componentization” Series Catalogue

  • Build components with a JSX parser
  • “Two”: Use JSX to establish a markup component style
  • “Three”: Use JSX to build the Carousel component
  • “Four”: Timelines and animation in JavaScript
  • “Five”: A cubic Bézier animation library in JavaScript
  • “Six”: A gesture library in JavaScript – the listening logic
  • “Seven”: A gesture library in JavaScript – the gesture logic
  • “Eight”: A gesture library in JavaScript – multi-button support
  • “Nine”: A gesture library in JavaScript – event dispatch and Flick events
  • Using JavaScript to create a gesture library
  • To be continued…

After several iterations, our gesture library's functionality is complete. But at this point the code badly needs to be reorganized and encapsulated. If you remember, the element we listened to, `element`, was originally hard-coded. But in a gesture library, the element we bind must be determined by the user of the library.

Some of you might ask: “Why not think about the design and encapsulate it from the beginning? Now that all the functionality is implemented, isn't going back to encapsulate it a waste of time?”

In fact, if we had tried to design the packaging and architecture of this library from the start, we would have had to weigh many factors before any functionality even existed. A design or architecture drawn up that early usually gets revised N times, and we would end up spending far more time designing than implementing the features. If we implement the functionality first and then encapsulate it, the job becomes much easier.

So let’s start encapsulating the gesture library!

The first step in encapsulating the gesture library is to list the existing functions and categorize them. Our gesture library actually consists of three parts:

  • Listener
    • Mouse events
      • mousedown
      • mouseup
      • mousemove
    • Touch events
      • touchstart
      • touchmove
      • touchend
      • touchcancel
  • Recognizer
    • start()
    • move()
    • end()
  • Dispatcher
    • dispatch()

If we want to expose the library as an API, we can use these three parts to decouple it.

The three parts above have a serial, even nested, relationship. First we instantiate a Listener. The Listener needs a Recognizer to recognize the events it listens to. Finally, the Recognizer needs a Dispatcher, through which the recognized gestures are dispatched.

So we end up calling the gesture API like this:

```js
new Listener(new Recognizer(new Dispatcher()));
```

The Listener

So let’s see how we implement a Listener.

Because a Listener instance receives a Recognizer by default, we start by writing a constructor that accepts the Recognizer passed in. The Listener also needs to know which element it is listening to, so the constructor receives an element as well.

```js
/**
 * Listener
 */
export class Listener {
  constructor(element, recognizer) {}
}
```

Then we copy all the listening functions we wrote earlier into the Listener class. Instead of calling the start, move and end handlers directly, we now call them through the recognizer.

After that, our Listener class should look like this.

```js
/**
 * Listener
 */
export class Listener {
  constructor(element, recognizer) {
    let contexts = new Map();
    let isListeningMouse = false;

    element.addEventListener('mousedown', event => {
      let context = Object.create(null);
      contexts.set(`mouse${1 << event.button}`, context);

      recognizer.start(event, context);

      let mousemove = event => {
        let button = 1;

        while (button <= event.buttons) {
          if (button & event.buttons) {
            let key;
            // The bit values of event.buttons differ from event.button:
            // the middle (4) and right (2) buttons are swapped, so swap the key back
            if (button === 2) {
              key = 4;
            } else if (button === 4) {
              key = 2;
            } else {
              key = button;
            }

            let context = contexts.get('mouse' + key);
            recognizer.move(event, context);
          }
          button = button << 1;
        }
      };

      let mouseup = event => {
        let context = contexts.get(`mouse${1 << event.button}`);
        recognizer.end(event, context);
        contexts.delete(`mouse${1 << event.button}`);

        if (event.buttons === 0) {
          document.removeEventListener('mousemove', mousemove);
          document.removeEventListener('mouseup', mouseup);
          isListeningMouse = false;
        }
      };

      if (!isListeningMouse) {
        document.addEventListener('mousemove', mousemove);
        document.addEventListener('mouseup', mouseup);
        isListeningMouse = true;
      }
    });

    element.addEventListener('touchstart', event => {
      for (let touch of event.changedTouches) {
        let context = Object.create(null);
        contexts.set(touch.identifier, context);
        recognizer.start(touch, context);
      }
    });

    element.addEventListener('touchmove', event => {
      for (let touch of event.changedTouches) {
        let context = contexts.get(touch.identifier);
        recognizer.move(touch, context);
      }
    });

    element.addEventListener('touchend', event => {
      for (let touch of event.changedTouches) {
        let context = contexts.get(touch.identifier);
        recognizer.end(touch, context);
        contexts.delete(touch.identifier);
      }
    });

    // 'touchcancel' is the standard DOM event name
    element.addEventListener('touchcancel', event => {
      for (let touch of event.changedTouches) {
        let context = contexts.get(touch.identifier);
        recognizer.cancel(touch, context);
        contexts.delete(touch.identifier);
      }
    });
  }
}
```

That completes the Listener. Next we can encapsulate the Recognizer.

The Recognizer

The Recognizer encapsulates our start, move, end and cancel functions. All these functions do is identify the gesture type from the mouse or touch events and dispatch the corresponding gesture event.

First, a Recognizer instance needs to receive a Dispatcher. Once an event is fully recognized, the Recognizer calls the Dispatcher's dispatch function to send it out. So we simply record the Dispatcher in the constructor.

We then copy the four functions we wrote earlier into the Recognizer. After that, our entire Recognizer looks like this:

```js
/**
 * Recognizer
 */
export class Recognizer {
  constructor(dispatcher) {
    this.dispatcher = dispatcher;
  }

  start(point, context) {
    context.startX = point.clientX;
    context.startY = point.clientY;

    context.points = [
      {
        t: Date.now(),
        x: point.clientX,
        y: point.clientY,
      },
    ];

    context.isPan = false;
    context.isTap = true;
    context.isPress = false;

    context.handler = setTimeout(() => {
      context.isPan = false;
      context.isTap = false;
      context.isPress = true;
      console.log('press-start');
      context.handler = null;
    }, 500);
  }

  move(point, context) {
    let dx = point.clientX - context.startX,
      dy = point.clientY - context.startY;

    if (!context.isPan && dx ** 2 + dy ** 2 > 100) {
      context.isPan = true;
      context.isTap = false;
      context.isPress = false;
      console.log('pan-start');
      clearTimeout(context.handler);
    }

    if (context.isPan) {
      console.log(dx, dy);
      console.log('pan');
    }

    context.points = context.points.filter(point => Date.now() - point.t < 500);

    context.points.push({
      t: Date.now(),
      x: point.clientX,
      y: point.clientY,
    });
  }

  end(point, context) {
    context.isFlick = false;

    if (context.isTap) {
      // Replace the old console.log('tap') with the dispatch call.
      // This event does not need any special properties, so we pass an empty object.
      this.dispatcher.dispatch('tap', {});
      clearTimeout(context.handler);
    }

    context.points = context.points.filter(point => Date.now() - point.t < 500);

    let d, v;
    if (!context.points.length) {
      v = 0;
    } else {
      d = Math.sqrt(
        (point.clientX - context.points[0].x) ** 2 +
          (point.clientY - context.points[0].y) ** 2
      );
      v = d / (Date.now() - context.points[0].t);
    }

    if (v > 1.5) {
      context.isFlick = true;
      this.dispatcher.dispatch('flick', {});
    } else {
      context.isFlick = false;
    }

    if (context.isPan) {
      this.dispatcher.dispatch('panend', {});
    }
    if (context.isPress) {
      console.log('press-end');
    }
  }

  cancel(point, context) {
    clearTimeout(context.handler);
    console.log('cancel');
  }
}
```

You will notice that not all of the events in these four handlers are dispatched yet; after recognizing some of them we still just print with console.log.

So let's finish this part of the logic.

The first is press (or press-start), which does not need any parameters, so we just dispatch it directly:

```js
context.handler = setTimeout(() => {
  context.isPan = false;
  context.isTap = false;
  context.isPress = true;
  this.dispatcher.dispatch('press');
  context.handler = null;
}, 500);
```

Next is panstart, the event fired when a pan movement begins, and this one needs to carry data. Here are the key values to dispatch:

  • startX – the x coordinate of the starting point
  • startY – the y coordinate of the starting point
  • clientX – the x coordinate of the current position
  • clientY – the y coordinate of the current position
  • isVertical – whether the current movement is vertical; this flag is useful for orientation-dependent features, so we add the judgment here.
    • The calculation is simple: if the horizontal distance dx is smaller than the vertical distance dy, the movement is vertical; otherwise it is horizontal.
    • Note that we compare absolute distances and ignore the sign of the direction (left vs. right, up vs. down; only the length of the movement matters).
    • So we want dx and dy as positive numbers, which is what Math.abs() gives us.

For this event, these pieces of data are enough. If panstart ever needs to expose more data, we can come back to the library and add it.

```js
if (!context.isPan && dx ** 2 + dy ** 2 > 100) {
  context.isPan = true;
  context.isTap = false;
  context.isPress = false;
  context.isVertical = Math.abs(dx) < Math.abs(dy);
  this.dispatcher.dispatch('panstart', {
    startX: context.startX,
    startY: context.startY,
    clientX: point.clientX,
    clientY: point.clientY,
    isVertical: context.isVertical,
  });
  clearTimeout(context.handler);
}
```

The pan event that follows uses the same logic as panstart:

```js
this.dispatcher.dispatch('pan', {
  startX: context.startX,
  startY: context.startY,
  clientX: point.clientX,
  clientY: point.clientY,
  isVertical: context.isVertical,
});
```

Here we have changed where panend is triggered, because sometimes a feature only needs to know that a movement ended, regardless of whether it was a flick. So it was a mistake not to emit panend when the movement was a flick.

Therefore we move the panend dispatch after the flick judgment. Its outgoing parameters are the same as pan's above, plus an isFlick property, so the user of the gesture library also learns whether the movement ended as a flick.

Although panend now carries the isFlick flag, there are scenarios where we need to listen for the flick event alone. So if the current movement is a flick, we also dispatch a separate flick event, and add the velocity to its outgoing parameters.

In the end, the code around the isPan judgment in end() looks like this:

```js
let d, v;
if (!context.points.length) {
  v = 0;
} else {
  d = Math.sqrt(
    (point.clientX - context.points[0].x) ** 2 +
      (point.clientY - context.points[0].y) ** 2
  );
  v = d / (Date.now() - context.points[0].t);
}

if (v > 1.5) {
  context.isFlick = true;
  this.dispatcher.dispatch('flick', { velocity: v });
} else {
  context.isFlick = false;
}

if (context.isPan) {
  this.dispatcher.dispatch('panend', {
    startX: context.startX,
    startY: context.startY,
    clientX: point.clientX,
    clientY: point.clientY,
    isVertical: context.isVertical,
    isFlick: context.isFlick,
  });
}
```

Finally, there are still the press-end and cancel events to dispatch. For these we can just call this.dispatcher.dispatch without passing any extra properties, because these events don't need them.
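To make that concrete, here is a minimal sketch of those two parameterless dispatches. It stands an `EventTarget` in for the DOM element so it runs outside a browser (Node 15+), and the event name `pressend` is my assumption, mirroring the old `console.log('press-end')`:

```javascript
// A minimal sketch of the parameterless press-end / cancel dispatches.
// Assumptions: Node 15+ (global Event / EventTarget) or a browser;
// the event name 'pressend' is assumed, mirroring console.log('press-end').
class Dispatcher {
  constructor(element) {
    this.element = element;
  }
  dispatch(type, properties) {
    const event = new Event(type);
    for (const name in properties) {
      event[name] = properties[name];
    }
    this.element.dispatchEvent(event);
  }
}

const target = new EventTarget(); // stands in for the DOM element
const dispatcher = new Dispatcher(target);

const seen = [];
target.addEventListener('pressend', () => seen.push('pressend'));
target.addEventListener('cancel', () => seen.push('cancel'));

// In Recognizer.end(): if (context.isPress) this.dispatcher.dispatch('pressend');
dispatcher.dispatch('pressend');
// In Recognizer.cancel(): this.dispatcher.dispatch('cancel');
dispatcher.dispatch('cancel');

console.log(seen); // → [ 'pressend', 'cancel' ]
```

Because `dispatch` iterates `properties` with `for...in`, passing no second argument at all is also safe: iterating `undefined` simply does nothing.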

The Dispatcher

Finally, we implement the Dispatcher that the Recognizer uses. This is as simple as moving our dispatch function into a Dispatcher class. And because the element is passed into the Dispatcher, we need to receive it and record it as a class property in the constructor.

```js
/**
 * Dispatcher
 */
export class Dispatcher {
  constructor(element) {
    this.element = element;
  }
  dispatch(type, properties) {
    let event = new Event(type);
    for (let name in properties) {
      event[name] = properties[name];
    }
    this.element.dispatchEvent(event);
  }
}
```

An all-in-one enable function

Finally, we add a function that lets users enable our gesture library with a single call. Remember the design principle of “high cohesion”: users shouldn't need to know the complex details of what we encapsulate or how to wire it up. Exposing a simple, convenient method makes the feature far more user-friendly.

So we add the enableGesture function, which accepts an element parameter and enables all of our gesture listening for that element.

```js
/**
 * Enable the gesture library's listeners for an element
 * @param {Element} element
 */
export function enableGesture(element) {
  new Listener(element, new Recognizer(new Dispatcher(element)));
}
```

With that, we have a complete gesture library, ready to provide gesture functionality to our Carousel component.

Next, let's test whether the code we've wrapped is reliable. In our gesture.html, import the enableGesture function we just wrote:

```html
<body oncontextmenu="event.preventDefault()"></body>
<!-- type="module" is required for the import statement to work -->
<script type="module">
  import { enableGesture } from './gesture.js';
  enableGesture(document.documentElement);

  document.documentElement.addEventListener('tap', () => {
    console.log('Tapped!');
  });
</script>
```

When we click on a blank area of the page in the browser, the console should print “Tapped!” without any errors. This proves our gesture library is ready for use.
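As a further hedged sketch, here is one way a carousel might consume the pan events the library now emits. The drag math and the `EventTarget` stand-in are illustrative assumptions, not the actual Carousel code from earlier in the series:

```javascript
// Hedged sketch: how a carousel might consume panstart/pan/panend.
// Assumptions: Node 15+ (global Event / EventTarget) or a browser;
// the drag math is illustrative, not the series' actual Carousel code.
const slider = new EventTarget(); // stands in for the carousel's root element

let startOffset = 0;
let offset = 0;

slider.addEventListener('panstart', () => {
  startOffset = offset;
});
slider.addEventListener('pan', e => {
  // follow the pointer horizontally; ignore vertical movement
  if (!e.isVertical) offset = startOffset + (e.clientX - e.startX);
});
slider.addEventListener('panend', e => {
  // a flick advances a slide; otherwise snap to the nearest one
  console.log(e.isFlick ? 'advance one slide' : 'snap to nearest slide');
});

// Simulate what the Dispatcher would emit during a horizontal drag:
function emit(type, properties) {
  const event = new Event(type);
  Object.assign(event, properties);
  slider.dispatchEvent(event);
}

emit('panstart', { startX: 0, startY: 0, clientX: 0, clientY: 0, isVertical: false });
emit('pan', { startX: 0, startY: 0, clientX: 120, clientY: 4, isVertical: false });
emit('panend', { startX: 0, startY: 0, clientX: 120, clientY: 4, isVertical: false, isFlick: false });

console.log(offset); // → 120
```

In a real page you would call enableGesture(slider) instead of emitting the events by hand; the listener code stays exactly the same.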

I'm TriDiamond from Tech Galaxy, a tech guy who is reinventing knowledge. See you next time.


⭐️ TriDiamond Recommends

Open Source Project Recommendation

Hexo Theme Aurora

Version 1.5.0 recently added the following features:

“Preview”

✨ New

  • Adaptive “Recommended articles” layout (added a new “Top article” layout!)
    • Ability to toggle between “Recommended articles” and “Top articles” modes
    • Automatically switches to “Top articles” mode when there are fewer than 3 articles in total
    • Added “Top” and “Recommended” badges to article cards
    • 📖 Documentation
  • Added VuePress-style custom containers (#77)
    • Info container
    • Warning container
    • Danger container
    • Detail container
    • Preview
  • Support for more SEO metadata (#76)
    • Added description
    • Added keywords
    • Added author
    • 📖 Documentation

Recently, the blogger has been fully engaged in developing a “futuristic” Hexo theme, a blog theme based on the aurora.

If you're a developer, a personal blog can be another bright spot on your resume. And if your blog is really cool, it shines even brighter. It's just sparkling.

If you like this theme, please give it a 🌟 on GitHub and let's shine together!

GitHub: github.com/auroral-ui/… Theme documentation: aurora.tridiamond.tech/useful/


VSCode Aurora Future

Yes, the blogger also made a VSCode theme for Aurora, using the Hexo Theme Aurora color scheme. The key feature of this theme is that it uses only 3 colors, which reduces the distraction of multi-colored code and lets you focus on writing code.

If you like it, you can support it too! Just type “Aurora Future” into VSCode's extension search to find this theme. ~

GitHub: github.com/auroral-ui/… Marketplace: marketplace.visualstudio.com/items?itemN…


Firefox Aurora Future

I don't know about you, but I've been using Firefox for development lately. I think Firefox is really good, and I recommend you try it.

And of course, what I want to show you here is that I made an Aurora theme for Firefox as well. Right! They use the same color system. If you like it, give it a try!

Theme address: addons.mozilla.org/en-US/firef…