This article is divided into the following seven sections:

  1. Technology selection
  2. Unified specification
  3. Testing
  4. Deployment
  5. Monitoring
  6. Performance optimization
  7. Refactoring

Each section provides a very detailed, hands-on tutorial that you can follow along with.

I have also written a front-end engineering demo on GitHub. It includes JavaScript, CSS, and Git commit validation. For the JavaScript and CSS validation you need to install VSCode, which is covered in the tutorial below.

Technology selection

For the front end, technology selection is fairly simple. It is a multiple-choice question among the three major frameworks. In my opinion, you can choose based on the following two criteria:

  1. Pick the one you or your team knows best, and make sure you have someone to fill in the blanks when the going gets tough.
  2. Choose the one with a high market share. In other words, choose the one that makes hiring easier.

The second point is especially important for small companies. Small companies have a hard time hiring, and if you choose a framework with a low market share (such as Angular), you won’t see many resumes…

Choosing a UI component library is even simpler: just pick the one with the most stars on GitHub. More stars means more users, which means others have already stepped on most of the pitfalls for you, saving you trouble.

Unified specification

Code specification

Let’s start with the benefits of the unified code specification:

  • Canonical code promotes teamwork
  • Canonical code can reduce maintenance costs
  • Canonical code facilitates code review
  • Developing the habit of code specification helps the programmer grow

When team members strictly follow the code specification, you can ensure that everyone’s code looks like it was written by one person, and that looking at someone else’s code is like looking at your own. More importantly, we can recognize the importance of the specification and adhere to the development habits of the specification.

How to develop code specifications

It is recommended to find a good code specification and tailor it to your team’s needs.

Here are some popular JavaScript code specifications, with their GitHub star counts:

  • Airbnb (101k stars, English version; a Chinese translation is also available)
  • Standard (24.5k stars, Chinese version available)
  • Baidu front-end coding specification (3.9k stars)

There are also many CSS code specifications, such as:

  • Styleguide (2.3k stars)
  • Spec (3.9k stars)

How do I check code specifications

With ESLint you can check whether code conforms to your team’s specification. Here’s how to configure ESLint to check your code.

  1. Download the dependencies:

// eslint-config-airbnb-base applies the Airbnb code specification
npm i -D babel-eslint eslint eslint-config-airbnb-base eslint-plugin-import
  2. Configure the .eslintrc file:
{
    "parserOptions": {
        "ecmaVersion": 2019
    },
    "env": {
        "es6": true
    },
    "parser": "babel-eslint",
    "extends": "airbnb-base"
}
  3. In the scripts field of package.json, add the line "lint": "eslint --ext .js test/ src/". Then run npm run lint to start checking your code. Here test/ src/ are the directories whose code you want to check, i.e. the test and src directories.

However, this is an inefficient way to check the code. You have to check it manually every time. And you have to manually modify the code if you get an error.

To improve the above shortcomings, we can use VSCode. Using it with the appropriate configuration automatically validates and formats your code every time you save it, saving you the hassle of doing it yourself.
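As a reference, a minimal .vscode/settings.json along these lines might look like the sketch below; the exact option names depend on the version of the ESLint extension you install, so treat it as an illustration rather than the definitive configuration:

// .vscode/settings.json
{
    // Run ESLint's auto-fix whenever a file is saved
    "editor.codeActionsOnSave": {
        "source.fixAll.eslint": true
    },
    // Optionally also run the editor's formatter on save
    "editor.formatOnSave": true
}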

For CSS, code specifications are checked with the stylelint plugin.

For details on how to configure this, see my article ESlint + Stylelint + VSCode (2020).

Git specification

Git specification includes two aspects: branch management specification and Git Commit specification.

Branch management specification

Projects are generally divided into a master branch and other branches.

When a team member develops a new feature or fixes a bug, they open a new branch from the master branch. For example, if the project is to be changed from client-side rendering to server-side rendering, open a branch named ssr and merge it back into master when the work is done.

When fixing a bug, you can also open a new branch from master and name it after the bug number (although our small team didn’t bother unless it was a very big bug).
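A minimal sketch of this branch workflow, using the ssr branch from the example above (the branch names are only illustrative):

git checkout master
git pull                   # make sure master is up to date
git checkout -b ssr        # open a feature branch from master
# ...develop and commit on the ssr branch...
git checkout master
git merge ssr              # merge back into master when the work is done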

Git commit specification

<type>(<scope>): <subject>
<BLANK LINE>
<body>
<BLANK LINE>
<footer>

A commit message is roughly divided into three parts (separated by blank lines):

  1. Title line: mandatory; describes the type and main content of the change
  2. Body: describes why the change was made, what was changed, the reasoning behind it, etc.
  3. Footer: notes, such as comments or links to the related bug numbers

  1. Type: indicates the type of the commit:

  • Feat: new features
  • Fix: bug fixes
  • Perf: code changes that improve performance
  • Refactor: code refactoring (changes that do not affect the behavior or functionality of the code)
  • Docs: documentation changes
  • Style: code formatting changes, not CSS changes (e.g. adding or removing semicolons)
  • Test: test cases added or modified
  • Build: changes that affect the project build or its dependencies
  • Revert: reverts a previous commit
  • Ci: changes to continuous-integration files
  • Chore: other changes (changes that do not fit the types above)
  • Release: releases a new version
  • Workflow: changes to workflow files

  2. Scope: the scope affected by the commit, such as route, component, utils, build…
  3. Subject: an overview of the commit
  4. Body: the specific changes of the commit; can span multiple lines
  5. Footer: notes, usually links to BREAKING CHANGES or fixed bugs

Examples

Fix (bug fixes)

If the bug fix only affects the file currently being modified, the scope can be omitted. If the impact is larger, add a scope.

For example, if the fix affects the whole project, use global as the scope. If a directory or a feature is affected, use the directory path or the feature name as the scope.

// Example 1
fix(global): fixed an issue where the checkbox could not be checked

// Example 2: "common" in parentheses is the name of the common-management module
fix(common): fixed small fonts by changing the default font size of all pages under common management to 14px

// Example 3
fix: value.length -> values.length
Feat (new features or new pages)

feat: add the static page for the spot-check task

Some description of the spot-check task static page.

Here is a note; it could be a bug link or something else important.
Chore (Other modifications)

Chore means routine work; as the name implies, changes that do not fit any other commit type can be represented by chore.

chore: replace "view detail" in the form with "detail"

The other commit types are similar to the three examples above, so I won’t go through them one by one.
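For reference, here is a made-up commit message that uses all three parts (title, body, and footer); the feature and issue number are invented for illustration:

feat(user): add a password strength indicator to the sign-up form

Show a coloured bar under the password field that updates as the user
types, so weak passwords are caught before the form is submitted.

Closes #123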

Verify the Git Commit specification

Git commit messages are validated mainly through Git hook functions. Of course, you’ll need to download a helper tool for the validation.

Download the helper tool

npm i -D husky

Add the following code to package.json

"husky": {
  "hooks": {
    "pre-commit": "npm run lint"."commit-msg": "node script/verify-commit.js"."pre-push": "npm test"}}Copy the code

Then create a script folder in your project root directory, and create a verify-commit.js file inside it with the following code:

const msgPath = process.env.HUSKY_GIT_PARAMS
const msg = require('fs')
    .readFileSync(msgPath, 'utf-8')
    .trim()

const commitRE = /^(feat|fix|docs|style|refactor|perf|test|workflow|build|ci|chore|release)(\(.+\))?: .{1,50}/

if (!commitRE.test(msg)) {
    console.log()
    console.error('Invalid commit message format. Please see the git commit specification: https://github.com/woai3c/Front-end-articles/blob/master/git%20commit%20style.md')

    process.exit(1)
}

Now to explain what each hook means:

  1. "pre-commit": "npm run lint"In thegit commitFormer executivenpm run lintCheck the code format.
  2. "commit-msg": "node script/verify-commit.js"In thegit commitExecute the script whenverify-commit.jsVerify the COMMIT message. If it does not match the format defined in the script, an error will be reported.
  3. "pre-push": "npm test", before you executegit pushExecute the code before pushing it to the remote repositorynpm testTest. If the test fails, the push will not be executed.

The project specification

This is mainly the way the project files are organized and named.

Take our Vue project as an example.

├─ public
├─ src
├─ test

A project contains public (static resources that are not processed by webpack), src (source code), and test (test code). The src directory can be further subdivided:

├─ api (interface requests)
├─ assets (static resources)
├─ components (common components)
├─ router (routes)
├─ store (vuex)
├─ utils (utility functions)
├─ views (pages)

If the file names are too long, separate them with hyphens (-).

The UI specification

UI specifications need to be discussed and agreed on by the front end, the UI designers, and the product team. It is also recommended to settle on a unified UI component library.

Benefits of having UI specifications:

  • Unified page UI standard, save UI design time
  • Improve front-end development efficiency

Testing

Testing is an essential part of front-end engineering construction. Its function is to find bugs. The earlier you find bugs, the lower the cost you need to pay. Moreover, its role is more important in the future than in the present.

Imagine adding a new feature to your project a year or two later. After adding a new feature, you are not sure whether the original feature will be affected, so you need to test it. So much time has passed that you no longer understand the project’s code. In this case, if you don’t write the test, you have to manually try it over and over again. If you write tests, you only need to run the test code once, saving time and effort.

Writing tests also allows you to make changes to your code without having to think, is there a problem here? Will it cause bugs? You don’t have that worry when you write tests.

The most commonly used kinds of front-end tests are unit tests and end-to-end (E2E) tests. I’m not very familiar with E2E testing, so I’ll focus on unit testing here.

Unit testing

A unit test is a test of a function, a component, or a class, which is of relatively small granularity.

How should I write it?

  1. Test for correctness: correct input should produce the expected result.
  2. Test for exceptions: invalid input should produce an error result.

Test a function

For example, for an absolute-value function abs(): if we input 1 or 2, the result should equal the input; if we input -1 or -2, the result should be the opposite of the input; and if we input a non-number such as "abc", a type error should be thrown.
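As a sketch, here is what those three cases could look like as Jest unit tests. The abs implementation and its module path are assumptions made for illustration:

// abs is assumed to be exported from src/math.js, e.g.
// function abs(n) {
//     if (typeof n !== 'number') throw new TypeError('expected a number')
//     return n < 0 ? -n : n
// }
const { abs } = require('../src/math')

describe('abs', () => {
    it('returns the input for positive numbers', () => {
        expect(abs(1)).toBe(1)
        expect(abs(2)).toBe(2)
    })

    it('returns the opposite of the input for negative numbers', () => {
        expect(abs(-1)).toBe(1)
        expect(abs(-2)).toBe(2)
    })

    it('throws a type error for non-numeric input', () => {
        expect(() => abs('abc')).toThrow(TypeError)
    })
})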

Test a class

Suppose you have a class like this:

class Math {
    abs() {

    }

    sqrt() {

    }

    pow() {

    }
    ...
}

For unit tests, you must test all the methods of the class.
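For example, a Jest test covering every method might look like the sketch below. MyMath is a hypothetical, implemented version of the class above, renamed so it does not shadow the built-in Math object, and its module path is made up:

const MyMath = require('../src/my-math') // hypothetical module path

describe('MyMath', () => {
    const math = new MyMath()

    it('abs returns the absolute value', () => {
        expect(math.abs(-1)).toBe(1)
    })

    it('sqrt returns the square root', () => {
        expect(math.sqrt(4)).toBe(2)
    })

    it('pow raises a number to a power', () => {
        expect(math.pow(2, 10)).toBe(1024)
    })
})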

Test a component

Component testing is harder because many components involve DOM manipulation.

For example, if an upload image component has a method to convert the image into Base64 code, how do you test it? Tests are usually run in a Node environment, which has no DOM objects.

Let’s review the uploading process:

  1. Click the <input type="file" /> element and select an image to upload.
  2. The input's change event fires, and we obtain the file object.
  3. Use FileReader to convert the image into base64.

The process is the same as the following code:

document.querySelector('input').onchange = function fileChangeHandler(e) {
    const file = e.target.files[0]
    const reader = new FileReader()
    reader.onload = (res) => {
        const fileResult = res.target.result
        console.log(fileResult) // Output the base64 string
    }

    reader.readAsDataURL(file)
}

The above code is only a simulation; in practice it would look more like this:

document.querySelector('input').onchange = function fileChangeHandler(e) {
    const file = e.target.files[0]
    tobase64(file)
}

function tobase64(file) {
    return new Promise((resolve, reject) => {
        const reader = new FileReader()
        reader.onload = (res) => {
            const fileResult = res.target.result
            resolve(fileResult) // Output the base64 string
        }

        reader.readAsDataURL(file)
    })
}

As you can see, the code above relies on the browser's event object and on FileReader. In other words, as long as we can provide these two objects, the code can run in any environment. So we add them to the test environment:

// Mock File
window.File = function () {}

// Mock FileReader
window.FileReader = function () {
    this.readAsDataURL = function () {
        this.onload
            && this.onload({
                target: {
                    result: fileData,
                },
            })
    }
}

The test can then be written like this:

// The file content is written in advance
const fileData = 'data:image/test'

// Provide a fake file object to the tobase64() function
function test() {
    const file = new File()
    const event = { target: { files: [file] } }
    file.type = 'image/png'
    file.name = 'test.png'
    file.size = 1024

    it('file content', (done) => {
        tobase64(file).then(base64 => {
            expect(base64).toEqual(fileData) // 'data:image/test'
            done()
        })
    })
}

// Run the test
test()

With this hack, we can test components that involve DOM manipulation. The unit tests in my vue-upload-imgs library are written this way; take a look if you are interested.

TDD (test-driven development)

TDD is about writing test code ahead of time based on requirements, and then implementing functionality based on the test code.

TDD has good intentions, but if your needs are constantly changing (you get the idea), it’s not a good thing. Chances are, you’re changing the test code every day, and the business code isn’t moving much. So up until now, in over three years as a programmer, I have never tried TDD development.

Even so, you should try TDD if you get the chance, for example on a personal project when you are not busy, where you can use this method to write test cases.

Test Framework Recommendation

The testing framework I use most is Jest. Its advantages are Chinese documentation and a clear, easy-to-understand API.

Deployment

Before I learned to automate deployment, I deployed projects like this:

  1. Run the tests: npm run test.
  2. Build the project: npm run build.
  3. Put the packaged files on a static server.

Once or twice is ok, but if you do it every day, you will waste a lot of time on repetitive operations. So we have to learn to deploy automatically, free our hands completely.

Automatic Deployment (also known as Continuous Deployment (CD)) is triggered in two ways:

  1. Polling.
  2. Listening for webhook events.

Polling

Polling is when the build software automatically performs packaging and deployment operations at regular intervals.

This is not a good approach. I might change the code right after a deployment finishes, and to see the new page I would have to wait until the next scheduled build.

As a side effect, if I don’t change the code for a day, the build software will continue to perform packaging and deployment operations, wasting resources.

Therefore, the current build software is basically deployed by listening for Webhook events.

Listening for webhook events

Webhook functions are set on your build software to listen for certain events (usually push events) and automatically execute a defined script when the event is triggered.

Github Actions, for example, has this feature.
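As a rough sketch, a GitHub Actions workflow triggered by the push webhook event might look like this; the file name, branch, and Node version are assumptions, and the actual deploy step depends on your server, so it is omitted:

# .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [master]          # run when code is pushed to master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci             # install dependencies
      - run: npm test           # run the tests
      - run: npm run build      # build the project
      # the step that uploads dist/ to your static server goes here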

For beginners, it’s not possible to learn automatic deployment just from this overview. For this reason, I have written an automated deployment tutorial that requires no prior knowledge: just follow the steps to set up automatic deployment for a front-end project.

The tutorial is "Automate deployment of front-end projects — super detailed tutorial (Jenkins, Github Actions)". If you find it useful, don’t forget to like it. Thank you very much.

Monitoring

Monitoring is divided into performance monitoring and error monitoring. Its role is to warn of problems early and to help track down where they occur.

Performance monitoring

In performance monitoring, window.performance is used to collect data.

The Performance interface, which is part of the High Resolution Time API, can obtain performance-related information in the current page. It also integrates Performance Timeline API, Navigation Timing API, User Timing API and Resource Timing API.

The API's timing property contains the start and end times of each page-loading phase.

To make the meaning of the various timing attributes easier to understand, I am reprinting an explanation a user posted on Zhihu (I forgot the author’s name and couldn’t find it again later, sorry).

timing: {
    // The timestamp at which the previous page (in the same browser) finished unloading. If there is no previous page, this value is the same as fetchStart.
    navigationStart: 1543806782096,
    // The timestamp at which the previous page's unload event was fired. If there is no previous page, it returns 0.
    unloadEventStart: 1543806782523,
    // The timestamp at which the unload event handling (corresponding to unloadEventStart) finished. If there is no previous page, it returns 0.
    unloadEventEnd: 1543806782523,
    // The timestamp at which the first HTTP redirect started. If there is no redirect, or the redirect crosses origins, it returns 0.
    redirectStart: 0,
    // The timestamp at which the last HTTP redirect completed (that is, when the last byte of the HTTP response was received).
    // If there is no redirect, or the redirect crosses origins, it returns 0.
    redirectEnd: 0,
    // The timestamp at which the browser was ready to fetch the document with an HTTP request. This is before any application cache is checked.
    fetchStart: 1543806782096,
    // The UNIX timestamp at which the DNS lookup started.
    // If a persistent connection is used, or the content comes from a cache or local resource, this value is the same as fetchStart.
    domainLookupStart: 1543806782096,
    // The timestamp at which the DNS lookup finished.
    // If a local cache is used (no DNS query) or a persistent connection is used, it equals fetchStart.
    domainLookupEnd: 1543806782096,
    // The timestamp at which the HTTP (TCP) connection started to be established.
    // If a persistent connection is used, or the content comes from a cache or local resource, this value is the same as fetchStart.
    connectStart: 1543806782099,
    // The timestamp at which the HTTP (TCP) connection between the browser and the server was established.
    // If a persistent connection is used, the value equals fetchStart. Established means all handshakes and authentication are complete.
    connectEnd: 1543806782227,
    // For HTTPS, the timestamp at which the browser and the server started the secure-connection handshake. If the page does not use a secure connection, it returns 0.
    secureConnectionStart: 1543806782162,
    // The timestamp at which the browser sent the HTTP request to the server (or started reading the local cache).
    requestStart: 1543806782241,
    // The timestamp at which the browser received the first byte from the server (or read it from the local cache).
    // If the transport layer fails after the request starts and the connection is reopened, this property reflects the new request.
    responseStart: 1543806782516,
    // The timestamp at which the browser received the last byte from the server (or read it from the local cache or local resource).
    // If the HTTP connection was closed before that, this is the timestamp at which it was closed.
    responseEnd: 1543806782537,
    // The timestamp at which parsing of the DOM started (that is, when document.readyState becomes "loading" and the corresponding readystatechange event fires).
    domLoading: 1543806782573,
    // The timestamp at which DOM parsing finished and embedded resources started loading (that is, when document.readyState becomes "interactive" and the corresponding readystatechange event fires).
    domInteractive: 1543806783203,
    // The timestamp at which the parser fired the DOMContentLoaded event, meaning that all scripts that need to be executed have been parsed.
    domContentLoadedEventStart: 1543806783203,
    // The timestamp at which all scripts that need to run immediately have been executed, regardless of order.
    domContentLoadedEventEnd: 1543806783216,
    // The timestamp at which the current document finished parsing: document.readyState becomes "complete" and the corresponding readystatechange event fires.
    domComplete: 1543806783796,
    // The timestamp at which the load event was fired. If the event has not been fired yet, the value is 0.
    loadEventStart: 1543806783796,
    // The timestamp at which the load event finished. If the event has not been fired or has not completed, the value is 0.
    loadEventEnd: 1543806783802
}

From the above data, we can derive several useful times:

// Redirect time
redirect: timing.redirectEnd - timing.redirectStart,
// DOM rendering time
dom: timing.domComplete - timing.domLoading,
// Page load time
load: timing.loadEventEnd - timing.navigationStart,
// Page unload time
unload: timing.unloadEventEnd - timing.unloadEventStart,
// Request time
request: timing.responseEnd - timing.requestStart,
// The current time when the performance information was collected
time: new Date().getTime(),

Another important time is the white-screen time, which is the time from entering the URL to the moment the page starts to display.

To get the white-screen time, place the following script before </head>.

<script>
    whiteScreen = new Date() - performance.timing.navigationStart
</script>

From these times, you can tell how well the page's first-screen loading performs.

In addition, the window.performance.getEntriesByType('resource') method returns information about related resources (js, CSS, img…), that is, all the resources currently loaded on the page.

It generally includes the following types

  • script
  • link
  • img
  • css
  • fetch
  • other
  • xmlhttprequest

All we need is the following information

// Name of the resource
name: item.name,
// Resource loading time
duration: item.duration.toFixed(2),
// Resource size
size: item.transferSize,
// Resource protocol
protocol: item.nextHopProtocol,

Now, write a few lines of code to collect this data.

// Collect performance information
const getPerformance = () => {
    if (!window.performance) return
    const timing = window.performance.timing
    const performance = {
        // Redirect time
        redirect: timing.redirectEnd - timing.redirectStart,
        // White-screen time
        whiteScreen: whiteScreen,
        // DOM rendering time
        dom: timing.domComplete - timing.domLoading,
        // Page load time
        load: timing.loadEventEnd - timing.navigationStart,
        // Page unload time
        unload: timing.unloadEventEnd - timing.unloadEventStart,
        // Request time
        request: timing.responseEnd - timing.requestStart,
        // The current time when the performance information was collected
        time: new Date().getTime(),
    }

    return performance
}

// Collect resource information
const getResources = () => {
    if (!window.performance) return
    const data = window.performance.getEntriesByType('resource')
    const resource = {
        xmlhttprequest: [],
        css: [],
        other: [],
        script: [],
        img: [],
        link: [],
        fetch: [],
        // The current time when the resource information was collected
        time: new Date().getTime(),
    }

    data.forEach(item => {
        const arry = resource[item.initiatorType]
        arry && arry.push({
            // Resource name
            name: item.name,
            // Resource load time
            duration: item.duration.toFixed(2),
            // Resource size
            size: item.transferSize,
            // Resource protocol
            protocol: item.nextHopProtocol,
        })
    })

    return resource
}

Summary

Reading the performance and resource information, we can determine the following reasons for slow page loading:

  1. Too many resources
  2. Network speed too slow
  3. Too many DOM elements

Apart from the user's slow network speed, which is out of our hands, the other two causes can be addressed; performance optimization is discussed in the later section "Performance optimization".

Error monitoring

There are now three types of errors that can be caught.

  1. Resource loading errors: use addEventListener('error', callback, true) to catch resource-loading failures during the capture phase.
  2. js execution errors: use window.onerror to catch js errors.
  3. promise errors: use addEventListener('unhandledrejection', callback) to catch promise errors, but there is no line or column information; you can only report the error message you throw yourself.

We can create an array variable errors, add the error information to the array when an error occurs, and then report it at a certain stage. Here's how:

// Capture resource-loading failures: js css img...
addEventListener('error', e => {
    const target = e.target
    if (target !== window) {
        monitor.errors.push({
            type: target.localName,
            url: target.src || target.href,
            msg: (target.src || target.href) + ' is load error',
            // The time when the error occurred
            time: new Date().getTime(),
        })
    }
}, true)

// Listen for js errors
window.onerror = function(msg, url, row, col, error) {
    monitor.errors.push({
        type: 'javascript',
        row: row,
        col: col,
        msg: error && error.stack ? error.stack : msg,
        url: url,
        // The time when the error occurred
        time: new Date().getTime(),
    })
}

// Listen for promise errors; the drawback is that line numbers cannot be obtained
addEventListener('unhandledrejection', e => {
    monitor.errors.push({
        type: 'promise',
        msg: (e.reason && e.reason.msg) || e.reason || '',
        // The time when the error occurred
        time: new Date().getTime(),
    })
})

Summary

By collecting errors, you can learn about the types and numbers of errors that occur on your site, and you can make adjustments to reduce them. The full code and DEMO can be found at the end of my other article on front-end performance and error monitoring. You can copy the code (HTML file) and test it locally.

Data reporting

Reporting Performance Data

The performance data can be reported after the page is loaded to minimize the impact on the page performance.

window.onload = () => {
    // Collect performance and resource information during the browser's idle time
    // https://developer.mozilla.org/zh-CN/docs/Web/API/Window/requestIdleCallback
    if (window.requestIdleCallback) {
        window.requestIdleCallback(() => {
            monitor.performance = getPerformance()
            monitor.resources = getResources()
        })
    } else {
        setTimeout(() => {
            monitor.performance = getPerformance()
            monitor.resources = getResources()
        }, 0)
    }
}

Of course, you can also set a timer and report periodically. In that case, it is better to compare the data before each report to avoid reporting the same data repeatedly.
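A minimal sketch of that idea: report on a timer, but skip the request when nothing has changed since the last report (the reporting endpoint and the axios client are placeholders):

let lastReported = ''

setInterval(() => {
    const payload = JSON.stringify({
        performance: monitor.performance,
        resources: monitor.resources,
        errors: monitor.errors,
    })

    // Only send the data when it differs from the last report
    if (payload !== lastReported) {
        lastReported = payload
        axios.post('/api/monitor', payload) // placeholder URL
    }
}, 10000) // every 10 seconds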

Reporting error data

The code I provided in the DEMO uses an errors array to collect all errors and report them uniformly at some stage (delayed reporting). Instead, you can report errors as they occur. This can avoid the problem of losing the error data when the user has closed the page before the error is reported.

// Listen for js errors
window.onerror = function(msg, url, row, col, error) {
    const data = {
        type: 'javascript',
        row: row,
        col: col,
        msg: error && error.stack ? error.stack : msg,
        url: url,
        // The time when the error occurred
        time: new Date().getTime(),
    }

    // Report immediately
    axios.post({ url: 'xxx', data, })
}

SPA

The window.performance API has a flaw in SPAs: when the route changes, the window.performance.timing data is not updated. So we need another way to measure the time from a route switch to the page finishing loading. Taking Vue as an example, one possible approach is to record the start time in the router's global before guard beforeEach, and then call vm.$nextTick in the component's mounted hook to record the time at which the component has finished rendering.

router.beforeEach((to, from, next) => {
    store.commit('setPageLoadedStartTime', new Date())
    next()
})

mounted() {
    this.$nextTick(() => {
        this.$store.commit('setPageLoadedTime', new Date() - this.$store.state.pageLoadedStartTime)
    })
}

We can do more than just performance and error monitoring.

Collecting User Information

navigator

Use window.navigator to collect the user's device, operating system, and browser information…

UV (Unique Visitor)

This refers to the natural persons who visit and browse the page over the Internet. A client that accesses your site counts as one visitor, and the same client is counted only once between 00:00 and 24:00, so multiple visits by the same visitor within one day count as a single UV. When a user visits the site, you can generate a random string plus the current date and time and save it locally (regenerating it once more than 24 hours have passed). Whenever the page makes a request, send this identifier to the back end, which uses it to produce UV statistics.
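A minimal sketch of that idea (the storage key and the ID format are made up for illustration):

function getVisitorId() {
    const saved = JSON.parse(localStorage.getItem('visitor-id') || 'null')
    const now = Date.now()

    // Reuse the saved ID if it is less than 24 hours old
    if (saved && now - saved.time < 24 * 60 * 60 * 1000) {
        return saved.id
    }

    // Otherwise generate a new one: random string + timestamp
    const id = Math.random().toString(36).slice(2) + '-' + now
    localStorage.setItem('visitor-id', JSON.stringify({ id, time: now }))
    return id
}

// Send getVisitorId() along with each report so the back end can count UVs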

PV (Page View)

That is, the number of page views or clicks: each time a user visits any page of the website, 1 PV is recorded. Repeated visits to the same page are counted each time, so PV measures how many pages the site's users visit.

Page stay time

On a traditional website, when a user enters page A, the entry time is sent to the back end through a request. Ten minutes later, when the user enters page B, the back end can deduce from the parameters passed by the interface that the user stayed on page A for ten minutes. An SPA can use the router to obtain the stay time: in Vue, for example, you can use router.beforeEach and the destroyed hook to get the time the user stays on a page.
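One possible sketch for a Vue SPA, using the component's own lifecycle hooks; report() is a placeholder for whatever reporting function you use:

export default {
    created() {
        // Record the page and the entry time when the component is created
        this.pagePath = this.$route.path
        this.enterTime = Date.now()
    },
    destroyed() {
        // Report the stay time when the user leaves the page
        report({
            page: this.pagePath,
            stay: Date.now() - this.enterTime,
        })
    },
}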

Browse the depth

Using the document.documentElement.scrollTop property together with the screen height, you can determine whether the user has scrolled through the page content.
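A small sketch of that idea: track the deepest point the user scrolls to, as a percentage of the page height:

let maxDepth = 0

function getScrollDepth() {
    const scrollTop = document.documentElement.scrollTop || document.body.scrollTop
    const viewportHeight = document.documentElement.clientHeight
    const pageHeight = document.documentElement.scrollHeight
    return Math.round(((scrollTop + viewportHeight) / pageHeight) * 100)
}

window.addEventListener('scroll', () => {
    // Keep the maximum depth reached; report it when the user leaves the page
    maxDepth = Math.max(maxDepth, getScrollDepth())
})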

Page Jump Source

The document.referrer property lets you know which site the user is jumping from.

Summary

By analyzing user data, we can learn about users’ browsing habits, hobbies, and so on, which is really scary. There is no privacy at all.

Front-end monitoring deployment tutorial

Everything above covers the principles of monitoring, but you would still have to write the code yourself. To save the trouble, we can use an existing tool: Sentry.

Sentry is a performance and error monitoring tool written in Python. You can either use the service provided by Sentry (the free tier has limited features) or deploy the service yourself. Let's look at how to implement monitoring using the service provided by Sentry.

Register an account

Open https://sentry.io/signup/ and register.

Choose a project type; I chose Vue.

Install the Sentry dependency

After selecting the project, there is a detailed Sentry dependency installation guide below.

Follow the prompts and run npm install --save @sentry/browser @sentry/integrations @sentry/tracing in your Vue project to install the Sentry dependencies.

Copy the following code to your main.js file before new Vue().

import * as Sentry from "@sentry/browser";
import { Vue as VueIntegration } from "@sentry/integrations";
import { Integrations } from "@sentry/tracing";

Sentry.init({
  dsn: "xxxxx", // Here is your DSN address
  integrations: [
    new VueIntegration({
      Vue,
      tracing: true,
    }),
    new Integrations.BrowserTracing(),
  ],

  // We recommend adjusting this value in production, or using tracesSampler
  // for finer control
  tracesSampleRate: 1.0,
});

Then click Skip this Onboarding in step 1 to go to the console page.

If you forget your DSN, click on the left menu bar and select Settings -> Projects -> Your Project -> Client Keys(DSN).

Create the first error

Execute a print statement console.log(b) in your Vue project.

Since b is not defined, an error is thrown, and Sentry captures it: b is not defined.

This error message contains the specific information about the error, your IP address, browser information, and so on.

Oddly, our browser console does not output an error message.

This is because Sentry intercepts the error, so we need to add the option logErrors: true to the configuration.

Then check the page again, and find the console error message:

Upload sourcemap

Typically, packaged code is compressed, and without a sourcemap you won't be able to locate the error in the source code, even with an error message.

Let’s see how to upload Sourcemap.

First create the auth token.

This generated token will be used later.

Install sentry-cli and @sentry/webpack-plugin:

npm install sentry-cli-binary -g
npm install --save-dev @sentry/webpack-plugin

After installing the above two plug-ins, create a.sentryclirc file in the project root directory (don’t forget to add it to the.gitignore file to avoid exposing tokens).

[auth]
token=xxx

[defaults]
url=https://sentry.io/
org=woai3c
project=woai3c

Replace XXX with the token just generated.

Org is the name of your organization.

Project is the name of your project. Follow the tips below to find it.

Create a new vue.config.js file under the project and fill in the following:

const SentryWebpackPlugin = require('@sentry/webpack-plugin')

const config = {
    configureWebpack: {
        plugins: [
            new SentryWebpackPlugin({
                include: './dist', // The packaged output directory
                ignore: ['node_modules', 'vue.config.js', 'babel.config.js'],
            }),
        ],
    },
}

// Upload the sourcemap only in the production environment
module.exports = process.env.NODE_ENV == 'production' ? config : {}

After that, run npm run build to see the sourcemap upload result.

Let’s take a look at the error message from not uploading sourcemap versus the error message from uploading sourcemap.

Without sourcemap

With sourcemap

As you can see, the error message after uploading Sourcemap is more accurate.

Switching to the Chinese language and your time zone

After selecting them, refresh the page.

Performance monitoring

Turn on the Performance option to see how each of your projects is performing. For details about the parameters, see Performance Monitoring.

Performance optimization

There are two main types of performance optimization:

  1. Load time optimization
  2. Runtime optimization

For example, compressing files and using a CDN are load-time optimizations; reducing DOM manipulation and using event delegation are runtime optimizations.

Before you can solve the problem, you must first find out the problem, otherwise there is no way to start. So it’s a good idea to investigate the loading and running performance of your site before tuning.

Manual inspection

Checking loading Performance

How well a website loads is mainly measured by the white-screen time and the first-screen time.

  • White-screen time: the time from entering the URL to the page starting to display.
  • First-screen time: the time from entering the URL to the first screen of the page being fully rendered.

To get the white-screen time, place the following script before </head>.

<script>
	new Date() - performance.timing.navigationStart
</script>

Executing new Date() - performance.timing.navigationStart in the window.onload event gives you the first-screen time.
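As a small snippet, that sentence translates to:

window.onload = () => {
    // Time from entering the URL to the first screen being rendered
    const firstScreenTime = new Date() - performance.timing.navigationStart
    console.log(firstScreenTime)
}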

Checking performance

With Chrome’s developer tools, you can see how your site performs at runtime.

Open the website, press F12 and select Performance, then click the gray dot in the top-left corner; it turns red to indicate that recording has started. After using the site for a while, click Stop, and you'll see a performance report for that period. If there are red blocks, frames were dropped; if it's green, the FPS is good.

Also, under the Performance tab, pressing ESC brings up a small panel. Click the three dots on the left of the panel and select Rendering.

Of these two options, the first is to highlight the redraw area and the other is to display the frame rendering information. Check these two boxes and browse the web to see how your rendering changes in real time.

Check with tools

Monitoring tools

You can deploy a front-end monitoring system to monitor site performance, and Sentry, described in the previous section, is one of these.

Chrome's Lighthouse tool

If you have Chrome 52+ installed, press F12 to open the developer tools and find the Lighthouse panel (called Audits in older versions).

It will not only rate your site for performance, but also for SEO.

Use Lighthouse to review your web apps

How to optimize performance

There are numerous articles and books on performance optimization online, but many of the rules they give are outdated. So I wrote the article "24 Suggestions for Front-end Performance Optimization (2020)", which analyzes and summarizes 24 performance-optimization suggestions. Strongly recommended.

Refactoring

Refactoring is defined in Refactoring 2:

Refactoring is the process of making changes to code to improve the internal structure of a program without changing its external behavior. Refactoring is a well-honed, methodical way of organizing programs that minimizes the chance of introducing errors. Essentially, refactoring is about improving the design of code after it has been written.

There are similarities and differences between refactoring and performance optimization.

What they have in common is that both modify code without changing the program's functionality; the difference is that refactoring aims to make the code more readable and understandable, while performance optimization aims to make the program run faster.

Refactoring can be done as you write code, or you can set aside a dedicated period of time after the program is written. I don’t know which one is better, it depends on the individual.

If you set aside time for refactoring, it is recommended that you test a piece of code as soon as you refactor it. This prevents you from changing the code so much that you can’t find the error point when it goes wrong.

Principles of refactoring

  1. The rule of three: the third time you write the same code, refactor it. You should not write the same code over and over again.
  2. If a piece of code is hard to read, it's time to consider refactoring.
  3. If you already understand the code, but it's too tedious or not good enough, you can refactor it.
  4. A function that is too long needs to be refactored.
  5. A function should do one thing. If a function is crammed with multiple responsibilities, it needs to be refactored.

Refactoring techniques

In Refactoring 2, there are hundreds of refactoring techniques. But I think there are two common ones:

  1. Extract duplicate code and encapsulate it into functions
  2. Break down functions that are too long or have too many functions

Extract duplicate code and encapsulate it into functions

Suppose there is an interface for querying data: /getUserData?age=17&city=beijing. The user data { age: 17, city: 'beijing' } needs to be converted into the query-string format after the question mark:

let result = ''
const keys = Object.keys(data)  // data = { age: 17, city: 'beijing' }
keys.forEach(key => {
    result += '&' + key + '=' + data[key]
})

result.substr(1) // age=17&city=beijing

If this is the only interface that needs to be transformed, it is fine not to wrap it as a function. However, if there are multiple interfaces that require this, then you need to wrap it as a function:

function JSON2Params(data) {
    let result = ''
    const keys = Object.keys(data)
    keys.forEach(key => {
        result += '&' + key + '=' + data[key]
    })

    return result.substr(1)
}
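Used with the data from the previous example, it produces the same query string:

// Usage
JSON2Params({ age: 17, city: 'beijing' }) // 'age=17&city=beijing'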

Break down functions that are too long or have too many functions

Suppose we now have a registration function, denoted in pseudocode:

function register(data) {
    // 1. Verify that the user data is valid
    /**
     * Verify the account
     * Verify the password
     * Verify the SMS verification code
     * Verify the ID number
     * Verify the email
     */

    // 2. If the user uploaded an avatar, convert the avatar to base64 for saving
    /**
     * Create a new FileReader object
     * Convert the image to base64
     */

    // 3. Call the registration interface
    // ...
}

This function contains three functions: validation, transformation, and registration. The validation and transformation functions can be extracted and packaged as separate functions:

function register(data) {
    // 1. Verify that the user data is valid
    // verify()

    // 2. If the user uploads an avatar, convert the avatar into base64 code for saving
    // tobase64()

    // 3. Invoke the registration interface
    // ...
}

If you are interested in refactoring, refactoring 2 is highly recommended.

References:

  • Refactoring 2

Conclusion

This article mainly summarizes more than a year of my work experience, during which I focused on front-end engineering and on improving the team's development efficiency. I hope it can help newcomers gain some front-end engineering experience and get started with front-end engineering.

For a more complete tutorial, see Getting You started with front-end Engineering