Preface

This article focuses on configuring a complete TypeScript SDK project environment from scratch, covering the following:

  • Use Webpack for the project build setup
  • Use EditorConfig, Prettier, ESLint, and the Airbnb JavaScript Style Guide to enforce a consistent team code style
  • Use Husky, Commitlint, and lint-staged to build a front-end workflow
  • Use standard-version to generate the changelog
  • Use TypeDoc or VuePress to quickly generate documentation
  • Use html-webpack-plugin and webpack-dev-server for local hot-update code debugging
  • Use Jest for unit testing and to ensure only correct code is committed
  • Use GitHub Actions for automated deployment and publishing

A cautionary note: if you just want to create a simple TypeScript-based tool library, you can also use a mature “zero configuration” scaffold. If you need to write a more complex piece of middleware (a monitoring SDK, a mid-tier SDK, etc.), then this article will help you get off to a good start.

If this article is helpful to you, please give it a “like” to encourage me. If you have any questions about the configuration in this article, please leave a comment in the comment section.

GitHub

The relevant code has been uploaded to a GitHub repository; if you're interested, feel free to give it a star.

Project initialization

Initialize package.json

We first clone a repository of our own from GitHub to a local directory, then run the initialization command in the project root. After initialization, we can fill in the fields we need in package.json ourselves.

npm init -y

Configure .gitignore

.gitignore lists the files and folders that should be ignored when committing to Git, such as node_modules, dist, etc.

We create and add the following configuration in the root directory:

dist
/types

# eslint
.eslintCache

# jest
/coverage

Install TypeScript

npm install typescript -D

Next, create a tsconfig.json in the project root directory; my configuration is posted here. If you need to customize it, you can see the full set of options on the TypeScript website.

{"compilerOptions": {// Specify the ECMAScript target version "ES3" (default), "ES5", "ES6"/"ES2015", "ES2016", "ES2017" or "ESNext". "Target ": "ES5", // Built object code removes all comments, but does not remove comments with /! * Start copyright message "removeComments": true, // enable all strict type checking options. "Strict" : true, / / the ban on the same file is not consistent with the reference "forceConsistentCasingInFileNames" : true, / / generate the corresponding "declaration" which s file: DeclarationDir: "types" // noEmitOnError is not generated when noEmitOnError occurs: True, // baseUrl tells the compiler where to look for modules. All non-relative module imports are treated as relative to baseUrl. "BaseUrl" : ". ", / / the relative module into the path of the map configuration "paths" : {" @ / * ": "SRC / *", "@ docs / *" : [" docs / * "], "@ public / *" : [] "public / *", "@ test / *" : [" test / * "],}}, / / the default compiler contains compiled file, the SRC is the source folder, Test is jest test code folder "include:" ["/SRC / * * * ", "test / * * / *"], / / compilers excluded by default file "exclude" : [" node_modules "]}Copy the code

The paths option here configures path mapping for non-relative module imports; it must be used together with the Webpack alias configuration described below.

Webpack

To cut to the chase, the reason for choosing Webpack as the bundler is simple: Webpack is arguably the most powerful and complete option. That said, other bundlers such as Rollup or Gulp can also be used if you care more about simple configuration.

In this section, we introduce the following three configuration processes:

  • How to use cross-env + webpack-merge together to implement separate configurations for development and production.
  • How to use Babel for backward compatibility of the code.
  • How to use a few plugins to add some handy extras.

Webpack installation

To use Webpack, you need to install two packages:

npm install webpack webpack-cli -D

After the installation, we create a new scripts folder in the root directory and add constants.js and webpack.common.js to it.

Create a new src folder under the root directory to store our source code, and create the index.ts entry file under src.

Let’s create a new public folder under the root directory to hold our static resource files

The new folder and file structure in the root directory are as follows:

├── scripts
│   ├── webpack.common.js
│   └── constants.js
├── src
│   └── index.ts
├── public

The constants.js file is as follows:

The PROJECT_PATH constant here saves us from repeatedly writing ../../ and lets us locate files starting from the project root.

const path = require('path')
const resolve = path.resolve
const isDev = process.env.NODE_ENV !== 'production'
const PROJECT_PATH = resolve(__dirname, '../')

module.exports = {
  PROJECT_PATH,
  resolve,
  isDev,
}

webpack.common.js is as follows:

Here we set libraryTarget to umd because, as an SDK, we want to support multiple kinds of consumption: import, require, and script tags.

const { resolve, PROJECT_PATH } = require('./constants')

module.exports = {
  // Entry point
  entry: {
    index: resolve(PROJECT_PATH, './src/index.ts'),
  },
  // File name and path of the built bundle
  output: {
    filename: 'library-starter.js',
    // Expose libraryStarter as a global variable
    library: 'libraryStarter',
    // Build a UMD bundle so it can be consumed via import, require, or a script tag
    libraryTarget: 'umd',
    // Without this, consumers would get an extra "default" wrapper
    libraryExport: 'default',
    // Output path
    path: resolve(PROJECT_PATH, './dist'),
  },
  resolve: {
    alias: {
      '@': resolve(__dirname, '../src'),
      '@docs': resolve(__dirname, '../docs'),
      '@public': resolve(__dirname, '../public'),
      '@test': resolve(__dirname, '../test'),
    },
    extensions: ['.ts', '.tsx', '.js'],
  },
}
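As a quick sketch of what the UMD output enables (the package name library-starter and the global libraryStarter follow the config above; everything else is an assumption for illustration), consumers could load the SDK in any of these ways:

// ESM (bundlers / modern tooling)
import LibraryStarter from 'library-starter';
new LibraryStarter({ id: 'GIQE-QWQE-VFFF', url: 'localhost' });

// CommonJS (Node):
//   const LibraryStarter = require('library-starter');

// A <script src="dist/library-starter.js"></script> tag attaches window.libraryStarter,
// which is exactly what the output.library setting above provides.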

Note the alias configuration. When Webpack builds a TypeScript project, the aliases must be configured in both tsconfig.json and webpack.common.js: webpack.common.js lets the bundler resolve the paths when building the package, while tsconfig.json keeps ESLint from reporting errors during local development and debugging.

After configuring the alias, we can use it for file imports:

// use the src alias @
import '@/index'

cross-env + webpack-merge

In Webpack, we need separate configurations for the development and production environments because their requirements differ: in development we want faster build speed and source-map error information, while in production we want a smaller bundle size.

We need to configure our scripts folder to write common Webpack configurations as well as configurations for different environments.

cross-env lets us set environment variables in a cross-platform way. For example, on a Mac you would use export NODE_ENV=development, while on Windows you would use set NODE_ENV=development; with cross-env we don't have to worry about these operating-system differences.

Let’s start by installing cross-env:

npm install cross-env -D

Create webpack.dev.js and webpack.prod.js in the scripts folder:

The webpack.dev configuration is as follows:

const { merge } = require('webpack-merge')
const common = require('./webpack.common.js')

module.exports = merge(common, {
  mode: 'development',
})

The webpack.prod configuration is as follows:

const { merge } = require('webpack-merge')
const common = require('./webpack.common.js')

module.exports = merge(common, {
  mode: 'production',
})

The configuration in package.json is as follows; with local debugging in mind, it is split into dev for the development environment and build for the production environment:

  "scripts": {
    "dev": "cross-env NODE_ENV=development webpack --config ./scripts/webpack.dev.js",
    "build": "cross-env NODE_ENV=production webpack --config ./scripts/webpack.prod.js"
  },

webpackbar

Have you often seen a progress bar showing packaging progress while a project builds?

We can have the same thing with webpackbar. Install it:

npm install webpackbar -D

Add the following plugins to webpack.common.js:

const WebpackBar = require('webpackbar');

module.exports = {
  // ...
  plugins: [
    // ...
    new WebpackBar({
      color: '#fa8c16',
    }),
  ],
};

Let's take a look at an actual build; doesn't that feel much nicer? When the project gets big, it's good to have some feedback on progress.

rimraf

To clear old build artifacts, rimraf is recommended. Let's install it first:

npm install rimraf -D

Then configure the scripts in package.json, adding rimraf dist types in front of the original commands:

{

  "scripts": {
     "dev": "rimraf dist types && cross-env NODE_ENV=development webpack --config ./scripts/webpack.dev.js",
     "build": "rimraf dist types && cross-env NODE_ENV=production webpack --config ./scripts/webpack.prod.js",
  },
}

This automatically deletes the dist and types folders before each build, so we always start from a clean output.

devtool

devtool has settings that help map compiled code back to the original source, commonly referred to as source maps. This is especially important when debugging errors locally, and different settings can significantly affect build and rebuild speed.

In the development environment, I chose eval-source-map.

We don't set it in production: Webpack does not generate a source map when mode is production, and we shouldn't ship source maps to production anyway.

So we add the devtool configuration to the webpack.dev.js file:

module.exports = merge(common, {
    devtool: 'eval-source-map',
})

babel

Babel is a toolchain for converting ECMAScript 2015+ version code into backwardly compatible JavaScript syntax so it can run in current and older versions of browsers or other environments.

Let’s start by installing some plug-ins for Babel

npm install @babel/core @babel/preset-env @babel/plugin-transform-runtime babel-loader -D
  • @babel/core: the core library of Babel; all the core APIs live here, and babel-loader calls them.
  • @babel/preset-env: a preset, i.e. a collection of related plugins that guides Babel's code transformation; it contains the rules for transpiling ES6+ syntax down to ES5.
  • @babel/plugin-transform-runtime: its transformation is non-invasive, meaning it does not pollute existing methods; when it meets a method that needs a helper, it aliases it instead, so it won't directly affect the business code that uses our library.
  • babel-loader: acts as a bridge, telling Webpack how to handle JS by calling the APIs in @babel/core.

After the installation, we create a new .babelrc file in the root directory and configure it as follows:

{"presets": [["@babel/preset-env", {// Prevent Babel from projecting any module type to CommonJS, causing tree-shaking invalid problem "modules": false } ] ], "plugins": [ [ "@babel/plugin-transform-runtime", { "corejs": { "version": 3, "proposals": true }, "useESModules": true } ] ] }Copy the code

Then we configure the module in webpack.common.js:

  module: {
    rules: [
      {
        test: /\.(js)$/,
        loader: 'babel-loader',
        exclude: /node_modules/,
      },
    ],
  },

Choosing between @babel/polyfill and @babel/plugin-transform-runtime

A polyfill patches features missing from the target browser by attaching them to global objects. For that reason, @babel/polyfill should not be used when developing class libraries, third-party modules, or component libraries, since it could pollute the global scope; transform-runtime should be used instead. The transform-runtime conversion is non-invasive, meaning it doesn't contaminate existing methods: when it encounters a method that needs a helper, it aliases it rather than patching globals, so it won't directly affect the business code that uses the library.

Therefore, transform-runtime is used when developing class libraries, third-party modules, or component libraries, while babel-polyfill is fine for ordinary applications.
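A rough sketch of the difference; the compiled output shown in the comments below is only illustrative, and the exact result depends on your Babel and core-js versions:

// Source code in our SDK
export const wait = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// With @babel/polyfill, Promise is patched onto the global scope, which every consumer inherits.
// With @babel/plugin-transform-runtime ("corejs": 3), the output imports a scoped copy instead, roughly:
//   import _Promise from '@babel/runtime-corejs3/core-js-stable/promise';
//   export var wait = function (ms) { return new _Promise(function (resolve) { return setTimeout(resolve, ms); }); };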

browserslistrc

Here’s a very interesting configuration, browserslistrc

Browser compatibility here doesn't mean screen sizes; it means the features supported by different browsers, such as CSS features and JS syntax.

Create a new.browserslistrc file in the project root directory and configure it as follows:

> 5%
last 2 versions
not ie < 11

This means we need to support browsers that have more than 5% global usage share and the last two versions of each browser, excluding IE versions older than 11. With this in place, we can tailor browser compatibility to our actual needs instead of supporting everything.
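As a side note, if you prefer to keep everything in one place, the same queries can also live under a browserslist field in package.json instead of a separate .browserslistrc file (use one or the other, not both):

"browserslist": [
  "> 5%",
  "last 2 versions",
  "not ie < 11"
]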

ts-loader

The installation is as follows:

npm install ts-loader typescript -D

Then we configure the rules in webpack.common.js:

module: {
  rules: [
    { 
      test: /\.(ts)$/,
      loader: 'ts-loader',
      exclude: /node_modules/,
    }
  ]
}

Tree-shaking

Webpack supports tree-shaking out of the box in production; we just need to set "modules": false in .babelrc so Babel doesn't break it.

Tree-shaking means removing unused code to reduce the bundle size. When Webpack's mode is set to production, code that is imported via the ES6 import syntax but never used is removed from the final bundle.
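A minimal illustration of what gets removed (the file names here are hypothetical):

// math.ts
export const add = (a: number, b: number) => a + b;
export const subtract = (a: number, b: number) => a - b; // never imported anywhere

// index.ts
import { add } from './math';
console.log(add(1, 2));
// With mode: 'production' and "modules": false in .babelrc, subtract is dropped from the bundle.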

As a reminder, we introduced Babel, which converts ECMAScript 2015+ code into backward-compatible JavaScript so it can run in current and older browsers and other environments. By default it converts the ES6 import syntax into require, which causes tree-shaking to fail.

So we configure "modules": false in our .babelrc file to prevent Babel from converting module syntax into CommonJS and breaking tree-shaking:

{"presets": [["@babel/preset-env", {// Prevent Babel from turning any module type into CommonJS, tree-shaking invalid problem "modules": false}],],}Copy the code

webpack-bundle-analyzer

webpack-bundle-analyzer is a bundle analysis tool that helps us see how the packed size is distributed across our files.

Let’s install it first:

npm install webpack-bundle-analyzer -D

We then add the plugins configuration to webpack.dev.js as follows:

const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = merge(common, {
  // ...
  plugins: [
    // ...
    new BundleAnalyzerPlugin({
      analyzerMode: 'server',
      analyzerHost: '127.0.0.1', // host
      analyzerPort: 8888, // port
      openAnalyzer: false, // do not automatically open the report in the default browser
    }),
  ],
});

openAnalyzer is set to false so that the analysis report doesn't open in the browser on every local build; we can open it ourselves when we need it.

Other configurations that may be required

This article configures a TypeScript SDK project. If you are not writing an SDK, you may need other loaders such as html-loader, css-loader, etc.

terser-webpack-plugin

terser-webpack-plugin is a Webpack plugin that minifies JS.

If you are using Webpack v5 or above, you do not need to install this plugin separately: Webpack v5 automatically minifies the code when mode is production.

If we want to customize it, we add the following configuration to webpack.common.js:

const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  optimization: {
    minimizer: [
      new TerserPlugin({
        parallel: true,
      }),
    ],
  },
};

API documentation and comments

An SDK project exposes many classes and functions to other developers, so as a TypeScript project we need a documentation generation tool. We can use TypeDoc to generate API documentation automatically, or use VuePress to write fully customized documentation. In this section, we'll start with TypeDoc.

In this section, we introduce the following two configuration processes:

  • How to use typedoc to quickly generate API documentation.
  • How to use koroFileHeader to quickly generate well-formed comments.

PS: If you are determined to do your own document building with VuePress, you can skip this section and move on to the next

typedoc

TypeDoc is a great API documentation generator for TypeScript projects; it automatically generates API docs from your source code and the comments written in it.

First, let’s install it

npm install typedoc -D

We then create a new typedoc.json file in the root directory to hold the configuration, as follows:

{
  "entryPoints" : ["src/index.ts"],
  "out": "docs"
}

Complete configuration information can be viewed on the official website: typedoc.org/guides/opti…

The entryPoints configuration item here is the entry file for your project.

Then we add scripts to package.json as follows:

  "scripts": {
    "typedoc": "rimraf docs && typedoc",
  },

Now that we’re ready to generate our API documentation directly, let’s do a simple test to see how it looks in practice:

We modify src/index.ts as follows:

/**
 * ```typescript
 * // We can initialize like this
 * const SDK = new frontendSdk();
 * ```
 */
export class frontendSdk {
  /**
   * @description:
   * @return {*}
   */
  initConfig(id: string, url: string) {}
}

After saving, execute the typedoc script defined above:

npm run typedoc

The result looks like this: it automatically generates an API document from the comments we wrote, which is nice:

koroFileHeader

As mentioned above, TypeDoc generates documentation from comments, but writing such well-formed comments by hand is cumbersome, so we use the VS Code plugin koroFileHeader to help us quickly generate the comment skeleton.

Search for koroFileHeader in the VS Code extension marketplace and install it. Once installed, press Ctrl+Shift+P in VS Code to open the settings file settings.json shown below.

Once settings.json is open, we add the following configuration, which turns off file-header comments and sets up function comments:

"Fileheader. customMade": {// Header comment "autoAdd":false}, "fileheader.cursorMode": {// function comment "description": "", "param": "", "return": "", }Copy the code

After configuring this, we open the keyboard shortcuts settings:

Search for cursorTip and change its bound shortcut to whatever you like, Ctrl+Alt+T in this case.

Now that the configuration is complete, we can quickly generate function comments, and TypeDoc can recognize them when building the API documentation.
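Since the original screenshot isn't reproduced here, here is a rough sketch of what a comment generated with the cursorMode settings above looks like; the exact fields depend on the plugin version:

/**
 * @description: initialize the SDK config
 * @param {string} id
 * @param {string} url
 * @return {*}
 */
initConfig(id: string, url: string) {}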

VuePress for more freely customized documentation

Sometimes the documents generated automatically by TypeDoc are not quite what we need, in either layout or content. When we want more freedom and customization, VuePress is a good choice.

In this section, we will introduce the following:

  • How to use VuePress to generate custom documentation content

PS: You can skip typeDoc configuration if you choose to configure Vuepress

Remove the TypeDoc-related configuration

If you configured TypeDoc following the section above, you need to remove that configuration first.

Installation

Here we install VuePress v2.0.0-beta. Why not version 1.x? VuePress 1.x is also Webpack-based, but it uses Webpack 4, and many of its plugins conflict with the Webpack 5 used here.

npm install -D vuepress@next

After installing the dependencies, we add scripts to package.json

{
  "scripts": {
    "docs:dev": "vuepress dev docs",
    "docs:build": "vuepress build docs"
  }
}

Let's add two more entries to .gitignore:

.cache
.temp

Directory structure

We create a new docs folder under the root directory, with the following structure:

docs
├── .vuepress
│   └── config.ts
├── api
├── guide
└── README.md

docs is our documentation root directory, .vuepress is the configuration folder, the guide and api folders are created according to our own needs, and README.md under docs is the home page.

Configuration

Let's start by configuring the config.ts file. Here is my configuration:

import { defineUserConfig } from 'vuepress'
import type { DefaultThemeOptions } from 'vuepress'

const packageJson = require('../../package.json')

export default defineUserConfig<DefaultThemeOptions>({
  title: packageJson.name,
  description: packageJson.description,
  base: '/typescript-sdk-starter/',
  // theme and its configuration
  theme: '@vuepress/theme-default',
  themeConfig: {
    navbar: [
      { text: 'Home', link: '/' },
      { text: 'Guide', link: '/guide/' },
      { text: 'API', link: '/api/' },
      { text: 'GitHub', link: 'https://github.com/nekobc1998923/typescript-sdk-starter' },
    ],
  },
})

Then we configure the home page, the README.md file in the docs folder:

---
home: true
heroText: typescript-sdk-starter
tagline: Build a standard TypeScript SDK project from scratch
actions:
  - text: Get Started
    link: /guide/
    type: primary
  - text: GitHub
    link: https://github.com/nekobc1998923/typescript-sdk-starter
    type: secondary
# feature list shown on the page
features:
  - title: Code style conventions
    details: ESLint, Prettier and EditorConfig code style constraints
  - title: Commit conventions
    details: Husky, Commitlint and lint-staged commit constraints
  - title: Automated deployment
    details: GitHub Actions for automated deployment and publishing
# page footer
footer: MIT Licensed | Copyright © 2021 neko
---

Then we run npm run docs:dev to preview the result:

You can see that the effect is still very good, and then we can further improve the content of the document according to their own needs!

Code style and commit conventions

In modern front-end development, a project usually has many contributors, and everyone's code style differs. Over time this makes the project hard to maintain, so it's necessary to put constraints in place. Enforcing them through code review or verbal agreement has a high communication cost, is inflexible, and, crucially, cannot be guaranteed. We need tooling to unify these code standards.

For the same reason, we also need tools to standardize how code is committed.

Finally, once the code style and commit conventions are in place, we can generate the changelog automatically.

In this section, we introduce the following three configuration processes:

  • How to use EditorConfig + Prettier + ESLint together to standardize the code style.
  • How to use husky + lint-staged + commitlint to standardize code commits.
  • How to use standard-version to generate the changelog.

Prettier

Prettier is a powerful code formatting tool that supports JavaScript, TypeScript, CSS, SCSS, Less, JSX, Angular, Vue, JSON, Markdown, and more. It can handle almost any file format we use on the front end, so we use it here to enforce our code style.

npm install prettier -D

After the installation, create prettier.config.js in the root directory and configure it. Here is my simple configuration:

// https://prettier.io/docs/en/configuration.html
module.exports = {
  // Maximum characters per line
  printWidth: 100,
  // Indent size
  tabWidth: 2,
  // Print spaces inside object braces, e.g. { a: 5 }
  bracketSpacing: true,
  // Always wrap arrow function parameters in parentheses
  arrowParens: 'always',
  // Line endings
  endOfLine: 'lf',
  // Use single quotes everywhere
  singleQuote: true,
  // Trailing commas wherever possible
  trailingComma: 'all',
  // Use semicolons
  semi: true,
  // Do not indent with tabs
  useTabs: false,
};

A .prettierignore file tells Prettier which files to ignore:

package.json

# build
/dist
/types

# eslint
.eslintCache

# jest
/coverage

# docs
/docs

ESLint

Let’s start by installing the following plug-ins

  • eslint
  • @typescript-eslint/parser
  • @typescript-eslint/eslint-plugin
  • eslint-plugin-import
npm install eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin eslint-plugin-import -D

Run this command after the installation is successful

npx eslint --init

Running npx eslint --init asks a series of questions; here is how to answer them:

  • How would you like to use ESLint?

Choose the third option: To check syntax, find problems, and enforce code style.

  • What type of modules does your project use?

The project's non-config code uses the ES6 module system for imports and exports, so choose JavaScript modules (import/export).

  • Which framework does your project use?

Choose according to your actual needs; this article chooses none.

  • Does your project use TypeScript?

Since this is a TypeScript project, yes is chosen

  • Where does your code run?

Both Browser and Node environments are selected

  • How would you like to define a style for your project?

Choose Use a popular style guide, i.e. adopt a code style the community has already established and simply follow it.

  • Which style guide do you want to follow?

Choose the Airbnb style, which reflects best practices summarized by the community.

  • What format do you want your config file to be in?

Choose JavaScript, so the generated configuration file is a JS file, which allows more flexible configuration.

  • Would you like to install them now with npm?

Select YES

We then add a lint script to package.json and run npm run lint before the build command:

{
    "lint": "eslint src",
    "build": "npm run lint && rimraf dist types && cross-env NODE_ENV=production webpack --config ./scripts/webpack.prod.js",
}

Since we chose the Airbnb style, we don't need to customize many rules; we can simply reference it in extends. The Airbnb GitHub repo is github.com/airbnb/java… ; if you're interested, you can look at the specific rules there.

After the installation, there is a new.eslintrc.js file in the root directory of the project, modified as follows:

module.exports = {
  env: {
    browser: true,
    es2021: true,
    node: true,
  },
  extends: ['airbnb-base', 'plugin:@typescript-eslint/recommended'],
  parser: '@typescript-eslint/parser',
  parserOptions: {
    ecmaVersion: 12,
    sourceType: 'module',
  },
  plugins: ['@typescript-eslint', 'import'],
  rules: {
    'import/no-unresolved': 'off',
    'import/extensions': 'off',
  },
};

Let's create a .eslintignore file that tells ESLint which files to ignore:

package.json

# build
/dist
/types

# eslint
.eslintCache

# jest
/coverage

# docs
/docs

# webpack configuration
/scripts

Now that Prettier and ESLint are both installed, there is some overlap between their rules, so we need plugins to make them work together.

The root of the conflict is that ESLint handles both code quality checking and some formatting, and some of its formatting rules are incompatible with Prettier. Can ESLint check only code quality and leave formatting to Prettier? The community has a mature solution: eslint-config-prettier plus eslint-plugin-prettier.

Let’s install these two:

npm i eslint-plugin-prettier eslint-config-prettier -D
  • eslint-config-prettier turns off the ESLint rules that conflict with Prettier.
  • eslint-plugin-prettier makes ESLint call Prettier to check code style when it runs.

Add the prettier plugin to .eslintrc.js:

module.exports = {
  // ...
  extends: [
    // ...
    'plugin:prettier/recommended', // add the prettier plugin
  ],
  // ...
};

When this is done, ESLint and Prettier can be used together! And let ESLint only check code quality while Prettier does beautification.

VS Code plugin EditorConfig

Install the EditorConfig for VS Code extension, then right-click in the project root directory and choose Generate .editorconfig to create the .editorconfig file.

We modify the .editorconfig file it generates:

# EditorConfig is awesome: https://EditorConfig.org

# top-most EditorConfig file
root = true

[*]
indent_size = 2                   # indent size
end_of_line = lf                  # line endings: lf / cr / crlf
charset = utf-8                   # character set
trim_trailing_whitespace = true   # remove trailing whitespace
insert_final_newline = true       # insert a newline at the end of the file

[*.md]
insert_final_newline = false
trim_trailing_whitespace = false

VS Code plugin Prettier - Code formatter

Install the extension first

The purpose of this configuration is to keep every developer's VS Code setup consistent. The project-level file takes precedence over VS Code's global settings.json, so when someone else downloads your project and starts developing, Prettier or ESLint won't behave differently (or be effectively disabled) just because their global settings.json differs.

Create a settings.json file under the .vscode folder in the project root directory:

.vscode/
    settings.json

settings.json is configured as follows:

Exclude: {"**/node_modules": true, "dist": true}, "editor.formatonSave ": true, "[javascript]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "[typescript]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "[json]": { "editor.defaultFormatter": "esbenp.prettier-vscode" } }Copy the code

Now Prettier formats the code automatically when we save.

VS Code plugin ESLint

ESLint has auto-fix functionality supported by the editor. First, we need to install the ESLint extension:

Add the following code to the.vscode/settings.json file you created earlier:

{
  "eslint.validate": [
    "javascript",
    "typescript"
  ],
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": true
  },
}

When we save, ESLint automatically fixes some syntax problems.

husky + lint-staged to commit standardized code

First, before using husky and lint-staged, check that your Node version is >= 12.20.0. If it's lower, reinstall Node or use nvm to switch versions (see github.com/okonet/lint… ). The Node version I'm using is v14.18.2.

During project development, code formatting and rule checks for ESLint and stylelint are done before each commit to enforce our code style and prevent hidden bugs.

So is there a way to run the formatting and lint checks above only on the files that have changed in the Git staging area?

The answer is husky, which provides Git hooks such as pre-commit and lets lint-staged run formatting and lint validation on the staged files.

We directly follow the official recommended installation instructions:

npx mrm@2 lint-staged

Note: before installing husky and lint-staged this way, make sure ESLint + Prettier are already installed; otherwise the tool will detect that they are missing and report an error:

E:\Code\hrfsh> npx mrm@2 lint-staged
npx: installed 237 packages in 17.33s
Running lint-staged...
Cannot add lint-staged: only eslint, stylelint, prettier or custom rules are supported.

The officially recommended installation command does the following for us:

  • Adds the husky and lint-staged dependencies to devDependencies in package.json
  • Adds a "prepare": "husky install" script to the scripts in package.json
  • Adds a lint-staged configuration section to package.json; the generated default looks like this:
  "lint-staged": {
    "*.js": "eslint --cache --fix",
    "*.{js,css,md}": "prettier --write"
  }
  • Creates a .husky folder in the root directory, whose pre-commit file already runs the npx lint-staged command for us

So at this point the default setup runs eslint --cache --fix on .js files and prettier --write on .js/.css/.md files when committing. But it does nothing for .ts files, and we don't want the automatic --fix either.

So we adjust the lint-staged configuration that the command generated in package.json to what we actually need:

{
  "lint-staged": {
    "*.{ts,js}": [
      "eslint"
    ]
  }
}

Now ESLint validation runs on files in the staging area with a .js or .ts suffix.

After this configuration, every commit triggers ESLint to check whether the staged files comply with the ESLint rules; if they don't, an error is thrown and the commit is aborted.

commitlint to standardize commit messages

In multi-person projects, accurate Git commit messages make later collaboration and bug tracing much easier.

We have one goal: only commit messages that conform to the Angular convention pass the check. To do this, we use commitlint to verify that git commit messages follow the convention.

Let’s first install the dependencies

npm install @commitlint/cli @commitlint/config-conventional -D

Here, we use the most popular, well-known, and recognized Angular team submission specification in the community.

Take a look at the Angular project submission record:

As you can see from the figure above, each commit has a clear and complete format: a commit message consists of a Header, Body, and Footer. There are plenty of articles about commit messages out there; a quick example is shown below, and then we'll get right into the configuration.
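For reference, a commit message following this convention looks roughly like the following (the contents are made up for illustration):

feat(config): support a custom report url

Allow the reporting endpoint to be overridden when the SDK is initialized.

BREAKING CHANGE: the default endpoint is no longer hard-coded.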

We create a .commitlintrc.js file in the root directory as our commitlint configuration file, as follows:

module.exports = {
  extends: ['@commitlint/config-conventional'],
  rules: {
    'type-enum': [
      2,
      'always',
      ['build', 'ci', 'chore', 'docs', 'feat', 'fix', 'perf', 'refactor', 'revert', 'style', 'test'],
    ],
  },
};

These are the officially recommended Angular-style commit types:

/**
 * build: changes to the build tooling, e.g. Webpack
 * ci: changes to continuous integration
 * chore: changes to the build process or auxiliary tools
 * feat: a new feature
 * docs: documentation changes
 * fix: a bug fix
 * perf: a performance improvement
 * refactor: a refactor of existing functionality
 * revert: reverts a previous commit
 * style: code formatting changes
 * test: adding tests
 */

We then add another husky hook by executing the following command:

npx husky add .husky/commit-msg 'npx --no-install commitlint --edit $1'

Once this is done, at commit time we will trigger the Husky hook to check that our commit is compliant

Note: if you are using Windows, the npx husky add command above may fail to add the script correctly.

The question has already been raised on Github: github.com/typicode/hu…

Following the workaround provided in that issue, we change the add command as follows:

npx husky add .husky/commit-msg "npx"

We can then fill in the full command ourselves in the generated commit-msg file.

The complete commit-msg file then looks like this:

#!/bin/sh
. "$(dirname "$0")/_/husky.sh"

npx --no-install commitlint --edit $1

standard-version to generate the changelog

Standard-version is selected here, and the installation is as follows:

npm i standard-version -D

Let’s configure package.json as follows:

  "scripts": {
    "release": "standard-version",
    "release-major": "standard-version --release-as major",
    "release-minor": "standard-version --release-as minor",
    "release-patch": "standard-version --release-as patch",
  },

Version numbers follow the major.minor.patch format.

standard-version's default version bump rules are as follows (see the example after this list):

  • Feature updates minor,
  • Bug fix updates patch
  • BREAKING CHANGES updates major
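For example, starting from a hypothetical v1.2.3:

fix: ...                                  -> v1.2.4 (patch)
feat: ...                                 -> v1.3.0 (minor)
feat: ... with a BREAKING CHANGE footer   -> v2.0.0 (major)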

We also add the release-major/minor/patch scripts manually to make it easier to bump a specific level when needed.

Then we create a.versionrc.js file in the root directory with the following configuration:

module.exports = {
  header: '# Changelog',
  commitUrlFormat: '{{host}}/{{owner}}/{{repository}}/commit/{{hash}}',
  types: [
    { type: 'feat', section: '✨ Features' },
    { type: 'fix', section: '🐛 Bug Fixes' },
    { type: 'init', section: '🎉 Init' },
    { type: 'docs', section: '✏️ Documentation' },
    { type: 'style', section: '💄 Styles' },
    { type: 'refactor', section: '♻️ Code Refactoring' },
    { type: 'perf', section: '⚡ Performance Improvements' },
    { type: 'test', section: '✅ Tests' },
    { type: 'revert', section: '⏪ Revert' },
    { type: 'build', section: '📦 Build System' },
    { type: 'chore', section: '🚀 Chore | dependencies/build/tooling' },
    { type: 'ci', section: '👷 Continuous Integration' },
  ],
};

If you want to customize it, see the official documentation: conventional-changelog-config-spec/README.md

If you don't want it to commit automatically, you can configure that in package.json; besides commit, you can also skip the bump, changelog, and tag steps:

{
  "standard-version": {
    "skip": {
      "commit": true
    }
  }
}

A few words beyond the configuration

As mentioned, EditorConfig, Prettier, and ESLint are all installed to check code and style, and some might wonder: don't they conflict with each other?

The answer is yes. First, let’s take a look at the positioning of the three:

  • EditorConfig: keeps a consistent, basic coding style across editors and IDEs;
  • Prettier: a tool that formats and beautifies code;
  • ESLint: code quality checks, code style constraints, and so on.

The conflict between ESLint and Prettier was already handled in the ESLint configuration above.

EditorConfig's configuration items are syntax-neutral, such as indent size, trimming extra whitespace, and so on.

Prettier is a formatting tool that works at the syntax level: single vs. double quotes, semicolons, where to wrap lines, and of course indentation.

What happens if EditorConfig's indent_size is 4 while Prettier's tabWidth is 2? Let's try it:

As you can see, after we declare an object and press Enter, the cursor indents by 4 because indent_size is 4 in .editorconfig; but when we save, since the default formatter in .vscode/settings.json is already Prettier, the tabWidth of 2 is applied and the code is reformatted correctly.
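A minimal sketch of that behavior (the object contents are illustrative):

// Right after pressing Enter, the editor indents 4 spaces (EditorConfig's indent_size):
const config = {
    id: 'GIQE-QWQE-VFFF',
};

// After saving, Prettier (tabWidth: 2) reformats it to:
const formatted = {
  id: 'GIQE-QWQE-VFFF',
};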

Of course, the whole point of introducing these tools is to keep the code style consistent, so deliberately configuring them with conflicting options defeats the purpose.

Local hot-update code debugging

To put it simply, an SDK project exposes methods for external use, so our local debugging flow usually looks like this: change the SDK code -> manually build the SDK -> copy the built artifact into another project -> import the artifact in that project -> debug there.

This kind of debugging method is very tedious and time-consuming, so it is necessary for us to set up a local hot update debugging environment for the project, which can directly view the modification effect as our code is updated.

In this section, we introduce the following two configuration processes:

  • How to use html-webpack-plugin + webpack-dev-server together for local code debugging.
  • How to use npm link to debug inside a target project.

html-webpack-plugin

html-webpack-plugin automatically injects the bundled JS file into an HTML file, saving us the two tedious steps of copying the built artifact into another project and importing it there.

Let’s start by installing this plug-in

npm install html-webpack-plugin -D

Create a new index.html file in the public folder with the following content:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>libraryStarter</title>
</head>
<body>
</body>
<script>
  new libraryStarter({ id: 'GIQE-QWQE-VFFF', url: 'localhost' })
</script>
</html>

Notice the new libraryStarter() in this code. libraryStarter corresponds to the output.library field configured in webpack.common.js; it can be called here because output.library exposes the bundle under that global variable.

We modify src/index.ts to do a simple test:

interface ConfigOptions {
  id: string;
  url: string;
}
class libraryStarter {
  constructor(options: ConfigOptions) {
    console.log('constructor-id-url', options.id, options.url);
  }
}

export default libraryStarter;

Then we configure the webpack.dev.js configuration file for the development environment. We add the following configuration:

const HtmlWebpackPlugin = require('html-webpack-plugin');
const { resolve, PROJECT_PATH } = require('./constants');

module.exports = merge(common, {
  plugins: [
    // ...
    new HtmlWebpackPlugin({
      template: resolve(PROJECT_PATH, './public/index.html'),
      scriptLoading: 'blocking',
    })
  ],
});


At this point, we run npm run dev, and once it finishes we can see that the configured HTML file has been emitted into the dist folder.

Let’s open the HTML file in the browser and verify that it automatically introduces us to the packaged JS file and executes correctly.

As you can see in the figure above, html-webpack-plugin has automatically injected the bundled JS file into the HTML file, and the new call executes correctly.

Although this setup lets us build the HTML locally for debugging, we still have to refresh the page manually after each change. Laziness is the first productivity of programmers! How can we avoid such tedious debugging? The solution, of course, is hot updating via webpack-dev-server.

webpack-dev-server

webpack-dev-server is a local HTTP service that supports hot updates, port configuration, and so on. With it, local SDK debugging no longer requires manual packaging and building: when it detects a code change, it rebuilds and refreshes the HTML page by itself, which is very pleasant.

Official website: github.com/webpack/web…

Let’s install it first:

npm install webpack-dev-server -D

Then we add the following configuration to webpack.dev.js:

module.exports = merge(common, {
  devServer: {
    host: '127.0.0.1', // host
    port: 9003, // port
    compress: true, // enable compression
    open: true, // open the default browser
    hot: true, // hot updates
  },
});

We then go back to package.json and modify the dev script as follows:

{
  "scripts": {
    "dev": "rimraf dist types && cross-env NODE_ENV=development webpack-dev-server --config ./scripts/webpack.dev.js"
  }
}

Then we run npm run dev to see what this configuration gives us.

You can see that after the build finishes, it automatically opens the index.html page we had to open manually when configuring html-webpack-plugin above, at the address http://127.0.0.1:9003/. Let's look at the console output:

Now let's try the hot update: modify the index.ts file, add a new console.log, and print anything we like.

Very nice: it automatically rebuilds the bundle for us and also refreshes the page, which is quite considerate.

npm link

With the above two plugins, we can now debug with local hot updates, no manual packaging or refreshing required; debugging with two big screens on and a cup of coffee in hand is easy. But there is another scenario that is still very tedious to debug. What is it? An actual project that uses our SDK has a bug.

Let's think about it for a moment. Projects that use our SDK normally install it from NPM after we publish it. If we want to debug inside such a project, we have to manually copy the build output into the project and then import it by hand. And every time we change the code, we have to build and copy it again! That's unbearable, so here's a practical trick that lets you debug locally in a real project.

The trick is to use npm link

Usage is extremely simple. First, in our SDK project, run on the command line:

npm link

After the command runs, the module is linked globally under the name configured in package.json.

Then, in the actual project where we want to use the SDK, run:

npm link library-starter

Then in that project we can write:

import LibraryStarter from 'library-starter'

new LibraryStarter({ id: 'GIQE-QWQE-VFFF', url: 'localhost' })

In this way, we can have fun debugging in specific projects ~

Unit testing

Unit testing is a very important part of project development; it reduces the occurrence of bugs. The code style and commit conventions above only ensure that committed code follows the rules; they can't guarantee that the code is correct. So we need basic unit tests to make sure every commit doesn't break the main functional flow.

In this section, we introduce the following two configuration processes:

  • How to use Jest for unit testing.
  • How to use husky + lint-staged + Jest to ensure we commit correct code.

Jest

Let’s start by installing:

npm install jest ts-jest @types/jest -D

Of these packages, jest needs no introduction; ts-jest is a Jest transformer that lets you test projects written in TypeScript with Jest; @types/jest keeps the compiler from complaining that Jest's types can't be found.

Jest does not compile .ts files by default, so for our TypeScript project to run unit tests properly we need to configure Jest. Using the ts-jest defaults, we execute:

npx ts-jest config:init

After running npx ts-jest config:init, a Jest config file is created in the root directory. We modify jest.config.js as follows:

module.exports = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  // Whether to collect coverage information
  collectCoverage: true,
  // Tell Jest which files need coverage: all .ts files in the src folder
  collectCoverageFrom: ['src/*.ts'],
  // Statement, branch, function and line coverage must all reach 100% for the test to pass
  coverageThreshold: {
    global: {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
  // Path mapping, matching the TypeScript paths config
  // https://kulshekhar.github.io/ts-jest/docs/getting-started/paths-mapping/
  moduleNameMapper: {
    '^@/(.*)$': '<rootDir>/src/$1',
    '^@docs/(.*)$': '<rootDir>/docs/$1',
    '^@public/(.*)$': '<rootDir>/public/$1',
    '^@test/(.*)$': '<rootDir>/test/$1',
  },
};

Then we’ll configure our package.json to add scripts:

"test": "jest"

Ok, now that we’re ready to write unit tests, let’s test them briefly:

We modify the src/index.ts file:

interface ConfigOptions {
  id: string;
  url: string;
}
class LibraryStarter {
  public id: string;

  public url: string;

  constructor(options: ConfigOptions) {
    this.id = options.id;
    this.url = options.url;
  }

  getConfig() {
    return {
      id: this.id,
      url: this.url,
    };
  }
}

export default LibraryStarter;

We will create a test folder in the root directory to store our use cases, and create init.spec.ts under the test folder:

import LibraryStarter from '@/index';

describe('src/index.ts', () => {
  it('returns the config passed to the constructor', () => {
    expect(
      new LibraryStarter({ id: 'GIQE-QWQE-VFFF', url: 'localhost' }).getConfig()
    ).toStrictEqual({
      id: 'GIQE-QWQE-VFFF',
      url: 'localhost',
    });
  });
});

Ok, let’s run our unit test case:

npm run test

This unit test should pass normally. Let’s take a look at the results:

PASS test/init.spec.ts
  src/index.ts
    √ returns the config passed to the constructor

----------|---------|----------|---------|---------|-------------------
File      | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files |     100 |      100 |     100 |     100 |
 index.ts |     100 |      100 |     100 |     100 |
----------|---------|----------|---------|---------|-------------------
Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        4.927 s
Ran all test suites.

Let me explain what these metrics mean:

  • % Stmts (statement coverage): coverage of code statements
  • % Branch (branch coverage): coverage of conditional branches in the code
  • % Funcs (function coverage): coverage of functions in the code
  • % Lines (line coverage): coverage of lines of code

As shown in the above results, our test passed, let’s open the generated test coverage report to have a look:

The report is the index.html file inside the coverage folder.

ESLint validation for Jest

For our unit test code to pass ESLint using Jest's recommended rules, we need to install a supporting plugin.

Let’s start by installing the plugin: eslint-plugin-jest

npm install eslint-plugin-jest -D

After installation, add it to the extends configuration in .eslintrc.js:

{
  "extends": ["plugin:jest/recommended"]
}

Of course, we also need the lint script in package.json to cover the test folder:

"lint": "eslint src test",

Submit the correct code via Jest

Code that fails the Jest unit tests should not be allowed into our repository.

This need, of course, has to be fulfilled by our old friend Husky

Before configuring the hook, we first add Jest validation to the build script:

{
  "scripts": {
    "build": "npm run lint && npm run test && rimraf dist && cross-env NODE_ENV=production webpack --config ./scripts/webpack.prod.js"
  }
}

This ensures the code only builds when the tests pass. Next, we configure a pre-push file in the .husky folder by executing:

npx husky add .husky/pre-push "npm run test"

If the command fails on Windows, refer to the commitlint section above: juejin.cn/post/703896…

The complete file after execution looks like this:

#!/bin/sh
. "$(dirname "$0")/_/husky.sh"

npm run test

After this configuration, every push runs npm run test; if coverage doesn't reach 100% or any test case fails, the push is rejected.

Automated deployment and publishing

After we develop the SDK, we usually need to publish it to NPM so others can download and use it. Publishing to NPM manually every time is tedious, so this section covers two parts:

  • Automatically update static resources (the docs) to GitHub
  • Automatically build and publish the artifact to NPM

This section is described in another article of mine, here’s the link:

juejin.cn/post/704519…

In closing

That's it. A basic TypeScript SDK project is now in place. You just need to start writing your business code in the index.ts entry file.

If you want to work on a TypeScript non-SDK project, you can configure the TypeScript project as above and add your own configuration later