We have been using next-antd-scaffold for all of our recent projects. During development we kept rethinking and improving the login-authorization and route-authentication flow, hoping to land on an optimal solution.

Why call out the Next.js case specifically? Because in an SSR framework the first screen is rendered on the server, so some of the processing and requests differ from an ordinary SPA. With plain client-side rendering, a single set of logic covers login validation; in Next.js, the server and the client actually require separate handling. A unified, convenient abstraction therefore makes our day-to-day business development easier.

Since Nuxt and Next are basically the same in this respect, and authorization verification is a matter of logical design rather than framework-specific code, the approach below should serve as a general login-authorization scheme for any SSR framework.
Login
The login logic itself is simple and the same in any system: a login module is essential. So instead I'll talk about the problems I ran into during development and how I think about them. Generally speaking, there are two scenarios for logging in to a business or blog system.

First: the user enters the system for the first time, is shown the login page, and enters the system after logging in.

Second: the system has saved the user's authorization information, so within a certain period the user does not log in again and lands directly on the home page (or whatever URL they opened). When the authorization information expires, or the user clears the browser, the system prompts the user to log in again.

After a successful login, the user information is written to designated locations (typically the user info goes into state and the authorization info goes into a cookie) so that subsequent authorization checks can use it.
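To make this concrete, here is a minimal sketch of how a login success might land the user info in state. The action name `authUserSuccess` mirrors the action used later in this article, but the reducer shape is illustrative, not the scaffold's actual code:

```javascript
// Minimal sketch: after a successful login (or after restoring from
// cookies), dispatch an action whose reducer writes the user into state.
const AUTH_USER_SUCCESS = 'AUTH_USER_SUCCESS';

const authUserSuccess = payload => ({ type: AUTH_USER_SUCCESS, payload });

// auth.user starts as null; the authorization logic checks this later.
const initialState = { auth: { user: null } };

function userReducer(state = initialState, action = {}) {
  switch (action.type) {
    case AUTH_USER_SUCCESS:
      // payload carries username / userId from the login response or cookies
      return { ...state, auth: { ...state.auth, user: action.payload } };
    default:
      return state;
  }
}
```

The cookie side (writing USER_TOKEN and friends) happens in parallel with this dispatch; state handles in-session reads, the cookie handles survival across refreshes.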
A quick aside about login

For the re-login that happens after the user's authorization expires, there are two sub-cases. In the first, the user enters a system page (not necessarily the home page, any page), the page sends a request, the request reveals that the authorization has expired, and only then is the user prompted to log in again. In the second, the expiry is detected before the browser ever shows the page (that is, on the server side), and the user is redirected to the login page no matter which page they opened.

The first case is what I call login with a flash (as the name implies, the user briefly sees the system before being redirected to the login page, an extra flash of page switching), and the second is flash-free login. That's not to say flash-free is strictly better; both schemes have their place in commercial systems, depending on the specific business scenario. I won't go into detail here, this is just a brief mention.
Authorization
Authorization is the front-end/back-end agreement about permissions on the interfaces: whether an interface is open to everyone, and if not, which fields each side checks and where they are carried. Every team has its own coding conventions, and I'm not asking anyone to adopt mine; the scheme below is an abstraction of my own business scenarios, which you can adapt to your own habits.
cookie + token
My habit is to authenticate users between front end and back end with a token. The back end generates the token and it is stored in a cookie; on later requests the front end takes the token out of the cookie and carries it in the request header (I use fetch). This splits into the following two scenarios:
The user has never logged in -> login logic

The user enters the system for the first time and logs in. Once the password check passes, the front end receives the user's information and token and saves them in cookies.

The user has already logged in -> front-end authorization logic

If the user has logged in before and the token exists and is still within its validity period, the auth flow reads the user information directly from the cookie without sending a request.
Why two scenarios?

Because user information lives in state, and state resets to its initial value whenever the system is refreshed. In most cases there is no need to send another request to fetch that data from the back end: the relevant user information (user name, user ID and other essentials) can be kept in cookies, and the authorization logic can copy it from the cookies into state, saving unnecessary network requests.

Besides the cookie, the token could also be kept in state, but that brings a synchronization problem: the token's status in state may go stale. Say the token expires after one hour; once that hour passes, the user should have to authorize again. But if I keep the page open the whole time, the token sitting in state will not expire or disappear after the hour is up. A cookie, on the other hand, can be given an expiration time when it is written. And even in the stale-state case, the back end's responses will tell you the token has expired, so overall the impact is small.
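For illustration, a tiny hypothetical helper makes the difference explicit: a cookie expires on its own, while a token kept in state needs a manual check like this (the helper name and TTL are my assumptions, not code from the scaffold). On the cookie side, js-cookie takes an `expires` attribute measured in days, so a one-hour cookie would be roughly `cookie.set('USER_TOKEN', token, { expires: 1 / 24 })`.

```javascript
// Hypothetical helper: a token held in state does not expire by itself,
// so validity has to be computed from when it was issued plus a TTL.
function isTokenExpired(issuedAt, ttlMs, now = Date.now()) {
  return now - issuedAt >= ttlMs;
}

const ONE_HOUR = 60 * 60 * 1000;
```

A cookie written with an expiration time performs exactly this check implicitly in the browser, which is why the cookie is the more reliable home for the token.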
Authorization scheme
The relevant information from a successful login is stored in cookies. Here I use USER_TOKEN, USER_NAME, and USER_ID to represent what a successful login writes into the cookies.
- Add the authorization logic to getInitialProps in _app.js by calling initialize(ctx):

```js
static async getInitialProps({ Component, ctx }) {
  let pageProps = {};
  /**
   * Application initialization. It must run before Component.getInitialProps,
   * because authorization is the highest-priority logic in the system.
   * ctx is the incoming parameter, containing the store and the req.
   */
  initialize(ctx);
  if (Component.getInitialProps) {
    pageProps = await Component.getInitialProps({ ctx });
  }
  return { pageProps };
}
```
- The initialize logic:

```js
/**
 * @param {Object} ctx
 */
export default function(ctx) {
  const { req, store } = ctx;
  const userToken = getCookie('USER_TOKEN', req);
  // The cookie holds a token but auth.user in state is still null,
  // so restore the user info from the cookies into state.
  if (userToken && !store.getState().user.auth.user) {
    const payload = {
      username: getCookie('USER_NAME', req),
      userId: getCookie('USER_ID', req),
    };
    store.dispatch(authUserSuccess(payload));
  }
}
```
- Encapsulating the cookie access

We use js-cookie to read and write cookies on the front end. But an SSR framework also fetches data on the server side, and there js-cookie cannot reach the cookies; they have to come from the req we pass in. So the end result is an encapsulation that works for both the server and the client:
```js
import cookie from 'js-cookie';

/**
 * Client side -> use the js-cookie API directly
 * Server side -> parse the cookie out of ctx.req
 */
export const getCookie = (key, req) => {
  return process.browser
    ? getCookieFromBrowser(key)
    : getCookieFromServer(key, req);
};

const getCookieFromBrowser = key => {
  return cookie.get(key);
};

const getCookieFromServer = (key, req) => {
  if (!req.headers.cookie) {
    return undefined;
  }
  const rawCookie = req.headers.cookie
    .split('; ')
    .find(c => c.trim().startsWith(`${key}=`));
  if (!rawCookie) {
    return undefined;
  }
  return rawCookie.split('=')[1];
};
```
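To sanity-check the server branch, here is a standalone copy of the parsing logic run against a mocked `req` (the cookie values are made up for the demo; a real `req` is Node's IncomingMessage):

```javascript
// Standalone copy of the server-side branch of getCookie, so it can be
// exercised without a real Next.js ctx.req.
const getCookieFromServer = (key, req) => {
  if (!req.headers.cookie) {
    return undefined;
  }
  // The Cookie header is "name=value" pairs separated by "; "
  const rawCookie = req.headers.cookie
    .split('; ')
    .find(c => c.trim().startsWith(`${key}=`));
  if (!rawCookie) {
    return undefined;
  }
  return rawCookie.split('=')[1];
};

// Mocked request, shaped like Node's IncomingMessage headers
const req = { headers: { cookie: 'USER_TOKEN=abc123; USER_NAME=rain' } };
```

Missing keys fall through to `undefined`, which is exactly what the initialize logic relies on when deciding whether the user still counts as logged in.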
The encapsulation is actually very simple, and the community seems to have plenty of packages for this, such as next-cookies and nookies; feel free to use those. I use my own version: it does the job, and there's no extra dependency.
Finally, I also organized the login-authorization logic roughly and drew a flow chart:
That's just the setup, yes, only the setup, because so far it's only the login + authorization logic. What I really want to get to is best practice for the interface authentication done together with the back end, the most frustrating part of these systems: the validation part.
Validation
The idea is very simple. In a commercial system, every interface except login and registration needs to verify the user's identity. My front end and back end generally do this with a token; the exact front-end/back-end convention varies.

For personal projects I generally use JWT, so the token goes into the Authorization header field, prefixed as `Bearer ${token}`.

In my company's projects, the front end and back end simply agree on a header field: the back end names it, and the front end puts the token into that field.
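As a tiny illustration of the JWT convention above, a hypothetical helper for building the header could look like this (the function name is mine, not from any library):

```javascript
// Build the Authorization header per the JWT Bearer convention:
// the raw token is prefixed with "Bearer " before being sent.
function buildAuthHeader(token) {
  return { Authorization: `Bearer ${token}` };
}
```

The company-convention case is the same idea with a custom field name, e.g. `{ 'User-Token': token }`, which is what the fetch wrapper below does.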
Validation step 1: encapsulate fetch

I believe I covered the specific encapsulation in a previous article, so I'll just paste the code here.

The convention here is that the front end and back end carry the token in a header field named User-Token.
```js
import fetch from 'isomorphic-unfetch';
import qs from 'query-string';
import { getCookie } from './cookie';

// initial fetch
const nextFetch = Object.create(null);

// browser-supported methods:
// ['DELETE', 'GET', 'HEAD', 'OPTIONS', 'POST', 'PATCH', 'PUT']
const HTTP_METHOD = ['get', 'post', 'put', 'patch', 'delete'];

// methods that can send data in opts.body
const CAN_SEND_METHOD = ['post', 'put', 'delete', 'patch'];

HTTP_METHOD.forEach(method => {
  // whether this method can send data in opts.body
  const canSend = CAN_SEND_METHOD.includes(method);

  nextFetch[method] = (path, { data, query, timeout = 5000 } = {}) => {
    let url = path;
    const opts = {
      method,
      headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        Accept: 'application/json',
        /* add the token to the agreed header field */
        'User-Token': getCookie('USER_TOKEN')
      },
      credentials: 'include',
      timeout,
      mode: 'cors',
      cache: 'no-cache'
    };

    // append the query string
    if (query) {
      url += `${url.includes('?') ? '&' : '?'}${qs.stringify(query)}`;
    }

    if (canSend && data) {
      opts.body = JSON.stringify(data);
    }

    console.log('Request Url:', url);
    return fetch(url, opts).then(res => res.json());
  };
});

export default nextFetch;
```
Well, that's the validation logic: on every fetch request we take the token from the cookie and stuff it into the header. So here's the question: can it actually be put in every time?

The answer is no! As mentioned earlier, one characteristic of a server-rendering framework is that the server fetches data. In many cases data is fetched on the server side (with getInitialProps, that happens when the system initializes or the page is refreshed), and the server cannot get the token through js-cookie. You might say: didn't we just wrap getCookie so the server can read cookies too? Yes, but look closely: the wrapper needs its second parameter, ctx.req. Are we going to pass req into every fetch call? That's impractical and fights the development flow, so let's dig deeper.
Validation step 2: get the cookie correctly and gracefully

I considered two schemes here. One of them is more troublesome but still has suitable scenarios, and to show how the exploration went, I'll describe both:
- First scheme: the server passes req to fetch explicitly; the client reads the cookie directly

Not every request passes req, only the server-side ones do, so server and client end up writing their requests differently:
```js
// nextFetch.js
HTTP_METHOD.forEach(method => {
  ...
  /* add a req parameter so the server side can read the cookie from it */
  nextFetch[method] = (path, { data, query, timeout = 5000, req } = {}) => {
    let url = path;
    const opts = {
      method,
      headers: {
        ...
        /* put the token into the agreed header field */
        'User-Token': getCookie('USER_TOKEN', req)
      },
      ...
    };
    ...
  };
});

export default nextFetch;
```
Our getInitialProps usage then needs to change:
```js
// pages/home.js

// ==========> before
import Home from '../../containers/home';
import { fetchHomeData } from '../../redux/actions/home';

Home.getInitialProps = async (props) => {
  const { store, isServer } = props.ctx;
  store.dispatch(fetchHomeData());
  return { isServer };
};

export default Home;

// ==========> after
import Home from '../../containers/home';
import { fetchHomeDataSuccess } from '../../redux/actions/home';
import nextFetch from '../../core/nextFetch';

Home.getInitialProps = async (props) => {
  const { store, isServer, req } = props.ctx;
  // fetch directly, passing req, then dispatch the success action to put the data in state
  const homeData = await nextFetch.get(url, { query, req });
  store.dispatch(fetchHomeDataSuccess(homeData));
  return { isServer };
};

export default Home;
```
There's actually no big problem with this scheme; the only issue is that the flow becomes a bit unorthodox. In principle we obtain data by dispatching an action, but here it turns into fetching directly and then dispatching the success action. The reason I don't recommend it is that requests end up written two different ways, one for the server side and one for the client, which feels logically muddled. For solo development that's not a big deal, but in collaborative development everyone has to agree on it in advance. Still, it has applicable scenarios: it works when the system is relatively simple and has no state-management layer such as redux.

It's a natural fit for stateless scenarios where you don't need redux and can simply pass the fetched data to components as props.
- Second scheme: the data-fetching logic stays the same; the token is not read inside fetch but obtained at the redux layer and passed into fetch

One of my systems uses this scheme. The idea: I don't want to change the data-fetching logic, and I don't want req threaded through actions or fetch. The client can obtain the token through js-cookie; the server cannot, so it gets the token from somewhere else, namely from state. The overall authentication logic looks like this:
```js
// nextFetch.js
import fetch from 'isomorphic-unfetch';
...

HTTP_METHOD.forEach(method => {
  // whether this method can send data in opts.body
  const canSend = CAN_SEND_METHOD.includes(method);

  /* add a token parameter, passed in from the redux layer */
  nextFetch[method] = (path, { data, query, timeout = 5000, token } = {}) => {
    let url = path;
    const opts = {
      ...
      headers: {
        ...
        'User-Token': token
      },
      ...
    };
    ...
  };
});

export default nextFetch;
```
nextFetch gains a token parameter. Where does the token come from? As mentioned above, it is passed in from the redux layer. I use redux-saga for asynchronous logic; everything else stays the same.
```js
import { take, put, fork, select } from 'redux-saga/effects';
import { FETCH_HOME_DATA } from '../../../constants/ActionTypes';
import {
  fetchHomeDataFail,
  fetchHomeDataSuccess,
} from '../../actions/home';
import api from '../../../constants/ApiUrlForBE';
import nextFetch from '../../../core/nextFetch';
import { getToken } from '../../../core/util';

export function* fetchHomeData() {
  while (true) {
    yield take(FETCH_HOME_DATA);
    /* get the token */
    const token = yield getToken(select);
    const query = { ...queryProps };
    try {
      const data = yield nextFetch.get(api.home, { token, query });
      yield put(fetchHomeDataSuccess(data));
    } catch (error) {
      yield put(fetchHomeDataFail());
    }
  }
}

export default [
  fork(fetchHomeData)
];
```
So when a saga fetches data through nextFetch, it first obtains the token through a helper and hands it to nextFetch:
```js
/**
 * Get the token: the client reads it from the cookie,
 * the server selects it from state.
 * @param {Function} select
 */
export function getToken(select) {
  return process.browser
    ? getCookie('USER_TOKEN')
    : select(state => state.user.auth.token);
}
```
Any problems? I think this scheme is genuinely fine: once encapsulated it is not troublesome, it looks fairly elegant, and team development is no problem, since the token-acquisition process is confined to one unified piece of code. There is one small flaw, though: every saga has to go through the token-fetching step separately and then plug the token into nextFetch.
Validation step 3: the best solution

The schemes above came out of designing, improving and overturning across the several systems I've written. The essence of the problem is that the server and the client cannot share cookies. The second scheme is acceptable, at least it reads cleanly and isn't troublesome once encapsulated, but I kept wondering: does every saga really need to run the token-fetching step and pass the result into fetch? It would be nicer to handle it inside fetch itself.

With that thought I suddenly had an epiphany (it came to me in a dream, believe it or not, haha). The second scheme added a token property to nextFetch's parameters and passed the token in each time. So: what if we hand the token to nextFetch once, at the moment we already have it, as a property on nextFetch itself? With the idea in place, let's try it.
```js
// initialize.js
import { getCookie } from './cookie';
/* import nextFetch */
import nextFetch from './nextFetch';
import { authUserSuccess } from '../redux/actions/user';

/**
 * @param {Object} ctx
 */
export default function(ctx) {
  const { req, store } = ctx;
  const userToken = getCookie('USER_TOKEN', req);
  /** new line: assign the token we just read to nextFetch's Authorization property **/
  nextFetch.Authorization = userToken;
  if (userToken && !store.getState().user.auth.user) {
    ...
    store.dispatch(
      authUserSuccess(payload)
    );
  }
}
```
At this point, reading the token inside the request looks like this:
```js
import fetch from 'isomorphic-unfetch';
...

HTTP_METHOD.forEach(method => {
  // whether this method can send data in opts.body
  const canSend = CAN_SEND_METHOD.includes(method);

  nextFetch[method] = function (path, { data, query, timeout = 5000 } = {}) {
    let url = path;
    const opts = {
      ...
      headers: {
        ...
        /* in the browser read the cookie; on the Node side read our own Authorization property */
        'User-Token': process.browser
          ? getCookie('USER_TOKEN')
          : this.Authorization
      },
      ...
    };
    ...
  };
});

export default nextFetch;
```
Let's try printing it in the browser. As you can see, after authorization the token has been successfully assigned to the nextFetch.Authorization property, so the scheme is feasible.
So the final solution: at the authorization step, assign the token to nextFetch as a property; then inside nextFetch, the client reads the token from the cookie and the server reads it from nextFetch's own property. No extra work is needed anywhere else.
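Stripped of the Next.js and fetch specifics, the final pattern can be modeled in a few lines. The method body here is a stand-in that just returns the headers it would send, not the real nextFetch:

```javascript
// Minimal model of the final scheme: nextFetch is a plain object; on the
// server, initialize() assigns the token it read from the request cookie
// to nextFetch.Authorization, and each method reads it back via `this`.
const nextFetch = Object.create(null);

nextFetch.get = function (path) {
  // In the browser this branch would read js-cookie instead; here we
  // model only the server branch, which uses this.Authorization.
  const token = this.Authorization;
  return { path, headers: { 'User-Token': token } };
};

// What initialize(ctx) does on the server before any page fetches run:
nextFetch.Authorization = 'token-from-req-cookie';
```

Because the method is a regular `function` (not an arrow function), `this` resolves to nextFetch at call time, which is exactly why the property assignment in initialize is visible to every later request method.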
In fact, the first version of this plan used `global`. But `global` is shared across the whole server side: assignments from different users' requests overwrite each other, which breaks under high concurrency, so it definitely cannot be used.
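To see why, here is a small simulation of the hazard (the request handler is invented for the demo): two concurrent requests share one slot on `global`, and the first request's token is overwritten before it gets read back.

```javascript
// Two simulated concurrent requests share a single token slot on `global`.
global.userToken = undefined;

// Yield to the event loop, the way a real handler would while awaiting I/O.
const tick = () => new Promise(resolve => setImmediate(resolve));

async function handleRequest(token, log) {
  global.userToken = token;    // this request "logs in" and writes the slot
  await tick();                // simulated async work (DB query, fetch, ...)
  log.push(global.userToken);  // read back: may now hold another user's token
}

async function demo() {
  const log = [];
  await Promise.all([
    handleRequest('token-A', log),
    handleRequest('token-B', log),
  ]);
  return log;
}
```

In this interleaving, request B writes the slot while request A is still awaiting, so A reads back B's token: exactly the cross-user leak that rules `global` out.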
Conclusion
The code above may feel abstract, so I've opened a new auth branch on the scaffold; you can run through the code there, and the fetch inside is complete as well. Use what you need.

Address: Next-ANTD-Scaffold_auth

This article may be a little rambling, perhaps leaning too much on my own habits of thinking, but the central idea is to help you simplify the login-authorization-validation logic. Several of the schemes are all workable; you'll have your own schemes in your own projects, and you're welcome to discuss them with me.