This article expects readers to have some knowledge of NodeJS and Koa
What problem does GraphQL solve?
In the past, when server-side developers designed a data interface, client developers usually did not need to know the internal data structure. The server team only provided API documentation (a specification) describing how to call the API and what data it returned. Once the documentation was written and the functionality implemented, the server-side work was considered complete.
Over time, this way of working revealed some problems:
- Writing API documentation is a burden
- API documentation and API services are often deployed in different domains, so we need to remember where the documentation is
- We always find inconsistencies between the actual behavior of the API and the documentation
- Enumeration values of internal API data are always leaked to the client
- API parameter verification is repeated on both the client and server
- It is difficult to see the overall structure of the application data
- We had to maintain multiple versions of the API
Over time, we found that a data description specification needed to be shared between the server and the client:
- This data description is part of the functionality (note that it is not a comment), and it participates in implementing the API functionality.
- The data description itself is documentation, and we no longer need to write documentation, let alone deploy documentation services.
- When we change the data description details, the API functionality changes and we don’t have to worry about inconsistent documentation and behavior.
- The data description itself supports enumeration types, limiting the problem of leakage of enumeration values.
- The data description itself has a type system, we do not need to do the parameter verification work in the client and server.
- The data description itself is the structure diagram of the entire application data.
- Data description can mask version maintenance problems.
GraphQL is such a data description specification.
What is GraphQL?
The official website describes it as follows:
GraphQL is a query language for APIs and a server-side runtime for executing queries using a type system you define for your data.
Here is a GraphQL-based data description:
type Query {
book: Book
}
enum BookStatus {
DELETED
NORMAL
}
type Book {
id: ID
name: String
price: Float
status: BookStatus
}
To be platform-independent and easier to understand on the server, GraphQL implements an easy-to-read Schema syntax: Schema Definition Language (SDL)
SDL is used to represent the types available in the schema and the relationships between those types
SDL must be stored as a string
SDL defines three entry-point types:
- Query: used to define read operations (the R in CRUD)
- Mutation: used to define write operations (the CUD in CRUD)
- Subscription: used to define long-lived connections (an event-based way to create and maintain real-time connections to the server)
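A minimal SDL sketch of the three entry types (the Mutation and Subscription fields here are hypothetical; only Query appears in this article's demo):

```graphql
type Query {
  book: Book                     # read (the R in CRUD)
}

type Mutation {
  addBook(name: String!): Book   # write (the CUD in CRUD)
}

type Subscription {
  bookAdded: Book                # event pushed over a long-lived connection
}
```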
With the above code, we declare a query named book of type Book.
The type Book has four fields:
- id: the unique ID of each book, of type ID
- name: the name of each book, of type String
- price: the price of each book, of type Float
- status: the status of each book, of type BookStatus
BookStatus is an enumerated type containing:
- DELETED: the book has been removed from sale; its value is 0
- NORMAL: the book is on sale normally; its value is 1
In addition to the ability to define its own data types, GraphQL also comes with several basic types (scalars) built in:
- Int: a signed 32-bit integer
- Float: a signed double-precision floating-point value
- String: a UTF-8 character sequence
- Boolean: true or false
- ID: a unique identifier, often used to retrieve an object or as a cache key
Note that GraphQL requires endpoint fields to be scalar types. (The endpoints here can be understood as leaf nodes.)
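For example, with the Book schema above, a query must drill down to scalar leaves; stopping at an object type is invalid (a sketch, not from the original article):

```graphql
# Valid: both selected leaves (id, name) are scalars
{
  book {
    id
    name
  }
}

# Invalid: book is an object type and cannot be a leaf
{
  book
}
```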
For more information about GraphQL, see: graphql.cn/learn/
What is Apollo?
The official website describes it as follows:
Apollo is an implementation of GraphQL that helps you manage data from the cloud to the UI. It can be adopted incrementally and layered on top of existing services, including REST APIs and databases. Apollo includes two sets of open source libraries, for clients and for servers, as well as developer tools that provide everything you need to run a GraphQL API reliably in production.
We can think of Apollo as a set of tools that fall into two broad categories, one service-oriented and one client-oriented.
Among them, client-oriented Apollo Client covers the following tools and platforms:
- React + React Native
- Angular
- Vue
- Meteor
- Ember
- iOS (Swift)
- Android (Java)
- …
Service-oriented Apollo Server covers the following platforms:
- Java
- Scala
- Ruby
- Elixir
- NodeJS
- …
In this article, we will use Apollo's apollo-server-koa library, targeting the Koa framework on the Node.js server side.
For more information about Apollo Server and apollo-server-koa, please refer to:
- www.apollographql.com/docs/apollo…
- Github.com/apollograph…
Build GraphQL backend API service
Quick start
Step 1:
First, create a new folder named graphql-server-demo:
mkdir graphql-server-demo
Initialize the project within the folder:
cd graphql-server-demo && yarn init
Install dependencies:
yarn add koa graphql apollo-server-koa
Step 2:
Create a new index.js file and write the following code in it:
'use strict'

const Koa = require('koa')
const { ApolloServer, gql } = require('apollo-server-koa')
const app = new Koa()

// Define the GraphQL schema in typeDefs.
// For example, here we define a query named book of type Book.
const typeDefs = gql`
  type Query {
    book: Book
    hello: String
  }
  enum BookStatus {
    DELETED
    NORMAL
  }
  type Book {
    id: ID
    name: String
    price: Float
    status: BookStatus
  }
`

const BookStatus = {
  DELETED: 0,
  NORMAL: 1
}

// Define the resolvers.
// For the query hello, a resolver of the same name returns the string 'Hello world!'.
// For the query book, a resolver of the same name returns a predefined object
// (a real scenario might return data from a database or another API).
const resolvers = {
  // Apollo Server lets us mount the actual enum mapping into the resolvers
  // (such mappings are usually kept in server-side config files or databases).
  // Any data exchange involving this enum automatically swaps the enum value
  // for the enum name, avoiding leaking enum values to the client.
  BookStatus,
  Query: {
    hello: () => 'Hello world!',
    book: (parent, args, context, info) => ({
      name: 'Once upon a Earth',
      price: 66.3,
      status: BookStatus.NORMAL
    })
  }
}

// Create a server instance from the schema and the resolvers.
const server = new ApolloServer({ typeDefs, resolvers })

// Mount the server instance onto the app as middleware.
server.applyMiddleware({ app })

// Start the web service.
app.listen({ port: 4000 }, () =>
  console.log(`🚀 Server ready at http://localhost:4000/graphql`)
)
Looking at the code above, we see that the query book defined in the SDL has a resolver of the same name, book, as the implementation of its data source.
In fact, GraphQL requires every field to have a resolver. For endpoint fields, that is, fields of scalar type, most GraphQL libraries allow the resolver definition to be omitted; in that case, the property with the same name as the field is read automatically from the parent object.
Because hello in the code above is a root field with no parent object, we need to implement a resolver for it explicitly to specify its data source.
A resolver is a function that receives the following parameters:
- parent: the parent object; for root fields, the value is undefined
- args: the arguments passed in the query
- context: an object provided to all resolvers, holding important contextual information such as the currently logged-in user or a database access object
- info: holds field-specific information and schema details related to the current query
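As a rough, dependency-free sketch of the default-resolver behavior described above (an illustration, not Apollo's actual implementation):

```javascript
// A default resolver simply reads the property with the same name as the
// field from the parent object.
const defaultResolver = fieldName => (parent, args, context, info) =>
  parent[fieldName]

// Omitting resolvers for scalar fields like name and price behaves
// roughly like this:
const bookFieldResolvers = {
  name: defaultResolver('name'),
  price: defaultResolver('price')
}

const parentBook = { name: 'Once upon a Earth', price: 66.3 }
console.log(bookFieldResolvers.name(parentBook)) // 'Once upon a Earth'
```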
Step 3:
Start the service
node index.js
At this point, we see the following information on the terminal:
➜ graphql-server-demo git:(master) ✗ node index.js
🚀 Server ready at http://localhost:4000/graphql
This indicates that the service has started.
Open another terminal interface and request the Web service we just started:
curl 'http://localhost:4000/graphql' -H 'Content-Type: application/json' --data-binary '{"query":"{hello}"}'
or
curl 'http://localhost:4000/graphql' -H 'Content-Type: application/json' --data-binary '{"query":"{book{name price status}}"}'
The following information is displayed:
{"data": {"hello":"Hello world!"}}
or
{"data": {"book": {"name":"Once upon a Earth"."price": 66.3."status":"NORMAL"}}}
This means we have successfully created the GraphQL API service ~
Using commands on the terminal to debug the GraphQL API is obviously not what most of us want.
We need a graphical client with memory to help us remember the parameters of each previous query.
In addition to customizing query parameters with this client, you can also customize header fields, view Schema documents, view the data structure of the entire application…
Next, let's look at the Playground that Apollo offers us.
Playground
Start the GraphQL backend service we just created:
➜ graphql-server-demo git:(master) ✗ node index.js
🚀 Server ready at http://localhost:4000/graphql
Open the address http://localhost:4000/graphql in your browser.
At this point, we should see the following interface:
Enter query parameters on the left:
{
book {
name
price
}
}
Then click the middle button to make the request (when I first saw that button, I thought it was playing a video…). After the request succeeds, we should see the output on the right side:
Playground also provides us with the following functions:
- Create multiple queries and remember them
- Custom request header fields
- View the entire API documentation
- View the complete server Schema structure
The diagram below:
The content in DOCS and SCHEMA is provided through a GraphQL feature called Introspection.
The introspection feature allows the client to ask the GraphQL schema which queries it supports; when Playground launches, it first sends introspection requests to retrieve the schema information and build the content of DOCS and SCHEMA.
For more information on introspection, please refer to graphql.cn/learn/intro…
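For a sense of what those pre-flight requests look like, here is an abridged introspection query of the kind Playground issues on launch (the `__schema` meta-field is part of the GraphQL specification):

```graphql
{
  __schema {
    queryType {
      name
    }
    types {
      name
      kind
    }
  }
}
```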
For Playground and introspection, we want them enabled only in development and testing; in production, we want them turned off.
We can mask the production environment by using the corresponding switches (Playground and Introspection) when creating the Apollo Server instance:
...
const isProd = process.env.NODE_ENV === 'production'
const server = new ApolloServer({
  typeDefs,
  resolvers,
  introspection: !isProd,
  playground: !isProd
})
...
Next, let’s consider a more common question:
After the client and server agree on the API document, the server functionality usually takes some time to develop. Until it is finished, the client cannot request real data from the API, so to make client development easier, we let the API return some fake data.
Next, let’s look at how to do this on the GraphQL server.
Mock
It is very simple to implement the mock functionality of the API using the GraphQL Server based on Apollo Server.
We just need to turn on the mocks option when building the Apollo Server instance:
...
const server = new ApolloServer({
  typeDefs,
  resolvers,
  introspection: !isProd,
  playground: !isProd,
  mocks: true
})
...
Restart the service and make a request in Playground, and you’ll see that the resulting data becomes random dummy data within the type:
Thanks to GraphQL’s type system, although we provide random data through mocks, the types of the data are the same as those defined in the Schema, which definitely reduces the workload of configuring mocks and allows us to focus on the types.
In fact, the more precise we are with our type definitions, the better the quality of our mock service will be.
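For instance, tightening the Book type with non-null markers (a hypothetical refinement, not something the demo requires) narrows what the mock may generate:

```graphql
type Book {
  id: ID!            # the mock will never return null here
  name: String!
  price: Float       # still nullable
  status: BookStatus # the mock picks one of the enum's names
}
```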
Parameter verification and error information
In the previous section we saw some help from the type system for mock services.
Another scenario where the type system comes into play is request parameter validation.
With the type system, GraphQL can easily judge in advance whether a query conforms to the schema, instead of discovering bad request parameters later, during execution.
For example, if we query a field that does not exist on Book, GraphQL will intercept the query and return an error:
We see that the result no longer contains a data field; instead it contains only an errors field, an array showing the exact details of each error.
In fact, Playground already detected the invalid argument as we typed the incorrect field name none, and gave us a hint. Notice the little red block on the left of the image above: when hovering over the incorrect field, Playground shows the specific error, the same one the server returns:
This error can be detected when we write query parameters without having to make a request, which is great, right?
In addition, the error returned by the server is not very readable. In production, we only want to log the detailed error information on the server, not expose it to the client.
Therefore, we may only want to return an error code and a short description in the response to the client:
{
  "errors": [
    {
      "code": "GRAPHQL_VALIDATION_FAILED",
      "message": "Cannot query field \"none\" on type \"Book\". Did you mean \"name\"?"
    }
  ]
}
When building an Instance of Apollo Server, we can pass a function called formatError to format the returned error message:
...
const server = new ApolloServer({
  typeDefs,
  resolvers,
  introspection: !isProd,
  playground: !isProd,
  mocks: true,
  formatError: error => {
    // log detail of error here
    return {
      code: error.extensions.code,
      message: error.message
    }
  }
})
...
Restart the service, request again, and we find that the error message is formatted as we expected:
Organizing schemas and resolvers
So far, the GraphQL server we’ve built is pretty rudimentary:
├── index.js
├── package.json
└── yarn.lock
It can’t be used in real engineering because it’s too free, and we need to design some rules for it to help us deal with real engineering problems.
By this section, I'm sure you've sensed some of the mental-model changes GraphQL brings:
- the work we used to spend organizing routes has become the work of organizing the Schema
- the work we used to spend organizing controllers has become the work of organizing Resolvers
Let’s design a rule that helps us organize our schemas and resolvers:
- New Folder
src
For storing most of the project code - in
src
Create a new folder incomponents
Is used to hold data entities - Each data entity is a folder containing two files:
schema.js
å’Œresolver.js
, they store information about the current data entitySchema
å’ŒResolver
A description of the - in
src/components
Create a new folder inbook
And create a new oneschema.js
å’Œresolver.js
To holdbook
Related Description - in
src
Creating a foldergraphql
, store allGraphQL
Relevant logic - in
graphql
Create a new file inindex.js
As aGraphQL
Startup file, which is responsible for collecting all data entities and generating when the server application is startedApollo Server
The instance
After following the above steps, the entire structure of graphQL-server-demo is as follows:
├── index.js
├── package.json
├── src
│   ├── components
│   │   └── book
│   │       ├── resolver.js
│   │       └── schema.js
│   └── graphql
│       └── index.js
└── yarn.lock
Next, let’s tweak the code
Step 1
Let's start with the entry file src/graphql/index.js, which:
- is responsible for reading and merging each component's Schema and Resolver
- is responsible for creating the Apollo Server instance
The final code of the entry file src/graphql/index.js is as follows:
const fs = require('fs')
const { resolve } = require('path')
const { ApolloServer, gql } = require('apollo-server-koa')

const defaultPath = resolve(__dirname, '../components/')
const typeDefFileName = 'schema.js'
const resolverFileName = 'resolver.js'

/**
 * In this file, all schemas are merged with the help of a utility called linkSchema.
 * The linkSchema defines all types shared within the schemas. It already defines a
 * Subscription type for GraphQL subscriptions, which may be implemented later.
 * As a workaround, there is an empty underscore field with a Boolean type in each
 * base type, because there is no official way of merging extend types yet.
 * The base types are then extended with the extend statement in the
 * domain-specific schemas.
 *
 * Reference: https://www.robinwieruch.de/graphql-apollo-server-tutorial/#apollo-server-resolvers
 */
const linkSchema = gql`
  type Query {
    _: Boolean
  }
  type Mutation {
    _: Boolean
  }
  type Subscription {
    _: Boolean
  }
`

function generateTypeDefsAndResolvers () {
  const typeDefs = [linkSchema]
  const resolvers = {}

  const _generateAllComponentRecursive = (path = defaultPath) => {
    const list = fs.readdirSync(path)
    list.forEach(item => {
      const resolverPath = path + '/' + item
      const stat = fs.statSync(resolverPath)
      const isDir = stat.isDirectory()
      const isFile = stat.isFile()

      if (isDir) {
        _generateAllComponentRecursive(resolverPath)
      } else if (isFile && item === typeDefFileName) {
        const { schema } = require(resolverPath)
        typeDefs.push(schema)
      } else if (isFile && item === resolverFileName) {
        const resolversPerFile = require(resolverPath)
        Object.keys(resolversPerFile).forEach(k => {
          if (!resolvers[k]) resolvers[k] = {}
          resolvers[k] = { ...resolvers[k], ...resolversPerFile[k] }
        })
      }
    })
  }

  _generateAllComponentRecursive()

  return { typeDefs, resolvers }
}

const isProd = process.env.NODE_ENV === 'production'

const apolloServerOptions = {
  ...generateTypeDefsAndResolvers(),
  formatError: error => ({
    code: error.extensions.code,
    message: error.message
  }),
  introspection: !isProd,
  playground: !isProd,
  mocks: false
}

module.exports = new ApolloServer({ ...apolloServerOptions })
In the code above, the linkSchema defines a Boolean field named _ under each of the Query, Mutation, and Subscription entry types.
This field is just a placeholder: since there is no official support yet for combining multiple extend types, we declare a placeholder field so that the extend combinations work.
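As a sketch of the idea: the placeholder lets the base type exist so that each component schema can extend it.

```graphql
# linkSchema: the base type with a placeholder field
type Query {
  _: Boolean
}

# a component schema (e.g. book) then extends the base type
extend type Query {
  book: Book
}
```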
Step 2
Let’s define the data entities: book’s Schema and Resolver’s contents:
// src/components/book/schema.js
const { gql } = require('apollo-server-koa')

const schema = gql`
  enum BookStatus {
    DELETED
    NORMAL
  }
  type Book {
    id: ID
    name: String
    price: Float
    status: BookStatus
  }
  extend type Query {
    book: Book
  }
`

module.exports = { schema }
We no longer need the hello query here, so we removed it while adjusting the book code.
From the code above, we see that the query type for book can be defined separately using the extend keyword.
// src/components/book/resolver.js
const BookStatus = {
  DELETED: 0,
  NORMAL: 1
}

const resolvers = {
  BookStatus,
  Query: {
    book: (parent, args, context, info) => ({
      name: 'Once upon a Earth',
      price: 66.3,
      status: BookStatus.NORMAL
    })
  }
}

module.exports = resolvers
The code above defines the data source for the book query; note that resolver functions may also return Promises.
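Since resolvers may return Promises, the book resolver could just as well be backed by an asynchronous source. A sketch, with a hypothetical fetchBookFromDb standing in for a real database call:

```javascript
// Hypothetical async data source standing in for a database query.
const fetchBookFromDb = async () => ({
  name: 'Once upon a Earth',
  price: 66.3
})

// The resolver returns a Promise; GraphQL waits for it to settle before
// serializing the response.
const resolvers = {
  Query: {
    book: (parent, args, context, info) => fetchBookFromDb()
  }
}

resolvers.Query.book().then(book => console.log(book.name)) // 'Once upon a Earth'
```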
Step 3
Finally, we adjust the contents of the service application startup file:
const Koa = require('koa')
const app = new Koa()
const apolloServer = require('./src/graphql/index.js')

apolloServer.applyMiddleware({ app })

app.listen({ port: 4000 }, () =>
  console.log(`🚀 Server ready at http://localhost:4000/graphql`)
)
Wow, the service startup file looks a lot leaner.
In the previous section, we said that the more precise we can define field types, the better the quality of our mock and parameter validation services.
So what happens when the existing scalar types don’t meet our needs?
Next, let's see how to implement a custom scalar.
Implementing a date field with a custom scalar
We add a new field to Book, called created, of type Date:
...
type Book {
  id: ID
  name: String
  price: Float
  status: BookStatus
  created: Date
}
...
book: (parent, args, context, info) => ({
  name: 'Once upon a Earth',
  price: 66.3,
  status: BookStatus.NORMAL,
  created: 1199116800000
})
The GraphQL standard does not include a Date type, so we implement one ourselves.
Step 1
First, we install the third-party date utility moment:
yarn add moment
Step 2
Next, create a new folder scalars under src/graphql:
mkdir src/graphql/scalars
We store custom scalars in the scalars folder. Create two new files in scalars: index.js and date.js:
src/graphql
├── index.js
└── scalars
    ├── date.js
    └── index.js
The file scalars/index.js is responsible for exporting the custom scalar Date
module.exports = { ...require('./date.js') }
The file scalars/date.js implements the custom scalar Date:
const moment = require('moment')
const { Kind } = require('graphql/language')
const { GraphQLScalarType } = require('graphql')

const customScalarDate = new GraphQLScalarType({
  name: 'Date',
  description: 'Date custom scalar type',
  parseValue: value => moment(value).valueOf(),
  serialize: value => moment(value).format('YYYY-MM-DD HH:mm:ss:SSS'),
  parseLiteral: ast => (ast.kind === Kind.INT)
    ? parseInt(ast.value, 10)
    : null
})

module.exports = { Date: customScalarDate }
From the code above, you can see that all you need to do to implement a custom scalar is create an instance of GraphQLScalarType.
When creating the GraphQLScalarType instance, we can specify:
- name: the name of the custom scalar
- description: a description of the custom scalar
- parseValue: the handler invoked when a custom scalar value is passed from the client to the server
- serialize: the handler invoked when a custom scalar value is returned from the server to the client
- parseLiteral: the handler for literals of the custom scalar in the AST (needed because values in the AST are always formatted as strings)
AST stands for abstract syntax tree. For details about abstract syntax trees, see zh.wikipedia.org/wiki/ Abstract Syntax Tree
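To make the three handlers' contracts concrete, here is a dependency-free sketch using the built-in Date instead of moment (an illustration of the shapes only; in graphql-js, Kind.INT is the string 'IntValue'):

```javascript
// serialize: server -> client (outgoing response value)
const serialize = value => new Date(value).toISOString()

// parseValue: client -> server (value arriving via query variables)
const parseValue = value => new Date(value).getTime()

// parseLiteral: an inline literal in the query AST; AST values are strings,
// so integer literals must be parsed explicitly.
const parseLiteral = ast =>
  ast.kind === 'IntValue' ? parseInt(ast.value, 10) : null

console.log(serialize(0)) // '1970-01-01T00:00:00.000Z'
console.log(parseLiteral({ kind: 'IntValue', value: '1199116800000' }))
```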
Step 3
Finally, let’s mount the custom scalar Date into the GraphQL startup file:
...
const allCustomScalars = require('./scalars/index.js')
...
const linkSchema = gql`
  scalar Date
  type Query {
    _: Boolean
  }
  type Mutation {
    _: Boolean
  }
  type Subscription {
    _: Boolean
  }
`
...
function generateTypeDefsAndResolvers () {
  const typeDefs = [linkSchema]
  const resolvers = { ...allCustomScalars }
  ...
Restart the service and query the created field of book; the custom scalar is now at work.
Implementing login verification with a custom directive
In this section, we will learn how to implement login verification on the GraphQL server.
In the past, each specific route mapped to a specific resource, so protecting some resources (requiring a logged-in user for access) was easy: we just designed a middleware and tagged each route that needed protection.
GraphQL breaks the notion of routing versus resource mapping and advocates resource protection by marking which fields are protected within the Schema.
To implement login verification in the GraphQL service, we need the following tools:
- Koa middleware
- the resolver's context
- a custom directive
Step 1
First, we define a Koa middleware that checks whether the request header carries a user signature; if so, it retrieves the user information based on this signature and mounts it onto the Koa request context object ctx.
Create a new folder middlewares in src to store all of the Koa middleware:
mkdir src/middlewares
Create a new file named auth.js in the src/middlewares folder as the middleware that mounts user information:
touch src/middlewares/auth.js
async function main (ctx, next) {
  // Note: in a real scenario, you would read the user signature (e.g. a token)
  // from the request header here, fetch the user information based on it, and
  // mount that user information onto ctx.
  // For a simple demonstration, those steps are omitted and mock user
  // information is mounted instead.
  ctx.user = { name: 'your name', age: Math.random() }
  return next()
}

module.exports = main
Mount this middleware to your application:
...
app.use(require('./src/middlewares/auth.js'))
apolloServer.applyMiddleware({ app })
app.listen({ port: 4000 }, () =>
  console.log(`🚀 Server ready at http://localhost:4000/graphql`)
)
One detail to note: the auth middleware must be mounted before apolloServer. Koa runs requests through the middleware stack in mount order, and we want the user information to be on ctx before apolloServer processes the request.
Step 2
Next, pass the ctx object through the resolver's context parameter so that user information can be retrieved from it later. (We described the resolver's parameter list in a previous section; the third parameter is named context.)
When creating the Apollo Server instance, we can also specify an option named context, whose value can be a function.
When context is a function, the application's request context object ctx is passed as a property of the function's first argument, and the function's return value is passed to every resolver as the context parameter.
So we just need to write this to pass the requested context object CTX to each parser:
...
const apolloServerOptions = {
  ...generateTypeDefsAndResolvers(),
  formatError: error => ({
    code: error.extensions.code,
    message: error.message
  }),
  context: ({ ctx }) => ({ ctx }),
  introspection: !isProd,
  playground: !isProd,
  mocks: false
}
...
In this way, each resolver function can simply take its third parameter to get ctx, and from it the user property.
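A simplified sketch of this plumbing (not Apollo's real internals): the context function receives Koa's ctx, and its return value becomes each resolver's third parameter.

```javascript
// The context option we passed to Apollo Server:
const contextFn = ({ ctx }) => ({ ctx })

// Per request, Apollo roughly does the following:
const koaCtx = { user: { name: 'your name' } }      // mounted by our auth middleware
const resolverContext = contextFn({ ctx: koaCtx })  // built once per request

// Inside any resolver, the user is then reachable via the third parameter:
const whoAmI = (parent, args, context, info) => context.ctx.user.name
console.log(whoAmI(undefined, {}, resolverContext)) // 'your name'
```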
Step 3
Then, we design a custom directive, auth (short for authentication).
Create a new folder directives in src/graphql to store all custom directives:
mkdir src/graphql/directives
We store the custom directives in the directives folder.
Create two new files in directives: index.js and auth.js:
src/graphql
├── directives
│   ├── auth.js
│   └── index.js
├── index.js
└── scalars
    ├── date.js
    └── index.js
The file directives/index.js exports the custom directive auth:
module.exports = { ...require('./auth.js') }
The file directives/auth.js implements the custom directive auth:
const { SchemaDirectiveVisitor, AuthenticationError } = require('apollo-server-koa')
const { defaultFieldResolver } = require('graphql')

class AuthDirective extends SchemaDirectiveVisitor {
  visitFieldDefinition (field) {
    const { resolve = defaultFieldResolver } = field
    field.resolve = async function (...args) {
      const context = args[2]
      const user = context.ctx.user
      console.log('[CURRENT USER]', { user })
      if (!user) throw new AuthenticationError('Authentication Failure')
      return resolve.apply(this, args)
    }
  }
}

module.exports = {
  auth: AuthDirective
}
From the code above, we see that Apollo Server provides the authentication error class AuthenticationError, in addition to the base directive visitor class SchemaDirectiveVisitor.
We declare a custom class AuthDirective, which extends SchemaDirectiveVisitor, and write the authentication logic to be applied to each visited field in its visitFieldDefinition method.
The authentication logic is as simple as wrapping a layer of authentication logic on top of the original parser for the field:
- We try to get the field's existing resolver from field and store it in a local variable for later use; if there is none, we fall back to the default resolver defaultFieldResolver
- We overwrite field's resolve property with our custom function; inside it, args[2] accesses the resolver's third parameter (context), from which we read the user information on ctx
- If the user information does not exist, an AuthenticationError is thrown
- Otherwise, we return the result of the field's original resolver
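The wrapping logic can be sketched without any framework (an illustration of the idea, not the SchemaDirectiveVisitor API itself):

```javascript
// Guard a resolver so it rejects when no user is present on the context.
const withAuth = resolve => async function (...args) {
  const context = args[2]
  if (!context.ctx.user) throw new Error('Authentication Failure')
  return resolve.apply(this, args)
}

const bookResolver = () => ({ name: 'Once upon a Earth' })
const guardedBook = withAuth(bookResolver)

// Resolves when a user is present:
guardedBook({}, {}, { ctx: { user: { name: 'reader' } } })
  .then(book => console.log(book.name)) // 'Once upon a Earth'

// Rejects otherwise:
guardedBook({}, {}, { ctx: {} })
  .catch(err => console.log(err.message)) // 'Authentication Failure'
```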
Step 4
Mount custom directives with the schemaDirectives option when creating the Apollo Server instance:
...
const allCustomDirectives = require('./directives/index.js')
...
const apolloServerOptions = {
  ...generateTypeDefsAndResolvers(),
  formatError: error => ({
    code: error.extensions.code,
    message: error.message
  }),
  schemaDirectives: { ...allCustomDirectives },
  context: ({ ctx }) => ({ ctx }),
  introspection: !isProd,
  playground: !isProd,
  mocks: false
}
...
Declare this directive in the global linkSchema, and mark @auth in each data entity's schema on every field that needs protection (indicating that login is required to access that field):
...
const linkSchema = gql`
  scalar Date
  directive @auth on FIELD_DEFINITION
  type Query {
    _: Boolean
  }
  type Mutation {
    _: Boolean
  }
  type Subscription {
    _: Boolean
  }
`
...
In the code above, FIELD_DEFINITION means this directive can only be applied to field definitions.
Here, we add the custom directive @auth to the book query field:
...
const schema = gql`
  enum BookStatus {
    DELETED
    NORMAL
  }
  type Book {
    id: ID
    name: String
    price: Float
    status: BookStatus
    created: Date
  }
  extend type Query {
    book: Book @auth
  }
`
...
We added the @auth constraint to the book query field
Next, we restart the service, request book, and we find that the terminal prints:
[CURRENT USER] { user: { name: 'your name', age: 0.30990570160950015 } }
This means the custom directive code is running.
Next we comment out the simulated user code in auth middleware:
async function main (ctx, next) {
// Note that in a real scenario, you would need to get the user signature in the request header here, such as token
// Get the user information based on the user token and mount the user information to CTX
// For a simple demonstration, the above steps are omitted and a simulated user information is mounted
// ctx.user = { name: 'your name', age: Math.random() }
return next()
}
module.exports = main
Restart the service, request book again, and we see:
An errors field appears in the result with the code value UNAUTHENTICATED, indicating that our directive successfully intercepted the unauthenticated request.
Merging requests
Finally, let’s look at a problem caused by GraphQL’s design: unnecessary requests
We added a new data entity to graphQL-server-demo: cat
The final directory structure is as follows:
src
├── components
│   ├── book
│   │   ├── resolver.js
│   │   └── schema.js
│   └── cat
│       ├── resolver.js
│       └── schema.js
├── graphql
│   ├── directives
│   │   ├── auth.js
│   │   └── index.js
│   ├── index.js
│   └── scalars
│       ├── date.js
│       └── index.js
└── middlewares
    └── auth.js
The code of src/components/cat/schema.js is as follows:
const { gql } = require('apollo-server-koa')

const schema = gql`
  type Food {
    id: Int
    name: String
  }
  type Cat {
    color: String
    love: Food
  }
  extend type Query {
    cats: [Cat]
  }
`

module.exports = { schema }
We defined two data types, Cat and Food, and a query, cats, which returns a list of cats.
The code of src/components/cat/resolver.js is as follows:
const foods = [
  { id: 1, name: 'milk' },
  { id: 2, name: 'apple' },
  { id: 3, name: 'fish' }
]

const cats = [
  { color: 'white', foodId: 1 },
  { color: 'red', foodId: 2 },
  { color: 'black', foodId: 3 }
]

// Simulate asynchronous IO with a 300ms delay.
const fakerIO = arg => new Promise((resolve, reject) => {
  setTimeout(() => resolve(arg), 300)
})

const getFoodById = async id => {
  console.log('--- enter getFoodById ---', { id })
  return fakerIO(foods.find(food => food.id === id))
}

const resolvers = {
  Query: {
    cats: (parent, args, context, info) => cats
  },
  Cat: {
    love: async cat => getFoodById(cat.foodId)
  }
}

module.exports = resolvers
Based on the code above, we can see:
- every cat has a foodId field, the ID of its favorite food
- a function fakerIO simulates asynchronous IO
- a function getFoodById fetches food information by food ID; every call to getFoodById prints a log line to the terminal
Restart the service and request cats; we see the normal result:
When we look at the output of the terminal, we find:
--- enter getFoodById --- { id: 1 }
--- enter getFoodById --- { id: 2 }
--- enter getFoodById --- { id: 3 }
The getFoodById function is called three separate times.
GraphQL's design advocates specifying a resolver for each field, which means that when a batch request touches an associated data entity, each endpoint triggers its own IO.
This is an unnecessary request because the above requests can be combined into a single request.
How do we merge these unnecessary requests?
We can merge these requests with a tool called DataLoader.
DataLoader provides two main functions:
- Batching
- Caching
In this article, we will only use its Batching function
For more information about dataLoader, see: github.com/graphql/dat…
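To see why batching works, here is a toy sketch of the idea behind DataLoader (the real library is far more sophisticated; it also deduplicates and caches keys): load() calls made within the same tick are collected and handed to a single batch function.

```javascript
class TinyLoader {
  constructor (batchFn) {
    this.batchFn = batchFn // (keys) => Promise of results aligned with keys
    this.queue = []
  }

  load (key) {
    return new Promise(resolve => {
      this.queue.push({ key, resolve })
      // The first load() in a tick schedules one flush for the whole batch.
      if (this.queue.length === 1) process.nextTick(() => this.flush())
    })
  }

  flush () {
    const batch = this.queue.splice(0)
    this.batchFn(batch.map(item => item.key))
      .then(results => batch.forEach((item, i) => item.resolve(results[i])))
  }
}

// Three load() calls collapse into one batched fetch:
const loader = new TinyLoader(async ids => {
  console.log('--- one batched fetch ---', { ids })
  return ids.map(id => ({ id }))
})
Promise.all([loader.load(1), loader.load(2), loader.load(3)])
  .then(foods => console.log(foods.length)) // 3
```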
Step 1
First, we install dataloader:
yarn add dataloader
Step 2
Next, in src/components/cat/resolver.js, we:
- provide a function getFoodByIds that fetches food in batches
- introduce DataLoader to wrap getFoodByIds, yielding a wrapped function getFoodByIdBatching
- use getFoodByIdBatching in the love resolver to fetch food
const DataLoader = require('dataloader')
...
const getFoodByIds = async ids => {
  console.log('--- enter getFoodByIds ---', { ids })
  return fakerIO(foods.filter(food => ids.includes(food.id)))
}

// Note: a DataLoader batch function must return results in the same order
// as the keys it receives; here the foods array happens to be in id order.
const foodLoader = new DataLoader(ids => getFoodByIds(ids))
const getFoodByIdBatching = foodId => foodLoader.load(foodId)

const resolvers = {
  Query: {
    cats: (parent, args, context, info) => cats
  },
  Cat: {
    love: async cat => getFoodByIdBatching(cat.foodId)
  }
}
...
Restart the service and request cats again; we still see the correct result, and on the terminal we find:
--- enter getFoodByIds --- { ids: [ 1, 2, 3 ] }
The original three IO requests have been successfully merged into one.
Finally, our graphQL-server-demo directory structure looks like this:
├── index.js
├── package.json
├── src
│   ├── components
│   │   ├── book
│   │   │   ├── resolver.js
│   │   │   └── schema.js
│   │   └── cat
│   │       ├── resolver.js
│   │       └── schema.js
│   ├── graphql
│   │   ├── directives
│   │   │   ├── auth.js
│   │   │   └── index.js
│   │   ├── index.js
│   │   └── scalars
│   │       ├── date.js
│   │       └── index.js
│   └── middlewares
│       └── auth.js
└── yarn.lock
Conclusion
At this point, you should have an idea of how to build the GraphQL server.
This article has actually covered only a fairly limited part of GraphQL. To fully and deeply understand GraphQL, you need to continue to explore and learn.
That's the end of this article; I hope it helps you in your future work.
References
For a complete list of Apollo Server constructor options, please refer to: www.apollographql.com/docs/apollo…
Shuidi front-end team recruiting partners, welcome to send resume to email: [email protected]