Preface
This article introduces Prisma, an ORM from the NodeJS community. I have not been using it for long, but I am already confident about its future. Part of this article was actually written three months ago, which is why it contains lines like "If you think it's good, consider basing your graduation project on Prisma."
When I started writing, Lord Bven's graduation design had made no progress at all; by the time I published this, he had already become an outstanding graduate…
I had also planned to cover everything in a single article, but such a long tutorial would have been hard to finish, so it was split into two parts:
- The first part mainly introduces the mainstream ORMs and Query Builders in the NodeJS community, as well as basic usage of Prisma. It is intended to lay a foundation for students who have had little contact with ORMs.
- The second part covers the fancier uses of Prisma, including multi-table cascades, multi-database collaboration, and hands-on practice with GraphQL. Finally, we will talk about the future of Prisma.
The general order of the articles is as follows:
- Old and traditional ORMs in the NodeJS community
- The Data Mapper and Active Record patterns in traditional ORMs
- Query Builder
- Basic environment configuration for Prisma
- Hello Prisma
- Start with single-table CRUD
- Multi-table and multi-database practice
- Prisma and GraphQL: Full link type security
- Prisma and framework integration
ORMs in the NodeJS community
Those of you who write Node applications often have to deal with ORMs, since writing raw SQL is a real grind for most front-end students. In many cases an ORM makes working with a database intuitive and easy (although there are certainly cases where an ORM falls short), letting you manipulate the database with familiar JavaScript. There are several main ORMs in the NodeJS community, each with its own characteristics:
- Sequelize, the veteran ORM. Its TS support is not great out of the box, but the community provides sequelize-typescript.
Sequelize defines the table structure like this:
const { Sequelize, Model, DataTypes } = require('sequelize');
const sequelize = new Sequelize('sqlite::memory:');

class User extends Model {}
User.init({
  username: DataTypes.STRING,
  birthday: DataTypes.DATE
}, { sequelize, modelName: 'user' });

(async () => {
  await sequelize.sync();
  const jane = await User.create({
    username: 'janedoe',
    birthday: new Date(1980, 6, 20)
  });
  console.log(jane.toJSON());
})();
(I don't find it very intuitive, so I only used it briefly when I was getting started.)
- TypeORM, the ORM with the most stars in the NodeJS community. It is genuinely easy to use, has been well received by my classmates, and is the ORM I use most. Its highlights are declarative table structures, transactions, cascades, and so on built on decorator syntax, plus great TS support.
TypeORM declares a table structure like this:
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm";

@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  firstName: string;

  @Column()
  lastName: string;

  @Column()
  age: number;
}
It's much more intuitive than Sequelize, and because database fields are defined as class properties, it works well with mixins and other class-property-based libraries such as TypeGraphQL.
- MikroORM, a relatively new ORM, also heavily based on decorator syntax. Its advantages include handling transactions automatically and keeping table entities as global singletons.
MikroORM defines the table structure as follows:
@Entity()
export class Book extends BaseEntity {
  @Property()
  title!: string;

  @ManyToOne()
  author!: Author;

  @ManyToOne()
  publisher?: IdentifiedReference<Publisher>;

  @ManyToMany({ fixedOrder: true })
  tags = new Collection<BookTag>(this);
}
- Mongoose and Typegoose, the MongoDB-specific ORMs. Here is a simple example of Typegoose usage:
import { prop, getModelForClass } from '@typegoose/typegoose';
import * as mongoose from 'mongoose';

class User {
  @prop()
  public name?: string;

  @prop({ type: () => [String] })
  public jobs?: string[];
}

const UserModel = getModelForClass(User);

(async () => {
  await mongoose.connect('mongodb://localhost:27017/', {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    dbName: 'test'
  });

  const { _id: id } = await UserModel.create({ name: 'JohnDoe', jobs: ['Cleaner'] } as User);
  const user = await UserModel.findById(id).exec();
  console.log(user);
})();
- Bookshelf, a relatively simple but complete ORM built on Knex (the Query Builder underlying Strapi, described below). It works like this:
// Bookshelf is based on Knex, so you need to instantiate Knex and pass it in
const knex = require('knex')({
  client: 'mysql',
  connection: process.env.MYSQL_DATABASE_CONNECTION
});
const bookshelf = require('bookshelf')(knex);

const User = bookshelf.model('User', {
  tableName: 'users',
  posts() {
    return this.hasMany(Posts);
  }
});

const Post = bookshelf.model('Post', {
  tableName: 'posts',
  tags() {
    return this.belongsToMany(Tag);
  }
});

const Tag = bookshelf.model('Tag', {
  tableName: 'tags'
});

new User({ id: 1 }).fetch({ withRelated: ['posts.tags'] }).then((user) => {
  console.log(user.related('posts').toJSON());
}).catch((error) => {
  console.error(error);
});
In addition, Bookshelf supports a plug-in mechanism; other ORMs usually implement similar functionality through hooks or subscribers, for example hashing a password once before it is stored, or collecting TPS metrics.
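To make the hook idea concrete, here is a minimal TypeORM-style sketch; the Account entity and the SHA-256 choice are assumptions for illustration only, not code from any of the libraries above:
import { createHash } from "crypto";
import { Entity, PrimaryGeneratedColumn, Column, BeforeInsert } from "typeorm";

// Hash the password once before the row is first written, instead of storing plain text.
@Entity()
export class Account {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  password: string;

  @BeforeInsert()
  hashPassword() {
    // A real project would prefer a slow hash such as bcrypt or argon2.
    this.password = createHash("sha256").update(this.password).digest("hex");
  }
}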
The Data Mapper and Active Record patterns in ORMs
If you look through the documentation of the ORMs listed above, you will find that MikroORM introduces itself as a "TypeScript ORM for Node.js based on Data Mapper", while TypeORM supports both the Active Record and Data Mapper patterns.
Let's start with a question: while using an ORM, have you ever looked into the difference between Data Mapper and Active Record?
Let’s take a look at how the two modes are used in TypeORM:
Active Record:
import { BaseEntity, Entity, PrimaryGeneratedColumn, Column } from "typeorm";
@Entity()
export class User extends BaseEntity {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  name: string;

  @Column()
  isActive: boolean;
}
const user = new User();
user.name = "No crossing";
user.isActive = true;
await user.save();
const newUsers = await User.find({ isActive: true });
In TypeORM's Active Record mode, the entity class extends the BaseEntity class, which places methods such as save, remove, and find directly on the entity. The Active Record pattern was first named by Martin Fowler in Patterns of Enterprise Application Architecture; its core idea is that the domain object itself carries the relevant CRUD methods. This pattern is used in RoR as the M in MVC, the data layer. If you're interested in Active Record in RoR, you can read "A full understanding of Active Record" (I don't know Ruby, so I won't cover it here).
Data Mapper:
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm";
@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  name: string;

  @Column()
  isActive: boolean;
}
const userRepository = connection.getRepository(User);
const user = new User();
user.name = "No crossing";
user.isActive = true;
await userRepository.save(user);
await userRepository.remove(user);
const newUsers = await userRepository.find({ isActive: true });
As you can see, in the Data Mapper pattern entity classes can no longer perform database operations themselves; you first obtain a "repository" corresponding to the table and then call methods on that repository.
This pattern was also originally named by Martin Fowler. A Data Mapper acts more like an access layer between the operator and the actual data: as in the example above, you first acquire the object that holds the access methods and then manipulate the data through it.
Comparing the two patterns, it is easy to see that Active Record is much simpler, while Data Mapper is more rigorous. When to use which should be fairly clear: if you're developing a simpler application, use Active Record directly, because it really does cut a lot of code; if you're developing a larger application, the Data Mapper pattern helps you keep the code maintainable, because entity classes no longer access the database themselves and everything goes through a common interface (getRepository, getManager, and so on). One example is Nest and Midway, both IoC-style Node frameworks, where you inject a Repository instance in Data Mapper style and then operate on it.
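As a rough sketch of what that looks like in a Nest-style service (the service and entity names here are made up for illustration):
import { Injectable } from "@nestjs/common";
import { InjectRepository } from "@nestjs/typeorm";
import { Repository } from "typeorm";
import { User } from "./user.entity";

// Data Mapper in an IoC framework: the repository is injected by the container,
// and the entity class itself never touches the database.
@Injectable()
export class UserService {
  constructor(
    @InjectRepository(User)
    private readonly userRepository: Repository<User>
  ) {}

  findActiveUsers() {
    return this.userRepository.find({ isActive: true });
  }
}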
Finally, NodeJS ORMs that use Data Mapper mainly include Bookshelf, MikroORM, civ.js, and Prisma, the protagonist of this article.
Query Builder
In fact, besides ORMs and raw SQL, there is another common way to interact with databases: the Query Builder (QB).
In my opinion, QBs and ORMs share enough similarities that it is easy to confuse them. For example, MQuery (a MongoDB Query Builder) has methods like these:
mquery().find(match, function (err, docs) {
assert(Array.isArray(docs));
})
mquery().findOne(match, function (err, doc) {
if (doc) {
// the document may not be found
console.log(doc);
}
})
mquery().update(match, updateDocument, options, function (err, result){})
Does it look like ORM? But let’s look at some other scenarios:
mquery({ name: /^match/ })
.collection(coll)
.setOptions({ multi: true })
.update({ $addToSet: { arr: 4 }}, callback)
An important difference between a Query Builder and an ORM is that an ORM usually doesn't build queries through such chained method calls; instead it exposes single methods that take many arguments. Look again at TypeORM's Query Builder mode:
import { getConnection } from "typeorm";
const user = await getConnection()
.createQueryBuilder()
.select("user")
.from(User, "user")
.where("user.id = :id", { id: 1 })
.getOne();
The query above is equivalent to something like userRepo.find({ id: 1 }). You might think the QB is too verbose, but it is much more flexible and much closer to the SQL statement (you can think of each chained method call as performing one operation on the resulting SQL statement).
Meanwhile, in some cases (such as multiple cascades), the Query Builder actually produces the cleaner code, for example:
const selectQueryBuilder = this.executorRepository
  .createQueryBuilder("executor")
  .leftJoinAndSelect("executor.tasks", "tasks")
  .leftJoinAndSelect("executor.relatedRecord", "records")
  .leftJoinAndSelect("records.recordTask", "recordTask")
  .leftJoinAndSelect("records.recordAccount", "recordAccount")
  .leftJoinAndSelect("records.recordSubstance", "recordSubstance")
  .leftJoinAndSelect("tasks.taskSubstance", "substance");
The above code builds a Query Builder that cascades multiple tables.
The cascading relationships are as follows:
- Executor
  - tasks -> Task
  - relatedRecord -> Record
- Task
  - substances -> Substance
- Record
  - recordTask -> Task
  - recordAccount -> Account
  - recordSubstance -> Substance
Strapi relies on Knex for database interaction and connection pooling; here Knex is used as a Query Builder:
const knex = require('knex')({
  client: 'sqlite3',
  connection: {
    filename: './data.db',
  },
});

try {
  await knex.schema
    .createTable('users', table => {
      table.increments('id');
      table.string('user_name');
    })
    .createTable('accounts', table => {
      table.increments('id');
      table.string('account_name');
      table
        .integer('user_id')
        .unsigned()
        .references('users.id');
    });

  const insertedRows = await knex('users').insert({ user_name: 'Tim' });

  await knex('accounts').insert({ account_name: 'knex', user_id: insertedRows[0] });

  const selectedRows = await knex('users')
    .join('accounts', 'users.id', 'accounts.user_id')
    .select('users.user_name as user', 'accounts.account_name as account');

  const enrichedRows = selectedRows.map(row => ({ ...row, active: true }));
} catch (e) {
  console.error(e);
}
You can see that Knex takes chained operations a step further: you can even chain the creation of multiple database tables.
Prisma
Which brings us to Prisma, the subject of this article. Prisma still positions itself as a NodeJS ORM, but personally I feel it is far more powerful than an ordinary ORM. Here's an official graph to get a sense of how Prisma compares with ORMs, SQL, and Query Builders in capability:
You can also read Why Prisma? To learn more.
The unique schema definition, TS type definitions that are more rigorous and complete than TypeORM's (especially for cascading relations), and the various operators that are easier to use and closer to native SQL make it easy to get hooked on first contact (say no more, that was me).
A brief introduction to these features:
- Unlike other ORMs, which use JS/TS files to define database table structures, Prisma uses .prisma files written in its own Prisma Schema language and then generates the table structures from the schema. VS Code has a Prisma extension for highlighting and syntax checking, so don't worry about the unfamiliar syntax.
At the same time, this means that an ecosystem of generators has grown up around Prisma Schema. For example, typegraphql-prisma can generate TypeGraphQL class definitions and even basic CRUD resolvers from a Prisma Schema, and palJS offers Nexus type definitions and CRUD methods based on Prisma Schema (so GraphQL and Prisma, both SDL-first tools, are a perfect match).
TypeGraphQL, Nexus, and resolvers are GraphQL-related tools and concepts; don't worry if you are not familiar with them yet.
A simple schema.prisma might look like this:
datasource db {
  provider = "sqlite"
  url      = env("SINGLE_MODEL_DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
  output   = "./client"
}

model Todo {
  id        Int      @id @default(autoincrement())
  title     String
  content   String?
  finished  Boolean  @default(false)
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}
Even though you haven't used it yet, doesn't it read quite easily?
- Full-coverage TS type definitions: query parameters, operator parameters, cascade parameters, return results, and so on are all typed, and more completely than in TypeORM.
- More comprehensive operators. For string queries, for example, Prisma provides fine-grained operators such as contains, startsWith, and endsWith for filtering, whereas in TypeORM you essentially only get ILike-style matching. We'll talk about what these operators do later; a quick taste follows this list.
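Just as a taste, here is a sketch assuming a PrismaClient instance named prisma generated from the Todo model shown above:
// Fine-grained string filters on the title field; OR combines the conditions.
async function searchTodos(keyword: string) {
  return prisma.todo.findMany({
    where: {
      OR: [
        { title: { contains: keyword } },
        { title: { startsWith: "TODO:" } },
        { title: { endsWith: "!" } },
      ],
    },
  });
}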
To close this part, here is a brief look at the Prisma workflow. The main text will then walk through Prisma usage step by step, covering single tables, multi-table cascades, and the wonderful chemistry between Prisma and GraphQL.
Environment configuration is covered in the next section; here we just want to get a feel for how it is used.
- First, create a folder named prisma and create a schema.prisma file inside it.
If you are using VS Code, you can get Prisma syntax highlighting by installing the Prisma extension
- Define your database type, connection path, and table structure in the schema, as in the following example:
model Todo {
  id    Int    @id @default(autoincrement())
  title String
}
- Run the prisma generate command, and Prisma will generate a Prisma Client for you.
- To use the Prisma Client, import it into your file:
import { PrismaClient } from "./prisma/client";

const prisma = new PrismaClient();

async function createTodo(title: string, content?: string) {
  const res = await prisma.todo.create({
    data: {
      title,
      content,
    },
  });
  return res;
}
Each table is stored in the prisma.__your_model__ namespace.
If this brief introduction has already gotten you a little excited, you may well want to migrate your own projects to Prisma after following this article through.
Getting started with Prisma
You can find the full example in prisma-article-example; for the walkthrough below we'll start from an empty folder.
Project initialization
- Create an empty folder and run npm init -y
The same applies to yarn and pnpm.
- Install the Prisma CLI globally: npm install prisma -g
The @prisma/cli package has been renamed to prisma.
Installing it globally makes it easier to run the related commands later.
- Install the necessary dependencies:
npm install @prisma/client sqlite3 prisma -S
npm install typescript @types/node nodemon ts-node -D
When Prisma is installed into the folder, the Query Engine matching your operating system is downloaded:
- Run prisma version to check whether the installation succeeded.
- Run prisma init to initialize a Prisma project (this command is very non-invasive: it only generates a prisma folder and a .env file, and if a .env file already exists it simply appends the required environment variables).
- Check the .env file:
# Environment variables declared in this file are automatically made available to Prisma.
# See the documentation for more detail: https://pris.ly/d/prisma-schema#using-environment-variables

# Prisma supports the native connection string format for PostgreSQL, MySQL and SQLite.
# See the documentation for all the connection string options: https://pris.ly/d/connection-strings

DATABASE_URL="postgresql://johndoe:randompassword@localhost:5432/mydb?schema=public"
You'll notice that the default database is PostgreSQL. To keep the learning cost down we'll use SQLite throughout, so change the variable value to file:../demo.sqlite
If you haven't worked with SQLite before: it's simply a file (with a .sqlite suffix) that can be read and written as a database, which makes it very easy to use, and because it is a file, the DATABASE_URL variable needs to use the file: protocol.
Similarly, in Prisma Schema we also need to change the database type to SQLite:
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema

datasource db {
  provider = "sqlite"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}
Creating a database
In the Prisma Schema above we only defined a datasource and a generator, which configure the database connection and the client generation respectively. For example, the generated Prisma Client is placed under node_modules by default and imported as import { PrismaClient } from "@prisma/client", but you can change where the client is generated via the generator's output option:
generator client {
  provider = "prisma-client-js"
  output   = "./client"
}
This configuration causes the client to be generated inside the prisma folder, as in:
Generating the client into the project's own prisma folder makes it easy for each project in a monorepo (or simply in multiple folders) to generate its own client from a different schema configuration.
Next we add a table-structure definition (a model) to the Prisma Schema:
datasource db {
  provider = "sqlite"
  url      = env("SINGLE_MODEL_DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
  output   = "./client"
}

model Todo {
  id        Int      @id @default(autoincrement())
  title     String
  content   String?
  finished  Boolean  @default(false)
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}
A quick explanation of the syntax:
- Scalar types such as Int and String are automatically mapped to the corresponding column type for the database you use.
- ? marks a field as optional.
- @id marks a field as the primary key, and @default() sets its default value.
- autoincrement and now are Prisma built-in functions, here providing the auto-incrementing primary key and the timestamp when a record is written; others such as uuid and cuid are also available.
Client generation and use
Now you can generate the client by running prisma generate:
That's not all: our database file (the SQLite file) has not been created yet, so run prisma db push.
This command also runs prisma generate once, so you can skip the client generation here with --skip-generate.
The demo.sqlite file now appears in the root directory.
Create index.ts in the root directory:
// index.ts
import { PrismaClient } from "./prisma/client";

const prisma = new PrismaClient();

async function main() {
  console.log("Prisma!");
}

main();
As you can also see from this usage, PrismaClient is actually a class, so you can extend it to build in a lot of extra behaviour; we'll cover that later.
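Here is a minimal sketch of that idea; the query-logging option and the helper method are assumptions for illustration, not something this article's example project actually does:
import { PrismaClient } from "./prisma/client";

// Extend the generated client: turn on query logging and add a convenience helper.
class ExtendedPrismaClient extends PrismaClient {
  constructor() {
    super({ log: ["query"] });
  }

  // Hypothetical helper, just to show that ordinary methods can live on the subclass.
  findUnfinishedTodos() {
    return this.todo.findMany({ where: { finished: false } });
  }
}

const client = new ExtendedPrismaClient();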
Before going further, to keep the rest of the walkthrough simple, we use nodemon + ts-node to automatically re-run index.ts whenever it changes. Add a nodemon.json in the project root:
{
  "name": "Prisma2-Explore",
  "restartable": "r",
  "delay": "500",
  "ignore": [".git", "node_modules/**", "/prisma/*"],
  "verbose": true,
  "execMap": {
    "": "node",
    "js": "node --harmony",
    "ts": "ts-node"
  },
  "watch": ["./**/*.ts"]
}
And add the startup script to package.json:
{
  "scripts": {
    "start": "nodemon index.ts"
  }
}
Run npm start:
First experience with Prisma: single table
Environment configuration
The code above is a simple demonstration of a development workflow; it will not be used directly in the rest of this article (but you can use this workflow to explore Prisma further on your own).
Next, I have prepared the environment you need; see prisma-article-example. Clone the repository locally and run the configured npm scripts. If you get bored after reading this section, you can use these scripts to run the other examples directly, such as the multi-table and GraphQL-related ones. A brief introduction to some of the scripts:
- yarn flow: runs the complete flow from scratch: generate the client – build the project – execute the build output.
- yarn dev:**: runs a project in development mode, restarting the process when files change.
- yarn generate:**: generates the Prisma Client for a project; use yarn gen:client to generate the Prisma Clients for all projects.
- yarn setup:**: generates the SQLite file for a project.
- yarn invoke:**: executes a project's built JS file; use yarn setup to execute all of the built JS files at once.
The code for this part (single-table Prisma) is in single-model, and the related commands are:
$ yarn dev:single
$ yarn generate:single
$ yarn setup:single
$ yarn invoke:single
Before starting the CRUD walkthrough below, it's a good idea to run the project first and make sure the Prisma Client has been generated: yarn dev:single
Straight into CRUD
First, import and instantiate the Prisma Client in the way shown earlier:
import { PrismaClient } from "./prisma/client";
const prisma = new PrismaClient();
Prisma mounts each of your models under prisma.MODEL, where MODEL comes directly from the model name in schema.prisma, in this case Todo.
Simple CRUD therefore maps directly onto the client API.
Create:
async function createTodo(title: string, content?: string) {
  const res = await prisma.todo.create({
    data: {
      title,
      content: content ?? null,
    },
  });
  return res;
}
The create method takes two arguments (a small select sketch follows this list):
- data: the properties used to create the new record. Its type is derived from your schema; for example, because content is an optional string in the schema (String?), its type here is string | null, so we use the ?? operator to handle the case where the argument is not passed.
- select: determines which fields appear in the object returned by create. If you set select.id to false, the id property will not be included in the return value. This parameter is available on most Prisma methods.
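A small sketch of the select parameter in action (field names follow the Todo model above):
// Only the selected fields appear in the returned object, and in its inferred TS type.
async function createTodoTitleOnly(title: string) {
  const created = await prisma.todo.create({
    data: { title },
    select: { id: true, title: true },
  });
  // `created` is typed roughly as { id: number; title: string }.
  return created;
}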
Read:
async function getTodoById(id: number) {
  const res = await prisma.todo.findUnique({
    where: { id },
  });
  return res;
}
The findUnique method is similar to TypeORM's findOne in that it queries by the primary key (or another unique field), with the query criteria passed in the where parameter.
Read all:
async function getTodos(status?: boolean) {
  const res = await prisma.todo.findMany({
    orderBy: [{ id: "desc" }],
    where: status
      ? {
          finished: status,
        }
      : {},
    select: {
      id: true,
      title: true,
      content: true,
      createdAt: true,
    },
  });
  return res;
}
Here we additionally pass the orderBy parameter to sort the returned results; and where there is sorting, there is of course also paging: you can pass cursor, skip, take, and other parameters to paginate.
Cursor-based and offset-based are actually two different types of paging.
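A minimal sketch of both styles, reusing the Todo model above (the page size and ids are arbitrary):
// Offset-based paging: skip whole pages by index.
async function getTodosByPage(page: number, pageSize = 10) {
  return prisma.todo.findMany({
    orderBy: { id: "desc" },
    skip: (page - 1) * pageSize,
    take: pageSize,
  });
}

// Cursor-based paging: continue after the last id seen, skipping the cursor row itself.
async function getTodosAfter(lastSeenId: number, pageSize = 10) {
  return prisma.todo.findMany({
    orderBy: { id: "desc" },
    cursor: { id: lastSeenId },
    skip: 1,
    take: pageSize,
  });
}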
Similarly, update operations:
async function updateTodo(
  id: number,
  title?: string,
  content?: string,
  finished?: boolean
) {
  const origin = await prisma.todo.findUnique({
    where: { id },
  });

  if (!origin) {
    throw new Error("Item Inexist!");
  }

  const res = await prisma.todo.update({
    where: {
      id,
    },
    data: {
      title: title ?? origin.title,
      content: content ?? origin.content,
      finished: finished ?? origin.finished,
    },
  });
  return res;
}
You can also use the upsert method, which updates a record if it exists and creates it if it does not.
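A minimal upsert sketch (not part of the example repository):
// Update the todo when the id already exists, create a new one otherwise.
async function createOrUpdateTodo(id: number, title: string) {
  return prisma.todo.upsert({
    where: { id },
    update: { title },
    create: { title },
  });
}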
Batch update:
async function convertStatus(status: boolean) {
  const res = await prisma.todo.updateMany({
    where: {
      finished: !status,
    },
    data: {
      finished: {
        set: status,
      },
    },
  });
  return res;
}
Note that we use the set operator here to assign the finished value directly, which has the same effect as simply assigning the value. If the field is a number type, you can use increment, decrement, multiply, and divide in addition to set.
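A quick sketch of those atomic number operators; note that it assumes a hypothetical priority Int field, which the Todo model in this article does not actually define:
// Atomically bump a numeric column on all matching rows.
async function bumpPriority() {
  return prisma.todo.updateMany({
    where: { finished: false },
    data: {
      priority: {
        increment: 1, // decrement / multiply / divide / set work the same way
      },
    },
  });
}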
Finally, the delete operation:
async function deleteTodo(id: number) {
  const res = await prisma.todo.delete({
    where: { id },
  });
  return res;
}

async function clear() {
  const res = await prisma.todo.deleteMany();
  return res;
}
Beyond the examples above, feel free to explore the rest of the Prisma API with the help of its excellent TS type hints, and to peek ahead at the other examples to get a taste of Prisma's power.
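For example, here are two more methods you are likely to stumble upon while exploring, shown as a sketch only (both follow the same where/orderBy conventions as the methods above):
async function getStats() {
  const unfinished = await prisma.todo.count({ where: { finished: false } });
  const latest = await prisma.todo.findFirst({ orderBy: { createdAt: "desc" } });
  return { unfinished, latest };
}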
Epilogue & a preview of the next part
The Prisma methods (such as create) and operators (such as set) used above are only a small subset, but they should give you a sense of how novel Prisma feels compared with the more traditional ORMs. In the next part we will cover:
- Cascade relation processing of Prisma data tables
- Multiple Prisma clients collaborate
- Prisma working together with other ORMs (much like the previous item)
- Prisma + GraphQL full process combat
- The outlook for Prisma: how it works under the hood, framework integration
Stay tuned ~