MongoDB clustering and security
6. Replica Sets (*)
6.1 Introduction
-
A replica set in MongoDB is a group of mongod processes that maintain the same data set. Replica sets provide redundancy and high
availability and are the basis of all production deployments.
You can also think of a replica set as a master/slave cluster with automatic failover: multiple machines hold copies of the same data through asynchronous
synchronization, so several machines keep copies of the same data, and when the primary goes down the cluster automatically, without user intervention,
switches to another backup server as the new primary. In addition, the replica servers can serve as read-only servers, enabling read/write separation and improving load capacity.
-
Redundancy and data availability
Replication provides redundancy and improves data availability. By keeping multiple copies of the data on different database servers, replication provides a level of
fault tolerance against the loss of a single database server.
In some cases, replication can also increase read capacity, because clients can send read operations to different servers.
Maintaining copies of the data in different data centers improves data locality and availability for distributed applications. You can also maintain additional
copies for dedicated purposes, such as disaster recovery, reporting, or backups.
-
Replication in MongoDB
A replica set is a group of mongod instances that maintain the same data set. A replica set contains several data-bearing nodes and, optionally, one arbiter node.
Among the data-bearing nodes, one and only one member is the primary node, while the other nodes are secondary (slave) nodes.
The primary node receives all write operations. A replica set can have only one primary capable of confirming writes with { w: "majority" } write concern, although in some
cases another mongod instance may temporarily consider itself primary as well. The primary records all changes to its data set
in its operation log, the oplog. The secondary (replica) nodes copy the primary's oplog and apply the operations to their own data sets, so that the secondaries' data sets mirror the primary's. If the primary becomes unavailable, an eligible secondary holds an election to elect itself as the new primary.
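The oplog can be inspected directly from the mongo shell. A minimal sketch (the local.oplog.rs collection and the rs.printReplicationInfo() helper are standard, but the output will differ on every deployment):

```javascript
// Run on any data-bearing member after the replica set is initialized
use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(1)   // the most recently replicated operation

// Summary of the oplog size and the time window it covers
rs.printReplicationInfo()
```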
-
Master/slave replication and replica set differences
The biggest difference between a master/slave cluster and a replica set is that a replica set has no fixed "master node". The cluster elects a "primary node", and when the primary
goes down it elects another node from the remaining members as the new primary. A replica set always has one active node (primary) and one or more
backup nodes (secondary).
-
6.2 Three roles in the replica set
-
A replica set has two node types and three roles
-
Two node types
- Primary node: the main connection point for data operations; it can be read from and written to
- Secondary nodes: redundant backup nodes; they can be read from and can vote in elections
-
Three roles
- Primary: the node that receives all write operations
- Secondary (replica) member: a secondary node that maintains the same data set through replication, i.e. it backs up the data. It cannot be written to, but it can be read from (with additional configuration)
- Arbiter: holds no copy of the data and only votes. An arbiter can also be maintained as part of the replica set; a replica member may additionally act as an arbiter. It is also a type of secondary node
-
Additional notes on arbiters:
You can add an extra mongod instance to the replica set as an arbiter. Arbiters do not maintain a data set; their purpose is to
maintain a quorum in the replica set by responding to heartbeat and election requests from the other replica set members. Because they do not store data, arbiters
are a good way to provide election capability at a lower resource cost than a full-featured, data-bearing replica set member.
If your replica set has an even number of members, add an arbiter to obtain a "majority" vote in primary elections. Arbiters do not require dedicated
hardware.
An arbiter always remains an arbiter, whereas a primary may step down and become a secondary, and a secondary may become the primary during an
election.
If the number of replicas plus the primary is even, it is recommended to add an arbiter to make the total odd, which makes it easier to reach a majority of votes.
If the number of replicas plus the primary is odd, the arbiter can be omitted.
6.3 Replica set architecture objectives
-
One primary, one secondary, and one arbiter
6.4 Creating a Replica Set
6.4.1 Step 1: Create the Primary Node
-
Create a directory for storing data and logs
```bash
#-----------myrs
# primary node
mkdir -p /mongodb/replica_sets/myrs_27017/log
mkdir -p /mongodb/replica_sets/myrs_27017/data/db
```
-
Create or modify a configuration file
```bash
vim /mongodb/replica_sets/myrs_27017/mongod.conf
```
-
myrs_27017/mongod.conf (note the YAML indentation)
```yaml
systemLog:
  # MongoDB sends all log output to a file
  destination: file
  # The path of the log file to which mongod or mongos should send all diagnostic logging information
  path: "/mongodb/replica_sets/myrs_27017/log/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage:
  # The directory where the mongod instance stores its data; storage.dbPath applies only to mongod
  dbPath: "/mongodb/replica_sets/myrs_27017/data/db"
  journal:
    # Enable or disable the durability journal to ensure that data files remain valid and recoverable
    enabled: true
processManagement:
  # Run the mongos or mongod process in the background as a daemon
  fork: true
  # The file that holds the process ID of mongos or mongod, where mongos or mongod will write its PID
  pidFilePath: "/mongodb/replica_sets/myrs_27017/log/mongod.pid"
net:
  # Binding all IPs (bindIpAll) has a side effect: when the replica set is initialized,
  # the node name is automatically set to the local hostname instead of the IP
  # bindIpAll: true
  # Addresses to bind
  bindIp: localhost,192.168.83.133
  # Port to bind
  port: 27017
replication:
  # Name of the replica set
  replSetName: myrs
```
6.4.2 Step 2: Create the Secondary Node
-
Create a directory for storing data and logs
```bash
# secondary node
mkdir -p /mongodb/replica_sets/myrs_27018/log
mkdir -p /mongodb/replica_sets/myrs_27018/data/db
```
-
Create or modify a configuration file
```bash
vim /mongodb/replica_sets/myrs_27018/mongod.conf
```
-
myrs_27018/mongod.conf (note the YAML indentation)
```yaml
systemLog:
  # MongoDB sends all log output to a file
  destination: file
  # The path of the log file to which mongod or mongos should send all diagnostic logging information
  path: "/mongodb/replica_sets/myrs_27018/log/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage:
  # The directory where the mongod instance stores its data; storage.dbPath applies only to mongod
  dbPath: "/mongodb/replica_sets/myrs_27018/data/db"
  journal:
    # Enable or disable the durability journal to ensure that data files remain valid and recoverable
    enabled: true
processManagement:
  # Run the mongos or mongod process in the background as a daemon
  fork: true
  # The file that holds the process ID of mongos or mongod, where mongos or mongod will write its PID
  pidFilePath: "/mongodb/replica_sets/myrs_27018/log/mongod.pid"
net:
  # bindIpAll: true
  # Addresses to bind
  bindIp: localhost,192.168.83.133
  # Port to bind
  port: 27018
replication:
  # Name of the replica set
  replSetName: myrs
```
6.4.3 Step 3: Create the Arbiter Node
-
Create a directory for storing data and logs
```bash
#-----------myrs
# arbiter node
mkdir -p /mongodb/replica_sets/myrs_27019/log
mkdir -p /mongodb/replica_sets/myrs_27019/data/db
```
-
Arbiter node: create or modify the configuration file
```bash
vim /mongodb/replica_sets/myrs_27019/mongod.conf
```
-
myrs_27019/mongod.conf
```yaml
systemLog:
  # MongoDB sends all log output to a file
  destination: file
  # The path of the log file to which mongod or mongos should send all diagnostic logging information
  path: "/mongodb/replica_sets/myrs_27019/log/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage:
  # The directory where the mongod instance stores its data; storage.dbPath applies only to mongod
  dbPath: "/mongodb/replica_sets/myrs_27019/data/db"
  journal:
    # Enable or disable the durability journal to ensure that data files remain valid and recoverable
    enabled: true
processManagement:
  # Run the mongos or mongod process in the background as a daemon
  fork: true
  # The file that holds the process ID of mongos or mongod, where mongos or mongod will write its PID
  pidFilePath: "/mongodb/replica_sets/myrs_27019/log/mongod.pid"
net:
  # bindIpAll: true
  # Addresses to bind
  bindIp: localhost,192.168.83.133
  # Port to bind
  port: 27019
replication:
  # Name of the replica set
  replSetName: myrs
```
6.4.4 Step 4: Initialize the replica set and primary node
-
Start three nodes
-
```bash
./mongod -f /mongodb/replica_sets/myrs_27017/mongod.conf
./mongod -f /mongodb/replica_sets/myrs_27018/mongod.conf
./mongod -f /mongodb/replica_sets/myrs_27019/mongod.conf
ps -ef | grep mongod
```
-
Use the client to connect to any node; here, connect to the node intended to be the primary (the 27017 node)
```bash
./mongo --port 27017
```
Once connected, many commands (such as show dbs) cannot be used until the replica set is initialized
-
Ready to initialize a new replica set
Syntax: rs.initiate(configuration)
Initialize the replica set with the default configuration: rs.initiate()
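rs.initiate() also accepts an explicit configuration document instead of the defaults. A minimal sketch, assuming the host and port used throughout this tutorial:

```javascript
rs.initiate({
  _id: "myrs",                                   // must match replSetName in mongod.conf
  members: [
    { _id: 0, host: "192.168.83.133:27017" }     // the first (future primary) member
  ]
})
```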
6.4.5 Viewing the Replica Set Configuration
-
Returns the document containing the current replica set configuration
-
Syntax: rs.conf()
-
View the current replica set configuration on the primary node: rs.conf()
-
Field descriptions
-
"_id" : "myrs": the primary key of the replica set's configuration data; it defaults to the name of the replica set
-
"members": the array of members in the replica set. "host" : "192.168.83.133:27017" is the member's address; "arbiterOnly" : false means it is not an arbiter node; "priority" : 1 is its priority (weight)
-
"settings": parameter settings for the replica set
-
-
rs.status()
View the running status of the node
6.4.6 Adding a Secondary Replica Node
-
On the primary node, add the other members (secondary nodes) to the replica set
-
Syntax:
```javascript
rs.add(host, arbiterOnly)
```
-
Parameter Description
-
host (string or document): the new member to add to the replica set, specified either as a string or as a configuration document: 1) if it is a string, it specifies the hostname and an optional port number of the new member; 2) if it is a document, it specifies a replica set member configuration document as it appears in the members array. The host field must be present in the member configuration document. For a description of the other configuration fields, see the documentation on member configuration documents.
-
arbiterOnly (boolean): optional; applies only when the host value is a string. If true, the added host is an arbiter.
-
-
```javascript
rs.add("192.168.83.133:27018")
```
"ok" : 1 in the result indicates that the member was added successfully
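rs.add() also accepts a member configuration document instead of a plain host string. A small sketch; the extra fields are illustrative:

```javascript
rs.add({ host: "192.168.83.133:27018", priority: 1, arbiterOnly: false })
```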
-
View the replica set status: rs.status()
In the output, the member with "name" : "192.168.83.133:27018" is the second node, and its role is "stateStr" : "SECONDARY"
6.4.7 Adding an Arbiter Node
-
Add an arbiter node to the replica set
-
Syntax: rs.addArb(host)
-
For example, rs.addArb("192.168.83.133:27019"); "ok" : 1 in the result indicates that the arbiter was added successfully
-
View the replica set status
rs.status()
6.5 Data Read and Write Operations on replica Sets
-
Objective: test reading and writing data on the three nodes in their different roles
-
Log in to the primary node 27017 to write and read data
```javascript
use articledb
db.comment.insert({
  "articleid": "100000",
  "content": "...",
  "userid": "1001",
  "nickname": "Rose",
  "createdatetime": new Date()
})
db.comment.find()
```
-
Log in to secondary node 27018
The collection data cannot be read. At this point the secondary is only a backup, not yet a readable slave: it cannot serve reads, let alone writes.
By default a secondary does not allow client reads; read access can be granted, but it has to be enabled explicitly.
-
Enable read operations: mark the connection as a slave connection so that reads are allowed on secondary members
Syntax
```javascript
rs.slaveOk()    // or rs.slaveOk(true)
```
This command is a shorthand for db.getMongo().setSlaveOk()
-
Reads now succeed, but inserts are still not allowed
-
Read/write separation is now possible: the primary handles inserts and the secondary serves reads
-
To revoke read access on the secondary: rs.slaveOk(false)
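Note that newer versions of the mongo shell deprecate rs.slaveOk() in favor of rs.secondaryOk(), and a read preference achieves the same effect; treat the exact helper availability as version-dependent. A small sketch:

```javascript
// mongo shell 4.4+ / mongosh equivalent of rs.slaveOk()
rs.secondaryOk()

// Or set a read preference on the connection
db.getMongo().setReadPref("secondaryPreferred")
```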
-
The arbiter node does not store any business data; you can log in to it and verify this
6.6 Election Rules for the Primary Node
-
In a replica set, MongoDB elects the primary node automatically. A primary election is triggered when:
- the primary node fails
- the primary node becomes unreachable over the network (the default heartbeat timeout is 10 seconds)
- there is manual intervention (rs.stepDown(600))
-
Once an election is triggered, the primary node is chosen according to certain rules, which determine who wins based on the number of votes cast
-
The node that receives the most votes and has the support of a "majority" of members wins.
A "majority" is defined as follows: if the number of voting members in the replica set is N, a majority is N/2 + 1. For example, with 3 voting members,
the majority is 2. If the number of surviving members in the replica set is less than a majority, no primary can be elected,
the replica set cannot provide write services, and it becomes read-only.
-
If the vote counts are tied and both candidates have the support of a "majority" of members, the node with the newer data wins.
How new the data is is compared using the operation log (oplog).
-
The priority parameter has a large influence on winning votes.
You can influence elections by setting priority. Priority is a weight with a value from 0 to 1000;
the higher the value, the more likely the node is to win the votes of the majority of members. A higher value makes a member
more eligible to become the primary, a lower value makes it less eligible.
By default, the priority is 1.
In rs.conf() you can see that the primary and the secondary both have priority 1; by default each can be thought of as already holding one vote.
The arbiter node, however, must have priority 0, not any other value: it is not eligible to be elected (it cannot become the primary),
but it does have the right to vote.
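Priorities are changed through rs.reconfig(). A minimal sketch run on the primary; the member index and the value 2 are illustrative only:

```javascript
cfg = rs.conf()
cfg.members[1].priority = 2   // give the second member more weight in elections
rs.reconfig(cfg)
```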
-
6.7 Fault Testing (*)
6.7.1 Testing Duplicate Node Faults
-
Shut down the secondary node 27018
-
The heartbeats from the primary and the arbiter to 27018 fail, but because the primary is still alive, no election is triggered
-
At this point, write data on the primary node
```javascript
db.comment.insert({
  "_id": "1",
  "articleid": "100001",
  "content": "We should not waste the morning on the phone; health is very important. A cup of warm water brings happiness to you, me and him.",
  "userid": "1002",
  "nickname": "...",
  "createdatetime": new Date("2019-08-05T22:08:15.522Z"),
  "likenum": NumberInt(1000),
  "state": "1"
})
```
The primary node inserts and reads are fine
-
If you start the secondary node again, the data written by the primary node is automatically synchronized to the secondary node
Start the 27018 service again
Data on replica nodes is automatically synchronized
6.7.2 Testing a Primary Node Fault
-
Shut down the primary node 27017 to trigger an election
-
The secondary node and the arbiter find that their heartbeats to 27017 fail. After more than 10 seconds without a primary, an election is initiated
automatically. Since 27018 is the only remaining data-bearing node, it is the only candidate. 27019 casts one vote for 27018, and 27018 votes for itself, so 27018 has two votes, which exceeds the "majority". 27019 is the arbiter node and cannot stand for election; 27018 does not vote for it, so its vote count is 0.
-
Result: 27018 becomes the primary node, with both read and write capability
-
If you exit the shell on 27018 and log in again, the prompt shows that it has automatically switched to the primary
-
Reads and writes succeed
-
Now restart the node 27017 that went down. 27017 comes back as a secondary node, while 27018 remains the primary. Logging in to 27017 confirms that it is a slave node, and its data is automatically synchronized from 27018, achieving high availability.
-
6.7.3 Testing Arbiter and Primary Node Faults
-
Shut down the arbiter node 27019, then shut down the current primary node 27018
-
After logging in to 27017, you find that 27017 is still a secondary and the replica set has no primary; as a result the replica set is read-only and
cannot be written to.
Why is there no election? Because 27017's vote count does not reach a majority (i.e. is not greater than or equal to 2); it holds only its own default vote (priority 1). To trigger an election, bring back either of the other members.
-
If only the arbiter node 27019 is brought back, the primary must become 27017, because no real contest takes place: the arbiter does not stand for election,
it only votes.
Start the arbiter node 27019.
27017 is promoted from secondary to primary.
-
If instead only node 27018 is brought back, an election is initiated. Because 27017 and 27018 together hold two votes, the node with the newer data becomes the
primary.
Starting only 27018 triggers the election.
The node with the newer data is elected as the primary.
-
6.7.4 Testing Arbiter and Secondary Node Faults
-
Shut down the arbiter node 27019, then shut down the current secondary node 27018
-
After 10 seconds, the primary node 27017 automatically steps down to a secondary (service degradation)
-
The replica set can no longer accept writes; the service is effectively down for writes
6.8 Compass Connects the replica set
6.9 SpringDataMongoDB Connects to the Replica Set (*)
-
Replica set connection string syntax
```
mongodb://host1,host2,host3/articledb?connect=replicaSet&slaveOk=true&replicaSet=<replica set name>
```
- slaveOk=true: enables reads on secondary nodes, achieving read/write separation
- connect=replicaSet: automatically selects a read or write host within the replica set; together with slaveOk this implements read/write separation
-
Complete connection string reference (understanding)
```
mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]
```
- mongodb:// is a fixed prefix and must be specified
- username:password@ is optional; if supplied, the driver will attempt to authenticate after connecting to the database server
- host1 is required; at least one host must be given. It is the only mandatory part of the URI and specifies the address of the server to connect to. To connect to a replica set, specify multiple host addresses
- :portX is optional; if not specified, the default is 27017
- /database: if username:password@ is supplied, the driver connects to and authenticates against this database. If not specified, the test database is opened by default
- ?options are the connection options. If /database is omitted, the options must still be preceded by a /. All options are name=value pairs separated by & or ; (semicolon)
-
Example: connect to a replica set of three servers (ports 27017, 27018, and 27019). Regardless of which member of the replica
set is the primary and which are secondaries, writes are applied to the primary and queries are distributed to the secondaries.
```yaml
server:
  port: 7777
spring:
  # data source configuration
  data:
    mongodb:
      # host address
      # host: 192.168.83.133
      # default port is 27017
      # port: 27017
      # database
      # database: test
      # connection string for the replica set
      uri: mongodb://192.168.83.133:27017,192.168.83.133:27018,192.168.83.133:27019/articledb?connect=replicaSet&slaveOk=true&replicaSet=myrs
```
Note: the hosts must be all of the hosts in the replica set, including the primary node, the secondary node, and the arbiter node
-
SpringDataMongoDB automatically implements read/write separation:
- Write operations open only a connection to the primary node
- Read operations open connections to both the primary and the secondary nodes, but fetch the data from the secondary
7. Sharded Cluster (*)
7.1 Sharding Concept
-
Sharding is a method of distributing data across multiple machines, and MongoDB uses sharding to support deployments with very large data sets and high-throughput operations
-
In other words: Sharding is the process of breaking up data and distributing it on different machines. This concept is sometimes expressed in the term partitioning. By spreading the data across different machines, more data can be stored and more loads handled without the need for a powerful mainframe computer
-
Database systems with large data sets or high-throughput applications can challenge the capacity of a single server. For example, a high query rate can exhaust the CPU capacity of the server, and a working set larger than the system's RAM stresses the I/O capacity of the disk drives.
-
There are two approaches to system growth: vertical scaling and horizontal scaling.
- Vertical scaling means increasing the capacity of a single server, for example using a more powerful CPU, adding more RAM, or increasing the amount of storage. Limitations of available technology may prevent a single machine from being powerful enough for a given workload. In addition, cloud providers have hard upper limits based on the available hardware configurations. As a result, there is a practical maximum for vertical scaling.
- Horizontal scaling means dividing the system's data set and load across multiple servers, adding additional servers to increase capacity as needed. While the overall speed or capacity of an individual machine may not be high, each machine handling a subset of the overall workload may provide better efficiency than a single high-speed, high-capacity server. Expanding the deployment's capacity only requires adding servers as needed, which can cost less overall than high-end hardware for a single machine. The trade-off is increased complexity of infrastructure and deployment maintenance.
-
MongoDB supports horizontal scaling through sharding
- Lower server cost for higher performance
7.2 Components in a Sharded Cluster
- The MongoDB sharding cluster contains the following components:
- Shard (storage): each shard contains a subset of the sharded data; each shard can be deployed as a replica set
- mongos (routing): mongos acts as a query router, providing an interface between client applications and the sharded cluster
- Config servers ("scheduling"/configuration): the config servers store the cluster's metadata and configuration settings. Starting with MongoDB 3.4, config servers must be deployed as a replica set (CSRS).
- Diagram of component interaction in a sharded cluster
- MongoDB shards data at the collection level and distributes the collection data among the shards in the cluster
7.3 Sharding Cluster Architecture Objectives
- Two shard replica sets (3+3 nodes), one config server replica set (3 nodes), and two router nodes (2), for a total of 11 service nodes
7.4 Creating the Shard Replica Sets
7.4.1 The First Shard Replica Set
-
Prepare a directory for storing data and logs
```bash
#-----------myshardrs01
mkdir -p /mongodb/sharded_cluster/myshardrs01_27018/log
mkdir -p /mongodb/sharded_cluster/myshardrs01_27018/data/db
mkdir -p /mongodb/sharded_cluster/myshardrs01_27118/log
mkdir -p /mongodb/sharded_cluster/myshardrs01_27118/data/db
mkdir -p /mongodb/sharded_cluster/myshardrs01_27218/log
mkdir -p /mongodb/sharded_cluster/myshardrs01_27218/data/db
```
-
Create or modify a configuration file
```bash
vim /mongodb/sharded_cluster/myshardrs01_27018/mongod.conf
```
myshardrs01_27018/mongod.conf:
```yaml
systemLog:
  # MongoDB sends all log output to a file
  destination: file
  # The path of the log file to which mongod or mongos should send all diagnostic logging information
  path: "/mongodb/sharded_cluster/myshardrs01_27018/log/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage:
  # The directory where the mongod instance stores its data; storage.dbPath applies only to mongod
  dbPath: "/mongodb/sharded_cluster/myshardrs01_27018/data/db"
  journal:
    # Enable or disable the durability journal to ensure that data files remain valid and recoverable
    enabled: true
processManagement:
  # Run the mongos or mongod process in the background as a daemon
  fork: true
  # The file that holds the process ID of mongos or mongod, where mongos or mongod will write its PID
  pidFilePath: "/mongodb/sharded_cluster/myshardrs01_27018/log/mongod.pid"
net:
  # bindIpAll: true
  # Addresses to bind
  bindIp: localhost,192.168.83.133
  # Port to bind
  port: 27018
replication:
  # Name of the replica set
  replSetName: myshardrs01
sharding:
  # Sharding role
  clusterRole: shardsvr
```
Setting sharding.clusterRole requires the mongod instance to run with replication enabled; to deploy the instance as a replica set member, use replSetName to specify the name of the replica set. The possible values of clusterRole are configsvr and shardsvr.
-
The same operation is repeated for 27118 and 27218
```bash
vim /mongodb/sharded_cluster/myshardrs01_27118/mongod.conf
vim /mongodb/sharded_cluster/myshardrs01_27218/mongod.conf
```
-
Start the first shard replica set: one primary, one secondary, one arbiter
-
Start them one by one
-
Initialize the replica set and create the primary node. Use the client to connect to any node; here, connect to the node intended to be the primary.
(1) Execute the replica set initialization command
rs.initiate()
(2) Check the replica set status
rs.status()
(3) Add the secondary node and the arbiter node (a sketch follows below)
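The original shows these commands as screenshots; following the port layout of this tutorial (27118 as the secondary, 27218 as the arbiter), a minimal sketch would be:

```javascript
rs.add("192.168.83.133:27118")      // add the secondary member
rs.addArb("192.168.83.133:27218")   // add the arbiter member
```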
-
-
View the replica set configuration
rs.conf()
If the following information is displayed, the configuration is complete
7.4.2 The Second Shard Replica Set
-
Prepare a directory for storing data and logs
```bash
#-----------myshardrs02
mkdir -p /mongodb/sharded_cluster/myshardrs02_27318/log
mkdir -p /mongodb/sharded_cluster/myshardrs02_27318/data/db
mkdir -p /mongodb/sharded_cluster/myshardrs02_27418/log
mkdir -p /mongodb/sharded_cluster/myshardrs02_27418/data/db
mkdir -p /mongodb/sharded_cluster/myshardrs02_27518/log
mkdir -p /mongodb/sharded_cluster/myshardrs02_27518/data/db
```
-
Create or modify a configuration file
```bash
vim /mongodb/sharded_cluster/myshardrs02_27318/mongod.conf
vim /mongodb/sharded_cluster/myshardrs02_27418/mongod.conf
vim /mongodb/sharded_cluster/myshardrs02_27518/mongod.conf
```
myshardrs02_27318/mongod.conf:
```yaml
systemLog:
  # MongoDB sends all log output to a file
  destination: file
  # The path of the log file to which mongod or mongos should send all diagnostic logging information
  path: "/mongodb/sharded_cluster/myshardrs02_27318/log/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage:
  # The directory where the mongod instance stores its data; storage.dbPath applies only to mongod
  dbPath: "/mongodb/sharded_cluster/myshardrs02_27318/data/db"
  journal:
    # Enable or disable the durability journal to ensure that data files remain valid and recoverable
    enabled: true
processManagement:
  # Run the mongos or mongod process in the background as a daemon
  fork: true
  # The file that holds the process ID of mongos or mongod, where mongos or mongod will write its PID
  pidFilePath: "/mongodb/sharded_cluster/myshardrs02_27318/log/mongod.pid"
net:
  # bindIpAll: true
  # Addresses to bind
  bindIp: localhost,192.168.83.133
  # Port to bind
  port: 27318
replication:
  # Name of the replica set
  replSetName: myshardrs02
sharding:
  # Sharding role
  clusterRole: shardsvr
```
Repeat the preceding steps for myshardrs02_27418 and myshardrs02_27518, adjusting the paths and ports accordingly
-
Start the second shard replica set: one primary, one secondary, one arbiter
-
(1) Initialize the replica set and create the primary node: use the client to connect to any node, preferably the one intended to be the primary
Execute the initialization command: rs.initiate()
View the replica set status: rs.status()
-
(2) Check the primary node configuration: rs.conf()
-
(3) Add the secondary node and the arbiter node, as sketched below
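Again the commands appear as screenshots in the original; with this tutorial's port layout (27418 as the secondary, 27518 as the arbiter), a minimal sketch would be:

```javascript
rs.add("192.168.83.133:27418")      // add the secondary member
rs.addArb("192.168.83.133:27518")   // add the arbiter member
```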
-
View the configuration of replica sets
rs.conf()
rs.status()
7.5 Creating the Config Server Replica Set
-
Prepare a directory for storing data and logs
```bash
#-----------configrs
# Create the data and log directories for the config nodes
mkdir -p /mongodb/sharded_cluster/myconfigrs_27019/log
mkdir -p /mongodb/sharded_cluster/myconfigrs_27019/data/db
mkdir -p /mongodb/sharded_cluster/myconfigrs_27119/log
mkdir -p /mongodb/sharded_cluster/myconfigrs_27119/data/db
mkdir -p /mongodb/sharded_cluster/myconfigrs_27219/log
mkdir -p /mongodb/sharded_cluster/myconfigrs_27219/data/db
```
-
Create or modify a configuration file
```bash
vim /mongodb/sharded_cluster/myconfigrs_27019/mongod.conf
vim /mongodb/sharded_cluster/myconfigrs_27119/mongod.conf
vim /mongodb/sharded_cluster/myconfigrs_27219/mongod.conf
```
myconfigrs_27019/mongod.conf
```yaml
systemLog:
  # MongoDB sends all log output to a file
  destination: file
  # The path of the log file to which mongod or mongos should send all diagnostic logging information
  path: "/mongodb/sharded_cluster/myconfigrs_27019/log/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
storage:
  # The directory where the mongod instance stores its data; storage.dbPath applies only to mongod
  dbPath: "/mongodb/sharded_cluster/myconfigrs_27019/data/db"
  journal:
    # Enable or disable the durability journal to ensure that data files remain valid and recoverable
    enabled: true
processManagement:
  # Run the mongos or mongod process in the background as a daemon
  fork: true
  # The file that holds the process ID of mongos or mongod, where mongos or mongod will write its PID
  pidFilePath: "/mongodb/sharded_cluster/myconfigrs_27019/log/mongod.pid"
net:
  # bindIpAll: true
  # Addresses to bind
  bindIp: localhost,192.168.83.133
  # Port to bind
  port: 27019
replication:
  # Name of the replica set
  replSetName: myconfigrs
sharding:
  # Sharding role
  clusterRole: configsvr
```
-
myconfigrs_27119/mongod.conf and myconfigrs_27219/mongod.conf are analogous; change the paths and ports accordingly
-
Start the three Mongod services in turn
-
(1) Initialize the replica set and create the master node
- Use client commands to connect to either node, but try to connect to the primary node
- Execute the initialize replica set command:
rs.initiate()
- View replica sets (excerpt) :
rs.status()
-
Add two replica nodes
-
```javascript
rs.add("192.168.83.133:27119")
rs.add("192.168.83.133:27219")
rs.status()
```
-
7.6 Creating and Operating the Router Nodes
7.6.1 Creation and connection of the first routing node
-
Prepare a directory for storing data and logs
```bash
#-----------mongos01
mkdir -p /mongodb/sharded_cluster/mymongos_27017/log
```
-
Create or modify the mymongos_27017/mongos.conf configuration file
```bash
vi /mongodb/sharded_cluster/mymongos_27017/mongos.conf
```
```yaml
systemLog:
  # MongoDB sends all log output to a file
  destination: file
  # The path of the log file to which mongod or mongos should send all diagnostic logging information
  path: "/mongodb/sharded_cluster/mymongos_27017/log/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
processManagement:
  # Run the mongos or mongod process in the background as a daemon
  fork: true
  # The file that holds the process ID of mongos or mongod, where mongos or mongod will write its PID
  pidFilePath: "/mongodb/sharded_cluster/mymongos_27017/log/mongod.pid"
net:
  # bindIpAll: true
  # Addresses to bind
  bindIp: localhost,192.168.83.133
  # Port to bind
  port: 27017
sharding:
  # Specify the config server replica set
  configDB: myconfigrs/192.168.83.133:27019,192.168.83.133:27119,192.168.83.133:27219
```
-
Start the Mongos service
-
```bash
./mongos -f /mongodb/sharded_cluster/mymongos_27017/mongos.conf
```
-
-
A client can log in to mongos and list the databases, but cannot write data
Reason: operations go through the router node, which so far is connected only to the config servers; no shard (data) nodes have been added yet, so business data cannot be written
7.6.2 Adding Shards on the Router Node
-
(1) Add a shard with the command:
```javascript
sh.addShard("IP:Port")
```
- Add the first shard replica set
```javascript
sh.addShard("myshardrs01/192.168.83.133:27018,192.168.83.133:27118,192.168.83.133:27218")
```
-
View the shard status:
sh.status()
-
Continue and add the second shard replica set
```javascript
sh.addShard("myshardrs02/192.168.83.133:27318,192.168.83.133:27418,192.168.83.133:27518")
```
- Note: if adding the shard fails, check the error message, remove the shard manually, and then add it again
Remove a shard
```javascript
use admin
db.runCommand({ removeShard: "myshardrs02" })
```
Note: if only one shard remains it cannot be removed. Removing a shard automatically migrates its data off first, which takes time; once the migration completes, run the remove command again to actually delete the shard
-
(2) Enable sharding
- Syntax: sh.enableSharding("<database>"), then sh.shardCollection("<database>.<collection>", { "<key>": 1 })
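A minimal sketch using the articledb database from this tutorial:

```javascript
// Enable sharding on the database before sharding any of its collections
sh.enableSharding("articledb")
```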
-
(3) Collection sharding
-
For collection sharding, you must specify the collection and sharding keys using the sh.shardCollection() method
-
Syntax:
```javascript
sh.shardCollection(namespace, key, unique)
```
-
To shard a collection, you need to choose a shard key: an indexed single field or compound field that must exist
in every document. MongoDB divides the data into chunks according to the shard key and distributes the chunks evenly
across the shards. To divide the data into chunks by shard key, MongoDB uses either hashed sharding (random, even distribution) or
ranged sharding (distribution by value).
Any field can serve as the shard key, for example nickname, as long as it is present in every document.
-
Sharding rules
-
Sharding rule 1: hash strategy
For hashed sharding, MongoDB computes the hash of a field's value and uses the hash values to create chunks. In a system using hashed sharding, documents with "close" shard key values are unlikely to end up in the same chunk, so the data is spread out
better.
Example: shard the data using the hash of nickname as the shard key
```javascript
sh.shardCollection("articledb.comment", { "nickname": "hashed" })
```
View the shard status:
sh.status()
-
Sharding rule 2: range strategy
For range-based sharding, MongoDB divides the data set into ranges determined by the shard key values. Suppose the shard key is numeric: imagine a number line from negative infinity to positive infinity; each shard key value corresponds to a point on that line. MongoDB partitions the line into smaller, non-overlapping segments called chunks, each containing data for a certain range of shard key values. In a system using ranged sharding, documents with "close" shard key values are likely to be stored in the same chunk, and therefore on the same shard.
Example: use the age field of the author collection as the shard key, distributing by age value
```javascript
sh.shardCollection("articledb.author", { "age": 1 })
```
View the shard status:
sh.status()
-
Note:
1) A collection can have only one shard key; specifying more than one results in an error.
2) Once a collection is sharded, the shard key and its values cannot be changed: you cannot select a different shard key for the collection, and you cannot update the value of the shard key field.
3) Data is distributed according to the index on age.
-
-
Performance comparison between range-based sharding and hashing sharding
- Range-based sharding provides more efficient range queries. Given a range of shard keys, the router can easily determine which chunks store the requested data and forward the request to the corresponding shards. However, ranged sharding can lead to uneven data distribution across shards, and sometimes the negative effects outweigh the benefits to query performance. For example, if the shard key field grows linearly, all requests within a given period of time fall into a fixed chunk, and therefore onto the same shard. In that case a small number of shards carry most of the cluster's data, and the system does not scale well.
- In contrast, hashed sharding guarantees data balance across the cluster at the cost of range query performance. The randomness of the hash values causes the data to be distributed randomly across the chunks, and therefore across the shards. But also because of that randomness, a range query can hardly determine which shards should be contacted, and usually has to query all shards to return the desired result.
- In the absence of special requirements, hashed sharding is generally recommended.
- Using _id as the shard key is a good choice because it is mandatory, and you can use the hash of each document's _id as the shard key. This scheme distributes reads and writes evenly, and because every document has a different shard key, the chunks can be very fine-grained (see the sketch after this list).
- It is still not perfect, because a query over multiple documents will have to hit all shards. Even so, it is a fairly good solution.
- Ideally, shard keys allow documents to be distributed evenly across the cluster
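A minimal sketch of the hashed _id approach described above; the collection name is illustrative:

```javascript
// Shard a collection on the hash of _id for an even read/write distribution
sh.shardCollection("articledb.mycollection", { "_id": "hashed" })
```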
-
7.6.3 Inserting Data after Sharding
-
(1) Test 1 (hashed rule): after logging in to mongos, insert 1000 documents into the comment collection in a loop
```javascript
use articledb
for (var i = 0; i <= 1000; i++) {
  db.comment.insert({ _id: i + "", nickname: "BoBo" + i })
}
db.comment.count()
```
-
Tip: js syntax, because mongo’s shell is a JavaScript shell
-
Note: documents inserted through the router must contain the shard key; otherwise the insert fails
-
The test results
- Log in to the primary node of each of the two shards and count the documents
- The first shard replica set:
db.comment.count()
508 documents. Note: switch to the articledb database first
- The second shard replica set:
db.comment.count()
493 documents. Note: switch to the articledb database first
-
Conclusion
- The inserted documents are distributed almost evenly across the two shards, allocated according to the hash of the shard key. This kind of allocation is easy to scale horizontally: once more storage space is needed, shards can be added, improving performance at the same time.
- Use db.comment.stats() to view the full statistics of a single collection; executing this command on mongos shows how the collection's data is sharded.
- Use sh.status() to view the sharding information for all collections in the database.
-
(2) Test 2 (range rule): after logging in to mongos, insert 20000 documents into the author collection in a loop for testing
-
Note: if you check the status and find that the data is not being split across shards, possible reasons are:
1) the system is busy and is still balancing;
2) the chunk is not yet full. The default chunk size is 64MB; only after a chunk fills up is the data split into other chunks.
Generating enough data to fill a chunk would take a while, so for testing you can make the chunk size smaller, here 1MB, as follows:
```javascript
use config
db.settings.save({ _id: "chunksize", value: 1 })
```
-
Test it and change it back
```javascript
db.settings.save({ _id: "chunksize", value: 64 })
```
-
-
```javascript
// Insert data
use articledb
for (var i = 1; i <= 20000; i++) {
  db.author.save({
    "name": "BoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBoBo" + i,
    "age": NumberInt(i % 120)
  })
}
```
-
View the data of two fragmented replica sets
- The first shard replica set myshardrs01 has 7681 documents
- The second shard replica set myshardrs02 has 18574 documents
7.6.4 Adding another Routing Node
-
Prepare a directory for storing data and logs
```bash
#-----------mongos02
mkdir -p /mongodb/sharded_cluster/mymongos_27117/log
```
-
Create or modify a configuration file
```bash
vi /mongodb/sharded_cluster/mymongos_27117/mongos.conf
```
-
mymongos_27117/mongos.conf
```yaml
systemLog:
  # MongoDB sends all log output to a file
  destination: file
  # The path of the log file to which mongod or mongos should send all diagnostic logging information
  path: "/mongodb/sharded_cluster/mymongos_27117/log/mongod.log"
  # When the mongos or mongod instance restarts, append new entries to the end of the existing log file
  logAppend: true
processManagement:
  # Run the mongos or mongod process in the background as a daemon
  fork: true
  # The file that holds the process ID of mongos or mongod, where mongos or mongod will write its PID
  pidFilePath: "/mongodb/sharded_cluster/mymongos_27117/log/mongod.pid"
net:
  # bindIpAll: true
  # Addresses to bind
  bindIp: localhost,192.168.83.133
  # Port to bind
  port: 27117
sharding:
  # Specify the config server replica set
  configDB: myconfigrs/192.168.83.133:27019,192.168.83.133:27119,192.168.83.133:27219
```
-
Start the Mongos service
```bash
./mongos -f /mongodb/sharded_cluster/mymongos_27117/mongos.conf
```
-
Use the mongo client to log in to 27117; the second router does not need the shards to be added again, because the shard configuration is stored on the config servers
7.7 SpringDataMongoDB Connects to the Sharded Cluster
-
The SpringDataMongoDB client configuration for multiple routes is as follows
```yaml
server:
  port: 7777
spring:
  # data source configuration
  data:
    mongodb:
      # host address
      # host: 192.168.83.133
      # default port is 27017
      # port: 27017
      # database
      # database: test
      # connection string for the replica set
      # uri: mongodb://192.168.83.133:27017,192.168.83.133:27018,192.168.83.133:27019/articledb?connect=replicaSet&slaveOk=true&replicaSet=myrs
      # connection string for the routers
      uri: mongodb://192.168.83.133:27017,192.168.83.133:27117/articledb
```
-
The logs show that one of the routers is selected for writing the data
8. Security Authentication
8.1 Users and Role-based Permissions in the MongoDB Service
-
By default, access control is not enabled when a MongoDB instance starts; any client that can reach the instance's server can connect and perform any operation, and MongoDB does not authenticate the connecting client. This is very dangerous.
-
According to the MongoDB documentation, you can take the following steps to secure MongoDB:
1) Use a non-default port. The default port 27017 is not secure, since anyone who knows the IP can connect.
2) Set up MongoDB's network environment: ideally deploy MongoDB on internal company servers that cannot be reached from the outside, with internal access over a VPN.
3) Enable security authentication: configure internal authentication between servers, and account/password authentication for clients connecting to the cluster.
-
To enable user access control (user authentication), start the MongoDB instance with the --auth option, or add the corresponding setting (auth=true, or security.authorization: enabled in the YAML format) to the startup configuration file
-
Relevant concepts
-
1) Enabling access control:
MongoDB uses Role-Based Access Control (RBAC) to manage user access to an instance.
Users are granted one or more roles that determine their access to database resources and the operations they may perform. Until a user is assigned a role, it cannot access the instance. Enable access control by adding the --auth option when the instance is started, or by specifying auth=true (security.authorization: enabled) in the startup configuration file.
-
2) Roles:
In MongoDB, users are granted permissions on database resources through roles. The permissions of each role can be specified explicitly, inherited from other roles, or both.
-
3) Permissions: a permission consists of a specified database resource and the actions allowed on that resource
- Resources include databases, collections, parts of collections, and the cluster.
- Actions include create, read, update, and delete operations on the resource (CRUD).
-
-
A role can contain one or more existing roles; a newly created role inherits all permissions of the roles it contains. Within the same database, a new role can inherit the permissions of other roles in that database, and a role created in the admin database can inherit the permissions of roles in any database.
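A minimal sketch of creating a custom role that inherits from a built-in role; the role name and privileges are illustrative:

```javascript
use admin
db.createRole({
  role: "readCommentOnly",                                   // hypothetical role name
  privileges: [
    { resource: { db: "articledb", collection: "comment" }, actions: ["find"] }
  ],
  roles: [ { role: "read", db: "articledb" } ]               // inherit the built-in read role
})
```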
-
Common built-in roles
- Database user roles: read and readWrite
- All database user roles: readAnyDatabase, readWriteAnyDatabase, userAdminAnyDatabase, and dbAdminAnyDatabase
- Database management roles: dbAdmin, dbOwner, and userAdmin
- Cluster management roles: clusterAdmin, clusterManager, clusterMonitor, and hostManager
- Backup and restore roles: backup and restore
- Super user role: root
- Internal role: __system
-
Common role permission description
- read: can read any data in the specified database
- readWrite: can read and write any data in the specified database, including creating, renaming, and deleting collections
- userAdmin: can create and modify users in the specified database
- root: superuser permissions
8.2 Single-Instance Environment
- Objective: Enable security authentication for the MongoDB service of a single instance. The single instance refers to the MongoDB instance that has no replica set or sharding enabled
8.2.1 (Optional) Stopping a Running Service
-
Security authentication for a single mongod instance can be added either when the service is first set up or to a service that is already running
-
There are two ways to stop a service: quick shutdown and standard shutdown
-
Quick shutdown (fast and simple, but may corrupt data): kill the process with the kill command
```bash
kill -2 <PID>
```
-
Standard shutdown (safer for the data, but more involved): shut down the service with the shutdownServer command in the mongo client
```javascript
// Connect on the server host; if authentication is enabled you must authenticate first
// ./mongo --port 27017
use admin
db.shutdownServer()
```
-
-
Additional note: if the data is corrupted, perform the following steps (for reference)
-
(1) Delete lock file
```bash
rm -f /mongodb/single/data/db/*.lock
```
-
(2) Data repair
```bash
/usr/local/mongodb/bin/mongod --repair --dbpath=/mongodb/single/data/db
```
-
8.2.2 Adding Users and Permissions
-
Start the mongod service in single-instance mode
-
Connect a client to port 27017
-
When managing users, it is best to start the mongod service without authorization enabled
-
Create two administrator users: the system superuser myroot and the admin-database user administrator myadmin
-
```javascript
// Switch to the admin database
use admin
// Create the system superuser myroot, password 123456, role root
db.createUser({ user: "myroot", pwd: "123456", roles: ["root"] })
// Create the account myadmin for managing users and permissions
db.createUser({ user: "myadmin", pwd: "123456", roles: [{ role: "userAdminAnyDatabase", db: "admin" }] })
// View the users that have been created
db.system.users.find()
// Delete a user
db.dropUser("myadmin")
// Change a password
db.changeUserPassword("myroot", "123456")
```
-
-
Tips
-
1) This example creates two users: a superuser and a user administrator. In practice only one is needed; if you have high security requirements, do not create the superuser at all, to prevent the superuser account from leaking.
-
2) As in other databases (such as MySQL), permission management works in much the same way: user and permission information is stored inside the database itself. MongoDB keeps all user information (user name, password, and associated database) in the system.users collection of the admin database.
-
{role:”userAdminAnyDatabase”, db:””}
-
-
Authentication test
```javascript
db.auth("myroot", "12345")    // wrong password: authentication fails
db.auth("myroot", "123456")   // correct password: authentication succeeds
```
-
Creating a Common User
-
An ordinary user can be added either before or after authentication is enabled; if authentication is already enabled, you must
first log in and authenticate as a user-administration account before performing the operation. Under the hood, the user information is stored in a collection of the admin database
-
```javascript
// Switch to the database that will be used later
use articledb
// Create an ordinary user with read/write access (readWrite) to articledb; password 123456
db.createUser({ user: "bobo", pwd: "123456", roles: [{ role: "readWrite", db: "articledb" }] })
```
-
-
Note: if authentication is already enabled, the client must log in as a user with a role from the admin database, for example the myroot
user with the root role, and then create users with other roles through that account
8.2.3 Enabling Authentication on the Server and Logging In from the Client
-
Stop services that have been started
-
There are two ways to enable permission authentication and start the service: parameter or configuration file
-
Method 1: command-line parameter (start mongod with the --auth option)
-
Method 2: configuration file
```bash
vim /mongodb/single/mongod.conf
```
```yaml
security:
  # Enable authorization
  authorization: enabled
```
-
-
There are two ways to authenticate: connect first and then authenticate in the mongo shell, or authenticate directly when connecting.
-
Connect first and then authenticate
-
db.auth("muroot","123456") Copy the code
-
-
Direct authentication at connection time
-
```bash
./mongo --port 27017 --authenticationDatabase admin -u myroot -p 123456
```
-
Tip:
-u: user name
-p: password
--authenticationDatabase: specifies the database to authenticate against; when logging in with a user name and password, you must specify the database in which the user was created
-
-
8.2.4 SpringDataMongoDB Connection Authentication
- To connect to the MongoDB server using username and password, you must use the format ‘username:password@hostname/dbname’ with ‘username’ as the username and ‘password’ as the password
- Target: Connect to the MongoDB service as user bobo using password 123456
8.3 Replica Set Environment
8.3.1 Preface
- For the replica set built earlier, enable security authentication and log in with an account and password for safety
- The replica set environment must already be set up before these steps
- Access control for a replica set has two aspects
- Internal authentication between the members of a replica set or sharded cluster uses either a key file or an x.509 certificate. This document uses a key file; key files are fine for a test environment, while x.509 certificates are recommended for production. The principle is that the instances in the cluster verify each other's key content when they connect; only instances holding the same key can access each other
- Access authorization for clients connecting to the MongoDB cluster from outside: authorization must be enabled when connecting through a GUI client or from code
- With keyfile authentication, every mongod instance in the replica set uses the content of the keyfile as a shared password; only mongod or mongos instances holding the correct key file can join the replica set. The content of the key file must be between 6 and 1024 characters, and on Unix/Linux systems the file owner must have at least read permission on the file
8.3.2 (Optional) Stopping the Running Replica Set Services
- Shut down the standalone Mongod service used in the previous tests
8.3.3 Adding an Administrator Account on the Primary Node
-
Start the replica set service in sequence
-
You simply add users on the primary node and the replica set synchronizes automatically
-
Before enabling authentication, create the superuser myroot with password 123456, as shown below
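This is the same command as in the single-instance setup; a minimal sketch, run on the primary:

```javascript
use admin
db.createUser({ user: "myroot", pwd: "123456", roles: ["root"] })
```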
8.3.4 Creating a Key File for Replica Set Authentication
-
Step 1: Generate a key file to the current folder
-
Key files can be generated using any method. For example, the following uses OpenSSL to generate a password file, and then uses chmod to change the file permissions so that only the file owner has read permissions
-
```bash
openssl rand -base64 90 -out ./mongo.keyfile
chmod 400 ./mongo.keyfile
ll mongo.keyfile
```
-
-
Tips:
All replica set nodes must use the same keyfile; it is usually generated on one machine and then copied to the others, and it must be readable only by its owner. Otherwise you will later get the error: permissions on /mongodb/replica_sets/myrs_27017/mongo.keyfile are too open
Strictly speaking, the key file can be placed anywhere as long as the content is identical everywhere; but to make it easy to find, it is recommended to put it in a fixed location on each machine, in the same directory as the configuration file. Here we copy the file into each node's directory:
```bash
cp mongo.keyfile /mongodb/replica_sets/myrs_27017
cp mongo.keyfile /mongodb/replica_sets/myrs_27018
cp mongo.keyfile /mongodb/replica_sets/myrs_27019
```
8.3.5 Modifying the Configuration Files to Specify the keyFile
-
Edit the mongod.conf file of each service and add the following content:
/mongodb/replica_sets/myrs_27017/mongod.conf
```yaml
security:
  # KeyFile authentication file
  keyFile: /mongodb/replica_sets/myrs_27017/mongo.keyfile
```
- Do the same in /mongodb/replica_sets/myrs_27018/mongod.conf and /mongodb/replica_sets/myrs_27019/mongod.conf, adjusting the keyFile path accordingly
-
After modifying the configuration, start the replica set service one by one
8.3.6 SpringDataMongoDB Connects to the Replica Set
-
YAML file
```yaml
server:
  port: 7777
spring:
  # data source configuration
  data:
    mongodb:
      # host address
      # host: 192.168.83.133
      # default port is 27017
      # port: 27017
      # database
      # database: test
      # username
      # username: bobo
      # password
      # password: 123456
      # connection string for the replica set
      # uri: mongodb://192.168.83.133:27017,192.168.83.133:27018,192.168.83.133:27019/articledb?connect=replicaSet&slaveOk=true&replicaSet=myrs
      # connection string for the routers
      # uri: mongodb://192.168.83.133:27017,192.168.83.133:27117/articledb
      # single instance with authentication, as a connection string
      # uri: mongodb://bobo:123456@192.168.83.133:27017/articledb
      # replica set with authentication, as a connection string
      uri: mongodb://myroot:123456@192.168.83.133:27017,192.168.83.133:27018,192.168.83.133:27019/articledb?connect=replicaSet&slaveOk=true&replicaSet=myrs
```
8.4 Authentication in a Sharded Cluster Environment
-
The key files must have identical content no matter where they are placed; to make them easy to find, it is recommended to put them in a fixed location on each machine,
in the same directory as the configuration files. Here, copy the file into each node's directory:
```bash
echo '/mongodb/sharded_cluster/myshardrs01_27018/mongo.keyfile
/mongodb/sharded_cluster/myshardrs01_27118/mongo.keyfile
/mongodb/sharded_cluster/myshardrs01_27218/mongo.keyfile
/mongodb/sharded_cluster/myshardrs02_27318/mongo.keyfile
/mongodb/sharded_cluster/myshardrs02_27418/mongo.keyfile
/mongodb/sharded_cluster/myshardrs02_27518/mongo.keyfile
/mongodb/sharded_cluster/myconfigrs_27019/mongo.keyfile
/mongodb/sharded_cluster/myconfigrs_27119/mongo.keyfile
/mongodb/sharded_cluster/myconfigrs_27219/mongo.keyfile
/mongodb/sharded_cluster/mymongos_27017/mongo.keyfile
/mongodb/sharded_cluster/mymongos_27117/mongo.keyfile' | xargs -n 1 cp -v /root/mongo.keyfile
```
-
The shard replica sets are configured in the same way as the plain replica set above
-
The difference: in the mongos service configuration file mongos.conf, only the keyFile needs to be added for authentication
```yaml
security:
  # KeyFile authentication file
  keyFile: /mongodb/sharded_cluster/mymongos_27017/mongo.keyfile
```
-
mongos does not need the authorization: enabled setting. The reason is that security for a sharded cluster with replica sets is configured in two aspects. For internal authentication between the nodes of the replica sets and between the mongo instances, only members holding the same keyfile
can communicate with each other (for the second router: keyFile: /mongodb/sharded_cluster/mymongos_27117/mongo.keyfile). The mongod instances, however, are the real shards that store the data, while mongos only routes requests and stores no data; therefore all mongod instances enable authorization: enabled for data access. This way, users can access the data only with a correct account and password.
-
SpringDataMongoDB connection authentication
```yaml
server:
  port: 7777
spring:
  # data source configuration
  data:
    mongodb:
      # host address
      # host: 192.168.83.133
      # default port is 27017
      # port: 27017
      # database
      # database: test
      # username
      # username: bobo
      # password
      # password: 123456
      # connection string for the replica set
      # uri: mongodb://192.168.83.133:27017,192.168.83.133:27018,192.168.83.133:27019/articledb?connect=replicaSet&slaveOk=true&replicaSet=myrs
      # connection string for the routers
      # uri: mongodb://192.168.83.133:27017,192.168.83.133:27117/articledb
      # single instance with authentication, as a connection string
      # uri: mongodb://bobo:123456@192.168.83.133:27017/articledb
      # replica set with authentication, as a connection string
      # uri: mongodb://myroot:123456@192.168.83.133:27017,192.168.83.133:27018,192.168.83.133:27019/articledb?connect=replicaSet&slaveOk=true&replicaSet=myrs
      # sharded cluster with authentication, as a connection string
      uri: mongodb://myroot:123456@192.168.83.133:27017,192.168.83.133:27117/articledb
```