


This article introduces the distributed storage system Ceph from the aspects of architecture, application scenarios, internal I/O flow, heartbeat mechanism, communication framework, CRUSH algorithm, and QoS. I hope it helps.

Reading Index


1. Introduction to Ceph architecture and application scenarios

1.1 Introduction to Ceph

1.2 Characteristics of Ceph

1.3 Ceph architecture

1.4 Core components and concepts of Ceph

1.5 Three Storage Types – Block Storage

1.6 Three Storage Types – File Storage

1.7 Three Storage Types – Object Storage

2. Ceph IO process and data distribution

2.1 Normal I/O flow chart

2.2 New I/O flowchart

2.3 Ceph IO algorithm flow

2.4 Ceph IO pseudocode process

2.5 Ceph RBD IO process

2.6 Ceph RBD IO framework Diagram

2.7 Ceph Pool and PG distribution

2.8 Ceph Data Expands PG distribution

3. Ceph heartbeat mechanism

3.1 Heartbeat Introduction

3.2 Ceph heartbeat detection

3.3 Ceph Heartbeat Detection between OSD Nodes

3.4 Ceph Heartbeat Detection between OSD nodes and Mon Nodes

3.5 Summary of Ceph Heartbeat Detection

4. Ceph communication framework

4.1 Introduction to Ceph communication frameworks

4.2 Ceph communication framework design mode

4.3 Flowchart of Ceph communication framework

4.4 Ceph Communication framework class diagram

4.5 Ceph communication data format

5. Ceph CRUSH algorithm

5.1 Data distribution algorithm challenges

5.2 Ceph CRUSH Algorithm Description

5.3 Principle of Ceph CRUSH algorithm

5.3.1 Hierarchical Cluster Map

5.3.2 Data distribution policy Placement Rules

5.3.3 Bucket Random algorithm type

5.4 Ceph CRUSH algorithm cases

6. Customized Ceph RBD QoS

6.1 Introduction to QoS

6.2 Ceph IO operation types

6.3 Ceph official QoS principles

6.4 Principles of customized QoS

6.4.1 Introduction to the token bucket algorithm

6.4.2 RBD Token bucket algorithm Flow

6.4.3 RBD Token bucket algorithm framework diagram

1. Introduction to Ceph architecture and application scenarios

▍ 1.1 Introduction to Ceph

Ceph is a unified distributed storage system designed to provide high performance, reliability and scalability.

The Ceph project has its roots in Sage Weil's doctoral work (the first results were published in 2004) and was later contributed to the open source community. After several years of development, it is now widely used and supported by many cloud computing vendors.

Red Hat and OpenStack can both be integrated with Ceph to provide back-end storage for VM images.

▍ 1.2 Characteristics of Ceph

High performance

A. It abandons the traditional centralized metadata addressing scheme and instead uses the CRUSH algorithm, achieving balanced data distribution and high parallelism.

B. With the isolation of disaster-recovery domains taken into account, replica placement rules for various workloads can be implemented, such as cross-equipment-room placement and rack awareness.

C. Supports scales of thousands of storage nodes and data volumes from terabytes to petabytes.

High availability

A. The number of copies can be flexibly controlled.

B. Supports separation of fault domains, providing strong data consistency.

C. Automatic restoration in multiple fault scenarios.

D. No single point of failure, automatic management.

High scalability

A. Decentralization.

B. Flexible expansion.

C. Capacity and performance grow linearly as nodes are added.

Feature rich

A. Supports three storage interfaces: block storage, file storage, and object storage.

B. Support custom interfaces and support multiple language drivers.

▍ 1.3 Ceph architecture

Three interfaces are supported:

Object: has a native API and is compatible with the Swift and S3 APIs.

Block: supports thin provisioning, snapshots, and cloning.

File: a POSIX interface that supports snapshots.



1.4 Ceph core components and concepts

Monitor

A Ceph cluster requires a small cluster of multiple monitors to synchronize data through Paxos and store OSD metadata.

OSD

Object Storage Device (OSD) is a process that responds to client requests and returns specific data. A Ceph cluster usually has many OSD nodes.

MDS

MDS, Ceph Metadata Server, is the Metadata service on which CephFS depends.

Object

The lowest-level storage unit in Ceph is the Object; each Object contains metadata and raw data.

PG

PG, short for Placement Group, is a logical concept. A PG maps to multiple OSD nodes. The PG layer was introduced to better allocate and locate data.

RADOS

RADOS, which stands for Reliable Autonomic Distributed Object Store, is the core of a Ceph cluster; it handles cluster operations such as data distribution and failover.

Librados

Librados is the access library provided for RADOS. Because RADOS is a protocol that is hard to access directly, the upper-layer RBD, RGW, and CephFS all reach the cluster through Librados, which currently supports PHP, Ruby, Java, Python, C, and C++ (a minimal usage sketch appears at the end of this section).

CRUSH

CRUSH is a data distribution algorithm used by Ceph, similar to consistent hashing, to distribute data to desired places.

RBD

RBD, RADOS Block Device, is a block device service provided by Ceph.

RGW

RGW, RADOS Gateway, is the object storage service provided by Ceph. Its interfaces are compatible with S3 and Swift.

CephFS

CephFS, the Ceph File System, is the file system service provided by Ceph.
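To make the Librados entry point above concrete, here is a minimal sketch using the python-rados bindings. It assumes a cluster reachable through the default /etc/ceph/ceph.conf; the pool name 'rbd' and the object name are placeholders.

import rados

# Connect to the cluster with the default configuration file and keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # Open an I/O context on an existing pool (placeholder name).
    ioctx = cluster.open_ioctx('rbd')
    try:
        # Write an object into RADOS and read it back.
        ioctx.write_full('hello_object', b'hello ceph')
        print(ioctx.read('hello_object'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()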

1.5 Three storage types – block storage



Typical equipment:

Disk array, hard disk

The raw disk space is mapped to the host.

Advantages:

A. Data is protected by means such as RAID and LVM.

B. Combine several cheap hard disks to increase the capacity.

C. A logical disk is a combination of multiple disks to improve read/write efficiency.

Disadvantages:

A. If a SAN architecture is used, Fibre Channel switches make the cost high.

B. Hosts cannot share data.

Usage Scenarios:

A. Docker container and VM disk storage allocation.

B. Log storage.

C. File storage.

D….

1.6 three storage types – file storage



Typical equipment:

FTP and NFS servers

To overcome the problem that block storage files cannot be shared, file storage was created.

Setting up FTP or NFS services on a server is, in effect, file storage.

Advantages:

A. Low cost, just any machine will do.

B. Facilitate file sharing.

Disadvantages:

A. The read/write rate is low.

B. The transmission rate is slow.

Usage Scenarios:

A. Log storage.

B. File storage with a directory structure.

C….

1.7 Three storage types – object storage



Typical equipment:

Distributed servers with large-capacity hard disks (Swift, S3)

Multiple servers are equipped with large-capacity hard disks and installed with object storage management software to provide read/write functions.

Advantages:

A. Offers the high read/write speed of block storage.

B. Offers the sharing capability of file storage.

Usage Scenarios:

(Suitable for data that is updated infrequently.)

A. Image storage.

B. Video storage.

C….

2. Ceph IO process



2.1 Normal IO flow chart




Steps:

  • 1. The client creates a cluster handler.

  • 2. The client reads the configuration file.

  • 3. The client connects to the Monitor and obtains cluster map information.

  • 4. The client reads from and writes to the primary OSD node based on the CRUSH map.

  • 5. The primary OSD data node writes data to the other two replicas at the same time.

  • 6. Wait for the primary node and the other two replica nodes to finish writing data.


  • 7. After both the primary node and the replica nodes report a successful write, the result is returned to the client.


2.2 New master IO flow chart

Description:

If a newly added OSD1 replaces OSD4 as the primary OSD, OSD1 has no PGs created on it and holds no data, so it cannot handle I/O on those PGs. How does Ceph deal with this?



Steps:

  • 1. The client connects to the Monitor to obtain cluster map information.

  • 2. Because it has no PG data yet, the new primary OSD1 actively reports to the Monitor and asks OSD2 to take over as primary temporarily.

  • 3. The temporary primary OSD2 fully synchronizes its data to the new primary OSD1.

  • 4. For I/O, the client connects directly to the temporary primary OSD2 for read and write operations.

  • 5. OSD2 receives the read/write I/O and simultaneously writes to the other two replica nodes.

  • 6. Wait until OSD2 and the other two replicas have finished writing.

  • 7. OSD2 returns the result to the client after all three data copies are written successfully.

  • 8. Once the data on OSD1 has been fully synchronized, the temporary primary OSD2 relinquishes the primary role.

  • 9. OSD1 becomes the primary node and OSD2 becomes a replica.

2.3 Ceph IO algorithm process



1. File: the file that the user needs to read or write. File -> Object mapping:

A. ino (the File's metadata, the unique ID of the File).

B. ono (the serial number of an object produced by splitting the File; the default block size is 4 MB).

C. oid (the Object ID: ino + ono).

2. Object: the object that RADOS operates on. Ceph uses a static hash function to compute a hash of the oid, mapping the oid to an approximately uniformly distributed pseudo-random value, which is then ANDed with a mask to obtain the pgid. Object -> PG mapping:

A) hash(oid) & mask -> pgid

B) mask = (total number of PGs m) - 1, where m is an integer power of 2.

3. PG (Placement Group): used to organize and map the storage of objects (similar to the slot concept in Redis Cluster). A PG contains many objects. The CRUSH algorithm takes the pgid as input and yields a group of OSDs. PG -> OSD mapping:

A) CRUSH(pgid) -> (osd1, osd2, osd3)

2.4 Ceph IO pseudo-code process

locator = object_name
obj_hash = hash(locator)
pg = obj_hash % num_pg
osds_for_pg = crush(pg)    # returns a list of osds
primary = osds_for_pg[0]
replicas = osds_for_pg[1:]
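Putting 2.3 and 2.4 together, the sketch below walks the full File -> Object -> PG -> OSD chain in runnable Python. The 4 MB stripe size and three-way replication match the defaults described above; crush() here is only a stand-in for the real CRUSH calculation, and the mask form hash(oid) & (num_pg - 1) assumes the PG count is a power of two.

OBJECT_SIZE = 4 * 1024 * 1024          # default 4 MB stripe size
NUM_PG = 128                           # must be a power of two for the mask form

def file_to_objects(ino, file_size):
    # File -> Object: split the file into 4 MB objects named ino + ono.
    count = (file_size + OBJECT_SIZE - 1) // OBJECT_SIZE
    return [f"{ino}.{ono:08x}" for ono in range(count)]

def object_to_pg(oid):
    # Object -> PG: static hash of the oid, masked with (num_pg - 1).
    return hash(oid) & (NUM_PG - 1)

def pg_to_osds(pgid):
    # PG -> OSD: stand-in for CRUSH; returns a stable, fake 3-OSD set.
    return [(pgid * 7 + i) % 12 for i in range(3)]   # pretend there are 12 OSDs

for oid in file_to_objects(ino="10000004f2a", file_size=10 * 1024 * 1024):
    pgid = object_to_pg(oid)
    osds = pg_to_osds(pgid)
    print(oid, "-> pg", pgid, "-> primary osd", osds[0], "replicas", osds[1:])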

2.5 Ceph RBD IO process

Data organization:



Steps:

  • 1. The client creates a pool and must specify the number of PGs for the pool.

  • 2. Create an RBD image in the pool (pool/image) and map it to the client (a command-line sketch follows these steps).

  • 3. The data written by the user is cut into blocks, 4 MB each by default, and each block gets a name (object name + serial number).

  • 4. Each object's replica locations are assigned through its PG.

  • 5. The PG finds three OSD nodes through the CRUSH algorithm, and the object is stored on those three OSDs.

  • 6. Each OSD formats its underlying disk with an XFS file system.

  • 7. Storing an object thus becomes storing a file such as rbd0.object1.file on the OSD.
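A hedged command-line sketch of steps 1 and 2 above (pool name, image name, size, and PG count are placeholders; the flags assume a reasonably recent ceph/rbd client):

# 1. Create a pool and specify its number of PGs.
ceph osd pool create rbd_pool 128

# 2. Create an RBD image in the pool and map it on the client.
rbd create rbd_pool/image1 --size 10240      # size in MB
rbd map rbd_pool/image1                      # e.g. exposes /dev/rbd0

# Use the mapped block device like a local disk.
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/rbd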

2.6 Ceph RBD IO framework diagram



The process of writing data to the OSD node is as follows:

  • 1. The client uses librbd to create a block device and writes data to it.

  • 2. The librados interface is invoked on the client, and the data is mapped layer by layer through pool, RBD image, object, and PG. At the PG layer the data can be placed on three OSD nodes, which have a primary-replica relationship.

  • 3. The client establishes socket communication with the primary OSD node and sends the data to be written to it; the primary OSD then sends the data to the other replica OSD nodes.

2.7 Ceph Pool and PG distribution



Description:

  • A. A pool is a logical partition Ceph uses to store data; it acts as a namespace.

  • B. Each pool contains a configurable number of PGs.

  • C. Objects in a PG are mapped to different OSD nodes.

  • D. Pools are distributed across the entire cluster.

  • E. A pool can serve as a fault isolation domain, isolating data according to different usage scenarios; a few illustrative commands follow this list.
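A few hedged commands that make these concepts tangible (pool name, PG count, and object name are placeholders):

# Create a pool with 128 placement groups and keep 3 replicas per object.
ceph osd pool create mypool 128
ceph osd pool set mypool size 3

# Inspect the pool's PG count and see which PG and OSD set an object maps to.
ceph osd pool get mypool pg_num
ceph osd map mypool some_object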

2.8 Ceph data expansion PG distribution

Scenario data migration process:

A. Current state: three OSD nodes and four PGs.

B. After expansion: four OSD nodes and four PGs.

Before expansion:



After the expansion:



Description:

Each OSD node hosts many PGs, and each PG is automatically spread across different OSD nodes. When the cluster is expanded, some PGs are migrated to the new OSD nodes to rebalance the PG count.

3. Ceph heartbeat mechanism

3.1 Heartbeat introduction

Heartbeat is used to check whether each node is faulty, so that the faulty node can be found in a timely manner and the corresponding troubleshooting process can be started.

Question:

A. Balance the fault detection time with the load caused by heartbeat packets.

B. If the heartbeat frequency is too high, too many heartbeat packets affect system performance.

C. If the heartbeat frequency is too low, it takes longer to discover failed nodes, which affects system availability.

A fault detection strategy should be able to:

Timely: The cluster can detect node anomalies, such as downtime or network interruption, within an acceptable time range.

Appropriate pressure: including pressure on nodes, and pressure on the network.

Tolerate network jitter: The network is occasionally delayed.

Diffusion mechanism: metadata changes caused by node status changes need to be propagated to the whole cluster through some mechanism.

3.2 Ceph heartbeat detection



The OSD node listens on four ports: public, cluster, front, and back.

· Public port: listens for connections from Monitors and clients.

· Cluster port: listens for connections from OSD peers.

· Front port: the NIC clients use to reach the cluster; it is also used here for heartbeats between cluster nodes.

· Back port: the NIC used inside the cluster; heartbeats between cluster nodes run over it.

· hbclient: the messenger used to send ping heartbeats.

3.3 Ceph OSD mutual heartbeat detection



Steps:

A. The OSD nodes in a PG heartbeat with each other and send PING and PONG messages to each other.

B. A check is made every 6 s (in practice a random offset is added to this interval to avoid bursts of checks).

C. If no heartbeat reply is received within 20 s, the peer is added to the failure queue (the corresponding configuration options are sketched below).
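Both intervals are configurable. A minimal ceph.conf sketch, assuming the classic option names and the default values quoted above:

[osd]
# interval between peer heartbeats, in seconds (a random offset is added at runtime)
osd heartbeat interval = 6
# how long to wait for a reply before reporting the peer as failed
osd heartbeat grace = 20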

3.4 Ceph OSD and Mon heartbeat detection



OSD reports to Monitor:

  • A. An event, such as a fault or PG change, occurs on the OSD.

  • B. Within 5 seconds after starting itself.

  • C. The OSD periodically reports to the Monitor.

  • D. The OSD checks the failure information of partner OSDs in failure_queue.

  • E. It sends failure reports to the Monitor, moves the failure information into the failure_pending queue, and removes it from failure_queue.

  • F. When a heartbeat is received again from an OSD that is in failure_queue or failure_pending, the OSD is removed from both queues and the Monitor is told to cancel the previous failure report.

  • G. When the connection to the Monitor is re-established, the reports in failure_pending are put back into failure_queue and sent to the Monitor again.

  • H. The Monitor collects statistics on OSDs that appear to be down.

  • I. The Monitor collects partner failure reports about each OSD.

  • J. When an OSD's failures exceed the threshold and enough OSDs have reported it, the Monitor marks that OSD down.

3.5 Ceph heartbeat detection summary

Ceph determines that an OSD node has failed in two ways: partner OSDs report the failed node, and the Monitor tracks the heartbeats coming from OSDs.

In a timely manner:

Partner OSDs can detect a failed node within seconds and report it to the Monitor; the Monitor then marks the failed OSD down within a few minutes.

Appropriate pressure:

Because of the partner-OSD reporting mechanism, the heartbeat statistics between OSDs and the Monitor act more like an insurance measure, so the interval at which an OSD sends heartbeats to the Monitor can be as long as 600 seconds, and the Monitor's detection threshold can be as long as 900 seconds. Ceph effectively spreads the fault-detection load of the central node across all OSDs, which improves the reliability of the central Monitor and, in turn, the scalability of the whole cluster.

Tolerate network jitter:

After receiving a failure report about an OSD from its partner OSDs, the Monitor does not take the target OSD offline immediately. Instead, it waits until the following conditions are satisfied (the related options are sketched after the list):

1. The target OSD's failure time exceeds a threshold determined dynamically from the fixed osd_heartbeat_grace value and historical network conditions.

2. The number of reports from different hosts reaches mon_osd_min_down_reporters.

3. The failure report has not been cancelled by the source OSD before the first two conditions are met.
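The intervals and thresholds mentioned in this summary correspond to Monitor/OSD options. A hedged ceph.conf sketch, assuming the option names used in releases of that era (names and defaults vary between Ceph versions):

[osd]
# an OSD must report to the monitor at least this often, in seconds
osd mon report interval max = 600

[mon]
# if no report arrives within this window, the monitor may mark the OSD down
mon osd report timeout = 900
# minimum number of distinct reporters before partner failure reports take effect
mon osd min down reporters = 2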

Diffusion:

The Monitor, as the central node, does not broadcast notifications to all OSD nodes and clients after OSDMap updates. Instead, the Monitor lazily waits for OSD nodes and clients to obtain OSDMap updates. This reduces Monitor stress and simplifies interaction logic.

4. Ceph communication framework

4.1 Ceph communication framework categories introduction

There are three different implementations of the network communication framework:

Simple (thread mode)

Features: for each network connection, two threads are created, one for receiving and one for sending.

Disadvantages: a large number of connections produces a large number of threads, which consumes CPU resources and hurts performance.

Async (event-driven I/O multiplexing mode)

Features: this is the approach now widely used for network communication. As of the Kraken (K) release, Async is the default.

XIO (implemented on top of the open-source network communication library Accelio)

Features: this mode depends on the stability of the third-party library Accelio and is currently experimental.

4.2 Ceph communication framework design model

Subscribe/Publish design pattern:

The publish-subscribe pattern, also known as the Observer pattern, is intended to "define a one-to-many dependency between objects, so that when the state of one object changes, all objects that depend on it are notified and updated automatically."
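A minimal Python sketch of this publish-subscribe relationship, loosely mirroring the Messenger/Dispatcher roles described in the next subsection (the class and method names here are illustrative, not Ceph's actual C++ API):

class Dispatcher:
    # Subscriber base class: concrete back ends override ms_dispatch().
    def ms_dispatch(self, message):
        raise NotImplementedError

class Messenger:
    # Publisher: keeps a list of dispatchers and fans messages out to them.
    def __init__(self):
        self.dispatchers = []

    def add_dispatcher_tail(self, dispatcher):
        self.dispatchers.append(dispatcher)      # register a subscriber

    def deliver(self, message):
        for d in self.dispatchers:               # notify every subscriber
            d.ms_dispatch(message)

class OsdService(Dispatcher):
    def ms_dispatch(self, message):
        print("osd service handling:", message)

messenger = Messenger()
messenger.add_dispatcher_tail(OsdService())
messenger.deliver("ping")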

4.3 Ceph communication framework flow chart



Steps:

  • A. The Accepter listens for connection requests from peers and calls SimpleMessenger::add_accept_pipe() to create a Pipe, which is added to SimpleMessenger::pipes to handle the request.

  • B. Pipe is used to read and send messages. Its two main components, Pipe::Reader and Pipe::Writer, handle message reading and sending respectively.

  • C. Messenger acts as the publisher of messages, and each Dispatcher subclass acts as a subscriber. After a message arrives, Messenger reads it through Pipe and forwards it to a Dispatcher for processing.

  • D. Dispatcher is the subscribers' base class. A subscribing back end inherits from this class and registers itself through Messenger::add_dispatcher_tail/head, after which it starts receiving messages.

  • E. DispatchQueue caches incoming messages and wakes up the DispatchQueue::dispatch_thread thread, which finds the appropriate back-end Dispatcher to process each message.



4.4 Ceph communication framework class diagram



4.5 Ceph communication data format

Communication protocol format requires both parties to agree on the data format.

The content of the message is divided into three parts:

· Header // the message header; a sort of envelope for the message

· User data // the actual data to be sent

  o payload // operation metadata

  o middle // reserved field

  o data // the data being read or written

· Footer // the end marker of the message

class Message : public RefCountedObject {
protected:
    ceph_msg_header  header;     // message header
    ceph_msg_footer  footer;     // message footer
    bufferlist       payload;    // "front" unaligned blob
    bufferlist       middle;     // "middle" unaligned blob
    bufferlist       data;       // data payload (page-alignment will be preserved where possible)

    /* recv_stamp is set when the Messenger starts reading the
     * Message off the wire */
    utime_t recv_stamp;
    /* dispatch_stamp is set when the Messenger starts calling
     * dispatch() on its endpoints */
    utime_t dispatch_stamp;
    /* throttle_stamp is the point at which we got throttled */
    utime_t throttle_stamp;
    /* time at which message was fully read */
    utime_t recv_complete_stamp;

    ConnectionRef connection;    // network connection
    uint32_t magic = 0;          // magic number of the message
    boost::intrusive::list_member_hook<> dispatch_q;  // hook used by the dispatch queue
};

struct ceph_msg_header {
    __le64 seq;        // unique sequence number of the message within the current session
    __le64 tid;        // globally unique id of the message
    __le16 type;       // message type
    __le16 priority;   // priority
    __le16 version;    // version number

    __le32 front_len;  // length of the payload
    __le32 middle_len; // length of middle
    __le32 data_len;   // length of data
    __le16 data_off;   // data offset within the object

    struct ceph_entity_name src;  // message source

    /* oldest code we think can decode this. unknown if zero. */
    __le16 compat_version;
    __le16 reserved;
    __le32 crc;        /* header crc32c */
} __attribute__ ((packed));

struct ceph_msg_footer {
    __le32 front_crc, middle_crc, data_crc;  // crc checksums for the three sections
    __le64 sig;        // 64-bit signature of the message
    __u8 flags;        // end-of-message flags
} __attribute__ ((packed));

5. Ceph CRUSH algorithm

5.1 Data distribution algorithm challenges

Data distribution and load balancing:

1. Data is evenly distributed to each node.

2. Load balancing: the load of data access (read and write operations) is balanced across nodes and disks.

Flexible cluster scaling:

1. The system can easily add or delete node devices and handle node failures.

2. After nodes are added or deleted, data is automatically balanced and data is migrated as little as possible.

Support for large-scale clusters:

1. The metadata maintained by the data distribution algorithm should be small and the computation should be light, so that as the cluster grows the overhead of the algorithm stays relatively small.

5.2 Ceph CRUSH algorithm description

CRUSH algorithm: Controlled Scalable Decentralized Placement of Replicated Data

The algorithm that maps a PG to OSDs is called the CRUSH algorithm. (An object needs three replicas, i.e. it must be stored on three OSD nodes.)

The CRUSH algorithm is a pseudo-random process: it can randomly pick an OSD set from all OSD nodes, but the selection result for a given PG does not change, that is, the mapped OSD set is fixed.

5.3 Ceph CRUSH algorithm principle

The CRUSH algorithm relies on two factors:

Hierarchical Cluster Map

Reflects the physical topology of the storage system and defines the static hierarchical topology of the OSD cluster. This hierarchy lets the CRUSH algorithm be rack-aware when selecting OSDs; in other words, rules can place replicas on different racks and in different equipment rooms to keep data safe.

Placement Rules

Rules that determine how copies of objects for a PG are selected. These rules allow users to customize the distribution of copies in the cluster.

● 5.3.1 Hierarchical Cluster Map



The CRUSH map is a tree structure, while the OSDMap mainly records cluster-wide attributes (epoch, fsid, pool information, OSD IP addresses, and so on).

The leaf nodes are devices (i.e. OSDs); the other nodes are called buckets, virtual nodes abstracted from the physical structure. The tree has a single top-level node, the root; the intermediate virtual bucket nodes can be abstractions of data centers, equipment rooms, racks, and hosts.

● 5.3.2 Data distribution strategy Placement Rules

The main features of data distribution strategy Placement Rules are:

1. From which node in the CRUSH map the lookup starts

2. Which node type is used as the fault isolation domain

3. The search mode used to locate replicas (breadth-first or depth-first)

rule replicated_ruleset {                    # rule name, referenced when a pool is created
    ruleset 0                                # number of the rule set
    type replicated                          # pool type: replicated (the other mode is erasure)
    min_size 1                               # a pool using this rule has at least 1 replica
    max_size 10                              # ... and at most 10 replicas
    step take default                        # entry point into the CRUSH map, usually the root bucket
    step chooseleaf firstn 0 type host       # use host as the failure domain, then pick an osd under each chosen host
    step emit                                # end
}
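For reference, a hedged sketch of how such a rule is usually edited and loaded into a running cluster with the standard tools (file names are placeholders):

# Export the current CRUSH map and decompile it to text.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt (for example, add or change a rule), then recompile and inject it.
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new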


● 5.3.3 Bucket random algorithm type

Uniform buckets: suitable when all child nodes have the same weight and items are rarely added or removed.

List buckets: suited to expanding clusters; adding an item produces optimal data movement, but looking up an item takes O(n) time.

Tree buckets: lookup complexity is O(log n); when leaf nodes are added or removed, the node IDs of the other nodes stay unchanged.

Straw buckets: all items "compete" against each other in a similar way. When locating a replica, each item in the bucket corresponds to a straw of random length, and the item with the longest straw wins (is selected). Adding or removing items triggers recalculation, and data movement between subtrees is close to optimal.

5.4 Ceph CRUSH algorithm cases

Description:

The cluster contains both SAS and SSD disks. One business line has higher performance and availability requirements than the others. Can that business line's data be stored only on SSD disks?

Common users:



High quality users:



Configuration rules:
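The original screenshots are not reproduced here. As a hedged, abbreviated sketch of the kind of CRUSH configuration this case uses: two roots, one gathering the SATA/SAS hosts for common users and one gathering the SSD hosts for high-quality users, each with its own rule, so a high-priority pool can be bound to the SSD rule. Bucket names, IDs, and weights are made up, and the host buckets and device entries are omitted.

root sata {
    id -1
    alg straw
    hash 0
    item host-sata-01 weight 1.000
    item host-sata-02 weight 1.000
}

root ssd {
    id -2
    alg straw
    hash 0
    item host-ssd-01 weight 1.000
    item host-ssd-02 weight 1.000
}

rule sata_rule {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take sata
    step chooseleaf firstn 0 type host
    step emit
}

rule ssd_rule {
    ruleset 2
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}

A pool for high-quality users can then be pointed at the SSD rule, for example with ceph osd pool set <pool> crush_ruleset 2 on older releases (crush_rule on newer ones).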



6. Customized Ceph RBD QoS

▍ 6.1 Introduction to QoS

Quality of Service (QoS) originated in network technology; it is used to address problems such as network latency and congestion, and to provide better service guarantees for specified network traffic.

Question:

The total I/O capability of a Ceph cluster, such as bandwidth and IOPS, is limited. How do we keep users from grabbing resources from one another, guarantee that all users in the cluster can get their resources when they need them, and also guarantee the resources of high-priority users? We therefore need to allocate the limited I/O capability.

▍ 6.2 Ceph IO operation types

ClientOp: Read/write I/O requests from clients.

SubOp: I/O requests between OSD nodes, including replica reads and writes triggered by client I/O as well as I/O generated by data synchronization, data scanning, and load balancing.

SnapTrim: snapshot data deletion. After the client issues a snapshot deletion command, the related metadata deletion returns immediately, and background threads then delete the actual snapshot data. The deletion rate can be controlled indirectly by throttling the snaptrim rate.

Scrub: a check for silent data corruption in objects; an ordinary scrub scans only metadata, while a deep scrub scans the whole object data.

Recovery: data recovery and migration, triggered by cluster expansion or shrinkage and by OSDs failing or rejoining.

6.3 Ceph official QoS principles



mClock is an I/O scheduling algorithm based on time tags, first proposed by VMware for centrally managed storage systems. (At present the official QoS module is still half-finished.)

Basic idea:

· Reservation: the minimum I/O resources guaranteed to the client.

· Weight: the client's proportional share of the I/O resources.

· Limit: the maximum I/O resources the client can obtain.

6.4 Customized QOS principle

● 6.4.1 Introduction to token bucket algorithm



Based on the token bucket algorithm, a simple and effective set of QoS functions was implemented to meet the core requirements of cloud platform users.

Basic idea:

  • Drop tokens into the token bucket at a specific rate.

  • Packets are classified according to the preset matching rules. Packets that do not meet the matching rules are directly sent without being processed by the token bucket.

  • If a packet matches the rules, it must be processed by the token bucket. When there are enough tokens in the bucket, the packet is allowed through, and the token count is reduced according to the packet length.

  • If there are not enough tokens in the bucket, the packet cannot be sent; it can be sent only after new tokens have been generated in the bucket. This limits the traffic rate to at most the token generation rate, as the sketch below illustrates.
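A minimal, self-contained Python sketch of the token bucket behaviour just described (independent of the librbd implementation; the rate and capacity values are placeholders):

import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate                  # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def consume(self, n=1):
        # Try to take n tokens; return True if the request may proceed now.
        self._refill()
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# Limit to roughly 100 requests per second with bursts of up to 20.
bucket = TokenBucket(rate=100, capacity=20)
for i in range(5):
    while not bucket.consume():
        time.sleep(0.001)                 # wait for new tokens instead of dropping the I/O
    print("request", i, "dispatched")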

● 6.4.2 RBD token bucket algorithm flow



Steps:

  • The user initiates an asynchronous I/O request to the Image.

  • The request arrives in the ImageRequestWQ queue.

  • The token bucket algorithm is applied in ImageRequestWQ when requests are dequeued.

  • Speed limiting is done through the token bucket algorithm and sent to ImageRequest for processing.

● 6.4.3 RBD token bucket algorithm framework diagram

Existing framework:



Framework diagram with the token bucket algorithm added:



▍ END