In the previous article, "A well-designed cache keeps basic services from going down", we introduced the DB-layer cache. To review, the main points of the DB-layer cache design can be summarized as:
- Caches are deleted, not updated
- Only single-row records are cached, i.e. the row record corresponding to the primary key
- Unique indexes cache only primary key values, not row records
- Anti-cache-penetration design: empty results are cached for one minute by default, which also guards against cache breakdown and avalanche
- Multi-row records are not cached
Preface
In large business systems, adding a cache at the persistence layer can, for most single-record queries, noticeably relieve the access pressure on the persistence layer. In real business, however, reads are not limited to single rows: facing many multi-row queries, the persistence layer again comes under heavy pressure. Moreover, in high-concurrency scenarios such as seckill (flash-sale) systems and course-selection systems, relying on the persistence-layer cache alone is not realistic. In this article, we introduce the biz cache practice in go-zero.
Application Scenario Example
- Course selection system
- Content social system
- Seckill (flash-sale) system
In systems like these, we can add another cache layer at the business layer to store the key information in the system: for example, each student's course selections and the remaining quota of each course in a course-selection system, or the content published over a recent period in a content social system.
Next, let’s take the content social system as an example.
In a content social system, we usually look up a list of content first, then click an item to view its details.
Before the biz cache is added, the query flow for content information looks like this:

From the diagram above, and from the previous article on cache design, we know that fetching the content list cannot rely on the DB-layer cache. If we add a cache at the business layer to store the key information of the list (or even the full records), accessing many rows is no longer a problem. That is what biz redis does. Now let's look at the design, assuming that a single-row record in the content system contains the following fields:
Field name | Field type | Remarks |
---|---|---|
id | string | Content id |
title | string | Title |
content | string | Detailed content |
createTime | time.Time | Creation time |
Our goal is to fetch batches of content lists while putting as little access pressure on the db as possible. First we use the redis sorted set data structure for storage. Depending on how much field information needs to be stored, there are two redis storage schemes:
- Caching partial information

Compress and store the key field information (such as the id) according to certain rules, with the millisecond value of createTime as the score. The advantage of this scheme is that it saves redis storage space; the disadvantage is that the list details require a second lookup (although that lookup hits the row-record cache in the persistence layer).
- Caching full information

Compress and store all published content according to certain rules, again with the millisecond value of createTime as the score. The advantage of this scheme is that business create, read, update, and delete operations all go through redis, the db layer needs no row-record cache at this point, and the persistence layer serves only for data backup and restore. The disadvantage is also obvious: it demands more storage space and higher configuration, so costs increase.
Sample code:
```go
package content

import (
	"time"

	"github.com/tal-tech/go-zero/core/stores/redis"
)

type Content struct {
	Id         string    `json:"id"`
	Title      string    `json:"title"`
	Content    string    `json:"content"`
	CreateTime time.Time `json:"create_time"`
}

const bizContentCacheKey = `biz#content#cache`

// AddContent provides content storage.
func AddContent(r *redis.Redis, c *Content) error {
	v := compress(c)
	_, err := r.Zadd(bizContentCacheKey, c.CreateTime.UnixNano()/1e6, v)
	return err
}

// DelContent provides content removal.
func DelContent(r *redis.Redis, c *Content) error {
	v := compress(c)
	_, err := r.Zrem(bizContentCacheKey, v)
	return err
}

// compress serializes the content into the string stored in redis.
func compress(c *Content) string {
	// todo: do it yourself
	var ret string
	return ret
}

// uncompress restores the content from the stored string.
func uncompress(v string) *Content {
	// todo: do it yourself
	var ret Content
	return &ret
}

// ListByRangeTime provides data query by time range.
func ListByRangeTime(r *redis.Redis, start, end time.Time) ([]*Content, error) {
	kvs, err := r.ZrangebyscoreWithScores(bizContentCacheKey, start.UnixNano()/1e6, end.UnixNano()/1e6)
	if err != nil {
		return nil, err
	}

	var list []*Content
	for _, kv := range kvs {
		data := uncompress(kv.Key)
		list = append(list, data)
	}

	return list, nil
}
```
In the example above, no expiration time is set on the redis key; the add, delete, update, and query operations are all synchronized to redis. We chose this design because we consider the content-list reads of a social system to be high-traffic. But some data is not accessed as frequently as the content of a social system: its traffic surges for a period of time and then drops off, or stops entirely, for a long while afterward. How should we think about cache design in that case? In go-zero's content practice, there are two solutions to this problem:
- Add a memory cache: store the data whose traffic may surge in an in-process memory cache. A common approach uses a map, which is simple to implement but needs a timer for cache-expiration handling. Another option is the Cache in go-zero's collection library, which is dedicated to in-memory cache management.
- Use biz redis and set a reasonable expiration time
Conclusion
The two schemes above cover most multi-row cache scenarios. Where the multi-row query volume is not large, there is no need to bring in biz redis right away; you can try letting the DB-layer cache handle it first.
Project address
Github.com/tal-tech/go…
You are welcome to use go-zero, and to star the project to support us!