Background
The structure of the target table `visitors` is as follows:
CREATE TABLE `visitors` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'Auto-increment primary key',
  `openid` varchar(100) DEFAULT '' COMMENT 'openid',
  `unionid` varchar(100) DEFAULT '' COMMENT 'unionid',
  `member_id` int(11) DEFAULT '0' COMMENT 'member_id',
  `account_id` varchar(50) DEFAULT '' COMMENT 'account_id',
  -- some fields omitted
  `created_at` datetime DEFAULT NULL COMMENT 'Creation time',
  `updated_at` datetime DEFAULT NULL COMMENT 'Update time',
  PRIMARY KEY (`id`)
)
During the National Day holiday last October, every record in the visitors table was updated to the same member_id. The root cause was not found at the time, so the problem was shelved. The feature built on visitors had been live for two months by then, so I assumed it was stable.
So three months passed.
On New Year's Day, I took a long vacation and went home. On January 3, the member_id of more than 500,000 rows in the visitors table was again updated to the same user. This time the problem got my full attention. On January 5, I spent half a day repairing the damaged data from the previous day's backup (the changes from the last 10 hours were lost). A guard was added at the code level, but the root cause of the full-table update was still not found, so the problem was shelved again (the full table would never be updated again, but data integrity could still suffer; I treated that as a low-probability event and not a concern).
So two months passed.
In March, after the business logic was simplified, many visitors rows ended up with certain fields never updated. Careful analysis showed that the master/slave replication lag behind the New Year's Day incident was the cause, and the whole chain of accidents finally made sense.
In short: because of master/slave replication lag, a newly created row cannot be read back immediately, which breaks the subsequent update of that row.
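The failure mode can be sketched with a toy model of the read/write split. The names here (`cluster`, `replicate`, and so on) are illustrative, not the real proxy's API; the point is only the ordering: an insert lands on the master, and a read issued before replication catches up sees nothing.

```go
package main

import "fmt"

// A toy model of the read/write split: writes go to the "master" map,
// reads come from the "slave" map, and replication is an explicit step
// (in production it runs asynchronously and lags behind).
type cluster struct {
	master map[string]int64 // openid -> row id
	slave  map[string]int64
}

func newCluster() *cluster {
	return &cluster{master: map[string]int64{}, slave: map[string]int64{}}
}

// insert writes to the master only, like the proxy routing INSERTs.
func (c *cluster) insert(openid string, id int64) { c.master[openid] = id }

// read goes to the slave, like the proxy routing SELECTs.
func (c *cluster) read(openid string) (int64, bool) {
	id, ok := c.slave[openid]
	return id, ok
}

// replicate copies master state to the slave.
func (c *cluster) replicate() {
	for k, v := range c.master {
		c.slave[k] = v
	}
}

func main() {
	c := newCluster()
	c.insert("openid", 1)

	// Read immediately after the insert: replication has not run yet,
	// so the freshly created row is invisible -- the bug in this article.
	if _, ok := c.read("openid"); !ok {
		fmt.Println("row not visible yet: replication lag")
	}

	c.replicate()
	if id, ok := c.read("openid"); ok {
		fmt.Println("after replication, id =", id)
	}
}
```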
Demo
Direct cause: the database connection goes through a proxy, which routes writes to the master and reads to the slave.
Create a record
insert into visitors(openid, unionid) values("openid", "unionid");
Update record
select * from visitors where openid="openid";
update visitors set member_id = 100, account_id='gog' where openid="openid";
Code analysis:
type Visitors struct {
	ID           int64     `json:"id"`
	Openid       string    `json:"openid"`
	UnionId      string    `json:"unionid"`
	MemberId     int64     `json:"member_id"`
	CreatedAt    time.Time `json:"created_at"`
	UpdatedAt    time.Time `json:"updated_at"`
	RegisteredAt time.Time `json:"registered_at"`
	AccountId    string    `json:"account_id"`
}

func GetVisitorByOpenId(openId string) (*Visitors, error) {
	visitor := new(Visitors)
	err := MysqlDB.Where("openid=?", openId).Last(&visitor).Error
	return visitor, err
}

visitor, err := GetVisitorByOpenId(userInfo.OpenID)
// err handling logic omitted
// update
err = yidui_mini_mysql.UpdateMemberAttrs(visitor, map[string]interface{}{
	"member_id":     member.ID,
	"registered_at": time.Now(),
	"account_id":    authService.AccountId,
})

func UpdateMemberAttrs(visitor *Visitors, elements map[string]interface{}) error {
	if visitor == nil || visitor.ID == 0 {
		return errors.New("UpdateMemberAttrs visitor is not correct")
	}
	return MysqlDB.Model(visitor).Updates(elements).Error
}
1. For a newly created row, GetVisitorByOpenId may fail to find the data (replication lag), so the visitor variable points at a zero-value struct. Without the `visitor == nil || visitor.ID == 0` check in UpdateMemberAttrs, the ORM builds the UPDATE from a model whose primary key is blank, emits no WHERE clause, and updates the entire table. That check was added on New Year's Day; after it, only a small number of rows lost data integrity, which showed up in later log queries as rejections from the null check.
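To see why a zero-ID model is dangerous, here is a minimal sketch (in spirit only, not real GORM internals) of how an ORM that treats a zero primary key as "blank" ends up emitting an UPDATE with no WHERE clause; `buildUpdate` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildUpdate mimics how an ORM can turn a model with a zero primary
// key into a WHERE-less statement: the zero ID is treated as "blank",
// so no condition is emitted and the UPDATE hits every row.
func buildUpdate(table string, attrs map[string]interface{}, id int64) string {
	cols := make([]string, 0, len(attrs))
	for k := range attrs {
		cols = append(cols, k+" = ?")
	}
	sort.Strings(cols) // deterministic order for the example

	q := "UPDATE " + table + " SET " + strings.Join(cols, ", ")
	if id != 0 { // a zero primary key produces no WHERE clause
		q += " WHERE id = ?"
	}
	return q
}

func main() {
	attrs := map[string]interface{}{"member_id": 100, "account_id": "gog"}
	fmt.Println(buildUpdate("visitors", attrs, 42)) // scoped to one row
	fmt.Println(buildUpdate("visitors", attrs, 0))  // full-table update!
}
```

This is exactly the guard that `visitor.ID == 0` restores: refuse to run the update at all when the primary key is missing.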
2. In March, business changes led us to streamline the insert and update statements, shortening the time between the two operations. That made it far more likely that UpdateMemberAttrs was called with an empty visitor. The fix: route both reads and writes through the master connection, so we never read from a lagging slave.
However, a better answer to these replication-delay issues is to keep the complete data object in memory after a row is created or updated, and perform later operations directly on that object rather than fetching it from MySQL again.
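A minimal sketch of that "keep the object" pattern, with illustrative names (`createVisitor`, `updateMemberID`) standing in for the real INSERT and UPDATE; with MySQL, the auto-increment ID comes back from the insert, so the caller already holds a complete row object and never needs to read a lagging slave:

```go
package main

import "fmt"

type Visitor struct {
	ID       int64
	Openid   string
	MemberID int64
}

var nextID int64 // stands in for MySQL's auto-increment counter

// createVisitor stands in for the INSERT: the new primary key is
// returned with the object, so no follow-up SELECT is needed.
func createVisitor(openid string) *Visitor {
	nextID++
	return &Visitor{ID: nextID, Openid: openid}
}

// updateMemberID operates on the object kept from creation instead of
// re-querying. The guard mirrors the one in UpdateMemberAttrs.
func updateMemberID(v *Visitor, memberID int64) error {
	if v == nil || v.ID == 0 {
		return fmt.Errorf("visitor is not correct")
	}
	// Conceptually: UPDATE visitors SET member_id = ? WHERE id = ?
	// -- always scoped by the ID we already hold.
	v.MemberID = memberID
	return nil
}

func main() {
	v := createVisitor("openid") // keep the full object
	if err := updateMemberID(v, 100); err != nil {
		panic(err)
	}
	fmt.Println(v.ID, v.MemberID)
}
```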
Partially updating data through an ORM in Go is cumbersome; even GORM is not very convenient for this. In hindsight, writing native SQL statements seems the more appropriate choice.
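For the native-SQL route, a small helper (hypothetical, assuming `database/sql` with `?` placeholders) can build a parameterized partial UPDATE from a map of columns, which keeps the statement always scoped by ID:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// partialUpdate builds a parameterized UPDATE for an arbitrary subset
// of columns. It returns the query and the args to pass to db.Exec.
func partialUpdate(table string, attrs map[string]interface{}, id int64) (string, []interface{}) {
	keys := make([]string, 0, len(attrs))
	for k := range attrs {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sort for stable SQL

	sets := make([]string, len(keys))
	args := make([]interface{}, 0, len(keys)+1)
	for i, k := range keys {
		sets[i] = k + " = ?"
		args = append(args, attrs[k])
	}
	args = append(args, id) // the WHERE id = ? argument comes last

	q := fmt.Sprintf("UPDATE %s SET %s WHERE id = ?", table, strings.Join(sets, ", "))
	return q, args
}

func main() {
	q, args := partialUpdate("visitors", map[string]interface{}{
		"member_id":  100,
		"account_id": "gog",
	}, 42)
	fmt.Println(q)
	fmt.Println(args)
	// With database/sql this would run as: db.Exec(q, args...)
}
```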