Preface
In daily development there is a constant need to roll and split log files. Among the major Go logging libraries, some implement rolling and sharding, some do not, and those that do implement it differently. What if we want log rolling to work with any logging library?
The best way is to replace the writer used to initialize the logger with one that already implements log rolling; then rolling works no matter which logging library is used. Lumberjack provides exactly such a writer. For example, when using the Logrus logging library, we simply pass an initialized Lumberjack writer to SetOutput().
Source code analysis
1. Code statistics
Running `cloc --by-file-by-lang --exclude-dir=.github --exclude-lang=YAML,Markdown [project-dir]` gives the following results (statistics for markup languages such as YAML are omitted):
File | blank | comment | code |
---|---|---|---|
./lumberjack_test.go | 162 | 91 | 563 |
./linux_test.go | 38 | 9 | 158 |
./testing_test.go | 12 | 19 | 60 |
./rotate_test.go | 4 | 2 | 19 |
./example_test.go | 2 | 2 | 13 |
./lumberjack.go | 69 | 137 | 335 |
./chown_linux.go | 3 | 1 | 15 |
./chown.go | 3 | 1 | 7 |
SUM: | 293 | 262 | 1170 |
The lumberjack library totals 1170 lines of Go code, of which 813 (= 563+158+60+19+13) are test code, leaving only 357 lines of actual implementation.
2. Example
We’ll start with the official example, which looks like this:
log.SetOutput(&lumberjack.Logger{
	Filename:   "/var/log/myapp/foo.log",
	MaxSize:    500, // megabytes
	MaxBackups: 3,
	MaxAge:     28,   // days
	Compress:   true, // disabled by default
})
Usage is simple and direct:
- Initialize a lumberjack.Logger
- Set the logging library's output to the Logger initialized above
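To make the idea concrete, here is a minimal sketch of a writer that rolls files by size. All names (miniRoller, demo, the file pattern) are hypothetical, not lumberjack's; swapping such a writer in via SetOutput() is all a logging library needs:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// miniRoller is a hypothetical, minimal io.Writer that starts a new file
// once maxSize bytes have been written to the current one. It only
// illustrates the idea of hiding rotation behind the writer interface;
// lumberjack's Logger is the production-grade version of this.
type miniRoller struct {
	dir     string
	maxSize int64
	seq     int
	size    int64
	file    *os.File
}

func (w *miniRoller) Write(p []byte) (int, error) {
	// Roll to a fresh file when there is no file yet or this write would overflow.
	if w.file == nil || w.size+int64(len(p)) > w.maxSize {
		if w.file != nil {
			w.file.Close()
		}
		w.seq++
		f, err := os.Create(filepath.Join(w.dir, fmt.Sprintf("app-%03d.log", w.seq)))
		if err != nil {
			return 0, err
		}
		w.file, w.size = f, 0
	}
	n, err := w.file.Write(p)
	w.size += int64(n)
	return n, err
}

// demo writes a few messages through the standard log package and
// returns how many files the writer produced.
func demo() int {
	dir, _ := os.MkdirTemp("", "roll")
	defer os.RemoveAll(dir)

	log.SetOutput(&miniRoller{dir: dir, maxSize: 40})
	for i := 0; i < 5; i++ {
		log.Printf("message %d", i)
	}
	files, _ := filepath.Glob(filepath.Join(dir, "app-*.log"))
	return len(files)
}

func main() {
	fmt.Println("files created:", demo())
}
```

The logging library never learns that rotation exists; it just calls Write().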
3. Read in detail
Since only three files in the library are actual implementation (outside the test code), and two of them simply wrap the operating system's chown API, we will focus on the lumberjack.go file.
The Lumberjack Logger is essentially a struct that implements Go's io.Writer and io.Closer interfaces. This means that when we use it as the underlying writer of a logging library, only its Write() and Close() methods are used. Let's see how the Write() method works.
func (l *Logger) Write(p []byte) (n int, err error) {
	l.mu.Lock() // Lock so that concurrent goroutine writes cannot interleave log data
	defer l.mu.Unlock()
	writeLen := int64(len(p))
	// If a single write is larger than the maximum log file size, return an error.
	// This means each write must be smaller than the MaxSize configured on the Logger.
	if writeLen > l.max() {
		return 0, fmt.Errorf(
			"write length %d exceeds maximum file size %d", writeLen, l.max(),
		)
	}
	// Open the file if it is not open yet
	if l.file == nil {
		if err = l.openExistingOrNew(len(p)); err != nil {
			return 0, err
		}
	}
	// Rotate when the pending data plus the data already written would exceed the maximum file size
	if l.size+writeLen > l.max() {
		if err := l.rotate(); err != nil {
			return 0, err
		}
	}
	n, err = l.file.Write(p) // Write the log data
	l.size += int64(n)       // Update the running count of bytes written
	return n, err
}
The process is simple:
- Acquire the lock
- Check whether the data to be written exceeds the maximum file size
- If the file is not open yet, open it
- Rotate when the pending data plus the data already written would exceed the maximum file size
- Write the log data and update the running count of bytes written
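The branching above can be sketched as a small pure function; the names here (decide, the action constants) are mine for illustration, not lumberjack's:

```go
package main

import (
	"errors"
	"fmt"
)

// action models the three outcomes of the size checks in Write().
type action int

const (
	actionReject action = iota // a single write larger than the maximum: error out
	actionRotate               // would overflow the current file: rotate first
	actionAppend               // fits: just append
)

// decide mirrors the two size comparisons made by Write().
func decide(writeLen, currentSize, maxSize int64) (action, error) {
	if writeLen > maxSize {
		return actionReject, errors.New("write length exceeds maximum file size")
	}
	if currentSize+writeLen > maxSize {
		return actionRotate, nil
	}
	return actionAppend, nil
}

func main() {
	fmt.Println(decide(600, 0, 500))   // reject: single write too large
	fmt.Println(decide(100, 450, 500)) // rotate: current file would overflow
	fmt.Println(decide(100, 100, 500)) // append: plenty of room
}
```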
The main logic lives in the openExistingOrNew() and rotate() methods.
func (l *Logger) openExistingOrNew(writeLen int) error {
	l.mill() // What does this do? We'll see shortly
	filename := l.filename()
	info, err := osStat(filename)
	if os.IsNotExist(err) {
		return l.openNew()
	}
	if err != nil {
		return fmt.Errorf("error getting log file info: %s", err)
	}
	if info.Size()+int64(writeLen) >= l.max() {
		return l.rotate() // rotate() is called again, same as the logic above
	}
	file, err := os.OpenFile(filename, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return l.openNew() // If opening the file fails, open a new one and keep writing
	}
	l.file = file
	l.size = info.Size()
	return nil
}
That leaves rotate(), openNew(), and mill(), of which mill() is the one whose name does not reveal what it does. Let's look at rotate() first, because openNew() and mill() are also called by rotate():
func (l *Logger) rotate() error {
	if err := l.close(); err != nil {
		return err
	}
	if err := l.openNew(); err != nil {
		return err
	}
	l.mill()
	return nil
}
The rotate() logic looks quite simple: close the current log file, create a new one, and call mill(). But some questions remain:
- When the new file is created, it must not collide with the name of the original file. How is that handled?
- How are backups that are too numerous or too old cleaned up?
To answer these questions, let's move on to openNew():
func (l *Logger) openNew() error {
	err := os.MkdirAll(l.dir(), 0755)
	if err != nil {
		return fmt.Errorf("can't make directories for new logfile: %s", err)
	}
	name := l.filename()
	mode := os.FileMode(0600)
	info, err := osStat(name)
	if err == nil {
		// Copy the mode off the old logfile.
		mode = info.Mode()
		// Move the existing file: rename it with a timestamp suffix
		newname := backupName(name, l.LocalTime)
		if err := os.Rename(name, newname); err != nil {
			return fmt.Errorf("can't rename log file: %s", err)
		}
		// this is a no-op anywhere but linux
		if err := chown(name, info); err != nil {
			return err
		}
	}
	// we use truncate here because this should only get called when we've moved
	// the file ourselves. if someone else creates the file in the meantime,
	// just wipe out the contents.
	f, err := os.OpenFile(name, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, mode)
	if err != nil {
		return fmt.Errorf("can't open new logfile: %s", err)
	}
	l.file = f
	l.size = 0 // Reset the byte counter
	return nil
}
The specific format of the renamed backup:
func backupName(name string, local bool) string {
	// ... part of the code omitted ...
	timestamp := t.Format(backupTimeFormat) // backupTimeFormat = "2006-01-02T15-04-05.000"
	// e.g. /dir/xxx-service.log is renamed to /dir/xxx-service-2021-01-01T01-01-01.000.log
	return filepath.Join(dir, fmt.Sprintf("%s-%s%s", prefix, timestamp, ext))
}
So lumberjack renames the original log file by appending a timestamp, freeing the original name for a new file where writing continues.
mill(), which every method above makes a point of calling, seems important:
// startMill is a sync.Once instance, and millCh is a bool channel
func (l *Logger) mill() {
	l.startMill.Do(func() {
		l.millCh = make(chan bool, 1)
		go l.millRun()
	})
	select {
	case l.millCh <- true:
	default:
	}
}
This function starts a millRun() goroutine, and sync.Once.Do() guarantees the goroutine is started only once globally. In other words, apart from starting millRun() on the first call, the function's main job is to send a true signal to millCh; the default branch makes the send non-blocking when a signal is already pending. We'll have to dig deeper into millRun():
func (l *Logger) millRun() {
	for range l.millCh {
		_ = l.millRunOnce()
	}
}
millRun() just reads from millCh and calls millRunOnce() each time a signal arrives.
func (l *Logger) millRunOnce() error {
// If no maximum backup count, no maximum age, and no compression are configured, there is nothing to do
if l.MaxBackups == 0 && l.MaxAge == 0 && !l.Compress {
return nil
}
// Fetch all old files
files, err := l.oldLogFiles()
if err != nil {
return err
}
var compress, remove []logInfo
// If the maximum number of old logs to be backed up exceeds the threshold, plan the list to be deleted
if l.MaxBackups > 0 && l.MaxBackups < len(files) {
// Preserved looks like a map, but this is used as a set in context, since the value of each key stored in it is true
preserved := make(map[string]bool)
var remaining []logInfo
for _, f := range files {
// Only count the uncompressed log file or the
// compressed log file, not both.
fn := f.Name()
if strings.HasSuffix(fn, compressSuffix) {
fn = fn[:len(fn)-len(compressSuffix)]
}
preserved[fn] = true
// When this set exceeds the maximum number of files to be backed up, the next set is added to the list to be deleted
if len(preserved) > l.MaxBackups {
remove = append(remove, f)
} else {
// Put everything in the reserved list
remaining = append(remaining, f)
}
}
files = remaining // Update the list
}
// Check whether the files in the list have exceeded the maximum retention period set
if l.MaxAge > 0 {
diff := time.Duration(int64(24*time.Hour) * int64(l.MaxAge))
cutoff := currentTime().Add(-1 * diff)
var remaining []logInfo
for _, f := range files {
if f.timestamp.Before(cutoff) {
// The excess is put in the list to be deleted
remove = append(remove, f)
} else {
// Put everything in the reserved list
remaining = append(remaining, f)
}
}
files = remaining // Update the list
}
// If compression is required
if l.Compress {
for _, f := range files {
if !strings.HasSuffix(f.Name(), compressSuffix) {
// Put it in the list of files to compress
compress = append(compress, f)
}
}
}
// Delete files that do not need to be kept
for _, f := range remove {
errRemove := os.Remove(filepath.Join(l.dir(), f.Name()))
if err == nil && errRemove != nil {
err = errRemove
}
}
// Perform compression
for _, f := range compress {
fn := filepath.Join(l.dir(), f.Name())
errCompress := compressLogFile(fn, fn+compressSuffix)
if err == nil && errCompress != nil {
err = errCompress
}
}
return err
}
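The retention bookkeeping of millRunOnce(), a map used as a set with the compressed and uncompressed variants of a backup counted once, can be sketched as a standalone function (partitionBackups is my name, not lumberjack's):

```go
package main

import (
	"fmt"
	"strings"
)

// partitionBackups splits backup file names (assumed newest first, as
// oldLogFiles() returns them) into keep and remove lists, counting a
// compressed/uncompressed pair under one logical name.
func partitionBackups(names []string, maxBackups int) (keep, remove []string) {
	preserved := make(map[string]bool) // used as a set
	for _, fn := range names {
		base := strings.TrimSuffix(fn, ".gz") // pair foo.log with foo.log.gz
		preserved[base] = true
		if len(preserved) > maxBackups {
			remove = append(remove, fn) // overflow: schedule for deletion
		} else {
			keep = append(keep, fn)
		}
	}
	return keep, remove
}

func main() {
	names := []string{"app-3.log", "app-2.log", "app-2.log.gz", "app-1.log"}
	keep, remove := partitionBackups(names, 2)
	fmt.Println("keep:", keep)     // keep: [app-3.log app-2.log app-2.log.gz]
	fmt.Println("remove:", remove) // remove: [app-1.log]
}
```

Note how app-2.log.gz survives even with maxBackups = 2: it shares a set entry with app-2.log, so a backup and its compressed copy count as one.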
As you can see from the logic above, millRunOnce() is what cleans up and compresses log files. But note one detail: when planning the deletion list, the code simply walks the old files in order and puts the overflow into the remove list, which suggests that oldLogFiles() returns the files already sorted by time.
func (l *Logger) oldLogFiles() ([]logInfo, error) {
files, err := ioutil.ReadDir(l.dir())
if err != nil {
return nil, fmt.Errorf("can't read log file directory: %s", err)
}
logFiles := []logInfo{}
prefix, ext := l.prefixAndExt()
for _, f := range files {
if f.IsDir() {
continue
}
// timeFromName extracts the time suffix from the file name; the returned t has type time.Time
if t, err := l.timeFromName(f.Name(), prefix, ext); err == nil {
logFiles = append(logFiles, logInfo{t, f})
continue
}
if t, err := l.timeFromName(f.Name(), prefix, ext+compressSuffix); err == nil {
logFiles = append(logFiles, logInfo{t, f})
continue
}
// error parsing means that the suffix at the end was not generated
// by lumberjack, and therefore it's not a backup file.
}
// Sure enough, this is the sort operation
sort.Sort(byFormatTime(logFiles))
return logFiles, nil
}
Here we use logInfo and byFormatTime types:
type logInfo struct {
timestamp time.Time
os.FileInfo
}
// byFormatTime implements sort.Interface
type byFormatTime []logInfo
// Less sorts by timestamp, exactly as the caller above requires:
// newer files come first, older files after,
// i.e. a file with a larger (later) timestamp compares as "less"
func (b byFormatTime) Less(i, j int) bool {
return b[i].timestamp.After(b[j].timestamp)
}
func (b byFormatTime) Swap(i, j int) {
b[i], b[j] = b[j], b[i]
}
func (b byFormatTime) Len() int {
return len(b)
}