When some tasks in a project need to be deployed to run on a schedule or for a long time, we usually rely on logs to record the information surrounding exceptions and errors in the program so that it is not lost.

When it comes to logging in Python, the built-in logging standard library is hard to avoid. Although logging has a modular design and lets you combine different handlers, the configuration is often cumbersome. In addition, using logging in multi-threaded or multi-process scenarios can lead to garbled or lost log records if it is not handled carefully.

There is, however, a library that removes most of this cumbersome configuration while providing the same functionality as logging, keeps the logging process safe and compatible with the standard library, and goes further with exception tracking and code backtraces. It is called Loguru, a real gift for lazy people like me.

Loguru is a library designed to make logging enjoyable in Python. The main concept of Loguru is that there is only one Logger.

Without logging or Loguru

import sys

class Logger:
    """Duplicate everything written to stdout into a log file."""

    def __init__(self, filename='./logger.txt'):
        self.terminal = sys.stdout
        self.log = open(filename, 'a')

    def write(self, message):
        # Echo to the real terminal and append to the file.
        self.terminal.write(message)
        self.log.write(message)
        self.log.flush()

    def flush(self):
        self.log.flush()

# From here on, every print() also ends up in logger.txt.
sys.stdout = Logger()

Simply wrapping this into a class records everything that is printed and saves the program's error output. It is quick and crude but not very practical: because the file is opened in append mode, the more you print, the larger the file grows.

Using logging

import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

logger.info('this is an info message')
logger.warning('this is a warning message')
logger.error('this is an error message')
logger.info('this is another info message')

Compared with Loguru, logging requires noticeably more manual configuration to achieve the same result.
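
For comparison, getting file output from logging usually means wiring up a handler and a formatter by hand. A minimal sketch, where the file name is just an example:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# File output needs an explicit handler with its own formatter.
file_handler = logging.FileHandler('app.log')  # example file name
file_handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(file_handler)

logger.info('this message also goes to app.log')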

Loguru installation instructions

Run the pip command directly in a terminal:

pip install loguru

Simple to use

Loguru is easy to use: import the ready-made logger object it ships with and call it directly.

Attach a list of log levels

level      value   method
trace      5       logger.trace()
debug      10      logger.debug()
info       20      logger.info()
success    25      logger.success()
warning    30      logger.warning()
error      40      logger.error()
critical   50      logger.critical()

from loguru import logger

# Note: TRACE is below the default DEBUG level of the stderr sink,
# so the first line will not appear unless the level is lowered.
logger.trace('this is a trace message!')
logger.debug('this is a debug message!')
logger.info('this is an info message!')
logger.success('this is a success message!')
logger.warning('this is a warning message!')
logger.error('this is an error message!')
logger.critical('this is a critical message!')

When run in PyCharm, each level is also rendered in its own color, which makes the output easier to read.


Detailed usage

Parameters of add()

def add(
        self,
        sink,
        *,
        level=_defaults.LOGURU_LEVEL,
        format=_defaults.LOGURU_FORMAT,
        filter=_defaults.LOGURU_FILTER,
        colorize=_defaults.LOGURU_COLORIZE,
        serialize=_defaults.LOGURU_SERIALIZE,
        backtrace=_defaults.LOGURU_BACKTRACE,
        diagnose=_defaults.LOGURU_DIAGNOSE,
        enqueue=_defaults.LOGURU_ENQUEUE,
        catch=_defaults.LOGURU_CATCH,
        **kwargs
    ):
    pass

  1. sink (file-like object, str, pathlib.Path, callable, coroutine function or logging.Handler) — An object in charge of receiving formatted logging messages and propagating them to an appropriate endpoint. [where the logs are sent]
  2. level (int or str, optional) — The minimum severity level from which logged messages should be sent to the sink.
  3. format (str or callable, optional) — The template used to format logged messages before being sent to the sink.
  4. filter (callable, str or dict, optional) — A directive optionally used to decide, for each logged message, whether it should be sent to the sink or not. [log filtering]
  5. colorize (bool, optional) — Whether the color markups contained in the formatted message should be converted to ANSI codes for terminal coloration, or stripped otherwise. If None, the choice is made automatically based on whether the sink is a tty.
  6. serialize (bool, optional) — Whether the logged message and its record should first be converted to a JSON string before being sent to the sink. [whether to serialize]
  7. backtrace (bool, optional) — Whether the exception trace should be extended upward, beyond the catching point, to show the full stack trace which generated the error.
  8. diagnose (bool, optional) — Whether the exception trace should display variable values to ease debugging. This should be set to False in production to avoid leaking sensitive data.
  9. enqueue (bool, optional) — Whether the messages to be logged should first pass through a multiprocess-safe queue before reaching the sink. This is useful when logging to a file from multiple processes. It also has the advantage of making logging calls non-blocking.
  10. catch (bool, optional) — Whether errors occurring while the sink handles log messages should be automatically caught. If True, an exception message is displayed on sys.stderr but the exception is not propagated to the caller, preventing your app from crashing.
  11. **kwargs — Additional parameters that are only valid for configuring a coroutine or file sink.
  12. rotation (str, int, datetime.time, datetime.timedelta or callable, optional) — A condition indicating when the current logged file should be closed and a new one started.
  13. retention (str, int, datetime.timedelta or callable, optional) — A directive filtering old files that should be removed during rotation or at the end of the program.
  14. compression (str or callable, optional) — A compression or archive format to which log files should be converted at closure.
  15. delay (bool, optional) — Whether the file should be created as soon as the sink is configured, or delayed until the first logged message. Defaults to False.
  16. mode (str, optional) — The opening mode, as for the built-in open() function. Defaults to "a" (open the file in appending mode).
  17. buffering (int, optional) — The buffering policy, as for the built-in open() function. Defaults to 1 (line-buffered file).
  18. encoding (str, optional) — The file encoding, as for the built-in open() function. If None, it defaults to locale.getpreferredencoding().
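
Below is a sketch that combines several of these parameters; the file name and the format template are only examples:

import sys
from loguru import logger

logger.remove()  # drop the default stderr sink before adding custom ones

# Console sink: DEBUG and above, colorized, with a custom format template.
logger.add(sys.stderr, level='DEBUG', colorize=True,
           format='{time:YYYY-MM-DD HH:mm:ss} | {level} | {message}')

# File sink: INFO and above, queued so that it stays safe across processes.
logger.add('./app.log', level='INFO', enqueue=True)

logger.info('sent to both the console and the file')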

Writing to a log file

Just pass the path of the file you want to write to as the first argument (I prefer to keep it in the current folder).

logger.add('./log.txt')
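
add() returns a handler id, so a sink that is no longer needed can later be detached with logger.remove():

handler_id = logger.add('./log.txt')
logger.info('this line is written to log.txt')
logger.remove(handler_id)  # later messages no longer go to the file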

format

Key        Description                                              Remark
elapsed    The time elapsed since the start of the program
exception  The formatted exception if any, None otherwise
extra      The dict of attributes bound by the user (see bind())
file       The file where the logging call was made
function   The function from which the logging call was made
level      The severity used to log the message
line       The line number in the source code
message    The logged message (not yet formatted)
module     The module where the logging call was made
name       The name where the logging call was made
process    The process in which the logging call was made           process ID by default; the name is also available
thread     The thread in which the logging call was made            thread ID by default; the name is also available
time       The aware local time when the logging call was made
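
These keys can be combined into a custom format string passed to add(); the template below is only an illustration:

logger.add('./log.txt',
           format='{time:YYYY-MM-DD HH:mm:ss.SSS} | {level: <8} | {name}:{function}:{line} - {message}')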

Setting rotation

Once the current file grows beyond the given size, it is closed and a new one is started. Example sizes: '100KB', '100MB', '100GB'

logger.add('./log{time}.txt', rotation='100KB')
for n in range(10000):
    logger.info(f'test - {n}')
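
Besides a file size, rotation also accepts time-based conditions, for example:

logger.add('./log{time}.txt', rotation='00:00')   # start a new file every day at midnight
logger.add('./log{time}.txt', rotation='1 week')  # start a new file once the current one is a week old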

Setting compression

Closed log files can be archived in a given format. Example formats: 'gz', 'tar', 'zip'

logger.add('./log{time}.txt', rotation='100KB', compression='zip')
for n in range(10000):
    logger.info(f'test - {n}')

Setting retention

Set how many old log files are retained:

logger.add('./log{time}.txt', rotation='100KB', compression='zip', retention=1)
for n in range(10000):
    logger.info(f'test - {n}')
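
retention also accepts a duration, in which case files older than that are removed instead of keeping a fixed count:

logger.add('./log{time}.txt', rotation='100KB', compression='zip', retention='10 days')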

Setting serialize

Serialize each log record to JSON before it is written:

logger.add('./log{time}.txt', serialize=True)
logger.info('hello, world! ')

Tracing exceptions, in two ways

To trace where an exception came from, set backtrace=True. During development, also set diagnose=True so that variable values are shown in the trace to ease debugging; set it to False in production to avoid leaking sensitive data.

from loguru import logger

logger.add("./log.txt", backtrace=True, diagnose=True)


def func(a, b):
    return a / b


def nested(c):
    try:
        func(5, c)
    except ZeroDivisionError:
        logger.exception("What?!")


if __name__ == "__main__":
    nested(0)

The second way is the decorator Loguru provides, which records the traceback directly. After the program runs, the traceback can be found in the log together with the values of the variables at the time.

@logger.catch
def my_function(x, y, z):
    # An error? It's caught anyway!
    return 1 / (x + y + z)
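
Calling it with arguments that trigger the error shows the effect: the ZeroDivisionError is caught, logged together with the values of x, y and z, and the program keeps running.

my_function(0, 0, 0)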

Error capture also works across multiple processes: the log shows clearly which process raised which error, which makes the cause easy to trace.

from loguru import logger
from multiprocessing import Process

logger.add("./log.txt", backtrace=True, diagnose=True)


def func(a, b):
    return a / b


def nested(c):
    try:
        func(5, c)
    except ZeroDivisionError:
        logger.exception("What?!")


if __name__ == "__main__":
    process_list = []
    for i in range(1, 4):
        process_list.append(Process(target=nested, args=(0,), name=f'Process {i}'))
    for p in process_list:
        p.start()
    for p in process_list:
        p.join()
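
As noted in the parameter list above, when several processes write to the same log file it is safer to route the messages through the multiprocess-safe queue by adding enqueue=True:

logger.add("./log.txt", backtrace=True, diagnose=True, enqueue=True)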


I hope this is helpful. If anything here is wrong, please point it out, thank you!

A like or a follow would be much appreciated.