At PyCon 2018, Mario Corchero gave a talk on how to make logging painless during development.
The talk covers:
- Why logging matters
- What the logging workflow looks like
- How to log
- How to configure logging
- Common pitfalls when using logging
The core of the presentation is an introduction to the logging module and its configuration.
The importance of logging
During development, if something goes wrong while the program runs, we can step through it with a debugger and pinpoint the problem; issues are easy to troubleshoot. But once the program is developed, we deploy it to the production environment. There the code runs in a black box: we can only observe its external behavior, not the state of each step as it executes. In that environment it is inevitable that something will eventually go wrong, possibly a problem we never encountered during development. What do we do then?
If all we know is the symptom of the current problem, with no other information, then to fix it we have to try to reconstruct the failure from the symptom alone and debug step by step. That is very hard: in all likelihood we cannot reliably reproduce the problem, and the debugging can take an enormous amount of time, making production issues very difficult to fix. But if we had been logging all along, recording timestamps, state, and errors for both normal operation and failures, we could easily trace what happened at run time and troubleshoot quickly.
Logging is therefore essential: software without proper logging can hardly be called production quality. As developers, we need to take logging seriously and do it well.
A process framework for logging
So what counts as a reasonably standard logging workflow in Python? Many people use print statements to write run-time information to the console, then redirect that output to a file at run time to save the stream. This is quite non-standard. Python ships with a standard logging module designed for exactly this purpose. With it we can log more easily, distinguish levels conveniently, and attach extra information such as the time and the module that emitted the record.
Let’s look at the overall framework of the logging process.
As shown in the figure, the overall logging framework can be divided into several parts:
- Logger: the main entry class and the object we log through. We call its methods with a message template and arguments, and it produces a log record, called a Log Record.
- Log Record: the record generated for a single logging call, carrying the message and its metadata.
- Handler: the class that processes a Log Record, sending it to the specified destination in the specified storage format. For example, a handler can write records to a file or ship them to a remote server; the Handler takes care of all of this for us.
- Formatter: a generated Log Record is an object, so to save it as log text it must first be formatted. The Formatter does this, turning the record into a string that is handed back to the Handler for output.
- Filter: when saving logs we may not want to keep everything, only the part we care about, so a Filter is applied before output; for example, keep only logs of a certain level, or only logs containing a certain keyword.
- Parent Handler: handlers can be arranged in a hierarchy, so that different handlers share common configuration and behavior.
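Since the Filter is the one component not demonstrated in the examples later in this article, here is a minimal sketch of how these pieces wire together, using a hypothetical KeywordFilter that keeps only records whose message contains a given keyword:
import logging
import sys

# Hypothetical filter: keep only records whose message contains 'payment'
class KeywordFilter(logging.Filter):
    def filter(self, record):
        # Return True to keep the record, False to drop it
        return 'payment' in record.getMessage()

logger = logging.getLogger('demo.filter')
logger.setLevel(logging.INFO)

handler = logging.StreamHandler(sys.stdout)  # Handler: where records go
handler.setFormatter(logging.Formatter('%(asctime)s - %(message)s'))  # Formatter: how they look
handler.addFilter(KeywordFilter())  # Filter: which records get through
logger.addHandler(handler)

logger.info('payment received')  # kept by the filter
logger.info('heartbeat ok')      # dropped by the filter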
This is the basic architecture of the logging module and the role of each object. With that understood, let's look at how to use the module in detail.
Using the logging module
In general, the logging module has several advantages over print:
- The logging module supports logging levels, so different environments (such as development and production) can be configured with different output levels to record the appropriate logs, which provides flexibility.
- The output of print is printed to the standard output stream, whereas the logging module is more flexible and can be set to any location, such as writing to a file or remote server.
- The logging module has flexible configuration and formatting functions, such as configuring the output of the current module information, running time, etc., which is more convenient and easy to use than the string formatting of print.
To get a feel for the basic usage of the Logging module, let’s use an example:
import logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
logger.info('This is a log info')
logger.debug('Debugging')
logger.warning('Warning exists')
logger.info('Finish')
Here we first import the logging module and then apply the basic configuration through basicConfig, passing level and format. level is set to INFO, meaning that only messages at INFO level and above are output. format uses the asctime, name, levelname, and message fields, which correspond to the timestamp, the logger name, the log level, and the log message. This is the global configuration of logging.
Next, we create a Logger object, the main class for producing log output. Calling its info() method emits an INFO-level message and calling debug() emits a DEBUG-level message, which is very convenient. When creating it we pass in a name; here we use __name__, the current module's name, which is __main__ when the script is run directly and the module's import path when it is imported, so it differs from module to module. We then emit four messages: two INFO, one DEBUG, and one WARNING. The output is as follows:
2018-06-03 13:42:43,526 - __main__ - INFO - This is a log info
2018-06-03 13:42:43,526 - __main__ - WARNING - Warning exists
2018-06-03 13:42:43,526 - __main__ - INFO - Finish
Three log messages appear in the output, each following the specified format. The DEBUG message is missing because we set the output level to INFO in the global configuration, so DEBUG records are filtered out.
If we set the output log level to DEBUG, we can see the DEBUG level log output:
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
Output result:
2018-06-03 13:49:22,770 - __main__ - INFO - This is a log info
2018-06-03 13:49:22,770 - __main__ - DEBUG - Debugging
2018-06-03 13:49:22,770 - __main__ - WARNING - Warning exists
2018-06-03 13:49:22,770 - __main__ - INFO - Finish
Each line carries the time, the logger name, and the level, so logs of different levels can be distinguished and filtered.
Of course, this is only a small part of the logging module. Let’s first look at the parameters of basicConfig:
- filename: the log output file name. If specified, a FileHandler is used instead of the default StreamHandler, so log output is written to the file.
- filemode: the mode used to open the log file when filename is given; 'w' overwrites the file and 'a' appends to it.
- format: the output format of log records, as shown in the example above. For the full list of attributes see Docs.python.org/3/library/l…; some of them are:
- %(levelno)s: the numeric value of the log level.
- %(levelname)s: the name of the log level.
- %(pathname)s: the path of the currently running program, i.e. sys.argv[0].
- %(filename)s: the file name of the currently running program.
- %(funcName)s: the function emitting the log call.
- %(lineno)d: the line number of the log call.
- %(asctime)s: the time the log record was created.
- %(thread)d: the thread ID.
- %(threadName)s: the thread name.
- %(process)d: the process ID.
- %(processName)s: the process name.
- %(module)s: the module name.
- %(message)s: the log message.
- datefmt: the output format for timestamps.
- style: when format is specified, this selects the placeholder style of the format string: '%', '{', or '$'.
- level: the logging level; records at or above this level are output.
- stream: when filename is not given, a StreamHandler is used by default; stream specifies the stream it writes to.
- handlers: the handlers to attach; must be an iterable of Handler objects.
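For instance, the style parameter lets us switch to str.format-style placeholders. A small sketch (assuming Python 3.2 or later, where basicConfig accepts style):
import logging

# style='{' switches the format string to str.format-style placeholders
logging.basicConfig(level=logging.DEBUG,
                    style='{',
                    format='{asctime} - {name} - {levelname} - {message}')
logging.debug('Brace-style formatting works too')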
Here’s another example to get a feel for it:
import logging
logging.basicConfig(level=logging.DEBUG,
filename='output.log',
datefmt='%Y/%m/%d %H:%M:%S',
format='%(asctime)s - %(name)s - %(levelname)s - %(lineno)d - %(module)s - %(message)s')
logger = logging.getLogger(__name__)
logger.info('This is a log info')
logger.debug('Debugging')
logger.warning('Warning exists')
logger.info('Finish')
Here we specify the output file name as output.log and a date format in which the date part becomes %Y/%m/%d. In addition, lineno and module are added to the output format. After running, an output.log file is generated with the following contents:
2018/06/03 14:43:26 - __main__ - INFO - 9 - demo3 - This is a log info
2018/06/03 14:43:26 - __main__ - DEBUG - 10 - demo3 - Debugging
2018/06/03 14:43:26 - __main__ - WARNING - 11 - demo3 - Warning exists
2018/06/03 14:43:26 - __main__ - INFO - 12 - demo3 - Finish
As you can see, the log is output to the file, along with the line number, module name, and so on.
Above we used basicConfig for global configuration. We can also use Formatter and Handler objects directly for more flexible handling. Let's take a look.
Level
The logging module provides the following levels of output logs. Each level corresponds to a specific value, as listed below:
Level | Numeric value |
---|---|
CRITICAL | 50 |
FATAL | 50 |
ERROR | 40 |
WARNING | 30 |
WARN | 30 |
INFO | 20 |
DEBUG | 10 |
NOTSET | 0 |
The highest levels are CRITICAL and FATAL, both of which correspond to values of 50, and the short form WARN is provided for WARNING, both of which correspond to values of 30.
If an output level is set, only logs at or above that level are output. For example, with the level set to INFO, records at INFO level and above, such as WARNING and ERROR, are output, while DEBUG and NOTSET records are not.
import logging
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.WARN)
# Log
logger.debug('Debugging')
logger.critical('Critical Something')
logger.error('Error Occurred')
logger.warning('Warning exists')
logger.info('Finished')
Here we set the output level to WARN and output five different levels of log information. The result is as follows:
Critical Something
Error Occurred
Warning exists
Only CRITICAL, ERROR, and WARNING information is displayed, but DEBUG and INFO information is not displayed.
Handler
Next, let's look at how handlers work, with an example that uses an explicit Handler instead of the global basicConfig:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.INFO)
handler = logging.FileHandler('output.log')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.info('This is a log info')
logger.debug('Debugging')
logger.warning('Warning exists')
logger.info('Finish')
Instead of the global basicConfig configuration, here we first create a Logger object and then give it a FileHandler. The log output is written to output.log as follows:
2018-06-03 14:53:36,467 - __main__ - INFO - This is a log info
2018-06-03 14:53:36,468 - __main__ - WARNING - Warning exists
2018-06-03 14:53:36,468 - __main__ - INFO - Finish
Other handlers are also available for logging. The logging module provides the following handlers:
- StreamHandler: logging.StreamHandler; outputs logs to a stream, which can be sys.stderr, sys.stdout, or a file-like object.
- FileHandler: logging.FileHandler; outputs logs to a file.
- BaseRotatingHandler: logging.handlers.BaseRotatingHandler; the base class for handlers that rotate log files.
- RotatingFileHandler: logging.handlers.RotatingFileHandler; rotates log files by size, with a maximum file size and a backup count.
- TimedRotatingFileHandler: logging.handlers.TimedRotatingFileHandler; rotates log files at timed intervals.
- SocketHandler: logging.handlers.SocketHandler; sends logs to a remote server over TCP.
- DatagramHandler: logging.handlers.DatagramHandler; sends logs to a remote server over UDP.
- SMTPHandler: logging.handlers.SMTPHandler; sends logs to an email address.
- SysLogHandler: logging.handlers.SysLogHandler; sends logs to syslog.
- NTEventLogHandler: logging.handlers.NTEventLogHandler; sends logs to the Windows NT/2000/XP event log.
- MemoryHandler: logging.handlers.MemoryHandler; buffers logs in memory and flushes them to a target handler.
- HTTPHandler: logging.handlers.HTTPHandler; sends logs to an HTTP server via GET or POST.
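Of the rotating handlers, RotatingFileHandler is probably the most common. Here is a minimal sketch of size-based rotation (the file name and limits are arbitrary choices for illustration):
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger('demo.rotate')
logger.setLevel(logging.INFO)

# Keep at most 3 backups of ~1 MB each: app.log, app.log.1, app.log.2, app.log.3
rotating = RotatingFileHandler('app.log', maxBytes=1000000, backupCount=3)
rotating.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(rotating)

logger.info('This record goes to app.log, rotated by size')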
Here we use three handlers to output logs to the console, file, and HTTP server at the same time:
import logging
from logging.handlers import HTTPHandler
import sys
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.DEBUG)
# StreamHandler
stream_handler = logging.StreamHandler(sys.stdout)
stream_handler.setLevel(level=logging.DEBUG)
logger.addHandler(stream_handler)
# FileHandler
file_handler = logging.FileHandler('output.log')
file_handler.setLevel(level=logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
# HTTPHandler
http_handler = HTTPHandler(host='localhost:8001', url='log', method='POST')
logger.addHandler(http_handler)
# Log
logger.info('This is a log info')
logger.debug('Debugging')
logger.warning('Warning exists')
logger.info('Finish')
Before running this, we need an HTTP server listening on port 8001, with a log endpoint to receive the records.
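As a minimal sketch of such a receiver, the standard library's http.server can parse the form-encoded body that HTTPHandler posts (the attribute names below follow the LogRecord fields, which is what HTTPHandler sends by default):
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class LogReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        # HTTPHandler sends the record attributes urlencoded in the body
        length = int(self.headers.get('Content-Length', 0))
        record = parse_qs(self.rfile.read(length).decode('utf-8'))
        # parse_qs maps each key to a list; take the first element
        print(record.get('levelname', ['?'])[0], '-', record.get('msg', [''])[0])
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('localhost', 8001), LogReceiver).serve_forever()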
After running, the console output will print the following:
This is a log info
Debugging
Warning exists
Finish
The output.log file will contain:
2018-06-03 15:13:44,895 - __main__ - INFO - This is a log info
2018-06-03 15:13:44,947 - __main__ - WARNING - Warning exists
2018-06-03 15:13:44,949 - __main__ - INFO - Finish
And the HTTP server receives the same log records.
In this way, we control the multi-destination output of the log by setting up multiple handlers.
It is also worth noting that we set no Formatter on the StreamHandler object, so the console shows only the log message itself, without the time, module, and so on, whereas the FileHandler was given a Formatter via setFormatter(), so its output is fully formatted.
In addition, each Handler can have its own level. A record is emitted only if its level passes both the Logger's level and the Handler's level, so in effect the two thresholds are intersected.
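A small sketch of that interplay (the logger name is arbitrary):
import logging
import sys

logger = logging.getLogger('demo.levels')
logger.setLevel(logging.INFO)  # Logger threshold: INFO

handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.WARNING)  # Handler threshold: WARNING
logger.addHandler(handler)

logger.info('passes the logger, but is blocked by the handler')  # not emitted
logger.warning('passes both thresholds')  # emitted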
Formatter
Formatting can be set globally through basicConfig, as we saw earlier, but we can also format output more flexibly by creating a Formatter object and attaching it to a Handler. For example:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.WARN)
formatter = logging.Formatter(fmt='%(asctime)s - %(name)s - %(levelname)s - %(message)s', datefmt='%Y/%m/%d %H:%M:%S')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
# Log
logger.debug('Debugging')
logger.critical('Critical Something')
logger.error('Error Occurred')
logger.warning('Warning exists')
logger.info('Finished')
Here we create a Formatter, passing the fmt and datefmt parameters, which specify the output format and time format of the log records. The handler is then given this Formatter via setFormatter(). The output is as follows:
2018/06/03 15:47:15 - __main__ - CRITICAL - Critical Something
2018/06/03 15:47:15 - __main__ - ERROR - Error Occurred
2018/06/03 15:47:15 - __main__ - WARNING - Warning exists
This allows us to format the output individually for each Handler, which is very flexible.
Capture the Traceback
When an error occurs, we usually want the detailed traceback recorded for debugging. The logging module makes this easy to do. Let's look at an example:
import logging
logger = logging.getLogger(__name__)
logger.setLevel(level=logging.DEBUG)
# Formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# FileHandler
file_handler = logging.FileHandler('result.log')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
# StreamHandler
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(formatter)
logger.addHandler(stream_handler)
# Log
logger.info('Start')
logger.warning('Something maybe fail.')
try:
    result = 10 / 0
except Exception:
    logger.error('Failed to get result', exc_info=True)
logger.info('Finished')
Here we add the exc_info parameter to the error() call, setting it to True, so the complete traceback is included in the output.
The running results are as follows:
2018-06-03 16:00:15,382 - __main__ - INFO - Start
2018-06-03 16:00:15,382 - __main__ - WARNING - Something maybe fail.
2018-06-03 16:00:15,382 - __main__ - ERROR - Failed to get result
Traceback (most recent call last):
  File "/private/var/books/aicodes/loggingtest/demo8.py", line 23, in <module>
    result = 10 / 0
ZeroDivisionError: division by zero
2018-06-03 16:00:15,383 - __main__ - INFO - Finished
As you can see, this makes it very easy for us to record the error information, and once there is an error, we can easily troubleshoot it.
Sharing configuration
The logging module provides a mechanism for sharing configuration between parent and child modules: a Logger automatically inherits the configuration of its parent Logger based on its name.
For example, here we first define a main.py file:
import logging
import core
logger = logging.getLogger('main')
logger.setLevel(level=logging.DEBUG)
# Handler
handler = logging.FileHandler('result.log')
handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.info('Main Info')
logger.debug('Main Debug')
logger.error('Main Error')
core.run()
Here we configure the log output format and file path, name the Logger main, import another module core, and finally call core's run() method.
Next we define core.py as follows:
import logging
logger = logging.getLogger('main.core')
def run():
    logger.info('Core Info')
    logger.debug('Core Debug')
    logger.error('Core Error')
Here the Logger is named main.core. Note that it starts with main, the name of the Logger in main.py, followed by a dot; this makes it a child of that Logger, so it reuses the configuration from main.py instead of configuring everything again.
A result.log file will be generated after running, with the following contents:
2018-06-03 16:55:56,259 - main - INFO - Main Info
2018-06-03 16:55:56,259 - main - ERROR - Main Error
2018-06-03 16:55:56,259 - main.core - INFO - Core Info
2018-06-03 16:55:56,259 - main.core - ERROR - Core Error
You can see that both parent and child modules use the same output configuration.
In this way, the logging output configuration is defined once in the entry file, and child modules share it simply by creating Logger objects whose names are prefixed with the parent module's Logger name.
Configuration file
Hard-coding configuration in the source is not a good habit. It is better to write the configuration into a configuration file and read it at run time; that makes it far easier to manage and maintain. Let's walk through an example. First we define a YAML configuration file:
version: 1
formatters:
  brief:
    format: "%(asctime)s - %(message)s"
  simple:
    format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    formatter: brief
    level: INFO
    stream: ext://sys.stdout
  file:
    class: logging.FileHandler
    formatter: simple
    level: DEBUG
    filename: debug.log
  error:
    class: logging.handlers.RotatingFileHandler
    level: ERROR
    formatter: simple
    filename: error.log
    maxBytes: 10485760
    backupCount: 20
    encoding: utf8
loggers:
  main.core:
    level: DEBUG
    handlers: [console, file, error]
root:
  level: DEBUG
  handlers: [console]
Here we define the formatters, handlers, loggers, and root sections, which correspond to the Formatter, Handler, and Logger configuration; the parameters match those of the corresponding constructors.
Next we define a main entry file, main.py, which looks like this:
import logging
import logging.config
import os

import yaml

import core

def setup_logging(default_path='config.yaml', default_level=logging.INFO):
    path = default_path
    if os.path.exists(path):
        with open(path, 'r', encoding='utf-8') as f:
            # safe_load avoids the implicit-Loader pitfall of plain yaml.load
            config = yaml.safe_load(f)
            logging.config.dictConfig(config)
    else:
        logging.basicConfig(level=default_level)

def log():
    logging.debug('Start')
    logging.info('Exec')
    logging.info('Finished')

if __name__ == '__main__':
    yaml_path = 'config.yaml'
    setup_logging(yaml_path)
    log()
    core.run()
Here we define a setup_logging() method that reads the YAML file and passes the configuration to the logging module for global initialization through logging.config.dictConfig().
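Note that dictConfig() works just as well with a plain Python dict, so the same configuration could also be embedded directly in code. A condensed sketch:
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {'brief': {'format': '%(asctime)s - %(message)s'}},
    'handlers': {'console': {'class': 'logging.StreamHandler',
                             'formatter': 'brief',
                             'level': 'INFO',
                             'stream': 'ext://sys.stdout'}},
    'root': {'level': 'DEBUG', 'handlers': ['console']},
}
logging.config.dictConfig(config)
logging.info('Configured from a plain dict')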
This module also introduces another module, core, so we define core.py as follows:
import logging

logger = logging.getLogger('main.core')

def run():
    logger.info('Core Info')
    logger.debug('Core Debug')
    logger.error('Core Error')
This file is unchanged from the earlier example.
Looking back at the configuration file: the main entry file main.py uses the root logger configuration, whose only handler is console, so it outputs to the console alone. core.py uses the main.core logger, whose handlers are console, file, and error, so it outputs to the console, to a regular file, and to a rotating file.
After running, we can see the results printed to the console:
2018-06-03 17:07:12,727 - Exec
2018-06-03 17:07:12,727 - Finished
2018-06-03 17:07:12,727 - Core Info
2018-06-03 17:07:12,727 - Core Info
2018-06-03 17:07:12,728 - Core Error
2018-06-03 17:07:12,728 - Core Error
Note that Core Info and Core Error appear twice on the console: the main.core logger writes to its own console handler and, through propagation, to the root logger's console handler as well. The debug.log file contains the output of core.py:
2018-06-03 17:07:12,727 - main.core - INFO - Core Info
2018-06-03 17:07:12,727 - main.core - DEBUG - Core Debug
2018-06-03 17:07:12,728 - main.core - ERROR - Core Error
As you can see, a configuration file is a very flexible way to define handlers, formatters, loggers, and other settings, and it is intuitive and easy to maintain. In real projects, this style of configuration is recommended.
These are the basic usage methods of the logging module. With this module, we can easily manage and maintain logs, which will bring great convenience to our work.
Common pitfalls in logging use
Log messages often need values interpolated into them. Many of us would build the string with format(), but this is not the best approach; there is a better one. Here is a comparison:
import logging

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# bad
logging.debug('Hello {0}, {1}!'.format('World', 'Congratulations'))
# good
logging.debug('Hello %s, %s!', 'World', 'Congratulations')
These are two ways to produce the same message. The first builds the string eagerly with the string's format() method, paying the formatting cost even if the record is later filtered out. The second passes a template with %s-style placeholders as the first argument and the values as subsequent arguments; the logging module performs the substitution only if the record is actually emitted. The second method is therefore recommended.
The running results are as follows:
2018-06-03 22:27:51,220 - root - DEBUG - Hello World, Congratulations!
2018-06-03 22:27:51,220 - root - DEBUG - Hello World, Congratulations!
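To see why the lazy form matters, here is a small sketch with a hypothetical Expensive class whose string conversion is costly; with the lazy form, the cost is never paid for records below the configured level:
import logging
import time

logging.basicConfig(level=logging.INFO)

class Expensive:
    def __str__(self):
        time.sleep(1)  # simulate a costly computation
        return 'expensive value'

# bad: str.format() renders the message (and pays the cost) even though
# DEBUG is below the configured level and the record is discarded
logging.debug('value: {0}'.format(Expensive()))

# good: the record is filtered before formatting, so __str__ never runs
logging.debug('value: %s', Expensive())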
In addition, when handling exceptions we often format the exception into the message string, but it is better to let logging print the traceback directly, as in this example:
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
try:
    result = 5 / 0
except Exception as e:
    # bad
    logging.error('Error: %s', e)
    # good
    logging.error('Error', exc_info=True)
    # good
    logging.exception('Error')
Copy the code
If we only format the error into the message string, the log contains no traceback. But if we pass exc_info=True, or log the error with the exception() method, the full traceback is included.
The running results are as follows:
2018-06-03 22:24:31,927 - root - ERROR - Error: division by zero
2018-06-03 22:24:31,927 - root - ERROR - Error
Traceback (most recent call last):
  File "/private/var/books/aicodes/loggingtest/demo9.py", line 6, in <module>
    result = 5 / 0
ZeroDivisionError: division by zero
2018-06-03 22:24:31,928 - root - ERROR - Error
Traceback (most recent call last):
  File "/private/var/books/aicodes/loggingtest/demo9.py", line 6, in <module>
    result = 5 / 0
ZeroDivisionError: division by zero
That concludes our overview of the logging module. Well, it's time to ditch print and start logging.
References
- Docs.python.org/3/library/l…
- www.cnblogs.com/dahu-daqing…