1. Introduction to Model Log
Model Log is a lightweight, Python 3 based tool for visualizing the evaluation metrics of machine learning and deep learning model training. Used together with TensorFlow, PyTorch, or PaddlePaddle, it records the hyperparameters and the Loss, Accuracy, Precision, and F1 values produced during training, and displays and compares them as curves; all of this takes three easy steps.
GitHub project address: github.com/NLP-LOVE/Mo…
A model can be trained many times with different hyperparameters, and each run can be recorded with Model Log, which makes the tool a handy instrument for parameter tuning. Below is the Loss curve recorded during model training with the tool. Visit the online demo: mantchs.com/model_log.h…
The figure clearly shows the training results of the two models, and the hyperparameters that changed between runs are highlighted in the table for convenient model analysis.
2. Model Log features
- Lightweight, zero configuration, minimal API, works right out of the box.
- You only need to add the model's hyperparameters and evaluation metrics through the API; this takes just three steps.
- Changed hyperparameters are highlighted, making model analysis easier.
- Training data is detected, collected, and visualized automatically, with no manual intervention.
- Data is stored in a lightweight local SQLite database; multiple users can work at the same time, and each sees only their own data.
- The visualization component is built on the ECharts framework with interactive charts, so the metric values and trends of every epoch can be seen clearly.
3. Model Log demo address
Visit the online experience version: mantchs.com/model_log.h…
4. Install Model Log
Model Log requires Python 3 and is installed with pip:
pip install model-log
Note: pip installs the model-log command into Python's bin directory, which may not be on your PATH. If entering model-log directly reports "command not found", run the command from that bin directory instead.
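If you are not sure where that bin directory is on your system, this standard Python one-liner (general Python knowledge, not part of Model Log) prints the scripts directory of the current interpreter:

python3 -c "import sysconfig; print(sysconfig.get_path('scripts'))"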
5. Use Model Log
5.1 Starting the Web End
After Model Log is installed successfully, run the following command in a Linux or Mac terminal, or at the Windows command line:
model-log
Port 5432 is used by default; you can specify a different port with, for example, -p=5000 in the startup command. If the command is not found, run it from the Python bin directory (for example Python/3.7/bin).
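For example, to start the Web end on port 5000 instead of the default 5432:

model-log -p=5000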
After startup, open http://127.0.0.1:5432 in your browser.
You can also visit the online version: mantchs.com/model_log.h…
- The front page of the Web end is a list of projects; a project can contain multiple models, which can be compared visually in a single chart.
- The Web end automatically detects whether a new model has started training; if so, it jumps directly to the corresponding evaluation metric page (such as Loss) and automatically fetches the metric data for display.
- Multiple users can use the tool at the same time by adding nicknames; the lightweight local SQLite database ensures that each user sees only their own data.
- You can toggle the evaluation curves of different models by clicking the legend below the chart.
5.2 Model Log API Usage
Easy to use in three steps
- Step 1: Create a ModelLog instance and add the required properties.
from model_log.modellog import ModelLog

"""
:param nick_name: str, user nickname.
:param project_name: str, project name. A new project is created if the name does not exist.
:param project_remark: str, project remarks; empty by default.
"""
model_log = ModelLog(nick_name='mantch', project_name='Demo Entity Recognition', project_remark='')

"""
:param model_name: str, model name.
"""
model_log.add_model_name(model_name='BILSTM_CRF model')

"""
:param remark: str, model remarks.
"""
model_log.add_model_remark(remark='Model notes')

"""
:param param_dict: dict, dictionary of training parameters.
:param param_type: str, parameter type, e.g. a TF parameter or a Word2Vec parameter.
"""
model_log.add_param(param_dict={'lr': 0.01}, param_type='tf_param')
- Step 2: Add evaluation metric data at each epoch of model training; the available metrics are listed in the code below.
When this API is called for the first time, the data set above (model name, remarks, etc.) is persisted to the SQLite database, and the Web end automatically fetches the evaluation metric data for graphical display.
""" :param metric_name: STR, evaluation indicator name, optional ['train_loss', 'test_loss', 'test_ACC ', 'test_recall', 'test_precision', 'test_F1'] :param metric_value: Float, evaluates the indicator value. :param epoch: int, training cycle Metric_name parameter can select only the preceding six parameters. When the API is called for the first time, the data (model name, remarks, etc.) set above will be persisted to the SQLite database, and the Web end will automatically obtain the data for graphical display. The API can be used to add metrics for training sets and test sets at the end of each EPOCH cycle, and the Web side automatically picks up the data. "" " model_log.add_metric(metric_name='train_loss', metric_value=4.5646, epoch=1) Copy the code
- Step 3: After model training completes, add the best evaluation results.
""" :param best_name: STR, best evaluation indicator name, :param best_value: float, best evaluation indicator value. :param BEST_EPOCH: int, the training cycle adds the best evaluation data in the current model training, which is generally added at the end of the model training. "" " model_log.add_best_result(best_name='best_loss', best_value=1.2122, best_epoch=30) """ Close the SQLite database connection """ model_log.close() Copy the code
5.3 Model Log Usage Examples
MNIST handwritten digit recognition: github.com/NLP-LOVE/Mo…