If you have used TensorFlow extensively, you have probably used TensorBoard. TensorBoard is an advanced visualization tool, and it is one of the reasons many people are fond of TensorFlow: no other deep learning library has developed such a complete set of visualization tools. Without TensorBoard, visualizing the training process means saving the variables yourself and drawing the curves by hand.

Because of this, many users of other deep learning frameworks have worked on porting TensorBoard to their framework, and there are quite a few successful examples, or I would not be writing this article. Here are some of the most popular approaches.

1. Use Crayon

Crayon is a framework that exposes TensorBoard to any language, and its documentation covers the details. Currently it only supports Python and Lua, and the installation process is cumbersome and requires Docker, so it is not recommended.
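For reference, here is a minimal sketch of logging a scalar through Crayon's Python client, pycrayon, assuming a Crayon server is already running in Docker on the default port (the experiment name and values are placeholders):

from pycrayon import CrayonClient

# Connect to the Crayon server (started separately via Docker)
cc = CrayonClient(hostname="localhost")

# Create an experiment; each experiment shows up as its own run in TensorBoard
exp = cc.create_experiment("demo")

# Log a scalar value at a given training step
exp.add_scalar_value("train_loss", 1.25, step=1)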

2. Use tensorboard_logger

tensorboard_logger is a library developed by TeamHG-Memex that writes TensorBoard event files directly, and its documentation describes the interface. The installation is a bit cumbersome: you need to install TensorFlow as well as their tensorboard_logger package. Once installed, follow the documentation to log values and view them in TensorBoard.
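Its interface is quite simple; here is a minimal sketch based on the configure / log_value calls from its documentation (the run directory name and values are just examples):

from tensorboard_logger import configure, log_value

# Directory where the TensorBoard event files will be written
configure("runs/my_run")

# Log a scalar for each step; dummy values here for illustration
for step in range(100):
    log_value('loss', 1.0 / (step + 1), step)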

3. Import a script that implements TensorBoard logging

pip install tensorflow is a quick way to install the CPU version of TensorFlow. Then copy the code from this site into your project directory: create a new logger.py file and paste the code into it.

Then write from logger import Logger in your Python file, and before training define the folder where the TensorBoard event files will be stored, e.g. logger = Logger('./logs'). You can use any folder to store the TensorBoard files.
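Put together, the setup is just this (assuming logger.py sits next to your training script and defines the Logger class you copied):

from logger import Logger

# All TensorBoard event files go into ./logs
logger = Logger('./logs')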

Then, during training, you can record the variables you care about as follows:

# (1) Log the scalar values
info = {
    'loss': loss.data[0],
    'accuracy': accuracy.data[0]
}

for tag, value in info.items():
    logger.scalar_summary(tag, value, step)

# (2) Log values and gradients of the parameters (histogram)
for tag, value in model.named_parameters():
    tag = tag.replace('.', '/')
    logger.histo_summary(tag, to_np(value), step)
    logger.histo_summary(tag + '/grad', to_np(value.grad), step)

# (3) Log the images
info = {
    'images': to_np(img.view(-1, 28, 28)[:10])
}

for tag, images in info.items():
    logger.image_summary(tag, images, step)
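Note that to_np is not part of logger.py; in Yunjey's code it is a small helper that converts a PyTorch tensor (a Variable, in the old-style API used above) to a NumPy array, roughly:

def to_np(x):
    # Detach from the graph, move to CPU, and convert to a NumPy array
    return x.data.cpu().numpy()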

Then run tensorboard --logdir=./logs in the current directory. Here you need to pass your own folder name; mine was defined above as ./logs. You will see the following output:

[Image: https://pic4.zhimg.com/v2-db7e0c44b49460b9468e50bf9ef5ae37_r.png]



Type http://0.0.0.0:6006/ into your browser and you’ll be taken to the TensorBoard interface, as shown below

[Image: https://pic3.zhimg.com/v2-3603082c6b5dfffd942220350cd7bf22_r.png]
[Image: https://pic2.zhimg.com/v2-70c78e92845da9b1bbcdc6d9a518d741_r.png]
[Image: https://pic2.zhimg.com/v2-07a63f1285a486d8512b3ba8635a3b35_r.png]



With this, we have successfully used TensorBoard for visualization in PyTorch.

This article's code is from Yunjey's GitHub.

The full code has been uploaded to GitHub.

Welcome to my Zhihu column, In-Depth Alchemy.

Welcome to my blog.