How PyTorch uses GPU acceleration (converting data between CPU and GPU)

Related reading:
· Understanding Numpy, Tensor and Variable in PyTorch
· Speed comparison of mainstream deep learning hardware (CPU, GPU, TPU)
· TensorFlow & Keras GPU
1. Problem description
In deep learning development, GPU acceleration can greatly improve our efficiency. For speed comparisons, see the author's blog post [Deep Application] · Speed comparison of mainstream deep learning hardware (CPU, GPU, TPU). The conclusion: compared with an ordinary laptop CPU (i5-8250U), an entry-level graphics card (MX150) improves speed by about 8x, while a high-performance graphics card (GTX 1080 Ti) improves it by about 80x. With multiple GPUs, training is faster still. A GPU is therefore recommended for regular training.
Using a GPU in PyTorch is different from using one in TensorFlow. If you do not specify a device, TensorFlow automatically moves data and operations to the GPU when it detects one. PyTorch, like MXNet, requires you to explicitly specify where data is stored and where operations run. This is more flexible, but also a bit cumbersome: if you forget to convert at any point, you will get a runtime error.
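The explicit placement described above can be sketched as follows. This is a minimal illustration (the tensor names and shapes are made up for the example); note that every operand must live on the same device:

```python
import torch

# Pick the device explicitly -- PyTorch will not move data for you.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(2, 3)                  # created on the CPU by default
x = x.to(device)                       # must be moved explicitly
w = torch.randn(3, 4, device=device)   # or created on the device directly

# Both operands live on the same device, so this works; mixing a CPU
# tensor with a GPU tensor here would raise a RuntimeError instead.
y = x @ w
print(y.device)
```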
This post explains, at the level of data storage, how to convert data between CPU and GPU, and helps you learn how to use GPU acceleration in PyTorch.
2. Principle explanation
You need to install the GPU version of PyTorch before using the GPU. Conda is recommended:
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
To check whether a GPU is available, use a global variable use_gpu for later use:
import torch

use_gpu = torch.cuda.is_available()
If a GPU is available, use_gpu is True; otherwise it is False. When a GPU is available but we do not want to use it, we can simply assign use_gpu = False.
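A common pattern (an optional refinement, not required by the steps below) is to wrap this flag in a torch.device object, so later code can call .to(device) uniformly:

```python
import torch

use_gpu = torch.cuda.is_available()
# use_gpu = False  # uncomment to force the CPU even when a GPU is present

# Wrap the flag in a device object for uniform .to(device) calls later.
device = torch.device("cuda" if use_gpu else "cpu")
print(device)
```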
During training we need to move the data, the network, and the loss function to the GPU:
1. When constructing the network, move the network and the loss function to the GPU
model = get_model()
loss_f = torch.nn.CrossEntropyLoss()
if use_gpu:
    model = model.cuda()
    loss_f = loss_f.cuda()
2. During training, move each batch of data to the GPU
if use_gpu:
    x, y = x.cuda(), y.cuda()
3. To retrieve results, move them from the GPU back to the CPU
if use_gpu:
    loss = loss.cpu()
    acc = acc.cpu()
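The three steps above can be tied together in one runnable sketch. The toy linear model and random mini-batch below stand in for get_model() and a real DataLoader, which this post does not define:

```python
import torch

use_gpu = torch.cuda.is_available()

# Toy model and loss stand in for get_model() and the real training setup.
model = torch.nn.Linear(10, 2)
loss_f = torch.nn.CrossEntropyLoss()
if use_gpu:                       # step 1: move model and loss to the GPU
    model = model.cuda()
    loss_f = loss_f.cuda()

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(4, 10)            # a fake mini-batch of 4 samples
y = torch.randint(0, 2, (4,))     # fake integer class labels
if use_gpu:                       # step 2: move each batch to the GPU
    x, y = x.cuda(), y.cuda()

out = model(x)
loss = loss_f(out, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

if use_gpu:                       # step 3: bring results back for CPU-side use
    loss = loss.cpu()
print(float(loss))                # now a plain Python number
```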
For further data manipulation, see the blog post Understanding Numpy, Tensor and Variable in PyTorch for an in-depth explanation and conversion tips.
Here is a collection of introductory PDF books covering TensorFlow, PyTorch, and MXNet. They are recommended because they are easy to understand and suitable for beginners. The list is as follows: Simple and crude TensorFlow2 (latest Chinese edition), Hands-on Deep Learning PyTorch (latest Chinese edition), and Hands-on Deep Learning MXNet (latest Chinese edition).
The author focuses on sharing deep learning theory and application development techniques, and will often post practical deep learning content. If you have any questions while learning or applying deep learning, feel free to discuss them with me on this page.
Xiao Song is a CSDN blog expert and a Zhihu deep learning columnist.