Introduction:

PyTorch, the deep learning framework, is gaining momentum, and I've seen it used widely on GitHub. Read this article carefully and you will learn:

  • How to create a Tensor
  • How to accelerate a Tensor on the GPU
  • The common attributes of a Tensor
  • The common methods of a Tensor

Creating a Tensor

We all know that NumPy is a popular extension library that supports a wide range of multi-dimensional array and matrix operations. But NumPy has no concept of computation graphs, deep learning, or gradients, and unlike a Tensor it cannot be accelerated on the GPU. Today we're going to talk about PyTorch's basic concept: the Tensor.

A Tensor is an N-dimensional array, conceptually the same as a NumPy array, except that a Tensor can track computation graphs and compute gradients.

1. Create from NumPy

import torch
import numpy as np

numpy_array = np.array([1, 2, 3])
torch_tensor1 = torch.from_numpy(numpy_array)  # shares memory with the NumPy array
torch_tensor2 = torch.Tensor(numpy_array)      # uses the default tensor type
torch_tensor3 = torch.tensor(numpy_array)      # infers the dtype from the data

Note that torch.Tensor() is an alias for the default tensor type torch.FloatTensor(), which means it returns a torch.FloatTensor. The default tensor type can also be changed:

torch.set_default_tensor_type(torch.DoubleTensor)

torch.tensor() generates a torch.LongTensor, torch.FloatTensor, or torch.DoubleTensor depending on the input data type.
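To see the difference in practice, here is a minimal sketch (assuming the default tensor type hasn't been changed, and that NumPy uses 64-bit integers on your platform):

import torch
import numpy as np

int_array = np.array([1, 2, 3])        # an integer NumPy array
print(torch.Tensor(int_array).dtype)   # torch.float32 -- always the default type
print(torch.tensor(int_array).dtype)   # torch.int64   -- inferred from the data
print(torch.tensor([1.5, 2.5]).dtype)  # torch.float32 -- inferred from the data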

Of course, we can convert a Tensor back to a NumPy array with numpy():

numpy_array = torch_tensor1.numpy()        # if the tensor is on the CPU
numpy_array = torch_tensor1.cpu().numpy()  # if the tensor is on the GPU
print(type(numpy_array))  # <class 'numpy.ndarray'>
      

Note that a Tensor on the GPU must first be moved to the CPU before it can be converted to a NumPy array.
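If you don't know in advance which device a tensor lives on, you can wrap the check in a small helper (the name to_numpy is just for illustration):

def to_numpy(t):
    # Move the tensor to the CPU first if it lives on the GPU
    return t.cpu().numpy() if t.is_cuda else t.numpy()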

2. Create from a Python built-in type

lst = [1, 2, 3]
torch_tensor1 = torch.tensor(lst)
tp = (1, 2, 3)
torch_tensor2 = torch.tensor(tp)

3. Other ways

# Create a Tensor filled with a single value
torch_tensor1 = torch.full([2, 3], 2)
# Create a Tensor of all ones
torch_tensor2 = torch.ones([2, 3])
# Create a Tensor of all zeros
torch_tensor3 = torch.zeros([2, 3])
# Create an identity (diagonal) Tensor
torch_tensor4 = torch.eye(3)
# Create a random integer Tensor in the interval [1, 10)
torch_tensor5 = torch.randint(1, 10, [2, 2])
# etc...
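A few more creation functions worth knowing, as a quick sketch:

torch.arange(0, 10, 2)   # tensor([0, 2, 4, 6, 8])
torch.linspace(0, 1, 5)  # 5 evenly spaced values from 0 to 1
torch.rand(2, 3)         # uniform random values in [0, 1)
torch.randn(2, 3)        # samples from the standard normal distribution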

When creating a Tensor, you can also specify its data type and the device it should be stored on:

torch_tensor = torch.zeros([2, 3], dtype=torch.float64, device=torch.device('cuda:0'))
torch_tensor.dtype    # torch.float64
torch_tensor.device   # cuda:0
torch_tensor.is_cuda  # True
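Requesting cuda:0 on a machine without a GPU raises an error, so a common pattern (a sketch, not required by the API) is to check availability first:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
torch_tensor = torch.zeros([2, 3], dtype=torch.float64, device=device)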

Tensor acceleration

There are two ways to put a Tensor on the GPU for acceleration.

The first is to define a CUDA data type:

dtype = torch.cuda.FloatTensor
gpu_tensor = torch.randn(1, 2).type(dtype)  # convert the Tensor to a CUDA tensor

The second way is to put the Tensor directly on the GPU (recommended).

gpu_tensor = torch.randn(1, 2).cuda(0)  # put the Tensor directly on the first GPU
gpu_tensor = torch.randn(1, 2).cuda(1)  # put the Tensor directly on the second GPU

And putting a Tensor back on the CPU is just as easy:

cpu_tensor = gpu_tensor.cpu()
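As a side note, newer PyTorch code usually moves tensors with .to(), which makes it easy to write device-agnostic code; a minimal sketch:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tensor = torch.randn(1, 2).to(device)  # moves to the GPU if one is available
tensor = tensor.to('cpu')              # and back to the CPU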

Common Tensor attributes

1. Check the Tensor data type

tensor1 = torch.ones([2, 3])
tensor1.dtype  # torch.float32

2. Check the Tensor dimensions

tensor1.shape  # torch.Size([2, 3])
tensor1.ndim   # 2

3. Check whether the Tensor is stored on the GPU

tensor1.is_cuda  # False

4. Check the Tensor storage device

tensor1.device  # cpu
tensor1 = tensor1.cuda(0)  # .cuda() returns a new tensor, so reassign it
tensor1.device  # cuda:0

5. Check the Tensor gradient

tensor1.grad  # None -- no gradient has been computed yet
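For a tensor created with requires_grad=True, .grad is filled in by a backward pass; a minimal sketch:

x = torch.ones(2, 2, requires_grad=True)
y = (x * 2).sum()  # a simple scalar function of x
y.backward()       # compute gradients
print(x.grad)      # tensor([[2., 2.], [2., 2.]])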

Common Tensor methods

1. torch.squeeze(): remove dimensions of size 1 and return the resulting Tensor.

tensor1 = torch.ones([2, 1, 3])
tensor1.size()  # torch.Size([2, 1, 3])
tensor2 = torch.squeeze(tensor1)
print(tensor2.size())  # torch.Size([2, 3])

From this example you can see that the Tensor's shape changes from [2, 1, 3] to [2, 3]: the size-1 dimension has been removed.
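squeeze() also accepts a dim argument to remove one specific size-1 dimension, and unsqueeze() does the reverse; a quick sketch:

tensor1 = torch.ones([2, 1, 3])
torch.squeeze(tensor1, dim=1).size()  # torch.Size([2, 3]) -- removes only dim 1
torch.squeeze(tensor1, dim=0).size()  # torch.Size([2, 1, 3]) -- dim 0 is not size 1, unchanged
tensor1.unsqueeze(0).size()           # torch.Size([1, 2, 1, 3]) -- inserts a new size-1 dim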

2. torch.Tensor.permute(): permute the Tensor's dimensions and return a new view.

tensor1 = torch.ones([2, 1, 3])
print(tensor1.size())  # torch.Size([2, 1, 3])
tensor2 = tensor1.permute(2, 1, 0)  # swap dimensions 0 and 2
print(tensor2.size())  # torch.Size([3, 1, 2])

As the example shows, swapping dimension 0 with dimension 2 turns the shape [2, 1, 3] into [3, 1, 2].
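A common real-world use of permute() is converting an image from height-width-channel layout to the channel-first layout PyTorch models expect; a sketch (the shapes here are just an example):

img_hwc = torch.rand(224, 224, 3)   # height, width, channels
img_chw = img_hwc.permute(2, 0, 1)  # channels, height, width
print(img_chw.size())               # torch.Size([3, 224, 224])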

3. torch.Tensor.expand(): expand dimensions of size 1. The expanded Tensor does not allocate new memory; it only creates a new view on the original Tensor and returns it.

>>>tensor1 = torch.tensor([[3], [2]])
>>>tensor2 = tensor1.expand(2, 2)
>>>tensor1.size()
torch.Size([2, 1])
>>>tensor2
tensor([[3, 3],
        [2, 2]])
>>>tensor2.size()
torch.Size([2, 2])

The original Tensor has shape (2, 1); since only size-1 dimensions can be expanded, it can be expanded to (2, 2), (2, 3), and so on, but the dimensions that are not 1 must stay the same.
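Because expand() returns a view, it shares storage with the original tensor, which you can verify with a small sketch:

tensor1 = torch.tensor([[3], [2]])
tensor2 = tensor1.expand(2, 2)
tensor1[0, 0] = 7  # modify the original in place
print(tensor2)     # tensor([[7, 7], [2, 2]]) -- the view sees the change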

4. torch.Tensor.repeat(): repeat the Tensor along each dimension. Unlike expand(), it copies the original data.

>>>tensor1 = torch.tensor([[3], [2]])
>>>tensor1.size()
torch.Size([2, 1])
>>>tensor2 = tensor1.repeat(4, 2)
>>>tensor2.size()
torch.Size([8, 2])
>>>tensor2
tensor([[3, 3],
        [2, 2],
        [3, 3],
        [2, 2],
        [3, 3],
        [2, 2],
        [3, 3],
        [2, 2]])

In our example tensor1 has shape (2, 1); tensor1.repeat(4, 2) repeats it 4 times along dimension 0 and 2 times along dimension 1, so the resulting shape is (8, 2). Look at the following example.

>>>tensor1 = torch.tensor([[2, 1]])
>>>tensor1.size()
torch.Size([1, 2])
>>>tensor2 = tensor1.repeat(2, 2, 1)
>>>tensor2.size()
torch.Size([2, 2, 2])
>>>tensor2
tensor([[[2, 1],
         [2, 1]],

        [[2, 1],
         [2, 1]]])

In this example tensor1 has shape (1, 2) while repeat() receives three arguments, so the dimensions don't line up directly. You can think of tensor1 as first being viewed with shape (1, 1, 2); tensor1.repeat(2, 2, 1) then repeats it 2, 2, and 1 times along dimensions 0, 1, and 2, giving a repeated Tensor of shape (2, 2, 2).
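To confirm that repeat() really copies the data (unlike expand()), here is a small sketch:

tensor1 = torch.tensor([[3], [2]])
tensor2 = tensor1.repeat(2, 2)
tensor1[0, 0] = 7  # modify the original in place
print(tensor2)     # tensor([[3, 3], [2, 2], [3, 3], [2, 2]]) -- unaffected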

CVpython is a public account dedicated to sharing Python and computer vision content. We insist on original articles and post updates from time to time. We hope this article was helpful to you; scan the QR code to follow us.