Preface:

Transformers can be applied to video understanding in several ways, such as Joint space-time Attention, Sparse Local Global Attention, and Axial Attention. What these approaches have in common is that each frame is partitioned into patches in the same way as in ViT; the difference lies in how the patches are processed with self-attention.

This paper proposes a new scheme, Divided space-time Attention. By evaluating it alongside the methods above on large-scale action-classification datasets, the authors find that Divided Attention is the best design for processing these patches.

TimeSformer achieves state-of-the-art results on several mainstream action-recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. In addition, TimeSformer trains faster and is more efficient at test time than competing models.

Is Space-Time Attention All You Need for Video Understanding?

Code: github.com/lucidrains/…

The official code is not fully open source yet, but a model implementation is already available at the link above, and the code is straightforward.

The idea of the paper

Video understanding has a lot in common with NLP. First, both videos and sentences are sequential. Also, just as a word can only be understood in relation to the other words in a sentence, a segment of an action needs to be related to the rest of the video. Therefore, we can expect the long-range self-attention models from NLP to perform well on video as well.

In video, 2D or 3D convolution is the mainstream operation for extracting spatiotemporal features. However, an obvious limitation of convolution is its restricted receptive field: obtaining a global receptive field requires stacking many convolutional layers, which makes the information propagation path long. Self-attention, by contrast, easily obtains a global receptive field and captures both local and long-range dependencies.

Another problem with convolution is memory cost: in the video domain especially, one often has to trade off spatial resolution against the number of frames. In recent years, researchers have shown that Transformers can train and run inference faster than CNNs, so for the same computational budget a Transformer can devote more capacity to learning.

Standard self-attention computes the similarity between every pair of tokens, which is expensive, so we must consider how to apply self-attention to the image patches. This paper compares several such schemes and shows that Divided Attention performs best.

This article will focus on these methods.

Some of the details

What these methods share is that each video frame is divided into patches of size P×P, so each frame yields N = HW/P² patches.
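As a concrete illustration, here is a minimal PyTorch sketch of this patching step (not the official TimeSformer code; the clip size, patch size P = 16, and variable names are assumptions for illustration):

```python
import torch

# One clip: T frames of size H x W with C channels; P x P patches as in ViT.
T, C, H, W, P = 8, 3, 224, 224, 16
video = torch.randn(T, C, H, W)

N = (H // P) * (W // P)                        # N = HW / P^2 = 196 patches per frame
# Cut each frame into non-overlapping P x P patches and flatten each patch.
patches = video.unfold(2, P, P).unfold(3, P, P)              # (T, C, H/P, W/P, P, P)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(T, N, C * P * P)
print(patches.shape)                           # torch.Size([8, 196, 768])
```

Each flattened patch is then linearly projected into an embedding vector, as in ViT.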

The difference lies in which patches are grouped together for each self-attention operation.

Space Attention simply applies self-attention jointly over all the patches of the same frame. This approach obviously ignores the temporal information across frames.
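A minimal sketch of this scheme, assuming the (T, N, D) token layout from the patching example above and omitting projections, multiple heads, and the class token:

```python
import torch

T, N, D = 8, 196, 768
tokens = torch.randn(T, N, D)                  # patch tokens, N tokens per frame

# Space Attention: each frame's N tokens attend only to tokens of that same frame.
attn = torch.softmax(tokens @ tokens.transpose(-2, -1) / D ** 0.5, dim=-1)   # (T, N, N)
out = attn @ tokens                            # (T, N, D); no information crosses frames
```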

Joint space-time Attention applies self-attention jointly over all the patches of all frames. Its most obvious problem is the computational cost.
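A sketch of the joint scheme under the same simplified assumptions; note that the attention matrix now grows quadratically with T·N:

```python
import torch

T, N, D = 8, 196, 768
tokens = torch.randn(T, N, D).reshape(1, T * N, D)       # all frames flattened into one sequence

# Joint space-time Attention: every patch attends to every patch of every frame.
attn = torch.softmax(tokens @ tokens.transpose(-2, -1) / D ** 0.5, dim=-1)   # (1, T*N, T*N)
out = attn @ tokens                                       # (1, 1568, 768)
```

With T = 8 and N = 196, the attention matrix already has 1568² ≈ 2.5M entries per head per layer, which is why this variant is the most expensive.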

Sparse Local Global Attention works in two steps: patches within a local neighborhood attend to each other to extract local information, and patches sampled at a fixed stride attend to each other to extract global information. The sparsity of this pattern reduces the computation.
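The exact window sizes and strides are hyperparameters of that design; the following heavily simplified, hypothetical sketch only illustrates the idea on a 1-D token sequence (the window and stride values are made up):

```python
import torch

L, D, window, stride = 64, 32, 4, 8
x = torch.randn(L, D)

outputs = []
for i in range(L):
    local_idx = range(max(0, i - window), min(L, i + window + 1))    # local neighbourhood
    global_idx = range(0, L, stride)                                  # strided "global" tokens
    idx = sorted(set(local_idx) | set(global_idx))                    # sparse key/value set
    k = v = x[idx]
    w = torch.softmax(x[i] @ k.T / D ** 0.5, dim=-1)
    outputs.append(w @ v)
out = torch.stack(outputs)                                            # (L, D)
```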

Axial Attention has three steps. First, time attention is applied to patches at the same spatial position across different frames; then, within each frame, attention is applied separately along the vertical axis (patches in the same column) and along the horizontal axis (patches in the same row).
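A sketch of axial attention on the (T, H/P, W/P, D) token grid, again with single-head attention and no projections; the helper below is an assumption for illustration, not the paper's implementation:

```python
import torch

def axis_attention(x, dim):
    """Self-attention along one axis of x; all other axes act as batch dimensions."""
    x = x.transpose(dim, -2)
    attn = torch.softmax(x @ x.transpose(-2, -1) / x.shape[-1] ** 0.5, dim=-1)
    return (attn @ x).transpose(dim, -2)

T, Hp, Wp, D = 8, 14, 14, 768                  # Hp = H/P, Wp = W/P
tokens = torch.randn(T, Hp, Wp, D)

tokens = axis_attention(tokens, 0)             # time: same (h, w) position across frames
tokens = axis_attention(tokens, 1)             # vertical: same column within a frame
tokens = axis_attention(tokens, 2)             # horizontal: same row within a frame
```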

The Divided space-time Attention proposed in this paper has two steps. First, time attention is applied to patches at the same spatial position across different frames; then space attention is applied over all patches within the same frame.
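A minimal sketch of these two steps, under the same simplified single-head assumptions used above (no projections, residuals, or class token, unlike the actual TimeSformer block):

```python
import torch

def attend(x):
    """Plain scaled dot-product self-attention over the second-to-last axis."""
    attn = torch.softmax(x @ x.transpose(-2, -1) / x.shape[-1] ** 0.5, dim=-1)
    return attn @ x

T, N, D = 8, 196, 768
tokens = torch.randn(T, N, D)

# 1) Time attention: each spatial position attends across the T frames.
tokens = attend(tokens.transpose(0, 1)).transpose(0, 1)   # (N, T, D) -> back to (T, N, D)

# 2) Space attention: the N patches of each frame attend to one another.
tokens = attend(tokens)                                    # (T, N, D)
```

Compared with joint attention, each patch now attends to roughly T + N other tokens per block instead of T·N, which is where the computational savings come from.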

A detailed schematic of these attention schemes is given in the figure of the original paper.

Experimental conclusions

The paper compares these schemes in terms of parameter count and accuracy, and Divided space-time Attention achieves the best accuracy.

