- Article reposted from the WeChat official account [Machine Learning Alchemy]
- Notes by Brother Lian Dan (reproduced with authorization)
- Contact: WeChat CYX645016617
- Paper: "Masked Autoencoders Are Scalable Vision Learners"
0 Paper abstract
This paper shows that the masked autoencoder (MAE) is a scalable self-supervised learner for computer vision. The MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
The design rests on two core ideas:
- First, we develop an asymmetric encoder-decoder architecture: the encoder operates only on the subset of visible patches (no mask tokens), while a lightweight decoder reconstructs the original image from the latent representation and mask tokens.
- Second, we find that masking a high proportion of the input image (e.g., 75%) yields a nontrivial and meaningful self-supervisory task.
Coupling these two designs lets us train large models efficiently: training is accelerated (3x or more) and accuracy improves.
1 Method
As the figure shows, the model is very simple:
- It is a ViT-like Transformer. The image is split into patches, and the encoder only sees a small fraction (25%) of them; the remaining 75% are masked out (see the masking sketch after this list).
- The encoder's input is just that visible 25% of patches, plus their positional embeddings.
- The decoder then takes the encoded visible tokens together with mask tokens and reconstructs the whole image.
- After pre-training, the decoder is discarded and the encoder is applied to uncorrupted images to produce representations for recognition tasks.
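To make the masking concrete, here is a minimal sketch of how a 75% random patch mask could be generated for a 224x224 image with 16x16 patches (an illustrative helper I wrote for this note, not the mask generator used in the repo):

```python
import torch

def random_patch_mask(batch_size, num_patches=196, mask_ratio=0.75):
    """Boolean mask of shape [batch_size, num_patches]; True = masked (hidden) patch."""
    num_mask = int(num_patches * mask_ratio)
    mask = torch.zeros(batch_size, num_patches, dtype=torch.bool)
    for i in range(batch_size):
        idx = torch.randperm(num_patches)[:num_mask]  # pick which patches to hide
        mask[i, idx] = True
    return mask

mask = random_patch_mask(2)
print(mask.shape, mask.sum(dim=1))  # torch.Size([2, 196]) tensor([147, 147])
```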
2 Code section – Step 1
Because the method is simple, let's go straight to the code. Note that this is a community reproduction, not the official release.
def pretrain_mae_small_patch16_224(pretrained=False, **kwargs):
    model = PretrainVisionTransformer(
        img_size=224,
        patch_size=16,
        encoder_embed_dim=384,
        encoder_depth=12,
        encoder_num_heads=6,
        encoder_num_classes=0,
        decoder_num_classes=768,
        decoder_embed_dim=192,
        decoder_depth=4,
        decoder_num_heads=3,
        mlp_ratio=4,
        qkv_bias=True,
        norm_layer=partial(nn.LayerNorm, eps=1e-6),
        **kwargs)
    model.default_cfg = _cfg()
    if pretrained:
        checkpoint = torch.load(
            kwargs["init_ckpt"], map_location="cpu"
        )
        model.load_state_dict(checkpoint["model"])
    return model
Parameters such as patch_size and encoder_embed_dim are self-explanatory. PretrainVisionTransformer itself is a classic ViT-style Transformer (a first guess that the code below confirms).
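A hypothetical usage sketch (the model class is from the repo; random_patch_mask is the illustrative helper defined above, and the shapes are what the code implies, not verified output):

```python
import torch

model = pretrain_mae_small_patch16_224()  # encoder_embed_dim=384, decoder_embed_dim=192
imgs = torch.randn(2, 3, 224, 224)        # a dummy batch of two images
mask = random_patch_mask(2)               # [2, 196] boolean mask, 75% True

out = model(imgs, mask)                   # predictions for the masked patches only
print(out.shape)                          # expected: torch.Size([2, 147, 768]), i.e. 3*16*16 pixels per masked patch
```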
3 Code section – Step 2
class PretrainVisionTransformer(nn.Module):
    """ Vision Transformer with support for patch or hybrid CNN input stage """
    def __init__(self,
                 img_size=224,
                 patch_size=16,
                 encoder_in_chans=3,
                 encoder_num_classes=0,
                 encoder_embed_dim=768,
                 encoder_depth=12,
                 encoder_num_heads=12,
                 decoder_num_classes=768,
                 decoder_embed_dim=512,
                 decoder_depth=8,
                 decoder_num_heads=8,
                 mlp_ratio=4.,
                 qkv_bias=False,
                 qk_scale=None,
                 drop_rate=0.,
                 attn_drop_rate=0.,
                 drop_path_rate=0.,
                 norm_layer=nn.LayerNorm,
                 init_values=0.,
                 use_learnable_pos_emb=False,
                 num_classes=0,  # avoid the error from create_fn in timm
                 in_chans=0,  # avoid the error from create_fn in timm
                 ):
        super().__init__()
        self.encoder = PretrainVisionTransformerEncoder(
            img_size=img_size,
            patch_size=patch_size,
            in_chans=encoder_in_chans,
            num_classes=encoder_num_classes,
            embed_dim=encoder_embed_dim,
            depth=encoder_depth,
            num_heads=encoder_num_heads,
            mlp_ratio=mlp_ratio,
            qkv_bias=qkv_bias,
            qk_scale=qk_scale,
            drop_rate=drop_rate,
            attn_drop_rate=attn_drop_rate,
            drop_path_rate=drop_path_rate,
            norm_layer=norm_layer,
            init_values=init_values,
            use_learnable_pos_emb=use_learnable_pos_emb)

        self.decoder = PretrainVisionTransformerDecoder(
            patch_size=patch_size,
            num_patches=self.encoder.patch_embed.num_patches,
            num_classes=decoder_num_classes,
            embed_dim=decoder_embed_dim,
            depth=decoder_depth,
            num_heads=decoder_num_heads,
            mlp_ratio=mlp_ratio,
            qkv_bias=qkv_bias,
            qk_scale=qk_scale,
            drop_rate=drop_rate,
            attn_drop_rate=attn_drop_rate,
            drop_path_rate=drop_path_rate,
            norm_layer=norm_layer,
            init_values=init_values)

        self.encoder_to_decoder = nn.Linear(encoder_embed_dim, decoder_embed_dim, bias=False)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, decoder_embed_dim))
        self.pos_embed = get_sinusoid_encoding_table(self.encoder.patch_embed.num_patches, decoder_embed_dim)

        trunc_normal_(self.mask_token, std=.02)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            nn.init.xavier_uniform_(m.weight)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)

    def get_num_layers(self):
        return len(self.blocks)

    @torch.jit.ignore
    def no_weight_decay(self):
        return {'pos_embed', 'cls_token', 'mask_token'}

    def forward(self, x, mask):
        x_vis = self.encoder(x, mask)  # [B, N_vis, C_e]
        x_vis = self.encoder_to_decoder(x_vis)  # [B, N_vis, C_d]
        B, N, C = x_vis.shape

        # we don't unshuffle the correct visible token order,
        # but shuffle the pos embedding accordingly.
        expand_pos_embed = self.pos_embed.expand(B, -1, -1).type_as(x).to(x.device).clone().detach()
        pos_emd_vis = expand_pos_embed[~mask].reshape(B, -1, C)
        pos_emd_mask = expand_pos_embed[mask].reshape(B, -1, C)
        x_full = torch.cat([x_vis + pos_emd_vis, self.mask_token + pos_emd_mask], dim=1)

        x = self.decoder(x_full, pos_emd_mask.shape[1])  # [B, N_mask, 3 * 16 * 16]

        return x
Overall, the model is composed of an Encoder and a Decoder. Let's list the default parameters (a shape walkthrough of the forward pass follows the list):
- img_size = 224
- patch_size = 16
- encoder_in_chans = 3
- encoder_num_classes = 0
- encoder_embed_dim = 768
- encoder_depth = 12
- encoder_num_heads = 12
- decoder_num_classes = 768
- decoder_embed_dim = 512
- decoder_depth = 8
- decoder_num_heads = 8
- mlp_ratio = 4.
- qkv_bias = False
- qk_scale = None
- drop_rate = 0.
- attn_drop_rate = 0.
- drop_path_rate = 0.
- norm_layer = nn.LayerNorm
- init_values = 0.
- use_learnable_pos_emb = False
- num_classes = 0  # avoid the error from create_fn in timm
- in_chans = 0  # avoid the error from create_fn in timm
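To make the forward pass concrete, here is a rough shape walkthrough under the default parameters above and a 75% mask (a sketch based on reading the code, not actual output):

```python
# x:       [B, 3, 224, 224]   input image batch
# mask:    [B, 196] boolean, True = masked patch
# x_vis  = self.encoder(x, mask)               -> [B,  49, 768]  only visible patches are encoded
# x_vis  = self.encoder_to_decoder(x_vis)      -> [B,  49, 512]  project to the decoder width
# x_full = cat([x_vis + pos_vis,
#               mask_token + pos_mask], dim=1) -> [B, 196, 512]  visible tokens first, mask tokens last
# out    = self.decoder(x_full, 147)           -> [B, 147, 768]  3*16*16 predicted pixels per masked patch
```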
4 Code section – Encoder
class PretrainVisionTransformerEncoder(nn.Module):
    """ Vision Transformer with support for patch or hybrid CNN input stage """
    def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=0, embed_dim=768, depth=12,
                 num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0.,
                 drop_path_rate=0., norm_layer=nn.LayerNorm, init_values=None,
                 use_learnable_pos_emb=False):
        super().__init__()
        self.num_classes = num_classes
        self.num_features = self.embed_dim = embed_dim  # num_features for consistency with other models

        self.patch_embed = PatchEmbed(
            img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
        num_patches = self.patch_embed.num_patches

        # TODO: Add the cls token
        # self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        if use_learnable_pos_emb:
            self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        else:
            # sine-cosine positional embeddings
            self.pos_embed = get_sinusoid_encoding_table(num_patches, embed_dim)

        dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)]  # stochastic depth decay rule
        self.blocks = nn.ModuleList([
            Block(
                dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
                drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer,
                init_values=init_values)
            for i in range(depth)])
        self.norm = norm_layer(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity()

        if use_learnable_pos_emb:
            trunc_normal_(self.pos_embed, std=.02)

        # trunc_normal_(self.cls_token, std=.02)
        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            nn.init.xavier_uniform_(m.weight)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)

    def get_num_layers(self):
        return len(self.blocks)

    @torch.jit.ignore
    def no_weight_decay(self):
        return {'pos_embed', 'cls_token'}

    def get_classifier(self):
        return self.head

    def reset_classifier(self, num_classes, global_pool=''):
        self.num_classes = num_classes
        self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()

    def forward_features(self, x, mask):
        x = self.patch_embed(x)

        # cls_tokens = self.cls_token.expand(batch_size, -1, -1)
        # x = torch.cat((cls_tokens, x), dim=1)
        x = x + self.pos_embed.type_as(x).to(x.device).clone().detach()

        B, _, C = x.shape
        x_vis = x[~mask].reshape(B, -1, C)  # ~mask means visible

        for blk in self.blocks:
            x_vis = blk(x_vis)

        x_vis = self.norm(x_vis)
        return x_vis

    def forward(self, x, mask):
        x = self.forward_features(x, mask)
        x = self.head(x)
        return x
The Encoder is built from these modules (the mask indexing in forward_features is sketched after this list):
- self.patch_embed: splits the image into patches and embeds each one (see the PatchEmbed section below);
- self.blocks: depth stacked Transformer Blocks, the feature-extraction part;
- self.head: an identity layer here (num_classes=0), so it does nothing.
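In forward_features, positional embeddings are added to all 196 patch tokens, and then the boolean indexing x[~mask] keeps only the visible ones. A minimal shape sketch with dummy tensors (not the repo's real data):

```python
import torch

B, N, C = 2, 196, 768
x = torch.randn(B, N, C)                 # patch embeddings + positional embeddings
mask = torch.zeros(B, N, dtype=torch.bool)
mask[:, :147] = True                     # pretend the first 147 patches (75%) are masked

x_vis = x[~mask].reshape(B, -1, C)       # keep only the visible tokens
print(x_vis.shape)                       # torch.Size([2, 49, 768])
```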
5 Code section – PatchEmbed
class PatchEmbed(nn.Module):
    """ Image to Patch Embedding """
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
        self.patch_shape = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
        self.img_size = img_size
        self.patch_size = patch_size
        self.num_patches = num_patches

        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x, **kwargs):
        B, C, H, W = x.shape
        # FIXME look at relaxing size constraints
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
        x = self.proj(x).flatten(2).transpose(1, 2)
        return x
As you can see from the code above, the core of PatchEmbed is just the convolution layer self.proj. I made a simple demo to study how the PatchEmbed module changes the shape of an image: the input is a 1x3x224x224 tensor, and the output y has shape [1, 196, 768].
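A minimal sketch of such a demo, assuming the PatchEmbed class defined above:

```python
import torch

patch_embed = PatchEmbed(img_size=224, patch_size=16, in_chans=3, embed_dim=768)
x = torch.randn(1, 3, 224, 224)   # a dummy input image batch
y = patch_embed(x)
print(y.shape)                    # torch.Size([1, 196, 768])
```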
From this we can see what the two output dimensions mean:
- 196 is the number of patches per image. With an input size of 224 and a patch size of 16, an image has (224/16)² = 14² = 196 patches.
- Each patch is projected by the convolution into a 768-dimensional vector; 768 is the hyperparameter embed_dim.
- Since kernel_size and stride are both equal to the patch size, the convolution is mathematically equivalent to a fully connected layer applied to each patch: one patch contains 16×16 pixels × 3 channels = 768 values, so it acts as a 768-to-768 linear layer per patch (the quick check after this list confirms the equivalence).
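A quick numerical check of this equivalence, copying the convolution's weights into a linear layer (a sketch with a freshly built Conv2d, not the repo's trained weights):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 768, kernel_size=16, stride=16)       # the PatchEmbed projection
linear = nn.Linear(3 * 16 * 16, 768)                       # a per-patch fully connected layer
linear.weight.data = conv.weight.data.reshape(768, -1)     # same parameters, flattened
linear.bias.data = conv.bias.data

patch = torch.randn(1, 3, 16, 16)                          # a single 16x16 patch
out_conv = conv(patch).flatten(1)                          # [1, 768]
out_linear = linear(patch.reshape(1, -1))                  # [1, 768]
print(torch.allclose(out_conv, out_linear, atol=1e-5))     # True
```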
6 Code section – Block
class Block(nn.Module):
    def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
                 drop_path=0., init_values=None, act_layer=nn.GELU, norm_layer=nn.LayerNorm,
                 attn_head_dim=None):
        super().__init__()
        self.norm1 = norm_layer(dim)
        self.attn = Attention(
            dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
            attn_drop=attn_drop, proj_drop=drop, attn_head_dim=attn_head_dim)
        # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)

        if init_values > 0:
            self.gamma_1 = nn.Parameter(init_values * torch.ones((dim)), requires_grad=True)
            self.gamma_2 = nn.Parameter(init_values * torch.ones((dim)), requires_grad=True)
        else:
            self.gamma_1, self.gamma_2 = None, None

    def forward(self, x):
        if self.gamma_1 is None:
            x = x + self.drop_path(self.attn(self.norm1(x)))
            x = x + self.drop_path(self.mlp(self.norm2(x)))
        else:
            x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x)))
            x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))
        return x
This Block contains three modules: Attention, Mlp, and DropPath.
The input x goes through a LayerNorm, then Attention, then DropPath, and is added back to x as a residual; it then goes through a second LayerNorm, the Mlp, and another DropPath, again with a residual connection.
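Written out as pseudo-code, the forward pass (when gamma_1 is None, i.e. the layer-scale parameters are disabled) is:

```python
# x = x + DropPath(Attention(LayerNorm(x)))   # attention sub-layer with residual connection
# x = x + DropPath(Mlp(LayerNorm(x)))         # MLP sub-layer with residual connection
```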
7 Code section – Attention
class Attention(nn.Module):
    def __init__(
            self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0.,
            proj_drop=0., attn_head_dim=None):
        super().__init__()
        self.num_heads = num_heads
        head_dim = dim // num_heads
        if attn_head_dim is not None:
            head_dim = attn_head_dim
        all_head_dim = head_dim * self.num_heads
        self.scale = qk_scale or head_dim ** -0.5

        self.qkv = nn.Linear(dim, all_head_dim * 3, bias=False)
        if qkv_bias:
            self.q_bias = nn.Parameter(torch.zeros(all_head_dim))
            self.v_bias = nn.Parameter(torch.zeros(all_head_dim))
        else:
            self.q_bias = None
            self.v_bias = None

        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(all_head_dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

    def forward(self, x):
        B, N, C = x.shape
        qkv_bias = None
        if self.q_bias is not None:
            qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias))
        # qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias)
        qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        q = q * self.scale
        attn = (q @ k.transpose(-2, -1))

        attn = attn.softmax(dim=-1)
        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x
Through the self.qkv fully connected layer, the 768-dimensional input features are expanded to 2304 dimensions (3 × 768).
The result is then reshaped to [3, Batch, num_heads, N, head_dim]; the leading 3 is exactly what gets split into q, k, and v. After the two matrix multiplications (attention weights, then the weighted sum over v), the output is still [Batch, 196, 768].
[Summary] Attention is a feature-extraction module: the input is [Batch, 196, 768] and the output is also [Batch, 196, 768].
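A rough shape trace of Attention.forward for an input of [Batch, N, C] = [2, 196, 768] with 12 heads (a sketch based on reading the code, not actual output):

```python
# qkv  = F.linear(x, W_qkv)                       -> [2, 196, 2304]       (2304 = 3 * 768)
# qkv  = reshape + permute                        -> [3, 2, 12, 196, 64]  (head_dim = 768 / 12 = 64)
# attn = softmax(q @ k.transpose(-2, -1) * scale) -> [2, 12, 196, 196]
# out  = (attn @ v).transpose(1, 2).reshape(...)  -> [2, 196, 768]
# out  = proj(out)                                -> [2, 196, 768]
```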
8 Code section – Mlp
class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        # x = self.drop(x)
        # comment this for the original BERT implementation
        x = self.fc2(x)
        x = self.drop(x)
        return x
This Mlp is just two fully connected layers: the first expands 768 to 768 × 4 = 3072 dimensions, and the second projects back down to 768.
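A tiny usage sketch of the Mlp (dummy input, with dimensions matching the encoder):

```python
import torch

mlp = Mlp(in_features=768, hidden_features=768 * 4)   # 768 -> 3072 -> 768
x = torch.randn(2, 196, 768)
print(mlp(x).shape)                                    # torch.Size([2, 196, 768])
```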
9 Code section – Decoder
class PretrainVisionTransformerDecoder(nn.Module):
    """ Vision Transformer with support for patch or hybrid CNN input stage """
    def __init__(self, patch_size=16, num_classes=768, embed_dim=768, depth=12,
                 num_heads=12, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop_rate=0., attn_drop_rate=0.,
                 drop_path_rate=0., norm_layer=nn.LayerNorm, init_values=None, num_patches=196):
        super().__init__()
        self.num_classes = num_classes
        assert num_classes == 3 * patch_size ** 2
        self.num_features = self.embed_dim = embed_dim  # num_features for consistency with other models
        self.patch_size = patch_size

        dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)]  # stochastic depth decay rule
        self.blocks = nn.ModuleList([
            Block(
                dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
                drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer,
                init_values=init_values)
            for i in range(depth)])
        self.norm = norm_layer(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes) if num_classes > 0 else nn.Identity()

        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            nn.init.xavier_uniform_(m.weight)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)

    def get_num_layers(self):
        return len(self.blocks)

    @torch.jit.ignore
    def no_weight_decay(self):
        return {'pos_embed', 'cls_token'}

    def get_classifier(self):
        return self.head

    def reset_classifier(self, num_classes, global_pool=''):
        self.num_classes = num_classes
        self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()

    def forward(self, x, return_token_num):
        for blk in self.blocks:
            x = blk(x)

        if return_token_num > 0:
            x = self.head(self.norm(x[:, -return_token_num:]))  # only return the mask tokens' predicted pixels
        else:
            x = self.head(self.norm(x))  # [B, N, 3*16^2]

        return x
Overall, there are some differences between this reproduction and the MAE described in the paper, and the decoder part in particular has an issue that you may want to fix yourself.
I think the main gap is that, between the encoder and the decoder, this code never restores the tokens to their original spatial positions; this is the "unshuffle" step in the MAE pipeline.
However, this step does not affect training; it only matters when you want to assemble the complete reconstructed image.
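For completeness, here is a hedged sketch of how such an "unshuffle and reassemble" step could look. It assumes the decoder's per-patch output is arranged as (p1 p2 c) pixel values, matching how the rearrange calls below patchify the image; the repo's actual target layout and normalization may differ, so treat this as illustration only:

```python
import torch
from einops import rearrange

def reconstruct(img, mask, pred, patch_size=16):
    """img: [B, 3, 224, 224]; mask: [B, 196] boolean (True = masked); pred: [B, 147, 3*16*16]."""
    # split the original image into per-patch pixel vectors: [B, 196, 3*16*16]
    patches = rearrange(img, 'b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1=patch_size, p2=patch_size)
    full = patches.clone()
    full[mask] = pred.reshape(-1, pred.shape[-1])   # drop the predictions into the masked slots
    # fold the patch vectors back into an image: [B, 3, 224, 224]
    return rearrange(full, 'b (h w) (p1 p2 c) -> b c (h p1) (w p2)',
                     h=img.shape[2] // patch_size, p1=patch_size, p2=patch_size)
```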