• Article transferred from the WeChat official account "Alchemy of Machine Learning"
  • Author: Brother Lian Dan (authorized)
  • Author's contact: WeChat CYX645016617
  • Paper: "Fast Symmetric Diffeomorphic Image Registration with Convolutional Neural Networks"
  • Paper link: arxiv.org/abs/2003.09…


1. Review of diffeomorphism

$\phi^{(1)}$ is the deformation field over a time interval of 1. In the figure above, x represents the deformation field; adding the velocity of the deformation field to it gives the deformation field over a larger time interval.

According to Lie group theory, the conclusion is: $\phi^{(1/2)} = \phi^{(1/4)} \circ \phi^{(1/4)}$, i.e., from the deformation field with time interval 1/4 we can derive the deformation field with time interval 1/2.

As we can also see from the code in the previous article, the implementation of this part follows the logic below. Taking T=7 in SYMnet as an example, the time interval is divided into $2^7$ segments:

  • Divide the deformation (velocity) field output by the model by $2^7$ to obtain $\phi^{(1/2^7)}$
  • Compose $\phi^{(1/2^7)} \circ \phi^{(1/2^7)}$ to get $\phi^{(1/2^6)}$
  • Compose $\phi^{(1/2^6)} \circ \phi^{(1/2^6)}$ to get $\phi^{(1/2^5)}$
  • ... and finally $\phi^{(1)}$ (a code sketch of this scaling-and-squaring loop is given right after this list)
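
To make the scaling-and-squaring loop above concrete, here is a minimal PyTorch sketch. It is only an illustration under my own assumptions (displacement stored as (B, 3, D, H, W) in voxel units, channels in the (x, y, z) order that `grid_sample` expects), not SYMnet's actual implementation:

```python
import torch
import torch.nn.functional as F

def identity_grid(B, D, H, W, device):
    # Identity sampling grid in [-1, 1], shape (B, D, H, W, 3), (x, y, z) order.
    theta = torch.eye(3, 4, device=device).unsqueeze(0).repeat(B, 1, 1)
    return F.affine_grid(theta, (B, 3, D, H, W), align_corners=True)

def compose_flow(flow, grid):
    # One "squaring" step: phi o phi for a displacement field `flow` of shape (B, 3, D, H, W).
    B, _, D, H, W = flow.shape
    norm = torch.tensor([(W - 1) / 2, (H - 1) / 2, (D - 1) / 2], device=flow.device)
    disp = flow.permute(0, 2, 3, 4, 1) / norm          # displacement in [-1, 1] grid units
    warped = F.grid_sample(flow, grid + disp, align_corners=True, padding_mode='border')
    return flow + warped                               # u(x) + u(x + u(x))

def scaling_and_squaring(velocity, T=7):
    # phi^(1/2^T) ~= v / 2^T, then compose with itself T times to reach phi^(1).
    B, _, D, H, W = velocity.shape
    grid = identity_grid(B, D, H, W, velocity.device)
    flow = velocity / (2 ** T)
    for _ in range(T):
        flow = compose_flow(flow, grid)
    return flow
```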

2. Model Structure

The idea of this model: the two images X and Y would normally be registered directly from X to Y, but here we instead find an intermediate state Z between X and Y, register X to Z, and also register Y to Z.

$\phi_{XY}^{(1)}$ is the deformation field from X to Y.

For the final inference, we still need to register X to Y. So we first use $\phi_{XY}^{(0.5)}$ to register X to the intermediate state Z, and then use $\phi_{YX}^{(-0.5)}$ (the inverse of $\phi_{YX}^{(0.5)}$) to register the intermediate state Z the rest of the way to Y.
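
To make this inference step concrete, here is a minimal sketch of the two warps, using a `grid_sample`-based backward-warping helper and the same grid convention as the sketch above. The names `phi_xy_half` and `phi_yx_neg_half` (the latter obtained by integrating $-v_{YX}$ for half the time) are my own, not from the paper's code; in practice the two fields can also be composed first and applied as a single warp.

```python
import torch
import torch.nn.functional as F

def warp(image, flow, grid):
    # Backward warp: sample `image` (B, C, D, H, W) at grid + normalized displacement.
    B, _, D, H, W = flow.shape
    norm = torch.tensor([(W - 1) / 2, (H - 1) / 2, (D - 1) / 2], device=flow.device)
    disp = flow.permute(0, 2, 3, 4, 1) / norm
    return F.grid_sample(image, grid + disp, align_corners=True, padding_mode='border')

def register_x_to_y(X, phi_xy_half, phi_yx_neg_half, grid):
    # X -> Z with phi_XY^(0.5), then Z -> Y with phi_YX^(-0.5) (inverse of the YX half).
    Z = warp(X, phi_xy_half, grid)
    return warp(Z, phi_yx_neg_half, grid)
```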

2.1 FCN

U-net is still used for feature extraction:

  • The input to the network is still the two images X and Y concatenated together, i.e., a 2-channel image;
  • According to the paper, at the end of the model two convolutional layers with kernel size 5 are used to generate the two velocity fields $v_{XY}, v_{YX}$;
  • Each is followed by a softsign activation $\frac{x}{1+|x|}$;
  • The result is then multiplied by a constant c so that the velocity lies within [-c, c]. The paper uses c = 100, which works for large deformations;
  • Except for the output convolutional layers, every convolutional layer is followed by a ReLU activation;
  • For the diffeomorphism (scaling-and-squaring) part, although T=7, here we only integrate up to $\phi^{(0.5)}$ rather than $\phi^{(1)}$. (A sketch of such an output head follows this list.)
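
As an illustration only, here is a sketch of what such an output head could look like. The input channel count (16) is a made-up assumption, `VelocityHead` is my own name, and the kernel size simply follows the bullet above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VelocityHead(nn.Module):
    """Two conv layers -> softsign -> scale to [-c, c], giving v_XY and v_YX."""
    def __init__(self, in_channels=16, c=100.0):
        super().__init__()
        self.conv_xy = nn.Conv3d(in_channels, 3, kernel_size=5, padding=2)
        self.conv_yx = nn.Conv3d(in_channels, 3, kernel_size=5, padding=2)
        self.c = c

    def forward(self, feat):
        # softsign x / (1 + |x|) keeps values in (-1, 1); scaling by c gives (-c, c)
        v_xy = self.c * F.softsign(self.conv_xy(feat))
        v_yx = self.c * F.softsign(self.conv_yx(feat))
        return v_xy, v_yx
```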

3. Loss function

  • The main similarity loss is NCC, which is common for such image registration tasks. The paper uses NCC, but MSE can also be used.

The loss of the SYMnet model in this paper is as follows:

  • $L_{sim}$ measures the similarity loss;
  • $L_{Jdet}$ uses the Jacobian determinant in place of the gradient smoothing loss.

3.1 Similarity loss

$L_{sim} = L_{mean} + L_{pair}$:

  • $L_{mean} = -NCC(X(\phi_{XY}^{(0.5)}),\ Y(\phi_{YX}^{(0.5)}))$. This is straightforward: we want the intermediate state Z obtained from X and the intermediate state Z obtained from Y to be the same;
  • $L_{pair} = -NCC(X(\phi_{XY}^{(1)}),\ Y) - NCC(Y(\phi_{YX}^{(1)}),\ X)$. This is also easy to understand: the Y generated from X (and the X generated from Y) should match the real Y and X. (A code sketch of $L_{sim}$ is given after the composition formulas below.)

Among these terms, it is worth mentioning that:


  • $\phi_{XY}^{(1)} = \phi_{XY}^{(0.5)} \circ \phi_{YX}^{(-0.5)}$

  • $\phi_{YX}^{(1)} = \phi_{YX}^{(0.5)} \circ \phi_{XY}^{(-0.5)}$
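
To show the structure of $L_{sim}$ in code, here is a minimal sketch that uses a plain global NCC (the paper, like most registration codebases, would typically use a local windowed NCC). All tensor names here are hypothetical:

```python
import torch

def ncc(a, b, eps=1e-8):
    # Global normalized cross-correlation between two volumes of the same shape.
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)

def sim_loss(warped_x_half, warped_y_half, warped_x_full, warped_y_full, X, Y):
    # L_mean: both half-way warps should land on the same intermediate state Z.
    l_mean = -ncc(warped_x_half, warped_y_half)
    # L_pair: the fully warped X should match Y, and the fully warped Y should match X.
    l_pair = -ncc(warped_x_full, Y) - ncc(warped_y_full, X)
    return l_mean + l_pair
```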

3.2 Jacobian determinant loss

The author proposes the Jacobian determinant loss to replace VoxelMorph's gradient smoothing loss.

This loss replaces the previous gradient smoothing loss and pays more attention to local orientation consistency. Existing methods use L1 or L2 regularization to constrain the gradient of the deformation field, and such global regularization greatly reduces registration accuracy.

This paper proposes the Jacobian determinant method to impose a local orientation-consistency constraint on the estimated deformation field:

$$L_{Jdet} = \frac{1}{N}\sum_{p} \sigma\left(-|J_{\phi}(p)|\right)$$

where N is the number of elements in the deformation field, $\sigma$ is the ReLU activation function, and $|J_{\phi}(p)|$ is the Jacobian determinant of the deformation field at position p. The determinant is defined as follows:
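
For reference, the Jacobian of the 3D deformation field $\phi$ at voxel p is the 3×3 matrix of partial derivatives, whose determinant the code below approximates with forward finite differences:

$$
|J_{\phi}(p)| = \det \begin{bmatrix}
\frac{\partial \phi_x(p)}{\partial x} & \frac{\partial \phi_x(p)}{\partial y} & \frac{\partial \phi_x(p)}{\partial z} \\
\frac{\partial \phi_y(p)}{\partial x} & \frac{\partial \phi_y(p)}{\partial y} & \frac{\partial \phi_y(p)}{\partial z} \\
\frac{\partial \phi_z(p)}{\partial x} & \frac{\partial \phi_z(p)}{\partial y} & \frac{\partial \phi_z(p)}{\partial z}
\end{bmatrix}
$$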

The code is as follows:

```python
import torch
import torch.nn.functional as F


def JacboianDet(y_pred, sample_grid):
    # y_pred: predicted displacement field, shape (B, D, H, W, 3)
    # sample_grid: identity grid of the same shape; J is the full deformation phi
    J = y_pred + sample_grid

    # forward differences of phi along the three spatial axes
    dy = J[:, 1:, :-1, :-1, :] - J[:, :-1, :-1, :-1, :]
    dx = J[:, :-1, 1:, :-1, :] - J[:, :-1, :-1, :-1, :]
    dz = J[:, :-1, :-1, 1:, :] - J[:, :-1, :-1, :-1, :]

    # cofactor expansion of the 3x3 Jacobian determinant at every voxel
    Jdet0 = dx[:, :, :, :, 0] * (dy[:, :, :, :, 1] * dz[:, :, :, :, 2] - dy[:, :, :, :, 2] * dz[:, :, :, :, 1])
    Jdet1 = dx[:, :, :, :, 1] * (dy[:, :, :, :, 0] * dz[:, :, :, :, 2] - dy[:, :, :, :, 2] * dz[:, :, :, :, 0])
    Jdet2 = dx[:, :, :, :, 2] * (dy[:, :, :, :, 0] * dz[:, :, :, :, 1] - dy[:, :, :, :, 1] * dz[:, :, :, :, 0])

    Jdet = Jdet0 - Jdet1 + Jdet2

    return Jdet


def neg_Jdet_loss(y_pred, sample_grid):
    # penalize only the voxels whose Jacobian determinant is negative (folding)
    neg_Jdet = -1.0 * JacboianDet(y_pred, sample_grid)
    selected_neg_Jdet = F.relu(neg_Jdet)

    return torch.mean(selected_neg_Jdet)
```

It is worth noting that the proposed selective Jacobian determinant regularization loss is not meant to replace the global regularizer. Instead, both regularization loss functions are used together to produce smooth, topology-preserving transformations while alleviating the tradeoff between smoothness and registration accuracy.

The authors emphasize that this Jacobian determinant loss does not replace the previous global regularization loss; the two are used together.

So the L2 gradient regularization loss is still used as well:
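
A common way to implement this L2 penalty on the spatial gradients of the displacement field, sketched here in the same channel-last layout as the Jacobian code above (not necessarily the authors' exact code):

```python
def smooth_l2_loss(flow):
    # flow: displacement field of shape (B, D, H, W, 3)
    # forward differences along each spatial axis, squared and averaged
    dy = flow[:, 1:, :, :, :] - flow[:, :-1, :, :, :]
    dx = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]
    dz = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]
    return (dy ** 2).mean() + (dx ** 2).mean() + (dz ** 2).mean()
```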

Then, a magnitude loss is added to ensure that $v_{XY}$ and $v_{YX}$ are of the same order of magnitude:

So overall, this SYMnet has four loss components: $L_{sim}$, $L_{Jdet}$, the L2 gradient regularization loss, and the magnitude loss.
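
A sketch of how the four components could be weighted and summed; the weights below are placeholders I made up, not the paper's hyperparameters:

```python
# hypothetical weights, for illustration only
lambda_jdet, lambda_reg, lambda_mag = 1.0, 1.0, 1.0

def total_loss(l_sim, l_jdet, l_reg, l_mag):
    # weighted sum of the four loss components discussed above
    return l_sim + lambda_jdet * l_jdet + lambda_reg * l_reg + lambda_mag * l_mag
```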

4. Results

Unsurprisingly, the results are the best among the compared methods.

The paper's main focus is the Jacobian loss, so it specifically studies how different weights for the Jacobian loss affect the final result.

Interestingly, although the author says that $|J_{\phi}|$ should be as small as possible, what I see is that the DSC actually improves as the Jacobian loss weight decreases, haha. It seems to work best without this loss constraint, lol.