• Article reprinted from the WeChat official account "Alchemy of Machine Learning"
  • Author: Brother Lian Dan (reprinted with the author's authorization)
  • Author's contact: WeChat CYX645016617 (exchanges are welcome; let's make progress together)

NCC: Normalized Cross-Correlation

To judge whether two images show the same content, current deep learning approaches naturally use a neural network, for example the Siamese-network structure used in face recognition.

Among traditional non-parametric methods, the correlation coefficient is also a familiar choice. While studying the Voxelmorph model in the previous article, I found that in the medical image registration task (and not only in medicine) there is a metric called NCC for measuring the similarity between two images.

NCC stands for normalized cross-correlation.

1 Correlation coefficient

If you understand the correlation coefficient, then the normalized cross-correlation is easy to understand.

The correlation coefficient is calculated as follows: $r(X, Y) = \frac{Cov(X, Y)}{\sqrt{Var(X)\,Var(Y)}}$

In the formula, X and Y represent the two images, $Cov(X, Y)$ is the covariance of the two images, and $Var(X)$ is the variance of X itself.
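As a quick sanity check (a minimal sketch of mine, not code from the article), the correlation coefficient of two images can be computed directly from their flattened pixels in PyTorch:

import torch

def correlation_coefficient(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """r(X, Y) = Cov(X, Y) / sqrt(Var(X) * Var(Y)), computed over all pixels."""
    x = x.flatten().float()
    y = y.flatten().float()
    xc, yc = x - x.mean(), y - y.mean()
    cov = (xc * yc).mean()
    var_x, var_y = (xc * xc).mean(), (yc * yc).mean()
    return cov / torch.sqrt(var_x * var_y)

a = torch.rand(64, 64)
print(correlation_coefficient(a, a))                   # ~1.0 for identical images
print(correlation_coefficient(a, torch.rand(64, 64)))  # ~0.0 for unrelated noise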

2 Normalized cross-correlation NCC

If you slide a window of a fixed size, say 9×9 pixels, over an image, you can divide it into many small 9×9 patches. NCC is then the average of the correlation coefficients between the corresponding patches of X and Y.
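As a small illustration of the windowing (a sketch of mine using torch.Tensor.unfold, not something from the original article), splitting an image into 9×9 patches looks like this:

import torch

img = torch.rand(180, 180)                      # a toy single-channel image
patches = img.unfold(0, 9, 9).unfold(1, 9, 9)   # non-overlapping 9x9 blocks
print(patches.shape)                            # torch.Size([20, 20, 9, 9])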

Have a look at how the covariance is calculated: $Cov(X, Y) = E[(X - E(X))(Y - E(Y))]$

And the variance: $Var(X) = E[(X - E(X))^2]$

NCC is easy to understand, but how do you compute it in code? Of course, we could loop over every window one by one, but that would be far too slow, so it is better to use matrix operations.
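To make the definition concrete, here is a naive sketch (my own illustration, not the Voxelmorph code) that loops over every 9×9 window of two 2-D images and averages the squared per-window correlation coefficients; up to boundary handling and the small epsilon, this is what the vectorized loss in the next section computes:

import torch

def ncc_naive(I: torch.Tensor, J: torch.Tensor, win: int = 9) -> torch.Tensor:
    """Average squared correlation coefficient over all win x win windows.
    I and J are 2-D tensors (single-channel images) of the same shape."""
    H, W = I.shape
    ccs = []
    for i in range(H - win + 1):
        for j in range(W - win + 1):
            x = I[i:i + win, j:j + win].flatten()
            y = J[i:i + win, j:j + win].flatten()
            xc, yc = x - x.mean(), y - y.mean()
            cov = (xc * yc).mean()
            var_x, var_y = (xc * xc).mean(), (yc * yc).mean()
            ccs.append(cov * cov / (var_x * var_y + 1e-5))
    return torch.stack(ccs).mean()

print(ncc_naive(torch.rand(32, 32), torch.rand(32, 32)))

On real image sizes this double loop is hopelessly slow, which is exactly why the implementation below replaces it with convolutions by an all-ones kernel.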

3 NCC loss function code

import math

import numpy as np
import torch
import torch.nn.functional as F


class NCC:
    """ Local (over window) normalized cross correlation loss. """

    def __init__(self, win=None):
        self.win = win

    def loss(self, y_true, y_pred):

        I = y_true
        J = y_pred

        # get dimension of volume
        # assumes I, J are sized [batch_size, *vol_shape, nb_feats]
        ndims = len(list(I.size())) - 2
        assert ndims in [1, 2, 3], "volumes should be 1 to 3 dimensions. found: %d" % ndims

        # set window size
        win = [9] * ndims if self.win is None else self.win

        # compute filters
        sum_filt = torch.ones([1, 1, *win]).to("cuda")

        pad_no = math.floor(win[0] / 2)

        if ndims == 1:
            stride = (1)
            padding = (pad_no)
        elif ndims == 2:
            stride = (1, 1)
            padding = (pad_no, pad_no)
        else:
            stride = (1, 1, 1)
            padding = (pad_no, pad_no, pad_no)

        # get convolution function
        conv_fn = getattr(F, 'conv%dd' % ndims)

        # compute CC squares
        I2 = I * I
        J2 = J * J
        IJ = I * J

        I_sum = conv_fn(I, sum_filt, stride=stride, padding=padding)
        J_sum = conv_fn(J, sum_filt, stride=stride, padding=padding)
        I2_sum = conv_fn(I2, sum_filt, stride=stride, padding=padding)
        J2_sum = conv_fn(J2, sum_filt, stride=stride, padding=padding)
        IJ_sum = conv_fn(IJ, sum_filt, stride=stride, padding=padding)

        win_size = np.prod(win)
        u_I = I_sum / win_size
        u_J = J_sum / win_size

        cross = IJ_sum - u_J * I_sum - u_I * J_sum + u_I * u_J * win_size
        I_var = I2_sum - 2 * u_I * I_sum + u_I * u_I * win_size
        J_var = J2_sum - 2 * u_J * J_sum + u_J * u_J * win_size

        # squared local correlation coefficient; the small constant guards
        # against division by zero in flat regions
        cc = cross * cross / (I_var * J_var + 1e-5)

        # negate so that minimizing the loss maximizes local similarity
        return -torch.mean(cc)
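For completeness, here is a small usage sketch (the tensor shapes are made up by me); note that the hard-coded `.to("cuda")` inside the class means a GPU is assumed:

import torch

# a pair of single-channel 2-D image batches: [batch, channel, H, W]
fixed  = torch.rand(4, 1, 192, 192).to("cuda")
moving = torch.rand(4, 1, 192, 192).to("cuda")

loss_fn = NCC()                       # default window: 9x9
value = loss_fn.loss(fixed, moving)   # scalar tensor
print(value.item())                   # ~0 for unrelated noise, ~-1 for identical images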

This code is actually not very pleasant to read, and it took me a long time to figure it out. The key is understanding this part:

        # compute CC squares
        I2 = I * I
        J2 = J * J
        IJ = I * J

        I_sum = conv_fn(I, sum_filt, stride=stride, padding=padding)
        J_sum = conv_fn(J, sum_filt, stride=stride, padding=padding)
        I2_sum = conv_fn(I2, sum_filt, stride=stride, padding=padding)
        J2_sum = conv_fn(J2, sum_filt, stride=stride, padding=padding)
        IJ_sum = conv_fn(IJ, sum_filt, stride=stride, padding=padding)

        win_size = np.prod(win)
        u_I = I_sum / win_size
        u_J = J_sum / win_size

        cross = IJ_sum - u_J * I_sum - u_I * J_sum + u_I * u_J * win_size
        I_var = I2_sum - 2 * u_I * I_sum + u_I * u_I * win_size
        J_var = J2_sum - 2 * u_J * J_sum + u_J * u_J * win_size

It is not hard to see that `cross` is meant to be the covariance part, while `I_var` and `J_var` are the variance parts.

Expanding the covariance formula: $Cov(X, Y) = E[(X - E(X))(Y - E(Y))] = E[XY - XE(Y) - YE(X) + E(X)E(Y)]$

Each term of this expansion corresponds to a term of `cross`; since the convolution produces window sums rather than window means, every quantity below carries an extra factor of win_size:

  • IJ_sum ↔ E[XY]
  • u_J * I_sum ↔ E[XE(Y)]
  • u_I * J_sum ↔ E[YE(X)]
  • u_I * u_J * win_size ↔ E[E(X)E(Y)]

Expanding the variance formula in the same way: $Var(X) = E[(X - E(X))^2] = E[X^2 - 2XE(X) + E(X)^2]$

The variance terms match in the same way (shown here for `J_var`, with X standing for J; `I_var` is analogous), again up to the factor win_size:

  • J2_sum ↔ E[X^2]
  • 2 * u_J * J_sum ↔ E[2XE(X)]
  • u_J * u_J * win_size ↔ E[E(X)^2]
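If you want to convince yourself of these correspondences, here is a quick numerical check of mine on a single random 9×9 window, showing that `cross` and `I_var` equal win_size times the (biased) covariance and variance:

import torch

win_size = 9 * 9
x = torch.rand(win_size, dtype=torch.float64)
y = torch.rand(win_size, dtype=torch.float64)

# the window sums that the all-ones convolution would produce for this window
I_sum, J_sum = x.sum(), y.sum()
IJ_sum, I2_sum = (x * y).sum(), (x * x).sum()
u_I, u_J = I_sum / win_size, J_sum / win_size

cross = IJ_sum - u_J * I_sum - u_I * J_sum + u_I * u_J * win_size
I_var = I2_sum - 2 * u_I * I_sum + u_I * u_I * win_size

# textbook (biased) covariance and variance
cov_xy = ((x - x.mean()) * (y - y.mean())).mean()
var_x = ((x - x.mean()) ** 2).mean()

print(torch.allclose(cross, win_size * cov_xy))  # True
print(torch.allclose(I_var, win_size * var_x))   # True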