Preconditioned ADMM with nonlinear operator constraint

Martin Benning, Florian Knoll, Carola-Bibiane Schönlieb & Tuomo Valkonen

This is the code accompanying [1]. In order to run it, it is necessary to either execute

setpath

first, or to add the folders manually to the path.

Author: Martin Benning - mb941@cam.ac.uk

Last updated: 07.06.2016

Load coil sensitivity maps and brain phantom

We begin by loading an artificial brain phantom ugt and measured coil sensitivities b. The coil sensitivities were generated from measurements of a water bottle acquired with an 8-channel head coil array. The numerical phantom is based on [2]; details can be found in [1, Section 6.1]. Courtesy of Florian Knoll, NYU.

load('Data/B1Map_190.mat')

load('Data/digital_brain_phantom_images_2d.mat')

b = b1;

ugt = u_mri_flair/max(abs(u_mri_flair(:)));

nrcoils = size(b, 3);
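As a quick sanity check (an illustration only; the concrete numbers follow from the loaded data), we can verify that the phantom and the coil maps share the same in-plane size:

% Sanity check: phantom and coil maps must share the same in-plane size.
assert(isequal(size(ugt), [size(b, 1) size(b, 2)]))
fprintf('Image size: %d x %d, number of coils: %d\n', ...
    size(ugt, 1), size(ugt, 2), nrcoils)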

For comparison to later reconstructions, we visualise the modulus of the ground truth spin proton density and the coil sensitivities.

figure()

imagesc(abs(ugt), [0 1])

axis image

colormap(gray(512))

colorbar

drawnow

figure()

for i=1:nrcoils

subplot(2, 4, i), imagesc(abs(b(:, :, i)), [0 max(abs(b1(:)))]);

axis image

colorbar

colormap(gray(512))

end

drawnow

Initialise operators to create artificial k-space data

In order to create artificial k-space data from ugt and b, we need to initialise the forward operators G and F first. Here G realises Equation (11) of [1], whereas F performs a two-dimensional discrete Fourier transform with a circular shift from the coordinates [1, 1] to [95, 95].

G = coilsensop(size(ugt), nrcoils);

F = mridft(size(ugt), [95 95]);
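For intuition, one plausible realisation of such an operator in terms of fft2 and circshift is sketched below; this is an illustration only, and the actual mridft class may differ in its shift convention and normalisation.

% Illustrative sketch only: a unitary 2-D DFT whose circular shift moves
% the coordinate [1, 1] to [95, 95]; mridft may implement this differently.
sz = size(ugt);
Fsketch = @(u) reshape(fft2(circshift(reshape(u, sz), [94 94])), [], 1)/sqrt(prod(sz));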

We load a synthetic spiral sampling scheme samp that is sampled on a Cartesian grid.

load('Supplementary Files/spiralsampling.mat')

We further create a sub-sampling matrix S from samp and multiply it with F to create the sub-sampled Fourier operator SF.

S = speye(size(F, 1));

S = S(samp(:), :);

SF = S*F;
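The effective sampling rate can be read off by comparing the number of rows of SF and F; a small illustrative check:

% Fraction of k-space locations retained by the spiral sampling scheme.
fprintf('Sampling rate: %.1f %%\n', 100*size(SF, 1)/size(F, 1))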

Create artificial k-space data

Now we can create artificial k-space data by applying G and SF to ugt and b. First, we group ugt and b into one auxiliary variable aux1 and apply G to produce another auxiliary variable aux2.

aux1 = cat(3, ugt, b);

aux2 = G.feval(aux1(:));

Then we produce artificial, noise-free k-space data g for each coil by applying SF to each component of aux2.

g = zeros(nrcoils*size(SF, 1), 1);

for i=1:nrcoils

g(((i - 1)*size(SF, 1) + 1):(i*size(SF, 1))) = SF*aux2(((i - 1)*size(F, 2) + 1):(i*size(F, 2)));

end

Subsequently, we contaminate the noise-free k-space data g by adding Gaussian noise with standard deviation sigma = 0.05, and obtain our artificial k-space data f.

sigma = 0.05;

f = g + sigma*randn(size(g));
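To get a feeling for this noise level, we can compute the empirical signal-to-noise ratio of the data; an illustrative check only:

% Empirical SNR of the noisy k-space data in dB (illustration only).
fprintf('Measurement SNR: %.2f dB\n', 20*log10(norm(g)/norm(f - g)))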

Note that we reproduce the low noise-level results from [1] with this choice of sigma. In order to reproduce the high noise-level results, sigma has to be adapted here according to [1, Section 6.2].

Compute zero-filled reconstructions

For comparison purposes, we also compute a naive zero-filling reconstruction. We apply the Hermitian transpose of SF to the k-space data of each coil, reshape the result into image space, and store the mean over all coils as a new variable uzfmean.

uzf = zeros([size(G, 1) 1]);

for i=1:nrcoils

uzf(((i - 1)*size(SF, 2) + 1):(i*size(SF, 2))) = SF'*f(((i - 1)*size(SF, 1) + 1):(i*size(SF, 1)));

end

uzf = reshape(uzf, sizend(G, 1));

uzfmean = reshape(mean(uzf, 3), [size(F, 2) 1]);

We plot the computed zero-filled reconstructions to emphasise that significant artifacts are introduced when the modelling of the coil sensitivities is neglected.

figure()

for i=1:nrcoils

subplot(2, 4, i), imagesc(abs(uzf(:, :, i)), [0 max(abs(b1(:)))]);

axis image

colorbar

colormap(gray(512))

end

drawnow
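For completeness, the coil-averaged zero-filled reconstruction uzfmean can be visualised in the same fashion:

% Visualise the mean zero-filled reconstruction over all coils.
figure()
imagesc(abs(reshape(uzfmean, sizend(F, 2))))
axis image
colormap(gray(512))
colorbar
drawnow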

Set up gradient operator for reconstruction method

For regularisation purposes we need to initialise a two-dimensional finite-difference approximation Grad of the gradient operator, with stepsize one in both dimensions.

Grad = buildgradient(sizend(F, 2), ones(2, 1));
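To illustrate what such an operator looks like, a minimal sparse forward-difference gradient could be assembled as follows. This is a sketch under the assumption of forward differences with zero rows at the boundary; the actual buildgradient routine may use different conventions.

% Sketch: a sparse forward-difference gradient for an m x n image, with
% zero rows at the boundary (Neumann-type handling). The actual
% buildgradient implementation may differ.
sz = sizend(F, 2); m = sz(1); n = sz(2);
Dy = spdiags([-ones(m, 1) ones(m, 1)], [0 1], m, m); Dy(m, :) = 0;
Dx = spdiags([-ones(n, 1) ones(n, 1)], [0 1], n, n); Dx(n, :) = 0;
GradSketch = [kron(speye(n), Dy); kron(Dx, speye(m))]; % stacks d/dy and d/dx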

The gradient has to be applied to the image and to all coil sensitivities; see the equation following Equation (11) in [1]. Together with the operator G it forms the operator required in Algorithm 1. Since F already denotes the Fourier transform, we denote this non-linear operator by A.

Gradall = kron(speye(nrcoils + 1), Grad);

A = [G; Gradall];
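Because Grad is an ordinary sparse matrix (as the kron construction above shows), the blockwise action of Gradall can be verified directly; a small illustrative check:

% Check: kron(I, Grad) applied to the stacked vector equals applying Grad
% column-wise to the components (image and coil sensitivities).
v = aux1(:);
err = norm(Gradall*v - reshape(Grad*reshape(v, [], nrcoils + 1), [], 1));
fprintf('Blockwise gradient consistency error: %.2e\n', err)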

Set up proximity operators for reconstruction method

The next requirement for the application of Algorithm 1 is the initialisation of the proximity operators corresponding to the functionals H and J. As these consist of multiple partial proximity operators, we also initialise them that way. As described in the paper, we set H = 0, so its proximity operator is trivial. Since we have nrcoils = 8 in our case, we initialise 9 trivial proximity operators of the correct size (one for the image and one for each coil sensitivity) as follows.

H0 = proxoperator(sizend(F, 2));

H1 = proxoperator(sizend(F, 2));

H2 = proxoperator(sizend(F, 2));

H3 = proxoperator(sizend(F, 2));

H4 = proxoperator(sizend(F, 2));

H5 = proxoperator(sizend(F, 2));

H6 = proxoperator(sizend(F, 2));

H7 = proxoperator(sizend(F, 2));

H8 = proxoperator(sizend(F, 2));

We concatenate them to one proximity operator for H. Note that in future releases, operations like H(1:9) = proxoperator( ... ) are planned to be supported.

H = [H0, H1, H2, H3, H4, H5, H6, H7, H8];
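Until then, the repetition above can be avoided with a cell array; a sketch, assuming the proximity operator objects concatenate as in the line above:

% Equivalent, more compact construction of H (sketch).
Hcell = cell(1, nrcoils + 1);
for k = 1:(nrcoils + 1)
    Hcell{k} = proxoperator(sizend(F, 2));
end
H = [Hcell{:}];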

We now do the same for the proximity operator of J. According to the paper, J splits into partial functionals: data fidelity terms of the form (lambda/2)*||S*F*w_i - f_i||^2 for the coils i = 1, ..., 8, an isotropic one-two norm (total variation) weighted by alpha0 for the image gradient, and two-norm terms weighted by alphaj for the gradients of the coil sensitivities (cf. [1, Section 6.1]). We initialise the proximity operators for the data fidelity terms first.

J0 = diagorthprox(S, F);

J1 = diagorthprox(S, F);

J2 = diagorthprox(S, F);

J3 = diagorthprox(S, F);

J4 = diagorthprox(S, F);

J5 = diagorthprox(S, F);

J6 = diagorthprox(S, F);

J7 = diagorthprox(S, F);

Next, we initialise the proximity operator for the isotropic one-two norm of the image gradient.

J8 = onetwonormprox([size(F, 2) 2]);
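For intuition, the proximity operator of the weighted one-two norm performs a pixelwise soft-shrinkage of the gradient magnitude. A minimal stand-alone sketch of this mapping (not the internals of the onetwonormprox class):

% Sketch: prox of alpha*||.||_{2,1} applied to a two-column gradient field V;
% each row (pixel) is shrunk towards zero by alpha in its Euclidean norm.
prox21 = @(V, alpha) bsxfun(@times, V, ...
    max(1 - alpha./max(sqrt(sum(abs(V).^2, 2)), alpha), 0));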

We conclude with the proximity operators for the gradients of the eight coil sensitivities.

J9 = twonormprox([size(F, 2)*2 1]);

J10 = twonormprox([size(F, 2)*2 1]);

J11 = twonormprox([size(F, 2)*2 1]);

J12 = twonormprox([size(F, 2)*2 1]);

J13 = twonormprox([size(F, 2)*2 1]);

J14 = twonormprox([size(F, 2)*2 1]);

J15 = twonormprox([size(F, 2)*2 1]);

J16 = twonormprox([size(F, 2)*2 1]);

As for H, we concatenate all the individual proximity operators to obtain one proximity operator for J.

J = [J0, J1, J2, J3, J4, J5, J6, J7, J8, J9, J10, J11, J12, J13, J14, J15, J16];

Associate data to prox operators

Before initialising Algorithm 1, we equip the proximity terms that correspond to the data fidelities with the noisy k-space data f.

J0.setdata(f(1:size(SF, 1)))

J1.setdata(f((size(SF, 1) + 1):(2*size(SF, 1))))

J2.setdata(f((2*size(SF, 1) + 1):(3*size(SF, 1))))

J3.setdata(f((3*size(SF, 1) + 1):(4*size(SF, 1))))

J4.setdata(f((4*size(SF, 1) + 1):(5*size(SF, 1))))

J5.setdata(f((5*size(SF, 1) + 1):(6*size(SF, 1))))

J6.setdata(f((6*size(SF, 1) + 1):(7*size(SF, 1))))

J7.setdata(f((7*size(SF, 1) + 1):(8*size(SF, 1))))
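Alternatively, these assignments can be expressed as a loop; a sketch, assuming the proximity operator objects are handle classes so that setdata acts in place:

% Equivalent loop over the data-fidelity proximity operators (sketch).
Jfid = {J0, J1, J2, J3, J4, J5, J6, J7};
for i = 1:nrcoils
    Jfid{i}.setdata(f(((i - 1)*size(SF, 1) + 1):(i*size(SF, 1))))
end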

Initialise simplified version of Algorithm 1

As we now have all the necessary ingredients, we proceed to initialise an object solver of the simplified version of Algorithm 1, in which the operator in the constraint (A in terms of the variable name) is non-linear in the primal variables while the constraint remains linear in the auxiliary variable.

solver = NPADMMsimple(A, H, J);

We set the parameters tau and delta to the fixed value 1/3, which experimentally appears to guarantee convergence; otherwise we would need to compute the operator norm of the linearised A in each iteration. In addition, we fix the (maximum) number of iterations to 1500 as described in [1].

solver.param.tau = 1/3;

solver.param.delta = 1/3;

solver.param.maxiter = 1500;

Set initial values

As described in [1, Section 6.2], we initialise the primal variable u with the constant one-vector in all nine components (the image and all eight coil sensitivities). All other initial variables of the algorithm (the auxiliary variable and the Lagrange multipliers) are set to zero, which is what solver.reinitialise does before we overwrite u.

solver.reinitialise;

solver.var.u = ones(9*size(F, 2), 1);

Set regularisation parameters

We fix the regularisation parameters to lambda = 0.0621, alpha0 = 0.0062 and alphaj = 0.9317 for j = 1, ..., 8, in order to reproduce the low noise-level results of [1]. To reproduce the high noise-level results, the parameters have to be adapted according to [1, Section 6.2].

lambda = 0.0621;

alpha0 = 0.0062;

alphaj = 0.9317;

Finally, we modify the regularisation parameters of the proximity operator objects.

J0.setalpha(lambda)

J1.setalpha(lambda)

J2.setalpha(lambda)

J3.setalpha(lambda)

J4.setalpha(lambda)

J5.setalpha(lambda)

J6.setalpha(lambda)

J7.setalpha(lambda)

J8.setalpha(alpha0)

J9.setalpha(alphaj)

J10.setalpha(alphaj)

J11.setalpha(alphaj)

J12.setalpha(alphaj)

J13.setalpha(alphaj)

J14.setalpha(alphaj)

J15.setalpha(alphaj)

J16.setalpha(alphaj)

Solve nonlinear coil-sensitivity estimation problem

Now we run Algorithm 1 by executing

solver.solve;

Iteration nr. 1, Sensitivity: 0.66667
Iteration nr. 2, Sensitivity: 0.1658
Iteration nr. 3, Sensitivity: 0.12415
Iteration nr. 4, Sensitivity: 0.11722
Iteration nr. 5, Sensitivity: 0.11119
...
Iteration nr. 240, Sensitivity: 0.003456
Iteration nr. 241, Sensitivity: 0.003436
Iteration nr. 242, Sensitivity: 0.0034162

(Output truncated: the sensitivity decreases steadily from 0.66667 at iteration 1 to 0.0034162 at iteration 242, where the run terminates.)

We extract the reconstructed variables via

recon = solver.var.u;

and store the spin proton density as urec via

urec = reshape(recon(1:size(F, 2)), sizend(F, 2));

and the coil sensitivities as coilsens via

coilsens = zeros([sizend(F, 2) nrcoils]);

for i=1:nrcoils

coilsens(:, :, i) = reshape(recon((i*size(F, 2) + 1):((i + 1)*size(F, 2))), sizend(F, 2));

end

Plot results

To conclude, we visualise the results.

figure()

imagesc(abs(urec), [0 1])

axis image

colorbar

colormap(gray(512))

drawnow

figure()

for i=1:nrcoils

subplot(2, 4, i), imagesc(abs(coilsens(:, :, i)), [0 max(abs(b1(:)))]);

axis image

colorbar

colormap(gray(512))

drawnow

end
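Beyond visual inspection, the reconstruction can be compared quantitatively against the ground truth. The following check is an illustration only and not part of the evaluation in [1]:

% Relative errors with respect to the ground-truth magnitude (sketch).
relerr = @(x, y) norm(abs(x(:)) - abs(y(:)))/norm(abs(y(:)));
fprintf('Relative error (proposed): %.4f\n', relerr(urec, ugt))
fprintf('Relative error (zero-filled mean): %.4f\n', relerr(uzfmean, ugt))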

References

[1] Benning, Martin, Florian Knoll, Carola-Bibiane Schönlieb, and Tuomo Valkonen. "Preconditioned ADMM with nonlinear operator constraint." Accepted for publication in the IFIP TC 7 2015 conference proceedings (2016). Preprint: arXiv:1511.00425 (2015).

[2] Aubert-Broche, Berengere, Alan C. Evans, and Louis Collins. "A new improved version of the realistic digital brain phantom." NeuroImage 32, no. 1 (2006): 138-145.