Problem creating own atlas with msct_multiatlas_seg

Hi Julien

I was trying to create an atlas using msct_multiatlas_seg, but it always ended with a NaN error when computing the PCA at the end.
I was using the following command:
msct_multiatlas_seg -path-data test -o testAtlas -sq-size 30 -axial-res 0.1333
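
My guess, for context: the NaN probably comes from normalizing slices that end up all-zero after the broken registration described below, since dividing a zero-variance slice by its standard deviation yields NaN, which then propagates into the PCA. A minimal illustration of that mechanism (not actual SCT code):

import numpy as np

slice_data = np.zeros((30, 30))  # an all-zero "registered" slice
normalized = (slice_data - slice_data.mean()) / slice_data.std()  # 0 / 0 -> NaN
print(np.isnan(normalized).all())  # True: the NaNs would then propagate into the PCA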

I dug a little deeper and saw that the problem lies in the co-registration in coregister_model_data. After

im_slice_reg, fname_src2dest, fname_dest2src = register_data(im_src=im_slice, im_dest=im_mean, param_reg=self.param_data.register_param, path_copy_warp=warp_dir)

the image im_slice_reg only contains data in the upper-left quarter; everything else is set to 0.
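
A quick way to see this pattern is to dump im_slice_reg to disk and inspect the quadrants with nibabel/numpy; something along these lines (the file name is just a placeholder):

import nibabel as nib
import numpy as np

# Placeholder file name: the registered slice dumped while debugging
img = nib.load('im_slice_reg.nii.gz')
data = np.asanyarray(img.dataobj).squeeze()

# Compare the upper-left quadrant against the rest of the slice
ny, nx = data.shape[:2]
upper_left = np.count_nonzero(data[:ny // 2, :nx // 2])
rest = np.count_nonzero(data) - upper_left
print('non-zero voxels: upper-left quadrant =', upper_left, ', rest =', rest)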

I also dug into that and found that the warping fields of the individual registration steps are not properly concatenated by sct_register_multimodal > sct_concat_transfo > isct_ComposeMultiTransform.
At the line
sct.run(['sct_concat_transfo', '-w', ','.join(warp_forward), '-d', 'dest.nii', '-o', 'warp_src2dest.nii.gz'], verbose)

warp_forward is ['warp_forward_0.nii.gz', 'warp_forward_1.nii.gz', 'warp_forward_2.nii.gz']
and each 'warp_forward_*.nii.gz' was, in my case, a zero vector field (probably because I had already pre-registered the data, although there might also be a problem there),
but 'warp_src2dest.nii.gz' contained 0s in the upper-left quarter, while everything else had the value 3.4028235e+38 (the maximum float32 value).
The anterior-posterior and left-right dimensions of 'warp_src2dest.nii.gz' and 'warp_forward_*.nii.gz' were the same, so it did not simply concatenate the image domains. I wonder what happens inside isct_ComposeMultiTransform.
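
One possible temporary workaround could be to zero out those out-of-domain voxels in the composed warp before it is applied; a rough sketch (the 1e+30 threshold, the file names and the in-place overwrite are assumptions on my side, not anything SCT does itself):

import nibabel as nib
import numpy as np

warp = nib.load('warp_src2dest.nii.gz')
field = np.asanyarray(warp.dataobj)

# 3.4028235e+38 is the largest float32 value; flag anything that large as "outside the domain"
out_of_domain = np.abs(field) > 1e30
print('out-of-domain voxels:', int(out_of_domain.sum()))

# Replace them with 0 so the warp is the identity there (a workaround, not a fix)
field[out_of_domain] = 0
nib.save(nib.Nifti1Image(field, warp.affine, warp.header), 'warp_src2dest_fixed.nii.gz')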

I tested this with two different SCT versions:

  • the current dev version (Fri Feb 8 17:36:15 2019 -0500)
  • 3.2.7

Both with numpy 1.12.0.

Is there an easy fix to this problem?

Best,
Antal

Hi Antal,

Thank you for these investigations. Short answer: there could be an easy way to fix the problem, but I would have to get back into this (very) old code, which is no longer maintained: as you know, we have moved to deep learning for GM segmentation.

I see two approaches (not mutually exclusive):

  • If you provide me with the example data you used to reproduce the bug, I could spend a bit of time reproducing it on my side and debug from there. I’m willing to do that if the second approach does not work for you.
  • Since you seem familiar with programming and have already spent time looking at the code and identifying potential problems, you could open a GitHub issue and propose a fix (or at least avenues towards one). If you document the potential cause of the problem well enough, we might be able to help without having to dive too deep into the code.

I hope that helps!
Julien