Improving segmentation in large datasets?

Dear SCT-team,

We are conducting a multicenter study with a fairly large dataset (~150 participants) and are using SCT to measure C2/C3 CSA on brain images as suggested here: How to compute CSA from brain data.

For various reasons, data quality is sometimes lacking, which makes accurate segmentation more difficult and necessitates manual segmentation edits. We’ve tried using T1, T2, propseg, and smoothing with different parameters, but although some segmentations improve when we change parameters, others worsen.

Apart from improving input image quality, are there any general recommendations for how to improve segmentation or address segmentation issues in larger datasets? As mentioned, we haven’t found a “fix all” solution for these datasets that surpasses the output we get when using Julien’s suggestion from the post mentioned above. Doing a lot of manual segmentation edits in FSLeyes is of course possible, but our concern is that this may introduce more variation and also make it harder to reproduce possible findings.

Kind regards,

Spaulding Neuroimaging Lab

PS: I have not attached any code or screenshots as we’re hoping for more general guidance. Will be happy to post this, though, if it would be helpful.

Hi @Carl !

Thank you for reaching out. Amongst all the parameters that can be tweaked on SCT’s functions, the most critical parameter is the image itself, so it would be very helpful if you could show us 2-3 representative images (all the available contrasts, so we can decide which ones would be the most relevant to use).

Hi Julien, sure, and thank you for replying! I’m attaching screenshots of 2 subjects and their respective T1 and T2 QC reports. The terminal input should be visible in each image. We used the viewer to manually designate the spinal cord center in the T2 contrast for Example1, as both cnn and svm appeared to be struggling (taking a very long time), although this means more manual intervention. We would be very grateful for any ideas/suggestions, and I’ll be glad to provide more material if required.

Kind regards,


Thank you, this is helpful. Overall, I think the problem is related to the T2w images, which show very little CSF around the cord (and the model was mostly trained on images with visible CSF). I’m wondering whether running the model twice, with flags -c t1 and -c t2, and then summing the resulting segmentations wouldn’t help. I would also like to explore options to improve centerline detection.

Maybe at this point if you send me 2-3 representative subjects privately, I’d be able to find a custom solution for you.

Ok! That sounds great, would an emailed Dropbox link to de-identified data be acceptable? I should add that we’ve had issues with the NIfTI headers and have been using the sct_image -set-sform-to-qform option to try to correct this; I believe the headers may be corrupt in the de-identified data we plan to provide.

Yes, this is usually how researchers share their data with me.

Sorry for the late reply. I have reviewed some of the images you sent and identified a few workarounds. They are organized below under a dedicated section per issue.

T2w images with missing CSF

The T2w images used to train the deep learning model mostly contained visible CSF surrounding the spinal cord. That is not the case in some of your images, so the ‘t2’ model does not work as well:

In such cases I would suggest using the ‘t1’ contrast flag when calling SCT’s function, which might provide better results (although it is not a ‘clear cut’):

Note: We are currently working on an improved model for SC segmentation which should accommodate images like these ones, so stay tuned!

Failed centerline detection

A more obvious issue happens when the spinal cord centerline is not properly detected, which could lead to results like this one:

In this case, I also suggest trying a different contrast flag, which could help with the detection, and hence with the subsequent segmentation. Example:

If that still doesn’t work, you could try initializing the segmentation with a manual centerline, by adding the flag -centerline viewer. Also see this tutorial.

Hope that helps!

Thank you, we’ll definitely try some of these workarounds! Is it generally recommended to use the T2 contrast rather than the T1? Looking forward to trying out the new segmentation model when it becomes available.

I usually prefer the T2w SPACE over the T1w MPRAGE, as I find the T2w cleaner in terms of motion artifacts, cord/CSF contrast, and sharpness of the cord boundary (higher resolution).