Improving segmentation in large datasets?

Dear SCT-team,

We are trying to conduct a multicenter study with a fairly large data set (~150 participants) and are using SCT to help us measure C2/C3 CSA on brain images as suggested here: How to compute CSA from brain data.

For various reasons, data quality is sometimes lacking, which makes accurate segmentation more difficult and necessitates manual segmentation edits. We've tried the T1 and T2 contrasts, propseg, and smoothing with different parameters, but while some segmentations improve when we change parameters, others worsen.

Apart from improving input image quality, are there any general recommendations for how one can improve segmentation or address segmentation issues in larger datasets? As mentioned, we haven't yet found a "fix all" solution for these datasets that surpasses the output we get with Julien's suggestion from the post mentioned above. Doing a lot of manual segmentation edits in FSLeyes is of course possible, but our concern is that this may introduce more variation and make possible findings harder to reproduce.

Kind regards,

Spaulding Neuroimaging Lab

P.S. I have not attached any code or screenshots as we're hoping for more general guidance. Will be happy to post this, though, if it would be helpful.

Hi @Carl !

Thank you for reaching out. Amongst all the parameters that can be tweaked on SCT’s functions, the most critical parameter is the image itself, so it would be very helpful if you could show us 2-3 representative images (all the available contrasts, so we can decide which ones would be the most relevant to use).

Hi Julien, sure, and thank you for replying! I'm attaching screenshots of 2 subjects and their respective T1 and T2 QC reports; the terminal input should be visible in each image. For Example1, we used the viewer to manually designate the spinal cord center in the T2 contrast, since both cnn and svm appeared to be struggling (taking a very long time), although this means more manual intervention. We would be very grateful for any ideas/suggestions, and I'll be glad to provide more material if required.

Kind regards,


Thank you, that is helpful. Overall, I think the problem is related to the T2w images, which show very little CSF around the cord (and the model was mostly trained on images where CSF is visible). I'm wondering if running the model twice, with flags -c t1 and -c t2, and then summing the resulting segmentations would help. I would also like to explore options to improve centerline detection.
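Concretely, the summing step is just a voxel-wise union of the two binary masks followed by binarization. A quick numpy sketch with toy arrays standing in for the two segmentations (in practice you would load the NIfTI masks from the -c t1 and -c t2 runs, already resampled to the same space):

```python
import numpy as np

# Toy binary masks standing in for the t1- and t2-based cord segmentations.
seg_t1 = np.array([[0, 1, 1],
                   [0, 0, 1]], dtype=np.uint8)
seg_t2 = np.array([[1, 1, 0],
                   [0, 0, 1]], dtype=np.uint8)

# Sum the two masks, then binarize: any voxel labelled cord in either
# segmentation is kept in the fused mask (a voxel-wise union).
seg_union = ((seg_t1 + seg_t2) > 0).astype(np.uint8)
```

With SCT itself, the equivalent on-disk operation should be possible with sct_maths (its -add and -bin options), so no Python scripting is strictly needed.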

Maybe at this point if you send me 2-3 representative subjects privately, I’d be able to find a custom solution for you.

Ok! That sounds great. Would an emailed Dropbox link to de-identified data be acceptable? I should add that we've had issues with the NIfTI headers and have been using the sct_image -set-sform-to-qform command to try to correct this; the headers may still be corrupt in the de-identified data we plan to provide.

Yes, this is usually how researchers share their data with me.