We are conducting a multicenter study with a fairly large dataset (~150 participants) and are using SCT to measure C2/C3 CSA on brain images, as suggested here: How to compute CSA from brain data.
For various reasons, data quality is sometimes lacking, which makes accurate segmentation harder and necessitates manual segmentation edits. We have tried T1 and T2 inputs, propseg, and smoothing with different parameters, but while some segmentations improve when we change parameters, others worsen.
Apart from improving input image quality, are there any general recommendations for improving segmentation, or for addressing segmentation issues in larger datasets? As mentioned, we have not yet found a “fix-all” solution for these datasets that surpasses the output we get with Julien’s suggestion from the post linked above. Doing a lot of manual segmentation edits in FSLeyes is of course possible, but our concern is that this may introduce more variation and also make any findings harder to reproduce.
Spaulding Neuroimaging Lab
PS: I have not attached any code or screenshots, as we’re hoping for more general guidance. Will be happy to post these, though, if that would be helpful.