Dear SCT community
I have a rather large dataset (90 scans in total) on which to evaluate the effect of a vendor-provided GRE distortion correction. The scans come from Siemens scanners, so this is the "phase stabilisation" method, which counteracts the echo-time-dependent distortion in multi-echo scans.
To illustrate, here are four images from the same subject and session: the first echo (TE=3ms) and last echo (TE=16ms), with and without correction, same slice (though movement between the two acquisitions has not been corrected for), same windowing:
Echo 1, without correction:
Echo 6, without correction:
Echo 1, with correction:
Echo 6, with correction:
The images shown above are MT contrast; below is PD (with a later last echo time):
TE=3 without:
TE=24 without:
TE=3 with:
TE=24 with:
I might have been staring at these images for too long, but I can definitely see a qualitative difference. The correction does seem to preserve boundaries and definition, and to improve image quality. The images look less, for lack of a better word, fuzzy.
When I look at the whole image, rather than just the SC, this becomes more apparent.
I have tried to quantify this by segmenting the SC on every echo and then computing the CSA and eccentricity, but those do not show significant differences. There is an effect, though: the CSA decreases less with echo time when the correction is used, so I am confident I am not chasing a ghost.
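For reference, the CSA-vs-echo comparison I ran is roughly along these lines (a minimal sketch with nibabel/numpy rather than the actual pipeline; the file names are placeholders and the segmentations are assumed to be binary NIfTI masks):

```python
# Sketch: mean per-slice cross-sectional area (CSA) of the cord mask for each
# echo, to compare how CSA changes with TE with and without the correction.
# File names are placeholders; masks are assumed to be binary NIfTI files.
import nibabel as nib
import numpy as np

def mean_csa(seg_path):
    """Mean cord CSA (mm^2) across axial slices of a binary segmentation."""
    img = nib.load(seg_path)
    data = img.get_fdata() > 0.5                  # binarise the mask
    dx, dy = img.header.get_zooms()[:2]           # in-plane voxel size (mm)
    per_slice = data.sum(axis=(0, 1)) * dx * dy   # voxel count * pixel area
    return per_slice[per_slice > 0].mean()        # ignore empty slices

for echo in range(1, 7):
    csa_without = mean_csa(f"sub-01_mt_echo-{echo}_seg_nocorr.nii.gz")
    csa_with = mean_csa(f"sub-01_mt_echo-{echo}_seg_corr.nii.gz")
    print(f"echo {echo}: CSA without = {csa_without:.1f} mm^2, "
          f"with = {csa_with:.1f} mm^2")
```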
I think solidity might capture it, but I am not sure. The SC segmentation might also simply be too robust to be affected by such a subtle effect.
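For solidity (and the eccentricity above), what I have in mind is a per-slice shape computation along these lines; this is a sketch with scikit-image's regionprops, again with a placeholder file name and assuming a binary cord mask, not the exact SCT shape-metric implementation:

```python
# Sketch: per-slice eccentricity and solidity of the cord mask using
# scikit-image regionprops, averaged over slices containing the cord.
import nibabel as nib
import numpy as np
from skimage.measure import label, regionprops

def shape_metrics(seg_path):
    """Mean eccentricity and solidity of the largest region on each axial slice."""
    data = nib.load(seg_path).get_fdata() > 0.5
    ecc, sol = [], []
    for z in range(data.shape[2]):
        props = regionprops(label(data[:, :, z]))
        if not props:
            continue                               # skip empty slices
        region = max(props, key=lambda p: p.area)  # keep largest component
        ecc.append(region.eccentricity)
        sol.append(region.solidity)
    return np.mean(ecc), np.mean(sol)

ecc, sol = shape_metrics("sub-01_mt_echo-6_seg_corr.nii.gz")
print(f"eccentricity = {ecc:.3f}, solidity = {sol:.3f}")
```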
Does anyone have a suggestion on how to quantify this?