Error when running sct_get_centerline

Thank you very much for your help, Julien. I shared the data via cloudstore.

I received it, but it is password-protected.

The password is Juliencohen-2020

Please let me know whether it works or not, in case I need to re-send the data.

Working!

@ihattan I’ve looked at the data, and they are superb. The plan is to generate a deep learning model specific to your data and make it available in a new release of SCT, so you will be able to use it. It should take ~1 week.

Julien, I really appreciate your great help and support.

Thank you very much.

Cheers,

Ibrahim Hattan

Hi @ihattan,

What has been done:

  • We used a deep learning model trained on ex vivo T1w data (0.08 mm isotropic) to infer spinal cord and grey matter segmentations on your DWI data.
  • The results are encouraging, but not great, because of the difference in contrast/image appearance between the data used for the original model training and your new data (DWI), see here.

What we plan to do:

  • Manually segment a few slices (~40 slices across the two images you sent) based on the model predictions.
  • Use the pre-trained model and fine-tune its parameters using the new ground truth generated on your data (see the sketch after this list).
  • Test the newly generated model.
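For context, SCT’s deepseg models are trained with the ivadomed framework; a fine-tuning run along these lines could look like the minimal sketch below. The config file name, model path, and transfer_learning settings are illustrative assumptions, not the exact setup used here.

    # Install the ivadomed training framework
    pip install ivadomed

    # A config file (here hypothetically config.json) would point to the new
    # ground-truth dataset and enable transfer learning from the pre-trained
    # model, e.g. (assumed settings):
    #   "transfer_learning": {
    #       "retrain_model": "path/to/pretrained_model.pt",
    #       "retrain_fraction": 1.0
    #   }

    # Fine-tune the pre-trained model on the new ground truth
    ivadomed --train -c config.json

    # Evaluate the fine-tuned model on held-out slices
    ivadomed --test -c config.json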

I will message you next Monday morning, Hobart time, to give you updates.

Cheers,
Charley

Hi @ihattan,

Just a quick Monday update: we are still working on generating some ground truth to fine-tune a model for your data. It should not be too long now (1 or 2 more days to generate the GT, then model generation).

I will make sure to keep you posted.
Cheers,
Charley

Hi @ihattan,
I will be working on your model today and tomorrow, and will let you know how it goes by the end of the week.
Apologies for the delay.
Cheers,
Charley

The newly trained model reaches a Dice score of 80-85%! We will manually segment 40 more slices and hope to reach a 90% Dice score before putting the model into production.
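For reference, the Dice overlap between a model prediction and a manual segmentation can be computed with SCT’s sct_dice_coefficient; the file names below are placeholders:

    # Dice coefficient between the manual ground truth and the model prediction
    # (file names are hypothetical placeholders)
    sct_dice_coefficient -i seg_manual.nii.gz -d seg_model.nii.gz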
Regards,
Charley

Hi Charley,

Thank you very much for your help and continued support.

OK, almost there. Many thanks once again.

Cheers,

Ibrahim

Hi @ihattan, just following up on this issue. Has this been resolved with the new model?

Hi @jcohenadad,

I’ll share the outcome with you soon, with screenshots of the results.

Many thanks for your help and continued support.

Cheers,

Ibrahim

Hi @jcohenadad and @Charley_Gros,

Please find attached screenshots of the results for the new model…

The command used here is available in the latest version, SCT v5.2.0.

It was used as instructed by @Charley_Gros:

sct_deepseg -install-task seg_mice_gm-wm_dwi

… and can be used as follows:

sct_deepseg -i NIFTI_IMAGE -task seg_mice_gm-wm_dwi
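To visually check the result, the output segmentation can be overlaid on the input image, e.g. with FSLeyes. This is a sketch: the _seg output name is an assumption, not necessarily the actual file produced by this task.

    # Overlay the predicted segmentation (red, 50% opacity) on the input image
    # (the _seg suffix is an assumed output name)
    fsleyes NIFTI_IMAGE.nii.gz NIFTI_IMAGE_seg.nii.gz -cm red -a 50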

1- DWI data before processing…

2- GM segmentation result…

3- WM-GM segmentation result…

Many, many thanks for this robust model, and thank you to all the SCT team for their continued help and support to the community…

Cheers,

Ibrahim

Fantastic!!! :tada: Thank you so much @Charley_Gros for these efforts, and @ihattan for the feedback; it is greatly appreciated.

I would like to clarify something on this thread: it all started with @ihattan not being able to analyse human ex vivo data (see: Error when running sct_get_centerline).

Then, @Charley_Gros trained a new deepseg model, called seg_mice_gm-wm_dwi (see: Error when running sct_get_centerline). So this model was presumably trained on mouse data (judging from its name). @ihattan, can you please confirm that the data you sent to Charley were indeed mouse data and not human data?

If so, what is the rationale for segmenting human ex vivo data using a mouse ex vivo model?

Hi @jcohenadad & @Charley_Gros,

Thank you very much, @jcohenadad, for getting back with this valuable information. The data were DW human ex vivo, not mouse. I think there was simply a mistake in naming the model for mice instead of humans. Please note that the T1 ex vivo template was generated from T1 data of the same samples.

I hope this clarifies your enquiry.

Many, many thanks in advance for your effort and time in solving these issues.

Cheers,

Ibrahim

Thank you for the quick response, Ibrahim.

OK, it all makes sense now. I’ve further confirmed it by looking at the dimensions of the data (see here). We will fix the name of the model then.
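For reference, the dimensions can be printed from the NIfTI header, e.g. with sct_image (the file name is a placeholder):

    # Print the image header, including matrix dimensions and voxel size
    sct_image -i data.nii.gz -header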

@ihattan, since we’re at it: what is the contrast of the image that was used to train the model? You mention DW data, but a DW acquisition is 4D (i.e., it includes DW volumes and b=0 volumes, which are more T2w). Was the b=0 image used? (It should have been, because it is the one that shows the best grey/white matter contrast.)
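For reference, SCT can split the 4D acquisition into its b=0 and DW parts (file names below are placeholders):

    # Separate b=0 and DW volumes from the 4D acquisition and average each group
    # (file names are hypothetical placeholders)
    sct_dmri_separate_b0_and_dwi -i dmri.nii.gz -bvec bvecs.txt
    # The averaged b=0 output shows the more T2w-like contrast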

I did the ground-truth segmentation for WM on the b=0 image. @Charley_Gros, which contrast was used to train the model?
Could you please provide more details for @jcohenadad?

Many thanks @jcohenadad & @Charley_Gros for your time…

Cheers,
Ibrahim

So it is very likely that Charley used the b=0 image. In that case, I would not call it a “DW model” but rather a T2 model.