The datasets used in SCT?

Hi, SCT provides a large number of models; it's awesome!
Are these models trained on public datasets? If so, could you provide information about these datasets (for example, the paper describing each dataset)? I think this would be very helpful for research. Thanks!

Hi,

These models are not trained with public data. The paper associated with each model is normally given in its help message, for example:

julien-macbook:~ $ sct_deepseg_sc 

--
Spinal Cord Toolbox (git-HEAD-b918050113a12dfd7a77d808150f9e5d67fe85f8)

sct_deepseg_sc 
--

usage: sct_deepseg_sc -i <file> -c {t1,t2,t2s,dwi} [-h] [-centerline {svm,cnn,viewer,file}] [-file_centerline <str>]
                      [-thr <float>] [-brain {0,1}] [-kernel {2d,3d}] [-ofolder <str>] [-o <file>] [-r {0,1}] [-v <int>]
                      [-qc <str>] [-qc-dataset <str>] [-qc-subject <str>] [-igt <str>]

Spinal Cord Segmentation using convolutional networks. Reference: Gros et al. Automatic segmentation of the spinal cord and
intramedullary multiple sclerosis lesions with convolutional neural networks. Neuroimage. 2019 Jan 1;184:901-915.
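
The same convention should hold for the other tools: running a command with -h prints its description, which normally includes the associated reference. As an illustrative sketch (output omitted, and the exact wording of each description may differ):

julien-macbook:~ $ sct_deepseg_gm -h
julien-macbook:~ $ sct_propseg -h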

Thanks! :smiley: