Medical image segmentation based on semi-supervised learning has attracted extensive attention because annotated medical images are expensive and scarce, and effectively exploiting unlabeled data remains a challenging task. To make full use of unlabeled data while addressing the empirical distribution mismatch between labeled and unlabeled data, this paper proposes a two-stage segmentation model built on a multi-scale feature fusion network. In the first stage, a teacher model is trained on labeled data; in the second stage, a student model is jointly trained on both labeled and unlabeled data. To improve the robustness of the teacher model, a copy-paste augmentation strategy is employed to increase data diversity. To alleviate the misguidance caused by erroneous pseudo-labels generated in the second stage, confident learning based on the classification noise process assumption is introduced, reducing the potential bias induced by pseudo-labels. Extensive experiments and ablation studies on two publicly available organ datasets demonstrate that the proposed model achieves high-precision segmentation.
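As a rough illustration of the two mechanisms named above, the sketch below shows a copy-paste mixing step and a confident-learning-style pseudo-label mask in plain NumPy. All function names, tensor shapes, the rectangular-crop mixing, and the quantile-based per-class threshold are our own illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def copy_paste_mix(labeled_img, labeled_gt, unlabeled_img, pseudo_label, ratio=0.5):
    """Copy-paste augmentation (sketch): paste a random rectangular crop of a
    labeled image into an unlabeled image, mixing the ground truth with the
    teacher's pseudo-label over the same region."""
    h, w = labeled_img.shape[-2:]
    ch, cw = int(h * ratio), int(w * ratio)
    y = np.random.randint(0, h - ch + 1)
    x = np.random.randint(0, w - cw + 1)
    mixed_img, mixed_lbl = unlabeled_img.copy(), pseudo_label.copy()
    mixed_img[..., y:y + ch, x:x + cw] = labeled_img[..., y:y + ch, x:x + cw]
    mixed_lbl[..., y:y + ch, x:x + cw] = labeled_gt[..., y:y + ch, x:x + cw]
    return mixed_img, mixed_lbl

def confident_mask(teacher_probs, pseudo_label, quantile=0.2):
    """Confident-learning-style filtering (sketch): under the classification
    noise process assumption, a pseudo-labeled voxel whose predicted
    probability for its assigned class falls below a per-class
    self-confidence threshold is treated as a likely label error and
    excluded from the student's loss.
    teacher_probs: (C, H, W) softmax output; pseudo_label: (H, W) argmax."""
    keep = np.ones_like(pseudo_label, dtype=bool)
    for c in np.unique(pseudo_label):
        self_conf = teacher_probs[c][pseudo_label == c]  # confidence on class-c voxels
        thresh = np.quantile(self_conf, quantile)        # per-class cutoff
        keep[(pseudo_label == c) & (teacher_probs[c] < thresh)] = False
    return keep  # True where the pseudo-label is trusted
```

In a two-stage pipeline of this kind, the returned boolean mask would typically weight (or zero out) the per-voxel terms of the student's supervised loss on pseudo-labeled data, so that low-confidence pseudo-labels do not propagate their errors into the student model.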