Automatic segmentation of organs at risk in radiation therapy for head and neck carcinoma using multi-scale fusion and attention mechanisms
Abstract
Objective: To develop an automatic image segmentation method for organs at risk (OARs) in radiotherapy for head and neck carcinoma based on multi-scale fusion and attention mechanisms.
Methods: Building on the U-Net convolutional neural network, spatial and channel squeeze-and-excitation (scSE) attention blocks were combined with the U-Net model to strengthen its feature representation and to increase the weights of the feature channels most relevant to the segmentation task. The multi-scale feature fusion algorithm proposed in this paper was introduced in the encoding stage of the network to compensate for the feature information lost during downsampling. The Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD) were used as the performance criteria for comparing different deep learning models.
Results: Twenty-two head and neck OARs were segmented on the StructSeg 2019 dataset of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Compared with existing methods, the proposed method improved the average DSC by 3%-6%; over the 22 head and neck OARs, the average DSC was 78.90% and the average 95% HD was 6.23 mm.
Conclusion: The U-Net convolutional neural network based on multi-scale fusion and attention mechanisms achieves better segmentation accuracy for head and neck OARs and is expected to improve physicians' working efficiency in clinical practice.
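The attention module named in the Methods is the spatial and channel squeeze-and-excitation (scSE) block. The paper's exact implementation is not given here, so the following is only a minimal PyTorch sketch of a standard scSE block; the class name, the reduction ratio of 16, and the element-wise maximum used to combine the two branches are illustrative assumptions.

```python
# Minimal sketch of a spatial and channel squeeze-and-excitation (scSE) block,
# assuming PyTorch and a (N, C, H, W) tensor layout. Not the authors' code.
import torch
import torch.nn as nn


class SCSEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel squeeze-and-excitation: global average pool -> bottleneck
        # 1x1 convolutions -> per-channel sigmoid gate.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial squeeze-and-excitation: 1x1 convolution -> per-pixel sigmoid gate.
        self.sse = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Recalibrate the feature map along both the channel and spatial axes,
        # then combine the two branches (element-wise maximum here; addition
        # is also common in the literature).
        return torch.max(x * self.cse(x), x * self.sse(x))
```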
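The multi-scale feature fusion block added in the encoding stage is the authors' own contribution, and its structure is not described in the abstract. Purely as an illustration of the general idea of compensating for information lost during downsampling by aggregating context at several receptive fields, the sketch below fuses parallel dilated 3x3 convolutions; the class name, dilation rates, and residual connection are assumptions, not the paper's design.

```python
# Illustrative multi-scale fusion block (an assumption, not the paper's design):
# parallel dilated convolutions concatenated and projected back to the input width.
import torch
import torch.nn as nn


class MultiScaleFusion(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding keeps the spatial size fixed.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # 1x1 projection back to the original channel count.
        self.project = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the branch outputs along the channel axis and fuse them;
        # a residual connection preserves the original encoder features.
        fused = self.project(torch.cat([b(x) for b in self.branches], dim=1))
        return fused + x
```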

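The two reported metrics, DSC and 95% HD, can be computed from binary masks roughly as sketched below using NumPy and SciPy; the function names are illustrative, and voxel-spacing handling and empty-mask edge cases are simplified.

```python
# Sketch of the Dice similarity coefficient (DSC) and 95th-percentile
# Hausdorff distance (95% HD) for binary segmentation masks.
import numpy as np
from scipy import ndimage


def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    # DSC = 2|A ∩ B| / (|A| + |B|)
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0


def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    # Surface voxels = mask minus its binary erosion.
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    # Distance from each surface voxel to the nearest surface voxel of the
    # other mask, in physical units given by the voxel spacing.
    dt_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    d_pred_to_gt = dt_gt[pred_surf]
    d_gt_to_pred = dt_pred[gt_surf]
    # 95th percentile of the symmetric surface distances.
    return float(np.percentile(np.hstack([d_pred_to_gt, d_gt_to_pred]), 95))
```

Per-organ values of these two metrics would then be averaged over the 22 OARs to obtain figures such as those reported in the Results (average DSC 78.90%, average 95% HD 6.23 mm).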