TN-SCUI2020 Challenge: A Multi-Task Segmentation and Classification Network


Today I will share the complete implementation of multi-task binary segmentation and binary classification for thyroid ultrasound nodules. To make the whole workflow easier to follow, I have organized it into ordered steps, with detailed results given for each step. If you are interested, give it a try yourself.

1. Ultrasound Image Analysis and Preprocessing

(1) The 3,644 original ultrasound nodule images, together with their annotation masks and the training label file, can be downloaded from the official challenge website. The test set, 910 images in total, is now also available there.

(2) Since the original images come in different sizes, all images are resized to a uniform 512x512.

(3) All 3,644 cases are used for training. Data augmentation could be applied to make the model more robust, but no augmentation is performed here; segmentation and classification are trained directly on the original data.

(4) The original images and the ground-truth masks also need to be normalized, uniformly to the range (0, 1); a minimal sketch of steps (2) and (4) is given below.
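A minimal sketch of the resizing and normalization steps, assuming OpenCV is used to read grayscale images and masks (the file-path arguments and the division by 255 are illustrative assumptions; the original post does not specify the exact normalization):

import cv2
import numpy as np

def preprocess(image_path, mask_path, size=(512, 512)):
    # read the ultrasound image and its ground-truth mask as grayscale
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    # resize everything to a uniform 512x512; nearest neighbour keeps the mask binary
    image = cv2.resize(image, size, interpolation=cv2.INTER_LINEAR)
    mask = cv2.resize(mask, size, interpolation=cv2.INTER_NEAREST)
    # normalize both to the range (0, 1)
    image = image.astype(np.float32) / 255.
    mask = (mask > 0).astype(np.float32)
    return image, mask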

2. The MutilTaskVNet2d Segmentation and Classification Network

(1) In previously shared posts, the segmentation and classification networks were each built and trained separately. To handle both tasks more efficiently, the two can be merged into a single multi-task VNet2d model. The network input size is (512, 512); two fully connected layers are attached to the output of the last encoder stage for the classification task, while the decoder output is used for the segmentation task.

The code implementation is as follows:

def _create_conv_net(X, image_width, image_height, image_channel, phase, drop_conv, n_class=1, n_classify=2):
    inputX = tf.reshape(X, [-1, image_width, image_height, image_channel])  # shape=(?, 512, 512, 1)
    # VNet model
    # layer0->convolution
    layer0 = conv_relu(x=inputX, kernalshape=(3, 3, image_channel, 16), scope='layer0')
    # layer1->convolution
    layer1 = conv_bn_relu_drop(x=layer0, kernalshape=(3, 3, 16, 16), phase=phase, drop_conv=drop_conv,
                               scope='layer1-1')
    layer1 = conv_bn_relu_drop(x=layer1, kernalshape=(3, 3, 16, 16), phase=phase, drop_conv=drop_conv,
                               scope='layer1-2')
    layer1 = resnet_Add(x1=layer0, x2=layer1)
    # down sampling1
    down1 = down_sampling(x=layer1, kernalshape=(3, 3, 16, 32), phase=phase, drop_conv=drop_conv, scope='down1')
    # layer2->convolution
    layer2 = conv_bn_relu_drop(x=down1, kernalshape=(3, 3, 32, 32), phase=phase, drop_conv=drop_conv,
                               scope='layer2_1')
    layer2 = conv_bn_relu_drop(x=layer2, kernalshape=(3, 3, 32, 32), phase=phase, drop_conv=drop_conv,
                               scope='layer2_2')
    layer2 = resnet_Add(x1=down1, x2=layer2)
    # down sampling2
    down2 = down_sampling(x=layer2, kernalshape=(3, 3, 32, 64), phase=phase, drop_conv=drop_conv, scope='down2')
    # layer3->convolution
    layer3 = conv_bn_relu_drop(x=down2, kernalshape=(3, 3, 64, 64), phase=phase, drop_conv=drop_conv,
                               scope='layer3_1')
    layer3 = conv_bn_relu_drop(x=layer3, kernalshape=(3, 3, 64, 64), phase=phase, drop_conv=drop_conv,
                               scope='layer3_2')
    layer3 = conv_bn_relu_drop(x=layer3, kernalshape=(3, 3, 64, 64), phase=phase, drop_conv=drop_conv,
                               scope='layer3_3')
    layer3 = resnet_Add(x1=down2, x2=layer3)
    # down sampling3
    down3 = down_sampling(x=layer3, kernalshape=(3, 3, 64, 128), phase=phase, drop_conv=drop_conv, scope='down3')
    # layer4->convolution
    layer4 = conv_bn_relu_drop(x=down3, kernalshape=(3, 3, 128, 128), phase=phase, drop_conv=drop_conv,
                               scope='layer4_1')
    layer4 = conv_bn_relu_drop(x=layer4, kernalshape=(3, 3, 128, 128), phase=phase, drop_conv=drop_conv,
                               scope='layer4_2')
    layer4 = conv_bn_relu_drop(x=layer4, kernalshape=(3, 3, 128, 128), phase=phase, drop_conv=drop_conv,
                               scope='layer4_3')
    layer4 = resnet_Add(x1=down3, x2=layer4)
    # down sampling4
    down4 = down_sampling(x=layer4, kernalshape=(3, 3, 128, 256), phase=phase, drop_conv=drop_conv, scope='down4')
    # layer5->convolution
    layer5 = conv_bn_relu_drop(x=down4, kernalshape=(3, 3, 256, 256), phase=phase, drop_conv=drop_conv,
                               scope='layer5_1')
    layer5 = conv_bn_relu_drop(x=layer5, kernalshape=(3, 3, 256, 256), phase=phase, drop_conv=drop_conv,
                               scope='layer5_2')
    layer5 = conv_bn_relu_drop(x=layer5, kernalshape=(3, 3, 256, 256), phase=phase, drop_conv=drop_conv,
                               scope='layer5_3')
    layer5 = resnet_Add(x1=down4, x2=layer5)
    # down sampling5
    down5 = down_sampling(x=layer5, kernalshape=(3, 3, 256, 512), phase=phase, drop_conv=drop_conv, scope='down5')
    # layer6->convolution
    layer6 = conv_bn_relu_drop(x=down5, kernalshape=(3, 3, 512, 512), phase=phase, drop_conv=drop_conv,
                               scope='layer6_1')
    layer6 = conv_bn_relu_drop(x=layer6, kernalshape=(3, 3, 512, 512), phase=phase, drop_conv=drop_conv,
                               scope='layer6_2')
    layer6 = conv_bn_relu_drop(x=layer6, kernalshape=(3, 3, 512, 512), phase=phase, drop_conv=drop_conv,
                               scope='layer6_3')
    layer6 = resnet_Add(x1=down5, x2=layer6)

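    # global average pooling over the spatial dimensions produces a 512-dim vector for the classification branch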
    gap = tf.reduce_mean(layer6, axis=(1, 2))
    # classification branch->FC1
    layerfc1 = tf.reshape(gap, [-1, 512])  # shape=(?, 512)
    layerfc2 = full_connected_relu_drop(x=layerfc1, kernal=(512, 512), drop=drop_conv, activefunction='relu',
                                        scope='fc1')
    # classification branch->output logits (n_classify=2)
    classify_output = full_connected_relu_drop(x=layerfc2, kernal=(512, n_classify), drop=drop_conv,
                                               activefunction='regression', scope='classify_output')
    # up sampling1
    deconv1 = deconv_relu_drop(x=layer6, kernalshape=(3, 3, 256, 512), scope='deconv1')
    # layer7->convolution (skip connection: fuse encoder layer5 with the upsampled features)
    layer7 = crop_and_concat(layer5, deconv1)
    _, H, W, _ = layer5.get_shape().as_list()
    layer7 = conv_bn_relu_drop(x=layer7, kernalshape=(3, 3, 512, 256), height=H, width=W, phase=phase,
                               drop_conv=drop_conv, scope='layer7_1')
    layer7 = conv_bn_relu_drop(x=layer7, kernalshape=(3, 3, 256, 256), height=H, width=W, phase=phase,
                               drop_conv=drop_conv, scope='layer7_2')
    layer7 = conv_bn_relu_drop(x=layer7, kernalshape=(3, 3, 256, 256), height=H, width=W, phase=phase,
                               drop_conv=drop_conv, scope='layer7_3')
    layer7 = resnet_Add(x1=deconv1, x2=layer7)
    # up sampling2
    deconv2 = deconv_relu_drop(x=layer7, kernalshape=(3, 3, 128, 256), scope='deconv2')
    # layer8->convolution
    layer8 = crop_and_concat(layer4, deconv2)
    _, H, W, _ = layer4.get_shape().as_list()
    layer8 = conv_bn_relu_drop(x=layer8, kernalshape=(3, 3, 256, 128), height=H, width=W, phase=phase,
                               drop_conv=drop_conv, scope='layer8_1')
    layer8 = conv_bn_relu_drop(x=layer8, kernalshape=(3, 3, 128, 128), height=H, width=W, phase=phase,
                               drop_conv=drop_conv, scope='layer8_2')
    layer8 = conv_bn_relu_drop(x=layer8, kernalshape=(3, 3, 128, 128), height=H, width=W, phase=phase,
                               drop_conv=drop_conv, scope='layer8_3')
    layer8 = resnet_Add(x1=deconv2, x2=layer8)
    # up sampling3
    deconv3 = deconv_relu_drop(x=layer8, kernalshape=(3, 3, 64, 128), scope='deconv3')
    # layer9->convolution (skip connection with encoder layer3)
    layer9 = crop_and_concat(layer3, deconv3)
    _, H, W, _ = layer3.get_shape().as_list()
    layer9 = conv_bn_relu_drop(x=layer9, kernalshape=(3, 3, 128, 64), height=H, width=W, phase=phase,
                               drop_conv=drop_conv, scope='layer9_1')
    layer9 = conv_bn_relu_drop(x=layer9, kernalshape=(3, 3, 64, 64), height=H, width=W, phase=phase,
                               drop_conv=drop_conv, scope='layer9_2')
    layer9 = conv_bn_relu_drop(x=layer9, kernalshape=(3, 3, 64, 64), height=H, width=W, phase=phase,
                               drop_conv=drop_conv, scope='layer9_3')
    layer9 = resnet_Add(x1=deconv3, x2=layer9)
    # up sampling4
    deconv4 = deconv_relu_drop(x=layer9, kernalshape=(3, 3, 32, 64), scope='deconv4')
    # layer10->convolution (skip connection with encoder layer2)
    layer10 = crop_and_concat(layer2, deconv4)
    _, H, W, _ = layer2.get_shape().as_list()
    layer10 = conv_bn_relu_drop(x=layer10, kernalshape=(3, 3, 64, 32), height=H, width=W, phase=phase,
                                drop_conv=drop_conv, scope='layer10_1')
    layer10 = conv_bn_relu_drop(x=layer10, kernalshape=(3, 3, 32, 32), height=H, width=W, phase=phase,
                                drop_conv=drop_conv, scope='layer10_2')
    layer10 = resnet_Add(x1=deconv4, x2=layer10)
    # up sampling5
    deconv5 = deconv_relu_drop(x=layer10, kernalshape=(3, 3, 16, 32), scope='deconv5')
    # layer11->convolution (skip connection with encoder layer1)
    layer11 = crop_and_concat(layer1, deconv5)
    _, H, W, _ = layer1.get_shape().as_list()
    layer11 = conv_bn_relu_drop(x=layer11, kernalshape=(3, 3, 32, 16), height=H, width=W, phase=phase,
                                drop_conv=drop_conv, scope='layer11_1')
    layer11 = conv_bn_relu_drop(x=layer11, kernalshape=(3, 3, 16, 16), height=H, width=W, phase=phase,
                                drop_conv=drop_conv, scope='layer11_2')
    layer11 = resnet_Add(x1=deconv5, x2=layer11)
    # output->1x1 convolution with sigmoid gives the segmentation map
    output_map = conv_sigmod(x=layer11, kernalshape=(1, 1, 16, n_class), scope='output')
    return output_map, classify_output
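
The function returns both heads from a single forward pass. As a quick sanity check, it can be wired up with TensorFlow 1.x placeholders; the placeholder names below are illustrative assumptions, not taken from the original repository:

import tensorflow as tf

# hypothetical placeholders for a batch of 512x512 grayscale images
X = tf.placeholder(tf.float32, shape=[None, 512 * 512], name='X')
phase = tf.placeholder(tf.bool, name='phase')        # True during training (batch norm)
drop_conv = tf.placeholder(tf.float32, name='drop')  # dropout keep probability

# one call builds both heads: a (?, 512, 512, 1) sigmoid segmentation map
# and a (?, 2) classification output
output_map, classify_output = _create_conv_net(X, 512, 512, 1, phase, drop_conv,
                                               n_class=1, n_classify=2)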

(2) The loss is a hybrid loss function that combines the binary Dice loss for segmentation with the binary cross-entropy loss for classification.

The code implementation is as follows:

def __get_cost(self, cost_name, Y_gt, Y_pred):
    if cost_name == "cross_entropy":
        # the function first applies softmax and then computes the
        # cross entropy (-tf.reduce_sum(Y_gt * tf.log(Y_pred))),
        # so the logits must not be passed through tf.nn.softmax beforehand
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_gt, logits=Y_pred))
        return loss
    if cost_name == "dice coefficient":
        H, W, C = Y_gt.get_shape().as_list()[1:]
        smooth = 1e-5
        pred_flat = tf.reshape(Y_pred, [-1, H * W * C])
        true_flat = tf.reshape(Y_gt, [-1, H * W * C])
        intersection = 2 * tf.reduce_sum(pred_flat * true_flat, axis=1) + smooth
        denominator = tf.reduce_sum(pred_flat, axis=1) + tf.reduce_sum(true_flat, axis=1) + smooth
        loss = 1 - tf.reduce_mean(intersection / denominator)
        return loss

For implementation details, see the earlier post "Tensorflow入门教程(三十四)——常用两类图像分割损失函数" (Tensorflow Tutorial 34: Two Commonly Used Classes of Image Segmentation Loss Functions).
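The two terms are then combined into a single multi-task objective. A minimal sketch of the combination, assuming equal weighting of the two tasks and hypothetical tensor names (the exact weighting used in the original code is not stated in the post):

# segmentation head: sigmoid output vs. binary mask -> Dice loss
seg_loss = self.__get_cost("dice coefficient", Y_gt_mask, Y_pred_mask)
# classification head: logits vs. one-hot label -> softmax cross entropy
cls_loss = self.__get_cost("cross_entropy", Y_gt_label, Y_pred_logits)
# hybrid multi-task loss; equal weighting is an assumption here
total_loss = seg_loss + cls_loss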

(3) The overall loss curve and the segmentation and classification accuracy curves are shown in the figure below.

To help everyone learn more efficiently, I have organized the code and pushed it to GitHub; click the original-article link to access it. In addition, the trained model file has been uploaded to Baidu Netdisk: https://pan.baidu.com/s/15E-RvaqSLdDMBCUZoGTrDA

Extraction code: hszn

If you find this project helpful, please give it a Star and Fork so that more people can learn from it.
