CNN in TensorFlow


This is the 190th article from 奔跑的键盘侠.

Author | 奔跑的键盘侠

Source | 奔跑的键盘侠 (ID: runningkeyboardhero)

Please contact us for reprint authorization (WeChat ID: ctwott)

Continuing from the previous post, let's carry on…

CNN stands for convolutional neural network: an artificial neural network whose structure loosely mirrors the visual system of humans and animals, built from one or more convolutional layers, pooling layers, and fully-connected layers. In short, it has been very popular in recent years.

As for the rest of the theory, I'll spare you the details here.

Straight to the code:

#!/usr/bin/env python3.6
# -*- coding: utf-8 -*-
# @Time    : 2020-12-19 17:26
# @Author  : Ed Frey
# @File    : CNN_study.py
# @Software: PyCharm

import tensorflow as tf
import numpy as np

## 3.2.1 Loading and preprocessing data: tf.keras.datasets
class MNISTLoader():
    def __init__(self):
        mnist = tf.keras.datasets.mnist
        (self.train_data, self.train_label), (self.test_data, self.test_label) = mnist.load_data()
        # MNIST images are uint8 by default (integers 0-255). The code below normalizes them to floats in [0, 1] and appends a trailing dimension as the color channel.
        # [60000, 28, 28, 1]
        self.train_data = np.expand_dims(self.train_data.astype(np.float32) / 255.0, axis=-1)
        # [10000, 28, 28, 1]
        self.test_data = np.expand_dims(self.test_data.astype(np.float32) / 255.0, axis=-1)
        self.train_label = self.train_label.astype(np.int32)  # [60000]
        self.test_label = self.test_label.astype(np.int32)  # [10000]
        self.num_train_data, self.num_test_data = self.train_data.shape[0], self.test_data.shape[0]

    def get_batch(self, batch_size):
        # Randomly draw batch_size samples from the training set and return them
        index = np.random.randint(0, np.shape(self.train_data)[0], batch_size)
        return self.train_data[index, :], self.train_label[index]
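
# A quick sanity check (hypothetical usage, not part of the original script):
#   loader = MNISTLoader()
#   X, y = loader.get_batch(4)
#   print(X.shape, y.shape)  # -> (4, 28, 28, 1) (4,)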

## 3.2.2 Building the model: tf.keras.Model and tf.keras.layers
class CNN(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(
            filters=32,  # number of convolution kernels (conv-layer neurons)
            kernel_size=[5, 5],  # size of each kernel (receptive field)
            padding='same',  # padding strategy ('valid' or 'same'); 'same' zero-pads so the feature map keeps its spatial size after convolution
            activation=tf.nn.relu  # activation function
        )
        self.pool1 = tf.keras.layers.MaxPool2D(pool_size=[2, 2], strides=2)
        self.conv2 = tf.keras.layers.Conv2D(
            filters=64,
            kernel_size=[5, 5],
            padding='same',
            activation=tf.nn.relu
        )
        self.pool2 = tf.keras.layers.MaxPool2D(pool_size=[2, 2], strides=2)
        self.flatten = tf.keras.layers.Reshape(target_shape=(7 * 7 * 64,))
        self.dense1 = tf.keras.layers.Dense(units=1024, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(units=10)

    def call(self, inputs):        # inputs: [batch_size, 28, 28, 1]
        x = self.conv1(inputs)     # [batch_size, 28, 28, 32]
        x = self.pool1(x)          # [batch_size, 14, 14, 32]
        x = self.conv2(x)          # [batch_size, 14, 14, 64]
        x = self.pool2(x)          # [batch_size, 7, 7, 64]
        x = self.flatten(x)        # [batch_size, 7 * 7 * 64]
        x = self.dense1(x)         # [batch_size, 1024]
        x = self.dense2(x)         # [batch_size, 10]
        output = tf.nn.softmax(x)  # ten class probabilities
        return output


### For handwritten digit recognition, the model outputs the probability that an image is each of the digits 0-9, i.e. a ten-dimensional discrete probability distribution satisfying: 1. every element lies in [0, 1]; 2. all elements sum to 1.
# The softmax function accentuates the largest entry of the raw vector and suppresses components far below the maximum.
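# For instance (an illustrative check, not part of the training script):
#   tf.nn.softmax([2.0, 1.0, 0.1]).numpy()  # -> approx. [0.659, 0.242, 0.099]
# The largest logit takes most of the probability mass, and the entries sum to 1.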


if __name__ == '__main__':

    ## 3.2.3 Training the model: tf.keras.losses and tf.keras.optimizers
    num_epochs = 5
    batch_size = 50
    learning_rate = 0.001

    model = CNN()
    data_loader = MNISTLoader()
    optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)

    num_batches = int(data_loader.num_train_data // batch_size * num_epochs)
    for batch_index in range(num_batches):
        X, y = data_loader.get_batch(batch_size)
        with tf.GradientTape() as tape:
            y_pred = model(X)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
            loss = tf.reduce_mean(loss)
            print("batch %d: loss %f" % (batch_index, loss.numpy()))
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))

    ## 3.2.4 Evaluating the model: tf.keras.metrics
    sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
    num_batches = int(data_loader.num_test_data // batch_size)
    for batch_index in range(num_batches):
        start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
        y_pred = model.predict(data_loader.test_data[start_index: end_index])
        sparse_categorical_accuracy.update_state(
            y_true=data_loader.test_label[start_index: end_index], y_pred=y_pred
        )
    print("test accuracy: %f" % sparse_categorical_accuracy.result())

The code is almost identical to the MLP version from the previous post; the only difference is the line model = CNN(), which swaps in the CNN model. Everything else is exactly the same (see the sketch below).
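For reference, here is a minimal sketch of what that MLP looked like (reconstructed from the previous post's approach; the hidden-layer size of 100 is an assumption and may differ from the original):

class MLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()  # [28, 28, 1] -> [784]
        self.dense1 = tf.keras.layers.Dense(units=100, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(units=10)

    def call(self, inputs):
        x = self.flatten(inputs)
        x = self.dense1(x)
        x = self.dense2(x)
        return tf.nn.softmax(x)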

Output:

batch 0: loss 2.325883
batch 1: loss 2.365323
batch 2: loss 2.130912
batch 3: loss 2.158150
……
batch 5996: loss 0.003354
batch 5997: loss 0.000084
batch 5998: loss 0.007176
batch 5999: loss 0.000160
test accuracy: 0.991500

Running the CNN does take a while: my little MacBook Air needed a good few minutes to finish. On this dataset, though, the accuracy it reaches is quite high.
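If the wait bothers you, one common speedup (my own aside, not from the original post) is to wrap the training step in tf.function so TensorFlow traces it into a graph instead of running eagerly; just move the per-batch print outside the compiled function (or use tf.print):

@tf.function  # traces the step into a graph, cutting per-batch Python overhead
def train_step(X, y):
    with tf.GradientTape() as tape:
        y_pred = model(X)
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred))
    grads = tape.gradient(loss, model.variables)
    optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
    return loss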

-END-

© Copyright

Original work by 奔跑的键盘侠 | Share freely to your WeChat Moments | Contact us for reprint authorization
