TensorFlow 2.0 Tutorial (2)

2021-07-23 11:31:32

1. Runtime Environment

Google Colab comes strongly recommended: even without a powerful computer, you can learn TensorFlow on this platform.
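To double-check the environment after installing TensorFlow (the install command appears in the code in 2.2), a minimal sketch like the following can be run in a Colab cell; the GPU list stays empty unless a GPU runtime is enabled under Runtime -> Change runtime type:

import tensorflow as tf

# Confirm the installed TensorFlow version (should start with "2.")
print(tf.__version__)

# List the GPU devices visible to TensorFlow; an empty list means CPU only
print(tf.config.experimental.list_physical_devices('GPU'))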

2. Image Classification

2.1 Overview

We again use the MNIST handwritten digit dataset to perform image classification. The following code runs in Colab.

2.2 Code

from __future__ import absolute_import, division, print_function
# Install TensorFlow 2.0 (Colab shell command)
!pip install -q tensorflow==2.0.0-alpha0

# Import TensorFlow and TensorFlow Datasets
import tensorflow_datasets as tfds
import tensorflow as tf

# Import the Dense, Flatten, and Conv2D layers and the Model base class from Keras
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model

# Load the training and test splits of MNIST
dataset, info = tfds.load('mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = dataset['train'], dataset['test']

# Normalize pixel values from [0, 255] to [0, 1]
def convert_types(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

# Normalize the training images, shuffle, and set the batch size
mnist_train = mnist_train.map(convert_types).shuffle(1000).batch(32)

# Normalize the test images and set the batch size
mnist_test = mnist_test.map(convert_types).batch(32)

# Define the model: Conv2D -> Flatten -> Dense -> Dense
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')
        
    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

# Instantiate the model
model = MyModel()

# Define the loss function and the optimizer
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()

optimizer = tf.keras.optimizers.Adam()

# Metrics to track training loss and training accuracy
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

# Metrics to track test loss and test accuracy
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')

# Define one training step: forward pass, gradients, optimizer update
@tf.function
def train_step(image, label):
    with tf.GradientTape() as tape:
        predictions = model(image)
        loss = loss_object(label, predictions)
    # Compute the gradients of the loss w.r.t. the trainable variables
    gradients = tape.gradient(loss, model.trainable_variables)
    # Apply the gradients with the optimizer
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    # Accumulate the training loss
    train_loss(loss)
    # Accumulate the training accuracy
    train_accuracy(label, predictions)
    
# Define one evaluation step: forward pass and metric updates only
@tf.function
def test_step(image, label):
    predictions = model(image)
    t_loss = loss_object(label, predictions)

    test_loss(t_loss)
    test_accuracy(label, predictions)
  

# Train the model and print per-epoch results
EPOCHS = 5

for epoch in range(EPOCHS):
    for image, label in mnist_train:
        train_step(image, label)

    for test_image, test_label in mnist_test:
        test_step(test_image, test_label)

    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))
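One detail worth noting: the metric objects above are never reset, so each printed value is a running average accumulated over all epochs so far (the results in 2.3 were produced this way). To report true per-epoch numbers instead, the metrics can be reset at the top of each epoch; a small sketch of the loop with resets added:

for epoch in range(EPOCHS):
    # Clear accumulated state so each epoch reports only its own loss/accuracy
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()

    for image, label in mnist_train:
        train_step(image, label)
    for test_image, test_label in mnist_test:
        test_step(test_image, test_label)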

2.3 Results

  • Epoch 1, Loss: 0.08975578099489212, Accuracy: 97.29750061035156, Test Loss: 5.826348304748535, Test Accuracy: 54.163330078125
  • Epoch 2, Loss: 0.06657583266496658, Accuracy: 97.99388885498047, Test Loss: 3.9032301902770996, Test Accuracy: 68.86006164550781
  • Epoch 3, Loss: 0.05327553302049637, Accuracy: 98.37999725341797, Test Loss: 2.9438602924346924, Test Accuracy: 76.17047119140625
  • Epoch 4, Loss: 0.04471389576792717, Accuracy: 98.63333129882812, Test Loss: 2.3692970275878906, Test Accuracy: 80.56377410888672
  • Epoch 5, Loss: 0.03830721601843834, Accuracy: 98.82722473144531, Test Loss: 1.9871737957000732, Test Accuracy: 83.50727081298828

2.4 Analysis

  • Training accuracy is consistently higher than test accuracy
  • This gap indicates overfitting (see the sketch below for a common remedy)
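A common first step against overfitting is to add a Dropout layer between the dense layers. This is only a sketch, not part of the original tutorial; the class name MyRegularizedModel and the rate of 0.5 are illustrative choices:

from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout
from tensorflow.keras import Model

class MyRegularizedModel(Model):
    def __init__(self):
        super(MyRegularizedModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu')
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        # Randomly zero out 50% of activations during training only
        self.dropout = Dropout(0.5)
        self.d2 = Dense(10, activation='softmax')

    def call(self, x, training=False):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        x = self.dropout(x, training=training)  # inactive at evaluation time
        return self.d2(x)

With this change, the training step should call model(image, training=True) so the dropout mask is applied during training but not during evaluation.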

3. Additional Notes

When you annotate a function with tf.function, you can still call it like any other function. But it will be compiled into a graph, which means you get the benefits of faster execution, running on GPU or TPU, or exporting to SavedModel.

In short, when a function is decorated with @tf.function it is compiled into a graph, which means faster execution and the ability to run on a GPU or TPU.
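As a quick illustration (a minimal sketch, not taken from the tutorial above), a decorated function is called like any normal Python function but executes as a compiled graph:

import tensorflow as tf

@tf.function
def square_sum(x, y):
    # Runs as a single compiled graph instead of eager op-by-op execution
    return tf.reduce_sum(tf.square(x) + tf.square(y))

a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
print(square_sum(a, b))  # tf.Tensor(30.0, shape=(), dtype=float32)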
