[Keras深度学习浅尝] Hands-On 2: Classifying the Fashion MNIST Dataset with a CNN

2019-06-27 13:05:10


This post follows the same structure as the previous one, [Keras深度学习浅尝] Hands-On 1; only two parts change, the network definition and the model training, so the two posts can be compared side by side. Switching to a CNN improves prediction accuracy slightly, and tuning the hyperparameters can push the result further (a sketch of one such variant follows the code listing below).

Code

# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"  # allow duplicate OpenMP runtimes (avoids a common crash with conda/MKL setups)
import numpy as np
import matplotlib.pyplot as plt

EAGER = True  # not actually used in this script

fashion_mnist = keras.datasets.fashion_mnist

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

print(train_images.shape, train_labels.shape)


# Reshape to (N, 28, 28, 1) for the Conv2D layers and scale pixel values to [0, 1]
train_images = train_images.reshape([-1, 28, 28, 1]) / 255.0
test_images = test_images.reshape([-1, 28, 28, 1]) / 255.0


# Note: the Conv2D layers below have no activation (linear); a ReLU variant is sketched after this listing.
model = keras.Sequential([
    # (-1, 28, 28, 1) -> (-1, 28, 28, 32)
    keras.layers.Conv2D(input_shape=(28, 28, 1), filters=32, kernel_size=5, strides=1, padding='same'),
    # (-1, 28, 28, 32) -> (-1, 14, 14, 32)
    keras.layers.MaxPool2D(pool_size=2, strides=2, padding='same'),
    # (-1, 14, 14, 32) -> (-1, 14, 14, 64)
    keras.layers.Conv2D(filters=64, kernel_size=3, strides=1, padding='same'),
    # (-1, 14, 14, 64) -> (-1, 7, 7, 64)
    keras.layers.MaxPool2D(pool_size=2, strides=2, padding='same'),
    # (-1, 7, 7, 64) -> (-1, 7*7*64)
    keras.layers.Flatten(),
    # (-1, 7*7*64) -> (-1, 256)
    keras.layers.Dense(256, activation=tf.nn.relu),
    # (-1, 256) -> (-1, 10)
    keras.layers.Dense(10, activation=tf.nn.softmax)
])

print(model.summary())

lr = 0.001
epochs = 5
# tf.train.AdamOptimizer is the TF 1.x API this post was written against;
# under TF 2.x, keras.optimizers.Adam(learning_rate=lr) is the equivalent.
model.compile(optimizer=tf.train.AdamOptimizer(lr),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_images, train_labels, batch_size=200, epochs=epochs,
          validation_data=(test_images[:1000], test_labels[:1000]))  # first 1000 test samples as a validation set

test_loss, test_acc = model.evaluate(test_images, test_labels)

# Compare predicted classes for the first 10 test images against the true labels
print(np.argmax(model.predict(test_images[:10]), 1), test_labels[:10])
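As noted above, the hyperparameters can be tuned for a better result. Below is a minimal sketch of one such variant, assuming TF 2.x-style Keras APIs; the ReLU activations on the convolution layers, the Dropout layer, and the switch to keras.optimizers.Adam are changes relative to the model above, not part of the original code.

# Variant of the model above (a sketch, not the original): ReLU on the conv layers,
# Dropout before the output layer, and the TF 2.x Keras optimizer API.
tuned_model = keras.Sequential([
    keras.layers.Conv2D(32, kernel_size=5, padding='same', activation='relu',
                        input_shape=(28, 28, 1)),
    keras.layers.MaxPool2D(pool_size=2, strides=2, padding='same'),
    keras.layers.Conv2D(64, kernel_size=3, padding='same', activation='relu'),
    keras.layers.MaxPool2D(pool_size=2, strides=2, padding='same'),
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dropout(0.3),  # regularization; 0.3 is an arbitrary starting point
    keras.layers.Dense(10, activation='softmax')
])

tuned_model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                    loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])

# tuned_model.fit(train_images, train_labels, batch_size=200, epochs=10,
#                 validation_data=(test_images[:1000], test_labels[:1000]))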

Output

(60000, 28, 28) (60000,)
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        832
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 14, 14, 64)        18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0
_________________________________________________________________
flatten (Flatten)            (None, 3136)              0
_________________________________________________________________
dense (Dense)                (None, 256)               803072
_________________________________________________________________
dense_1 (Dense)              (None, 10)                2570
=================================================================
Total params: 824,970
Trainable params: 824,970
Non-trainable params: 0
_________________________________________________________________
None
Train on 60000 samples, validate on 1000 samples
Epoch 1/5
60000/60000 [==============================] - 64s 1ms/step - loss: 0.3806 - acc: 0.8619 - val_loss: 0.2797 - val_acc: 0.9010
Epoch 2/5
60000/60000 [==============================] - 63s 1ms/step - loss: 0.2495 - acc: 0.9090 - val_loss: 0.2647 - val_acc: 0.9000
Epoch 3/5
60000/60000 [==============================] - 63s 1ms/step - loss: 0.1987 - acc: 0.9255 - val_loss: 0.2725 - val_acc: 0.9000
Epoch 4/5
60000/60000 [==============================] - 63s 1ms/step - loss: 0.1630 - acc: 0.9388 - val_loss: 0.2852 - val_acc: 0.9010
Epoch 5/5
60000/60000 [==============================] - 63s 1ms/step - loss: 0.1314 - acc: 0.9514 - val_loss: 0.2704 - val_acc: 0.9140
[9 2 1 1 6 1 4 6 5 7] [9 2 1 1 6 1 4 6 5 7]
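
The last line shows the predicted classes for the first 10 test images matching the true labels. The parameter counts in the summary can also be checked by hand; the short sketch below reproduces the numbers printed above.

# Per-layer parameter counts, matching model.summary() above
conv1  = 5 * 5 * 1 * 32 + 32      # 832: 5x5 kernels, 1 input channel, 32 filters + biases
conv2  = 3 * 3 * 32 * 64 + 64     # 18496: 3x3 kernels, 32 input channels, 64 filters + biases
dense1 = 7 * 7 * 64 * 256 + 256   # 803072: 3136 flattened inputs to 256 units + biases
dense2 = 256 * 10 + 10            # 2570
print(conv1 + conv2 + dense1 + dense2)  # 824970 total params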
